↳ Fairness

January 30th, 2020


The Long History of Algorithmic Fairness

Fair algorithms from the seventeenth century to the present

As national and regional governments form expert commissions to regulate “automated decision-making,” a new corporate-sponsored field of research proposes to formalize the elusive ideal of “fairness” as a mathematical property of algorithms and especially of their outputs. Computer scientists, economists, lawyers, lobbyists, and policy reformers wish to hammer out, in advance or in place of regulation, algorithmic redefinitions of “fairness” and such legal categories as “discrimination,” “disparate impact,” and “equal opportunity.”

But general aspirations to fair algorithms have a long history. In these notes, I recount some past attempts to answer questions of fairness through the use of algorithms. My purpose is not to be exhaustive or completist, but instead to suggest some major transformations in those attempts, pointing along the way to scholarship that has informed my account.

⤷ Full Article

October 17th, 2019

Disparate Causes, pt. II

On the hunt for the correct counterfactual

An accurate understanding of the nature of race in our society is a prerequisite for an adequate normative theory of discrimination. If, as part one of this post suggests, limiting discrimination to only direct effects of race misunderstands the nature of living as a raced subject in a raced society, then perhaps the extension of the scope of discrimination to also include indirect effects of race would better approximate the social constructivist view of race.

Recent approaches to causal and counterfactual fairness seek fair decision procedures “achieved by correcting the variables that are descendants of the protected attribute along unfair pathways.”1 The method thus cancels out certain effects that are downstream of race in the diagram, thereby retaining only those path-specific effects of race that are considered fair. Despite the expanded scope of what counts as a discriminatory effect, the logic of the Path-Specific Effects method follows that of the original Pearlian causal counterfactual model of discrimination: race, as a sensitive attribute, is toggled white or black atop a causal diagram, and its effect cascades down various paths leading to the outcome variable. But, this time, the causal fairness technician does more than measure and limit the direct effect of race on the final outcome; she now also measures effects of race that are mediated by other attributes, keeping only those effects carried along paths deemed “fair.”
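To make the toggling concrete, here is a minimal sketch of the idea in a toy linear structural causal model. Everything in it is an illustrative assumption rather than a rendering of the cited method: the variable names, coefficients, and the designation of which paths count as "unfair" are invented, and path-specific approaches in the literature estimate such counterfactuals from data rather than simulating them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy linear structural causal model (all names and coefficients are invented):
#   A -> M -> Y   path treated here as "unfair" (e.g., a neighborhood proxy)
#   A -> Q -> Y   path treated here as "fair"   (e.g., a relevant qualification)
#   A -> Y        direct path, treated here as "unfair"
A   = rng.binomial(1, 0.5, n)        # sensitive attribute, toggled below
u_m = rng.normal(0, 1, n)            # exogenous noise terms, held fixed across
u_q = rng.normal(0, 1, n)            # counterfactual worlds ("abduction")
u_y = rng.normal(0, 1, n)

def M(a): return 1.5 * a + u_m
def Q(a): return 0.5 * a + u_q
def Y(a_direct, m, q): return 2.0 * a_direct + 1.0 * m + 1.5 * q + u_y

# Factual outcome: A takes its observed value along every path.
y_factual = Y(A, M(A), Q(A))

# "Corrected" outcome in the spirit of path-specific counterfactual fairness:
# along the paths deemed unfair (direct, and via M) the attribute is set to the
# baseline value 0, while the path via Q keeps the observed A.
y_corrected = Y(np.zeros(n), M(np.zeros(n)), Q(A))

# Whatever group gap remains is carried only by the path deemed fair.
gap_factual   = y_factual[A == 1].mean()   - y_factual[A == 0].mean()
gap_corrected = y_corrected[A == 1].mean() - y_corrected[A == 0].mean()
print(f"group gap, factual:   {gap_factual:.2f}")    # ~4.25 (all three paths)
print(f"group gap, corrected: {gap_corrected:.2f}")  # ~0.75 (fair path only)
```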

⤷ Full Article

October 11th, 2019

Disparate Causes, pt. I

The shortcomings of causal and counterfactual thinking about racial discrimination

Legal claims of disparate impact discrimination go something like this: A company uses some system (e.g., hiring test, performance review, risk assessment tool) in a way that impacts people. Somebody sues, arguing that the system has a disproportionate adverse effect on racial minorities, and shows initial evidence of disparate impact. The company, in turn, defends itself by arguing that the disparate impact is justified: its system sorts people by characteristics that—though incidentally correlated with race—are relevant to its legitimate business purposes. Now, the person who brought the discrimination claim is tasked with coming up with an alternative—that is, a system that produces less disparate impact while still fulfilling the company’s legitimate business interest. If the plaintiff finds such an alternative, it must be adopted. If they don’t, the courts have to, in theory, decide how to trade off disparate impact against legitimate business purpose.

Much of the research in algorithmic fairness, a discipline concerned with the various discriminatory, unfair, and unjust impacts of algorithmic systems, has taken cues from this legal approach—hence, the deluge of parity-based “fairness” metrics mirroring disparate impact that have received encyclopedic treatment by computer scientists, statisticians, and the like in the past few years. Armed with intuitions closely linked with disparate impact litigation, scholars further formalized the tradeoffs between something like justice and something like business purpose—concepts that crystallized in the literature under the banners of “fairness” and “efficiency.”
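For readers who have not met these parity-based metrics, here is a minimal sketch of one of the simplest: the ratio of selection rates between groups, which mirrors the "four-fifths rule" used in disparate impact analysis. The group labels, rates, and function names below are invented for illustration.

```python
import numpy as np

def selection_rates(y_pred, group):
    """Share of favorable decisions (y_pred == 1) within each group."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

def disparate_impact_ratio(y_pred, group, protected, reference):
    """Ratio of selection rates; values well below 0.8 are conventionally
    read as initial evidence of disparate impact (the "four-fifths rule")."""
    rates = selection_rates(y_pred, group)
    return rates[protected] / rates[reference]

# Hypothetical decisions from some screening system (made-up numbers).
rng = np.random.default_rng(1)
group  = rng.choice(["a", "b"], size=1_000)           # "a" plays the protected group
y_pred = np.where(group == "a",
                  rng.binomial(1, 0.30, 1_000),        # ~30% selected from group a
                  rng.binomial(1, 0.50, 1_000))        # ~50% selected from group b

print(selection_rates(y_pred, group))
print(disparate_impact_ratio(y_pred, group, protected="a", reference="b"))  # ~0.6
```

A ratio like this only restates the disparity; it says nothing about whether the disparity is "justified" by a legitimate business purpose, which is exactly where the litigation framework, and the fairness-versus-efficiency formalizations above, pick up.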

⤷ Full Article

August 5th, 2019

Where is the Artist?

COMPETING VALUES

The state of a new pedagogical field

Technology companies are coming under increased scrutiny for the ethical consequences of their work, and some have formed advisory boards or hired ethicists on staff. (Google's AI ethics board quickly disintegrated.) Another approach is to train computer scientists in ethics before they enter the labor market. But how should that training—which must combine practice and theory across disciplines—be structured, who should teach the courses, and what should they teach?

This month’s cover story of the Communications of the Association for Computing Machinery describes the Embedded EthiCS program at Harvard. (David Gray Grant, a JFI fellow since 2018, and Lily Hu, a new JFI fellow, are co-authors, along with Barbara J. Grosz, Kate Vredenburgh, Jeff Behrends, Alison Simmons, and Jim Waldo.) The article explains the advantages of their approach, wherein philosophy PhD students and postdocs teach modules in computer science classes:

"In contrast to stand-alone computer ethics or computer-and-society courses, Embedded EthiCS employs a distributed pedagogy that makes ethical reasoning an integral component of courses throughout the standard computer science curriculum. It modifies existing courses rather than requiring wholly new courses. Students learn ways to identify ethical implications of technology and to reason clearly about them while they are learning ways to develop and implement algorithms, design interactive systems, and code. Embedded EthiCS thus addresses shortcomings of stand-alone courses. Furthermore, it compensates for the reluctance of STEM faculty to teach ethics on their own by embedding philosophy graduate students and postdoctoral fellows into the teaching of computer science courses."

A future research direction is to examine "the approach's impact over the course of years, for instance, as students complete their degrees and even later in their careers."

Link to the full article.

  • Shannon Vallor and Arvind Narayanan have a free ethics module anyone can use in a CS course. View it here. A Stephanie Wykstra piece in the Nation on the state of digital ethics pedagogy notes that the module has been used at 100+ universities. Link.
  • In February 2018, we wrote about Casey Fiesler’s spreadsheet of tech ethics curricula, which has gotten even more comprehensive, including sample codes of ethics and other resources. Jay Hodges’s comment is still relevant for many of the curricula: "Virtually every discipline that deals with the social world – including, among others, sociology, social work, history, women’s studies, Africana studies, Latino/a studies, urban studies, political science, economics, epidemiology, public policy, and law – addresses questions of fairness and justice in some way. Yet the knowledge accumulated by these fields gets very little attention in these syllabi." Link to that 2018 letter.
  • At MIT, JFI fellow Abby Everett Jacques teaches "Ethics of Technology." An NPR piece gives a sense of the students' experiences. Link.
⤷ Full Article

March 9th, 2019

Incomplete Squares

CONTEXT ALLOCATION

Expanding the frame for formalizing fairness

In the digital ethics literature, there's a consistent back-and-forth between attempts at designing algorithmic tools that promote fair outcomes in decision-making processes, and critiques that enumerate the limits of such attempts. A December paper by ANDREW SELBST, danah boyd, SORELLE FRIEDLER, SURESH VENKATASUBRAMANIAN, and JANET VERTESI—delivered at FAT* 2019—contributes to the latter genre. The authors build on insights from Science and Technology Studies and offer a list of five "traps"—Framing, Portability, Formalism, Ripple Effect, and Solutionism—that fair-ML work is susceptible to as it aims for context-aware systems design. From the paper:

"We contend that by abstracting away the social context in which these systems will be deployed, fair-ML researchers miss the broader context, including information necessary to create fairer outcomes, or even to understand fairness as a concept. Ultimately, this is because while performance metrics are properties of systems in total, technical systems are subsystems. Fairness and justice are properties of social and legal systems like employment and criminal justice, not properties of the technical tools within. To treat fairness and justice as terms that have meaningful application to technology separate from a social context is therefore to make a category error, or as we posit here, an abstraction error."

In their critique of what is left out in the formalization process, the authors argue that, by "moving decisions made by humans and human institutions within the abstraction boundary, fairness of the system can again be analyzed as an end-to-end property of the sociotechnical frame." Link to the paper.

  • A brand new paper by HODA HEIDARI, VEDANT NANDA, and KRISHNA GUMMADI attempts to produce fairness metrics that look beyond "allocative equality" and directly grapples with the above-mentioned "ripple effect trap." The authors "propose an effort-based measure of fairness and present a data-driven framework for characterizing the long-term impact of algorithmic policies on reshaping the underlying population." Link. A toy sketch of what an effort-style comparison can look like appears after these notes.
  • In the footnotes to the paper by Selbst et al., a 1997 chapter by early AI researcher turned sociologist Phil Agre. In the chapter: institutional and intellectual history of early AI; a sociological study of the AI field at the time; Agre’s departure from the field; discussions of developing "critical technical practice." Link.
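Purely to give a flavor of what an effort-style comparison can look like (and not as a rendering of Heidari, Nanda, and Gummadi's actual definitions), here is a toy sketch: for a linear score, "effort" is taken as the minimal feature change needed to reach a positive decision, averaged within each group. The classifier weights and group data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented linear decision rule: accept when w . x + b >= 0.
w, b = np.array([1.0, 0.5]), -2.0

def effort(x):
    """Minimal distance each point must move to reach the decision boundary
    (zero for points that are already accepted)."""
    score = x @ w + b
    return np.maximum(0.0, -score) / np.linalg.norm(w)

# Two invented groups drawn from different feature distributions.
x_a = rng.normal([1.0, 1.0], 1.0, size=(500, 2))
x_b = rng.normal([2.0, 2.0], 1.0, size=(500, 2))

print("mean effort, group a:", round(effort(x_a).mean(), 2))
print("mean effort, group b:", round(effort(x_b).mean(), 2))
```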
⤷ Full Article

March 24th, 2018

Do I See Right?

FAIRNESS IN MACHINE LEARNING | METARESEARCH | MICROSTRUCTURE OF VIOLENCE

DISTINCT FUSION

Tracking the convergence of terms across disciplines

In a new paper, CHRISTIAN VINCENOT looks at the process by which two synonymous concepts developed independently in separate disciplines, and how they were brought together.

“I analyzed research citations between the two communities devoted to ACS research, namely agent-based (ABM) and individual-based modelling (IBM). Both terms refer to the same approach, yet the former is preferred in engineering and social sciences, while the latter prevails in natural sciences. This situation provided a unique case study for grasping how a new concept evolves distinctly across scientific domains and how to foster convergence into a universal scientific approach. The present analysis based on novel hetero-citation metrics revealed the historical development of ABM and IBM, confirmed their past disjointedness, and detected their progressive merger. The separation between these synonymous disciplines had silently opposed the free flow of knowledge among ACS practitioners and thereby hindered the transfer of methodological advances and the emergence of general systems theories. A surprisingly small number of key publications sparked the ongoing fusion between ABM and IBM research.”

Link to a summary and context. Link to the abstract. ht Margarita
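Vincenot's hetero-citation metrics are more involved than this, but the bookkeeping that any such measure starts from is simple to sketch: label each paper with its community, then count what share of each community's outgoing citations cross to the other. The papers, labels, and edges below are invented.

```python
from collections import defaultdict

# Invented toy data: a community label per paper and a citation edge list
# (citing paper -> cited paper).
community = {"p1": "ABM", "p2": "ABM", "p3": "IBM", "p4": "IBM", "p5": "IBM"}
citations = [("p1", "p2"), ("p1", "p3"), ("p2", "p4"), ("p3", "p4"), ("p5", "p1")]

out_total = defaultdict(int)   # outgoing citations per community
out_cross = defaultdict(int)   # outgoing citations landing in the other community
for citing, cited in citations:
    c = community[citing]
    out_total[c] += 1
    out_cross[c] += int(community[cited] != c)

for c in sorted(out_total):
    share = out_cross[c] / out_total[c]
    print(f"{c}: {share:.2f} of outgoing citations cross to the other community")
```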

  • Elsewhere in metaresearch, a new paper from James Evans’s Knowledge Lab examines influence by other means than citations: “Using a computational method known as topic modeling—invented by co-author David Blei of Columbia University—the model tracks ‘discursive influence,’ or recurring words and phrases through historical texts that measure how scholars actually talk about a field, instead of just their attributions. To determine a given paper’s influence, the researchers could statistically remove it from history and see how scientific discourse would have unfolded without its contribution.” Link to a summary. Link to the full paper.
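As a point of reference for the method being summarized, here is a minimal sketch of fitting a topic model with scikit-learn's LatentDirichletAllocation on an invented four-document corpus. The "discursive influence" step (statistically removing a paper and seeing how the discourse would have unfolded without it) is not reproduced here; this shows only the underlying topic-mixture machinery.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented miniature corpus standing in for a collection of abstracts.
documents = [
    "agents interact locally and global patterns emerge",
    "individual organisms modelled explicitly in an ecosystem",
    "citation networks reveal the structure of scientific fields",
    "topic models summarize what scholars write about a field",
]

counts = CountVectorizer(stop_words="english").fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each row is one document's mixture over the two inferred topics.
print(lda.transform(counts).round(2))
```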
⤷ Full Article