↳ Networks

March 9th, 2019

Incomplete Squares

CONTEXT ALLOCATION

Expanding the frame for formalizing fairness

In the digital ethics literature, there's a consistent back-and-forth between attempts at designing algorithmic tools that promote fair outcomes in decision-making processes, and critiques that enumerate the limits of such attempts. A December paper by ANDREW SELBST, danah boyd, SORELLE FRIEDLER, SURESH VENKATASUBRAMANIAN, and JANET VERTESI—delivered at FAT* 2019—contributes to the latter genre. The authors build on insights from Science and Technology Studies and offer a list of five "traps"—Framing, Portability, Formalism, Ripple Effect, and Solutionism—to which fair-ML work is susceptible as it aims for context-aware systems design. From the paper:

"We contend that by abstracting away the social context in which these systems will be deployed, fair-ML researchers miss the broader context, including information necessary to create fairer outcomes, or even to understand fairness as a concept. Ultimately, this is because while performance metrics are properties of systems in total, technical systems are subsystems. Fairness and justice are properties of social and legal systems like employment and criminal justice, not properties of the technical tools within. To treat fairness and justice as terms that have meaningful application to technology separate from a social context is therefore to make a category error, or as we posit here, an abstraction error."

In their critique of what is left out in the formalization process, the authors argue that, by "moving decisions made by humans and human institutions within the abstraction boundary, fairness of the system can again be analyzed as an end-to-end property of the sociotechnical frame." Link to the paper.
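To ground what "formalizing fairness" typically looks like in the work under critique, here is a minimal sketch of one standard allocative criterion, demographic parity: equal positive-decision rates across groups. This is a generic illustration, not the method of any paper discussed here; the data, group labels, and acceptance rates are hypothetical.

```python
# Minimal sketch of demographic parity, a standard allocative fairness
# criterion: a classifier's positive-decision rate should be roughly
# equal across groups. Data and group labels below are hypothetical.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = decisions[groups == 0].mean()
    rate_b = decisions[groups == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)                          # two hypothetical groups
decisions = rng.binomial(1, np.where(groups == 0, 0.55, 0.40))  # 1 = approve

print(f"demographic parity gap: {demographic_parity_gap(decisions, groups):.3f}")
```

The Selbst et al. critique is that a number like this is a property of the technical subsystem alone: it measures a snapshot of allocations and says nothing about how those decisions feed back into the surrounding social system.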

  • A brand new paper by HODA HEIDARI, VEDANT NANDA, and KRISHNA GUMMADI attempts to produce fairness metrics that look beyond "allocative equality," and directly grapples with the above-mentioned "ripple effect trap." The authors "propose an effort-based measure of fairness and present a data-driven framework for characterizing the long-term impact of algorithmic policies on reshaping the underlying population" (a toy sketch of this kind of dynamic appears after this list). Link.
  • In the footnotes to the paper by Selbst et al., a 1997 chapter by early AI researcher turned sociologist PHIL AGRE. In the chapter: an institutional and intellectual history of early AI; a sociological study of the AI field at the time; Agre's departure from the field; discussions of developing "critical technical practice." Link.
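The Heidari et al. framework is data-driven and considerably more involved than anything that fits here, but a toy simulation can show why a long-term analysis differs from the static metric above: iterate a fixed decision policy against a population whose scores respond to past decisions. The feedback rule below (accepted individuals' scores drift up, rejected ones' drift down) is an assumption invented for this sketch, not the authors' model.

```python
# Toy sketch of long-term policy impact: a fixed acceptance threshold is
# applied repeatedly to a population whose score distribution responds to
# past decisions. The drift rule is a hypothetical assumption, not the
# model from Heidari et al.
import numpy as np

THRESHOLD = 0.5   # fixed policy: accept if score >= threshold
DRIFT = 0.02      # hypothetical per-step response to a decision
STEPS = 20

rng = np.random.default_rng(1)
# Two groups with initially different (hypothetical) score distributions.
scores = {"A": rng.normal(0.55, 0.1, 5000), "B": rng.normal(0.45, 0.1, 5000)}

for _ in range(STEPS):
    for g, s in scores.items():
        accepted = s >= THRESHOLD
        # Decisions reshape the population: feedback into future scores.
        scores[g] = np.clip(s + np.where(accepted, DRIFT, -DRIFT), 0.0, 1.0)

for g, s in scores.items():
    print(f"group {g}: acceptance rate after {STEPS} steps = {(s >= THRESHOLD).mean():.2f}")
```

Even under a single unchanging threshold, the two groups' acceptance rates diverge over time, which is exactly the kind of effect a one-shot allocative metric cannot register.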