↳ Fairness

October 17th, 2019

Disparate Causes, pt. II

On the hunt for the correct counterfactual

An accurate understanding of the nature of race in our society is a prerequisite for an adequate normative theory of discrimination. If, as part one of this post suggests, limiting discrimination to only direct effects of race misunderstands the nature of living as a raced subject in a raced society, then perhaps the extension of the scope of discrimination to also include indirect effects of race would better approximate the social constructivist view of race.

Recent approaches to causal and counterfactual fairness seek fair decision procedures “achieved by correcting the variables that are descendants of the protected attribute along unfair pathways.”1 The method thus cancels out certain effects downstream of race in the causal diagram, retaining only those path-specific effects of race that are considered fair. Despite the expanded scope of what counts as a discriminatory effect, the logic of the Path-Specific Effects method follows that of the original Pearlian causal counterfactual model of discrimination: race, as a sensitive attribute, is toggled white or black atop a causal diagram, and its effect cascades down various paths leading to the outcome variable. But this time the causal fairness technician does more than measure and limit the direct effect of race on the final outcome; she now also measures effects of race that are mediated by other attributes, keeping only those effects carried along paths deemed “fair.”
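The toggling described above can be made concrete with a toy linear structural causal model. Everything here is illustrative: the variable names, coefficients, and the designation of which path is “unfair” are assumptions for the sketch, not the estimator from the cited work.

```python
# Toy linear SCM with a sensitive attribute A, a mediator M, and an
# outcome Y. Hypothetical structure: A -> M -> Y is the path deemed
# unfair (say, M is a race-correlated zip code), while A -> Y is the
# direct path, measured separately.

def mediator(a, noise_m=0.0):
    # M is a descendant of the sensitive attribute A.
    return 2.0 * a + noise_m

def outcome(a, m, noise_y=0.0):
    # Y depends on A directly and on A indirectly through M.
    return 1.0 * a + 3.0 * m + noise_y

a = 1  # observed value of the sensitive attribute
# Factual outcome: both paths carry the effect of A.
y_factual = outcome(a, mediator(a))       # 1 + 3*2 = 7.0

# Path-specific correction: toggle A to a baseline value only along
# the unfair mediated path, keeping the direct path at its factual value.
baseline = 0
y_corrected = outcome(a, mediator(baseline))  # 1 + 3*0 = 1.0

# The difference is the path-specific effect along A -> M -> Y,
# i.e. the portion a path-specific method would cancel out.
pse = y_factual - y_corrected             # 6.0
```

In a real model the correction is applied to estimated counterfactual distributions rather than point values, but the structure is the same: the sensitive attribute is intervened on along some paths and held factual along others.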

October 11th, 2019

Disparate Causes, pt. I

The shortcomings of causal and counterfactual thinking about racial discrimination

Legal claims of disparate impact discrimination go something like this: a company uses some system (e.g., a hiring test, performance review, or risk assessment tool) in a way that impacts people. Somebody sues, arguing that the system has a disproportionate adverse effect on racial minorities, and presents initial evidence of disparate impact. The company, in turn, defends itself by arguing that the disparate impact is justified: its system sorts people by characteristics that—though incidentally correlated with race—are relevant to its legitimate business purposes. The plaintiff is then tasked with coming up with an alternative: a system that produces less disparate impact while still fulfilling the company’s legitimate business interest. If the plaintiff finds such an alternative, it must be adopted. If they don’t, the courts must, in theory, decide how to trade off disparate impact against legitimate business purpose.

Much of the research in algorithmic fairness, a discipline concerned with the various discriminatory, unfair, and unjust impacts of algorithmic systems, has taken cues from this legal approach—hence, the deluge of parity-based “fairness” metrics mirroring disparate impact that have received encyclopedic treatment by computer scientists, statisticians, and the like in the past few years. Armed with intuitions closely linked with disparate impact litigation, scholars further formalized the tradeoffs between something like justice and something like business purpose—concepts that crystallized in the literature under the banners of “fairness” and “efficiency.”
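The parity-based metrics mentioned above are simple to state in code. A minimal sketch, on made-up decisions for two hypothetical groups, computes per-group selection rates and their ratio; the four-fifths threshold is the familiar EEOC rule of thumb for initial evidence of disparate impact, not data from any real case.

```python
# Hypothetical hiring decisions: (group, decision) pairs, where 1
# means selected. Groups and outcomes are invented for illustration.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rate(group):
    # Fraction of applicants in `group` who were selected.
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("group_a")  # 3/4 = 0.75
rate_b = selection_rate("group_b")  # 1/4 = 0.25
ratio = rate_b / rate_a             # disparate impact ratio, 1/3

# Under the EEOC "four-fifths" rule of thumb, a ratio below 0.8 is
# treated as initial evidence of disparate impact.
flagged = ratio < 0.8               # True
```

Demographic parity asks that the two rates be (approximately) equal; most of the parity metrics in the literature are variations on this comparison, conditioned on different events.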
