↳ Workforce

June 23rd, 2018

Yielding Stone

VISIBLE CONSTRAINT

Including protected variables can make algorithmic decision-making more fair 

A recent paper co-authored by JON KLEINBERG, JENS LUDWIG, SENDHIL MULLAINATHAN, and ASHESH RAMBACHAN addresses algorithmic bias, countering the "large literature that tries to 'blind' the algorithm to race to avoid exacerbating existing unfairness in society":  

"This perspective about how to promote algorithmic fairness, while intuitive, is misleading and in fact may do more harm than good. We develop a simple conceptual framework that models how a social planner who cares about equity should form predictions from data that may have potential racial biases. Our primary result is exceedingly simple, yet often overlooked: a preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (such as setting a different threshold for different groups) but the estimated prediction function itself should not change. Absent legal constraints, one should include variables such as gender and race for fairness reasons.

Our argument collects together and builds on existing insights to contribute to how we should think about algorithmic fairness. … We empirically illustrate this point for the case of using predictions of college success to make admissions decisions. Using nationally representative data on college students, we underline how the inclusion of a protected variable—race in our application—not only improves predicted GPAs of admitted students (efficiency), but also can improve outcomes such as the fraction of admitted students who are black (equity).

Across a wide range of estimation approaches, objective functions, and definitions of fairness, the strategy of blinding the algorithm to race inadvertently detracts from fairness."
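The decoupling the authors describe, a single estimator that includes the protected variable with equity entering only at the decision stage, can be made concrete in a short sketch. The following Python snippet is not from the paper: the data is synthetic and the group-specific thresholds are hypothetical. It simply fits one prediction function including race, then varies only the admission cutoff by group.

```python
# Minimal sketch of the paper's core point, under assumed synthetic data:
# estimate ONE prediction function that includes the protected variable,
# then apply equity preferences only at the decision stage via
# group-specific thresholds. Not the authors' code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
race = rng.integers(0, 2, n)    # protected variable (0/1), synthetic
score = rng.normal(0, 1, n)     # e.g. a standardized test score

# Synthetic "college success" outcome whose relationship to the score
# differs by group, so blinding the model to race would miscalibrate
# predictions for one group.
p = 1 / (1 + np.exp(-(0.8 * score + 0.5 * race - 0.2)))
success = rng.binomial(1, p)

# One estimator, race included: the estimated prediction function
# itself does not change with equity preferences.
X = np.column_stack([score, race])
model = LogisticRegression().fit(X, success)
prob = model.predict_proba(X)[:, 1]

# Equity enters only here: the same predicted probabilities, but a
# (hypothetical) lower admission threshold for the disadvantaged group.
thresholds = np.where(race == 1, 0.45, 0.55)
admit = prob >= thresholds

for g in (0, 1):
    mask = race == g
    print(f"group {g}: admitted {admit[mask].mean():.1%}, "
          f"mean predicted success among admitted: "
          f"{prob[mask & admit].mean():.3f}")
```

The design choice the snippet illustrates is the paper's "exceedingly simple" result: fairness concerns move the threshold, not the estimator, so prediction quality and equity goals are adjusted independently.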

Read the full paper here.

⤷ Full Article