Duchamp Wanted

PREDICTIVE JUSTICE

How to build justice into algorithmic actuarial tools

Key notions of fairness contradict each other—something of an Arrow’s Theorem for criminal justice applications of machine learning.

“Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.”

Full paper from JON KLEINBERG, SENDHIL MULLAINATHAN and MANISH RAGHAVAN here. ht Sara, who recently presented on bias in humans, courts, and machine learning algorithms, and who was the source for all the papers in this section.
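A rough way to see the tension, as a simplified, binary-prediction version of what the paper proves for calibrated risk scores (the identity below is ordinary confusion-matrix arithmetic, not the paper's formal statement): write $p$ for a group's base rate of the outcome, $\mathrm{PPV}$ for positive predictive value, and $\mathrm{FPR}$, $\mathrm{FNR}$ for the error rates. Then

$$\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr).$$

If two groups have equal PPV (a calibration-style condition) and equal FNR but different base rates $p$, their FPRs must differ. All three quantities can match across groups only when base rates are equal or prediction is perfect, which mirrors the constrained special cases the theorem identifies.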

In a Twitter thread, ARVIND NARAYANAN describes the issue in more casual terms.

“Today in Fairness in Machine Learning class: a comparison of 21 (!) definitions of bias and fairness […] In CS we’re used to the idea that to make progress on a research problem as a community, we should first all agree on a definition. So 21 definitions feels like a sign of failure. Perhaps most of them are trivial variants? Surely there’s one that’s ‘better’ than the rest? The answer is no! Each defn (stat. parity, FPR balance, contextual fairness in RL…) captures something about our fairness intuitions.”

Link to Narayanan’s thread.
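As a toy illustration of why these definitions aren’t interchangeable (made-up numbers, not drawn from the thread or any real system), the sketch below constructs predictions for two groups that have equal positive predictive value, a calibration-style condition, while violating both statistical parity and false-positive-rate balance:

```python
# A minimal sketch with fabricated data: equal PPV across two groups can coexist
# with unequal selection rates and unequal false positive rates when base rates differ.
import numpy as np

def fairness_summary(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    selection_rate = y_pred.mean()        # compared across groups for statistical parity
    fpr = y_pred[y_true == 0].mean()      # false positive rate
    ppv = y_true[y_pred == 1].mean()      # positive predictive value
    return selection_rate, fpr, ppv

# Group 1: 6 of 10 people have the outcome; Group 2: 6 of 20 do.
g1_true = [1] * 6 + [0] * 4
g1_pred = [1, 1, 1, 1, 0, 0] + [1, 0, 0, 0]
g2_true = [1] * 6 + [0] * 14
g2_pred = [1, 1, 1, 1, 0, 0] + [1] + [0] * 13

print("group 1:", fairness_summary(g1_true, g1_pred))  # (0.50, 0.25, 0.80)
print("group 2:", fairness_summary(g2_true, g2_pred))  # (0.25, ~0.07, 0.80)
```

One could just as easily construct the reverse case (equal error rates, unequal PPV); each pattern offends a different fairness intuition, which is why the definitions multiply.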

Jay comments: Kleinberg et al. describe their result as requiring a choice between conceptions of fairness. It’s not obvious, though, that this is the correct description. The criteria discussed (calibration and balance) aren’t really conceptions of fairness; rather, they’re (putative) tests of fairness. Particular questions about these tests aside, we might have a broader worry: if fairness is not an extensional property, one that depends upon, and only upon, the eventual judgments rendered by a predictive process, exclusive of the procedures that led to those judgments, then no extensional test will capture fairness, even if the notion itself is entirely unambiguous and determinate. For comparison, it’s worth considering Nozick’s objection to “pattern theories” of justice, and (procedural) due process requirements in US law.

  • A paper from Narayanan and others: “Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases…Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names.” Link. (A toy sketch of this kind of word-association test appears after this list.)
  • A critique of this ProPublica story on the COMPAS algorithm used in criminal sentencing: the issue with the paper comes down to competing definitions of fairness. “In this paper we show that the differences in false positive and false negative rates cited as evidence of racial bias by Angwin et al. are a direct consequence of applying an RPI that satisfies predictive parity to a population in which recidivism prevalence differs across groups.” Link.
  • On the AI Now Initiative: “A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and aren’t transparent about how they operate. O’Neil says, for example, she is concerned about how the algorithms behind Google’s new job search service work.” Link.
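To make the word-embedding result in the first item above more concrete, here is a minimal sketch of a WEAT-style association test in the spirit of the Narayanan et al. paper. The vectors and word sets are randomly generated stand-ins, not the paper’s embeddings or stimuli, so the output is purely illustrative:

```python
# A WEAT-style association test on hypothetical vectors (not the paper's data):
# the statistic asks whether target set X sits closer to attribute set A than
# target set Y does, relative to attribute set B, via cosine similarity.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # mean similarity to attribute set A minus mean similarity to attribute set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # standardized difference of associations between target sets X and Y
    assoc = [association(w, A, B) for w in X + Y]
    return (np.mean(assoc[:len(X)]) - np.mean(assoc[len(X):])) / np.std(assoc, ddof=1)

rng = np.random.default_rng(0)
dim = 50
pleasant, unpleasant = rng.normal(size=dim), rng.normal(size=dim)
A = [pleasant + 0.1 * rng.normal(size=dim) for _ in range(5)]    # "pleasant" attribute words
B = [unpleasant + 0.1 * rng.normal(size=dim) for _ in range(5)]  # "unpleasant" attribute words
X = [pleasant + 0.3 * rng.normal(size=dim) for _ in range(5)]    # e.g. "flower" targets
Y = [unpleasant + 0.3 * rng.normal(size=dim) for _ in range(5)]  # e.g. "insect" targets

print(effect_size(X, Y, A, B))  # strongly positive: X associates with A, Y with B
```

On real embeddings trained on ordinary text, the same kind of statistic is what recovers the insect/flower, race, and gender associations the paper reports.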

FACTORY TOWN, COLLEGE TOWN

Should we build more universities to encourage local, rural economic growth?

Lyman Stone analyzes the economic role of the US higher education system:

“The problems we face are: (1) the regional returns to higher education are too localized, (2) the price of higher education is bid up very high, (3) the traditional entrepreneurial player, state governments, is financially strained or unwilling, (4) private entrance is systematically suppressed by unavoidable market features.”

Link to Stone’s piece on Medium here. ht Michael

See also this 2011 paper from the New York Fed, “The Role of Colleges and Universities in Building Local Human Capital.”

“Colleges and universities can contribute to the economic success of a region by deepening the skills and knowledge–or human capital–of its residents. Producing graduates who join the region’s educated workforce is one way these institutions increase human capital levels. In addition, the knowledge and technologies created through research activities at area universities may not only attract new firms to a region but also help existing businesses expand and innovate. These ‘spillover effects’ can in turn raise the region’s demand for high-skilled workers.”

Full paper by JAISON R. ABEL and RICHARD DEITZ here.

  • Lyman Stone’s article, above, is part of a conversation that Ross Douthat started in a March column: “…There’s only so much that breaking up D.C. will do. Which is why we’ll go further, starting with the deep-pocketed elite universities clustered around our bloated megalopolises. We’ll tax their endowments heavily, but offer exemptions for schools that expand their student bodies with satellite campuses in areas with well-below-the-median average incomes. M.I.T.-in-Flint has a certain ring to it. So does Stanford-Buffalo, or Harvard-on-the-Mississippi.” Link.
  • Douthat’s article prompted a response from Noah Smith: “Better universities, endowed with more government research dollars, and providing expensive education services to rich foreigners, will help revive struggling places and lessen the dominance of big cities. It’s just the regional industrial policy the country needs right now.” Link.

+++

  • An engaging overview, with many examples, of feature visualization in neural networks. Link. ht Margarita
  • “Here at Stitch Fix, we work on many fun and interesting areas of Data Science. One of the more unusual ones is drawing maps, specifically internal layouts of warehouses. These maps are extremely useful for simulating and optimizing operational processes. In this post, we’ll explore how we are combining ideas from recommender systems and structural biology to automatically draw layouts and track when they change.” Link. ht Jay
  • Elizabeth Kolbert writes in the New Yorker about carbon capture—removing CO2 from the air. “‘People told me, “The models show this major need for negative emissions,” he recalled. “But we don’t really know how to do that, nor is anyone really thinking about it.” I was someone who’d been in the business and policy world, and I was, like, wait a minute—what?’” Link.
  • “Climate change divides Americans, but in an unlikely way: The more education that Democrats and Republicans have, the more their beliefs in climate change diverge.” Link. ht Jay
  • Dealing with low participation in a study: “In a new working paper (joint with Gabriel Lara Ibarra), we discuss how the richness of financial data on clients allows us to combine experimental and non-experimental methods to still estimate the impact of this program for those clients who do take up the program.” Link. ht Sidhya
  • An ambitious knowledge-mapping interactive project from the World Economic Forum. (You have to sign up with an email address, but it’s free.) Link. ht Leeza
  • Unconditional cash from a casino in a North Carolina Cherokee community: “McCoy’s kids, and all children in the community, have been accruing payments since the day they were born. The tribe sets the money aside and invests it, so the children cash out a substantial nest egg when they’re 18.” Link. ht Will
  • GiveDirectly has launched their UBI test in Kenya. A short blog post from GiveDirectly; a Business Insider piece on the experiment. ht Sidhya
  • “Can attitudes towards minorities, an important cultural trait, be changed? We show that the presence of African American soldiers in the UK during World War II reduced anti-minority prejudice, a result of the positive interactions which took place between soldiers and the local population. The change has been persistent: in locations in which more African American soldiers were posted there are fewer members of the UK’s leading far-right party, less implicit bias against blacks and fewer individuals professing racial prejudice, all measured around 2010.” Link. ht Will
  • A paper titled “Climate Risk, Cooperation, and the Co-Evolution of Culture and Institutions.” Link. ht Whyvert
  • ECM released its catalogue on streaming services. Richard Brody comments. Link. ht Michael

Each week we highlight research from a graduate student, postdoc, or early-career professor. Send us recommendations: editorial@jainfamilyinstitute.org

Subscribe to Phenomenal World Sources, a weekly digest of recommended readings across the social sciences. See the full Sources archive.