February 2nd, 2019

The Summit

LEGITIMATE ASSESSMENT

Moving beyond computational questions in digital ethics research

In the ever-expanding digital ethics literature, a number of researchers have been advocating a turn away from enticing technical questions—how to mathematically define fairness, for example—toward a more expansive, foundational approach to the ethics of designing digital decision systems.

A 2018 paper by RODRIGO OCHIGAME, CHELSEA BARABAS, KARTHIK DINAKAR, MADARS VIRZA, and JOICHI ITO exemplifies this line of work. The authors dissect the three most-discussed categories in the digital ethics space—fairness, interpretability, and accuracy—and argue that current approaches to these topics may unwittingly amount to a legitimation system for unjust practices. From the introduction:

“To contend with issues of fairness and interpretability, it is necessary to change the core methods and practices of machine learning. But the necessary changes go beyond those proposed by the existing literature on fair and interpretable machine learning. To date, ML researchers have generally relied on reductive understandings of fairness and interpretability, as well as a limited understanding of accuracy. This is a consequence of viewing these complex ethical, political, and epistemological issues as strictly computational problems. Fairness becomes a mathematical property of classification algorithms. Interpretability becomes the mere exposition of an algorithm as a sequence of steps or a combination of factors. Accuracy becomes a simple matter of ROC curves.

In order to deepen our understandings of fairness, interpretability, and accuracy, we should avoid reductionism and consider aspects of ML practice that are largely overlooked. While researchers devote significant attention to computational processes, they often lack rigor in other crucial aspects of ML practice. Accuracy requires close scrutiny not only of the computational processes that generate models but also of the historical processes that generate data. Interpretability requires rigorous explanations of the background assumptions of models. And any claim of fairness requires a critical evaluation of the ethical and political implications of deploying a model in a specific social context.

Ultimately, the main outcome of research on fair and interpretable machine learning might be to provide easy answers to concerns of regulatory compliance and public controversy.”
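To make the critique concrete, here is a minimal sketch (ours, not the paper's) of the purely computational framing the authors describe: fairness collapsed into a single demographic parity statistic and accuracy into an ROC curve. All data, thresholds, and variable names below are synthetic and hypothetical.

```python
# Sketch of the reductive framing the authors critique: "fairness" as a
# single statistic of a classifier's outputs, "accuracy" as an ROC curve.
# Everything here is synthetic; nothing is drawn from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical risk scores, true labels, and a binary group attribute.
scores = rng.uniform(size=1000)
labels = (scores + rng.normal(scale=0.3, size=1000) > 0.5).astype(int)
group = rng.integers(0, 2, size=1000)

decisions = scores > 0.5  # simple threshold classifier

# "Fairness" reduced to demographic parity: the gap in positive rates.
parity_gap = decisions[group == 0].mean() - decisions[group == 1].mean()

# "Accuracy" reduced to area under the ROC curve.
auc = roc_auc_score(labels, scores)

print(f"demographic parity gap: {parity_gap:.3f}, AUC: {auc:.3f}")
# Neither number says anything about how the underlying data were produced,
# or about the consequences of deploying the model in a specific context;
# that is the gap the authors point to.
```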

⤷ Full Article

January 26th, 2019

Bone Mobile

LONG RUNNING

Learning about long-term effects of interventions, and designing interventions to facilitate the long view

A new paper from the Center for Effective Global Action at Berkeley surveys a topic important to our researchers here at JFI: the long-run effects of interventions. In our literature review of cash transfer studies, we identified the need for more work beyond the bounds of a short-term randomized controlled trial. This is especially crucial for basic income, a policy intended to be permanent.

The authors of the new Berkeley report, Adrien Bouguen, Yue Huang, Michael Kremer, and Edward Miguel, note that it’s a particularly apt moment for this kind of work: “Given the large numbers of RCTs launched in the 2000’s, every year that goes by means that more and more RCT studies are ‘aging into’ a phase where the assessment of long-run impacts becomes possible.”

The report includes a summary of what we know about long-run impacts so far:

"Section 2 summarizes and evaluates the growing body of evidence from RCTs on the long-term impacts of international development interventions, and find most (though not all) provide evidence for positive and meaningful effects on individual economic productivity and living standards. Most of these studies examine existing cash transfer, child health, or education interventions, and shed light on important theoretical questions such as the existence of poverty traps (Bandiera et al., 2018) and returns to human capital investments in the long term."

Also notable is the last section, which contains considerations for study design, "lessons from our experience in conducting long-term tracking studies, as well as innovative data approaches." Link to the full paper.

  • In his paper "When are Cash Transfers Transformative?", Bruce Wydick also notes the need for long-run analysis: "Whether or not these positive impacts have long-term transformative effects—and under what conditions—is a question that is less settled and remains an active subject of research." The rest of the paper is of interest as well, including Wydick's five factors that tend to signal that a cash transfer will be transformative. Link.
  • For more on the rising popularity of RCTs, a 2016 paper by major RCT influencers Banerjee, Duflo, and Kremer quantifies that growth and discusses the impact of RCTs. Link. Here’s the PowerPoint version of that paper. David McKenzie at the World Bank responds to the paper, disputing some of its claims. Link.
⤷ Full Article