↳ Risk

February 2nd, 2019

The Summit


Moving beyond computational questions in digital ethics research

In the ever-expanding digital ethics literature, a number of researchers have been advocating a turn away from enticing technical questions—how to mathematically define fairness, for example—and towards a more expansive, foundational approach to the ethics of designing digital decision systems.

A 2018 paper by RODRIGO OCHIGAME, CHELSEA BARABAS, KARTHIK DINAKAR, MADARS VIRZA, and JOICHI ITO exemplifies this approach. The authors dissect the three most-discussed categories in the digital ethics space—fairness, interpretability, and accuracy—and argue that current approaches to these topics may unwittingly amount to a legitimation system for unjust practices. From the introduction:

“To contend with issues of fairness and interpretability, it is necessary to change the core methods and practices of machine learning. But the necessary changes go beyond those proposed by the existing literature on fair and interpretable machine learning. To date, ML researchers have generally relied on reductive understandings of fairness and interpretability, as well as a limited understanding of accuracy. This is a consequence of viewing these complex ethical, political, and epistemological issues as strictly computational problems. Fairness becomes a mathematical property of classification algorithms. Interpretability becomes the mere exposition of an algorithm as a sequence of steps or a combination of factors. Accuracy becomes a simple matter of ROC curves.

In order to deepen our understandings of fairness, interpretability, and accuracy, we should avoid reductionism and consider aspects of ML practice that are largely overlooked. While researchers devote significant attention to computational processes, they often lack rigor in other crucial aspects of ML practice. Accuracy requires close scrutiny not only of the computational processes that generate models but also of the historical processes that generate data. Interpretability requires rigorous explanations of the background assumptions of models. And any claim of fairness requires a critical evaluation of the ethical and political implications of deploying a model in a specific social context.

Ultimately, the main outcome of research on fair and interpretable machine learning might be to provide easy answers to concerns of regulatory compliance and public controversy.”
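To make the authors' charge of reductionism concrete, here is a minimal sketch of the kind of metric they have in mind when they write that "fairness becomes a mathematical property of classification algorithms": demographic parity, which collapses fairness into a single number comparing positive-classification rates across groups. The function, data, and group labels are invented for illustration and are not drawn from the paper.

```python
# A hedged sketch of a "reductive" fairness metric: demographic parity,
# the absolute gap in positive-prediction rates between two groups.
# All predictions and group labels below are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates across two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical binary classifier outputs for individuals in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(gap)  # 0.5: group A is classified positive 75% of the time, group B 25%
```

The point of the paper is precisely that a scalar like `gap` says nothing about the historical processes that generated the data or the political context of deployment.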

⤷ Full Article

July 28th, 2018



On the history of economists in central banks 

A recent paper by FRANÇOIS CLAVEAU and JÉRÉMIE DION applies quantitative methods to the historical study of central banks, demonstrating central banking's transition from an "esoteric art" to a science, the growth of economics research within central banking institutions, and the corresponding rise of central banks' dominance in the field of monetary economics. From the paper:

"We study one type of organization, central banks, and its changing relationship with economic science. Our results point unambiguously toward a growing dominance of central banks in the specialized field of monetary economics. Central banks have swelling research armies, they publish a growing share of the articles in specialized scholarly journals, and these articles tend to have more impact today than the articles produced outside central banks."

Link to the paper, which contains a vivid 1929 dialogue between Keynes and Sir Ernest Musgrave Harvey of the Bank of England, who asserts, "It is a dangerous thing to start giving reasons."

h/t to the always-excellent Beatrice Cherrier, who highlighted this work in a brief thread that includes some visualizations, among them one showing the publishing rate of central bank researchers.

  • Via both Cherrier and the paper, a brief Economist article on the crucial significance of the central banking conference in Jackson Hole, hosted by the Federal Reserve Bank of Kansas City: "Davos for central bankers." Link. (And link to an official history of the conference.)
  • Another paper co-authored by Claveau looks at the history of specialties in economics, using quantitative methods to map the importance of sets of ideas through time. "Among our results, especially noteworthy are (1) the clear-cut existence of ten families of specialties, (2) the disappearance in the late 1970s of a specialty focused on general economic theory, (3) the dispersal of the econometrics-centered specialty in the early 1990s and the ensuing importance of specific econometric methods for the identity of many specialties since the 1990s, and (4) the low level of specialization of individual economists throughout the period in contrast to physicists as early as the late 1960s." Link.
⤷ Full Article

July 21st, 2018

High Noon


History of risk assessment, and some proposed alternate methods 

A 2002 paper by ERIC SILVER and LISA L. MILLER on actuarial risk assessment tools provides a history of statistical prediction in the criminal justice context, and issues cautions now central to contemporary conversations on algorithmic fairness:

"Much as automobile insurance policies determine risk levels based on the shared characteristics of drivers of similar age, sex, and driving history, actuarial risk assessment tools for predicting violence or recidivism use aggregate data to estimate the likelihood that certain strata of the population will commit a violent or criminal act. 

To the extent that actuarial risk assessment helps reduce violence and recidivism, it does so not by altering offenders and the environments that produced them but by separating them from the perceived law-abiding populations. Actuarial risk assessment facilitates the development of policies that intervene in the lives of citizens with little or no narrative of purpose beyond incapacitation. The adoption of risk assessment tools may signal the abandonment of a centuries-long project of using rationality, science, and the state to improve upon the social and economic progress of individuals and society."

Link to the paper.
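Silver and Miller's insurance analogy can be made concrete: an actuarial tool in this style is, at bottom, a lookup of historical outcome rates per stratum, scoring an individual by the aggregate behavior of people who share their characteristics rather than by anything specific to them. The sketch below illustrates that logic under invented strata (an age band and a prior-record flag) and invented records; none of the data or variable names come from the paper.

```python
from collections import defaultdict

# Hedged sketch of the actuarial logic Silver and Miller describe:
# estimate risk for a stratum of the population from historical base rates,
# then assign that rate to every individual in the stratum.
# All records, strata, and rates below are hypothetical.

historical = [
    # (age_band, has_priors, reoffended)
    ("18-25", True, 1), ("18-25", True, 1), ("18-25", True, 0),
    ("18-25", False, 0), ("18-25", False, 1),
    ("26-40", True, 0), ("26-40", True, 1),
    ("26-40", False, 0), ("26-40", False, 0),
]

counts = defaultdict(lambda: [0, 0])  # stratum -> [reoffenses, total]
for age_band, has_priors, outcome in historical:
    stratum = counts[(age_band, has_priors)]
    stratum[0] += outcome
    stratum[1] += 1

def actuarial_risk(age_band, has_priors):
    """Base rate of reoffense among everyone in the same stratum."""
    reoffenses, total = counts[(age_band, has_priors)]
    return reoffenses / total

print(actuarial_risk("18-25", True))   # two of three in this stratum reoffended
print(actuarial_risk("26-40", False))  # 0.0
```

Note what the sketch makes visible: the score depends only on group membership, which is exactly the feature of actuarial methods that Silver and Miller argue substitutes incapacitation for any "narrative of purpose."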

A more recent paper presented at FAT* in 2018, co-authored by CHELSEA BARABAS, KARTHIK DINAKAR, JOICHI ITO, MADARS VIRZA, and JONATHAN ZITTRAIN, makes several arguments reminiscent of Silver and Miller's work. The authors argue in favor of a causal inference framework for risk assessments, aimed at answering the question of "what interventions work":

"We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it's one of purpose.… Data-driven tools provide an immense opportunity for us to pursue goals of fair punishment and future crime prevention. But this requires us to move away from merely tacking on intervenable variables to risk covariates for predictive models, and towards the use of empirically-grounded tools to help understand and respond to the underlying drivers of crime, both individually and systemically."

Link to the paper.

  • In his 2007 book Against Prediction, lawyer and theorist Bernard Harcourt provided detailed accounts and critiques of the use of actuarial methods throughout the criminal legal system. In place of prediction, Harcourt proposes a conceptual and practical alternative: randomization. From a 2005 paper on the same topic: "Instead of embracing the actuarial turn in criminal law, we should rather celebrate the virtues of the random: randomization, it turns out, is the only way to achieve a carceral population that reflects the offending population. As a form of random sampling, randomization in policing has significant positive value: it reinforces the central moral intuition in the criminal law that similarly situated individuals should have the same likelihood of being apprehended if they offend—regardless of race, ethnicity, gender or class." Link to the paper. (And link to another paper of Harcourt's in the Federal Sentencing Reporter, "Risk as a Proxy for Race.")
  • A recent paper by Megan Stevenson assesses risk assessment tools: "Despite extensive and heated rhetoric, there is virtually no evidence on how use of this 'evidence-based' tool affects key outcomes such as incarceration rates, crime, or racial disparities. The research discussing what 'should' happen as a result of risk assessment is hypothetical and largely ignores the complexities of implementation. This Article is one of the first studies to document the impacts of risk assessment in practice." Link.
  • A compelling piece of esoterica cited in Harcourt's book: a doctoral thesis by Deborah Rachel Coen on the "probabilistic turn" in 19th-century imperial Austria. Link.
⤷ Full Article