August 26th, 2019

Summer in Brabant


On the pressures of policy-relevant climate science

Without any “evidence of fraud, malfeasance or deliberate deception or manipulation,” or any promotion of inaccurate views, how can bias enter a scientific assessment? In their new book, Discerning Experts, Michael Oppenheimer, Naomi Oreskes, Dale Jamieson, et al. explore the pattern of underestimation of the true consequences of climate change.

Climate change's impacts are uncertain, and predictions about climate change are difficult to make. Taking an ethnographic approach, Discerning Experts shows how those difficulties, coupled with the nature of the public discourse and the pressures that come when research is going to be discussed and used in policy, have tilted climate assessments toward optimistic, overly cautious conclusions.

In a summary of their book, Oreskes et al. explain three reasons for the tilt:

“The combination of … three factors—the push for univocality, the belief that conservatism is socially and politically protective, and the reluctance to make estimates at all when the available data are contradictory—can lead to ‘least common denominator’ results—minimalist conclusions that are weak or incomplete.”

These tendencies, according to the authors, pertain to the applied research context. The academic context is different: “The reward structure of academic life leans toward criticism and dissent; the demands of assessment push toward agreement.” Link to a summary essay in Scientific American. Link to the book.

  • In an interview, Michael Oppenheimer elaborates on other elements that skew the assessments: the selection of authors, the presentation of the resulting information, and others. Link.
  • In a review of the book, Gary Yohe reflects on his own experience working on major climate assessments, such as the IPCC’s. Link.
  • A David Roberts post from 2018 finds another case of overly cautious climate science: models of the economic effects of climate change may be much more moderate than models of the physical effects. To remedy this, “We need models that negatively weigh uncertainty, properly account for tipping points, incorporate more robust and current technology cost data, better differentiate sectors outside electricity, rigorously price energy efficiency, and include the social and health benefits of decarbonization.” Link.
  • Tangentially related: carbon tax or green investment? It’s worth considering not just all possible policy options but also their optimal interactions. A paper by Julie Rozenberg, Adrien Vogt-Schilb, and Stephane Hallegatte concludes, “Optimal carbon price minimizes the discounted social cost of the transition to clean capital, but imposes immediate private costs that disproportionately affect the current owners of polluting capital, in particular in the form of stranded assets.” Link to a summary which contains a link to the unpaywalled paper.

February 2nd, 2019

The Summit


Moving beyond computational questions in digital ethics research

In the ever-expanding digital ethics literature, a number of researchers have been advocating a turn away from enticing technical questions—how to mathematically define fairness, for example—and toward a more expansive, foundational approach to the ethics of designing digital decision systems.

A 2018 paper by RODRIGO OCHIGAME, CHELSEA BARABAS, KARTHIK DINAKAR, MADARS VIRZA, and JOICHI ITO is exemplary of this turn. The authors dissect the three most-discussed categories in the digital ethics space—fairness, interpretability, and accuracy—and argue that current approaches to these topics may unwittingly amount to a legitimation system for unjust practices. From the introduction:

“To contend with issues of fairness and interpretability, it is necessary to change the core methods and practices of machine learning. But the necessary changes go beyond those proposed by the existing literature on fair and interpretable machine learning. To date, ML researchers have generally relied on reductive understandings of fairness and interpretability, as well as a limited understanding of accuracy. This is a consequence of viewing these complex ethical, political, and epistemological issues as strictly computational problems. Fairness becomes a mathematical property of classification algorithms. Interpretability becomes the mere exposition of an algorithm as a sequence of steps or a combination of factors. Accuracy becomes a simple matter of ROC curves.

In order to deepen our understandings of fairness, interpretability, and accuracy, we should avoid reductionism and consider aspects of ML practice that are largely overlooked. While researchers devote significant attention to computational processes, they often lack rigor in other crucial aspects of ML practice. Accuracy requires close scrutiny not only of the computational processes that generate models but also of the historical processes that generate data. Interpretability requires rigorous explanations of the background assumptions of models. And any claim of fairness requires a critical evaluation of the ethical and political implications of deploying a model in a specific social context.

Ultimately, the main outcome of research on fair and interpretable machine learning might be to provide easy answers to concerns of regulatory compliance and public controversy.”
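The “mathematical property” framing the authors criticize can be made concrete. Below is a minimal sketch of demographic parity, one common formalization in which fairness is reduced to a single statistic of a classifier's output; the function name and toy data are illustrative inventions, not drawn from the paper:

```python
# Illustration of the reductive framing described above: "fairness" collapsed
# into one number computed from a classifier's predictions, with no reference
# to the social context of deployment. All data here is made up.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rates = {}
    for g in (0, 1):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return abs(rates[0] - rates[1])

# Toy binary predictions for eight individuals across two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

# Group 0 receives positive predictions at rate 0.75, group 1 at 0.25.
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")  # 0.50
```

On the authors' account, a metric like this is not wrong so much as incomplete: it says nothing about the historical processes that generated the data or the consequences of acting on the model's outputs.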
