↳ Digital ethics

October 16th, 2020

Data as Property?

The legal structure of data egalitarianism

Since the proliferation of the World Wide Web in the 1990s, critics of widely used internet communications services have warned of the misuse of personal data. Alongside familiar concerns regarding user privacy and state surveillance, a now-decades-long thread connects a group of theorists who view data—and in particular data about people—as central to what they have termed informational capitalism. Critics locate in datafication—the transformation of information into a commodity—a particular economic process of value creation that demarcates informational capitalism from its predecessors. Whether these critics take “information” or “capitalism” as the modifier warranting primary concern, datafication, in their analysis, serves a dual role: both a process of production and a form of injustice.

In arguments leveled against informational capitalism, the creation, collection, and use of data feature prominently as an unjust way to order productive activity. For instance, in her 2019 blockbuster The Age of Surveillance Capitalism, Shoshana Zuboff likens our inner lives to a pre-colonial continent, invaded and strip-mined of data by technology companies seeking profits. Elsewhere, Jathan Sadowski identifies data as a distinct form of capital, and accordingly links the imperative to collect data to the perpetual cycle of capital accumulation. Julie Cohen, in the Polanyian tradition, traces the “quasi-ownership through enclosure” of data and identifies the processing of personal information in “data refineries” as a fourth factor of production under informational capitalism.

Critiques breed proposals for reform. Thus, data governance emerges as key terrain on which to discipline firms engaged in datafication and to respond to the injustices of informational capitalism. Scholars, activists, technologists, and even presidential candidates have all proposed data governance reforms to address the social ills generated by the technology industry.

⤷ Full Article

July 27th, 2020

Essential Infrastructures

The case for sovereign investment in telecommunications infrastructure

As social distancing became norm and law in the early days of the Covid-19 pandemic, people turned to video teleconferencing to meet with friends and family, attend religious services, and go on dates. Zoom work accounts became a conduit for maintaining nonwork social ties and, as people came to depend on this enterprise tool, Zoom's stock valuation soared. The pandemic has widened the sphere of life dependent on such market technologies, heightening existing questions around the political, legal, and economic governance of these companies. How should the fabric of social life, especially as it is rewoven by the pandemic, relate to the private ownership of telecommunications?

Two legal regimes regulate the ownership of and access to telecommunications technology: the market-disciplining forces of antitrust law (along with allied concepts like public utilities regulation), and the national security protections of critical infrastructure regulation. Certain applications of the former, concerned primarily with market power, identify privately owned infrastructures that are “essential,” and regulate firms to ensure that access to that infrastructure is made available to competitors and consumers on reasonable terms. The latter, on the other hand, identifies infrastructures that are “critical,” and regulates them to serve the US’s national and economic security interests.

⤷ Full Article

May 4th, 2020

Security for the People

ADVANCE CAUSE

Ethics in mitigation

Following the comparative success of South Korea and Singapore in flattening the Covid-19 curve, governments around the world have been discussing the merits and feasibility of tech-aided contact tracing systems. (Whether these comparative public health successes are actually attributable to such systems remains a point of debate.) In the US context, app-based tracing proposals have been floated by various think tanks, and Apple and Google have released protocols for their design.

Privacy concerns are paramount, as are questions of efficacy and the opportunity costs of new mitigation tools. In a white paper last month, Danielle Allen, Lucas Stanczyk, Glenn Cohen, Carmel Shachar, Rajiv Sethi, Glen Weyl, and Rosa Brooks examined the ethical and legal bases of pandemic mitigation.

From the paper:

"We are currently in the initial stage of facing the spread of an epidemic, with clear emergency needs to secure our health system while seeking to minimize lives lost and ensure that all patients, including the dying, are treated with dignity. We have to fend off a near-term catastrophe, and in that regard we are in our 'triage' moment. We are currently making triage decisions across all sectors of society.

"Securing our health infrastructure and minimizing loss of life requires changing the trajectory of transmission through screening, testing, contact tracing, mobility restrictions, and social distancing. Whereas contact tracing and individualized quarantine and isolation suffice in non-pandemic circumstances, community quarantine and isolation become necessary under pandemic conditions in order to address the emergency. Here the challenging questions are to create the right package of temporarily adjusted norms, regulations, and laws around rights of mobility and association, and to determine whether the relevant packages of norms, regulations, and laws are best."

The authors propose guidelines for decision procedures that promote mitigation without violating civil liberties, justice, democratic institutions, or the "material supports of society." Link to the paper. h/t David Grant

  • An evolving list of projects using personal data for Covid-19 response. Link.
  • From a 2019 paper on the efficacy of contact tracing and epi models: "A major concern identified in future epidemics is whether public health administrators can collect all the required data for building epidemiological models in a short period of time during the early phase of an outbreak." Link. A 2018 paper on contact tracing's role in the 2014-2015 Ebola outbreak in Liberia. Link.
  • Previously shared in this newsletter, a technical paper for the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol. The tweet-length summary from researcher Michael Veale: "Health authorities learn nothing about users. Users learn nothing about other users. Users learn if they were too close to others who tested positive. Governments learn nothing about users. No-one is coerced: everything based on genuine, voluntary consent." Link to the paper. (And link to a comic strip explanation of how it works.) A toy code sketch of this broadcast-and-check design follows this list.
  • An excellent blog post from Ross Anderson at Cambridge's Department of Computer Science and Technology on contact tracing in the real world. Link. See also "Apps Gone Rogue: Maintaining Personal Privacy in an Epidemic." Link.
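The DP-3T design summarized above lends itself to a compact illustration. The Python sketch below is not the DP-3T specification: the key length, rotation schedule, and derivation functions are simplified stand-ins, and the names are invented. It only shows the decentralized pattern Veale describes, in which phones broadcast ephemeral identifiers derived from a local secret key, record what they hear, and later check their own logs against keys voluntarily published by users who test positive.

```python
import hashlib
import hmac
import os

# Toy parameter: the real protocol rotates identifiers far more often.
IDS_PER_DAY = 4

def ephemeral_ids(day_key: bytes) -> list:
    """Short-lived identifiers broadcast over Bluetooth during one day,
    derived locally from a secret day key."""
    return [hmac.new(day_key, f"ephid-{i}".encode(), hashlib.sha256).digest()[:16]
            for i in range(IDS_PER_DAY)]

# Alice's phone generates and broadcasts ephemeral IDs; Bob's phone records
# whatever it hears nearby, learning nothing about who sent it.
alice_day_key = os.urandom(32)
bob_observed = set(ephemeral_ids(alice_day_key)[:2])  # Bob was near Alice twice

# Alice tests positive and voluntarily publishes only her day keys,
# never her contact log.
published_keys = [alice_day_key]

# Bob's phone recomputes identifiers from the published keys and checks them
# against its local log; neither the health authority nor other users learn
# anything about Bob.
exposed = any(eid in bob_observed
              for key in published_keys
              for eid in ephemeral_ids(key))
print("exposure detected:", exposed)  # True
```

The privacy properties follow from the direction of the check: matching happens on each phone, so nothing identifying leaves a device unless its owner chooses to publish their keys.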
⤷ Full Article

January 30th, 2020

The Long History of Algorithmic Fairness

Fair algorithms from the seventeenth century to the present

As national and regional governments form expert commissions to regulate “automated decision-making,” a new corporate-sponsored field of research proposes to formalize the elusive ideal of “fairness” as a mathematical property of algorithms and especially of their outputs. Computer scientists, economists, lawyers, lobbyists, and policy reformers wish to hammer out, in advance or in place of regulation, algorithmic redefinitions of “fairness” and such legal categories as “discrimination,” “disparate impact,” and “equal opportunity.”

But general aspirations to fair algorithms have a long history. In these notes, I recount some past attempts to answer questions of fairness through the use of algorithms. My purpose is not to be exhaustive or completist, but instead to suggest some major transformations in those attempts, pointing along the way to scholarship that has informed my account.

⤷ Full Article

January 29th, 2020

Historicizing the Self-Evident

An interview with Lorraine Daston

Lorraine Daston has published widely in the history of science, including on probability and statistics, scientific objectivity and observation, game theory, monsters, and much else. Director at the Max Planck Institute for the History of Science since 1995 (emeritus as of Spring 2019), she is the author and co-author of over a dozen books, each stunning in scope and detail, and each demonstrative of her ability to make the common uncommon—to illuminate what she calls the “history of the self-evident.”

Amidst the ever-expanding reach of all varieties of quantification and algorithmic formalization, both the earliest of Daston's works (the 1988 book Classical Probability in the Enlightenment) and her most recent (an ongoing project on the history of rules) perform this task, uncovering the contingencies that swirled around the invention of mathematical probability and the rise of algorithmic rule-making.

We spoke over the phone to discuss the labor of calculation, the various emergences of formal rationality, and the importance of interdisciplinarity in the social sciences. Our conversation was edited for length and clarity.

⤷ Full Article

October 17th, 2019

Disparate Causes, pt. II

On the hunt for the correct counterfactual

An accurate understanding of the nature of race in our society is a prerequisite for an adequate normative theory of discrimination. If, as part one of this post suggests, limiting discrimination to only direct effects of race misunderstands the nature of living as a raced subject in a raced society, then perhaps the extension of the scope of discrimination to also include indirect effects of race would better approximate the social constructivist view of race.

Recent approaches to causal and counterfactual fairness seek fair decision procedures “achieved by correcting the variables that are descendants of the protected attribute along unfair pathways.”1 The method, thus, cancels out certain effects that are downstream of race in the diagram, thereby retaining only those path-specific effects of race that are considered fair. Despite the expanded scope of what counts as a discriminatory effect, the logic of the Path-Specific Effects method follows that of the original Pearlian causal counterfactual model of discrimination: race, as a sensitive attribute, is toggled white or black atop a causal diagram, and its effect cascades down various paths leading to the outcome variable. But, this time, the causal fairness technician does more than measure and limit the direct effect of race on the final outcome; she now also measures effects of race that are mediated by other attributes, keeping only those effects carried along paths deemed “fair.”
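To make the toggling logic concrete, here is a minimal sketch of a path-specific effect computed on an invented linear structural causal model. The variable names, coefficients, and the choice of which pathway counts as "fair" are illustrative assumptions, not taken from the cited work; the point is only the mechanic the paragraph describes, in which the sensitive attribute is switched along paths deemed unfair while the "fair" pathway keeps its reference value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented linear structural causal model (coefficients are arbitrary):
#   A -> M -> Y   treated as an "unfair" pathway for this example
#   A -> Q -> Y   treated as a "fair" pathway
#   A -> Y        direct effect, treated as "unfair"
A = rng.integers(0, 2, n)                  # sensitive attribute
u_m, u_q, u_y = rng.normal(size=(3, n))    # exogenous noise terms

f_m = lambda a, u: 1.5 * a + u
f_q = lambda a, u: 0.5 * a + u
f_y = lambda a, m, q, u: 0.8 * a + 1.0 * m + 1.2 * q + u

# The factual world generated by the model, and the naive group gap in outcomes.
M, Q = f_m(A, u_m), f_q(A, u_q)
Y = f_y(A, M, Q, u_y)
naive_gap = Y[A == 1].mean() - Y[A == 0].mean()   # total effect, about 2.9

# Counterfactual comparison: toggle A along the direct and A -> M paths only,
# holding the A -> Q pathway at its reference (A = 0) value.
M_ref, Q_ref = f_m(np.zeros(n), u_m), f_q(np.zeros(n), u_q)
M_toggled = f_m(np.ones(n), u_m)

y_reference = f_y(np.zeros(n), M_ref, Q_ref, u_y)
y_toggled = f_y(np.ones(n), M_toggled, Q_ref, u_y)

# Path-specific effect along the "unfair" paths:
# direct (0.8) plus mediated through M (1.0 * 1.5) = 2.3
pse = (y_toggled - y_reference).mean()
print(f"total gap: {naive_gap:.2f}, effect along 'unfair' paths only: {pse:.2f}")
```

A method in this family would then constrain or repair a predictor so that only the effect carried along the "unfair" paths is removed, leaving the portion transmitted through the pathway deemed fair intact.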

⤷ Full Article

October 11th, 2019

Disparate Causes, pt. I

The shortcomings of causal and counterfactual thinking about racial discrimination

Legal claims of disparate impact discrimination go something like this: A company uses some system (e.g., hiring test, performance review, risk assessment tool) in a way that impacts people. Somebody sues, arguing that it has a disproportionate adverse effect on racial minorities, showing initial evidence of disparate impact. The company, in turn, defends itself by arguing that the disparate impact is justified: their system sorts people by characteristics that—though incidentally correlated with race—are relevant to its legitimate business purposes. Now, the person who brought the discrimination claim is tasked with coming up with an alternative—that is, a system with less disparate impact that still fulfills the company’s legitimate business interest. If the plaintiff finds such an alternative, it must be adopted. If they don’t, the courts have to, in theory, decide how to trade off disparate impact against legitimate business purpose.

Much of the research in algorithmic fairness, a discipline concerned with the various discriminatory, unfair, and unjust impacts of algorithmic systems, has taken cues from this legal approach—hence, the deluge of parity-based “fairness” metrics mirroring disparate impact that have received encyclopedic treatment by computer scientists, statisticians, and the like in the past few years. Armed with intuitions closely linked with disparate impact litigation, scholars further formalized the tradeoffs between something like justice and something like business purpose—concepts that crystallized in the literature under the banners of “fairness” and “efficiency.”
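As an illustration of what a parity-based metric of this kind looks like in code, here is a minimal sketch of the selection-rate comparison that mirrors disparate impact doctrine (the ratio behind the "four-fifths rule" used in US employment guidance). The data and group labels are invented, and this is one metric among the many the paragraph gestures at, not a canonical implementation.

```python
def selection_rates(decisions, groups):
    """Share of positive decisions within each group."""
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Ratios below roughly 0.8 are conventionally read as initial evidence of
    disparate impact (the 'four-fifths rule')."""
    rates = selection_rates(decisions, groups)
    return rates[protected] / rates[reference]

# Invented outcomes of a hiring screen: 1 = advanced, 0 = screened out.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(disparate_impact_ratio(decisions, groups, protected="b", reference="a"))
# 0.2 / 0.6 = 0.33, well under 0.8: initial evidence of disparate impact
```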

⤷ Full Article

September 26th, 2019

Optimizing the Crisis

An interview with Seda Gürses and Bekah Overdorf

Software that structures increasingly detailed aspects of contemporary life is built for optimization. These programs require a mapping of the world in a way that is computationally legible, and translating the messy world into one that makes sense to a computer is imperfect. Even in the most ideal conditions, optimization systems—constrained, more often than not, by the imperatives of profit-generating corporations—are designed to ruthlessly maximize one metric at the expense of others. When these systems are optimizing over large populations of people, some people lose out in the calculation.

Official channels for redress offer little help: alleviating out-group concerns is by necessity counter to the interests of the optimization system and its target customers. Like someone who lives in a flight path but has never bought a plane ticket complaining to an airline about the noise, those who bear the collateral damage of optimization have little leverage over the system provider unless the law can be wielded against it. Beyond the time-intensive and uncertain path of traditional advocacy, what recourse is available for those who find themselves in the path of optimization?

In their 2018 paper POTs: Protective Optimization Technologies (updated version soon forthcoming at this same link), authors Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, and Seda Gürses offer some answers. Eschewing the dominant frameworks used to analyze and critique digital optimization systems, the authors offer an analysis that illuminates fundamental problems with both optimization systems and the proliferating literature that attempts to solve them.

POTs—the analytical framework and the technology—suggest that the inevitable assumptions, flaws, and rote nature of optimization systems can be exploited to produce “solutions that enable optimization subjects to defend from unwanted consequences.” Despite their overbearing nature, optimization systems typically require some degree of user input; POTs use this as a wedge for individuals and groups marginalized by the optimization system to influence its operation. In so doing, POTs find a way to restore what optimization seeks to hide, revealing that problems laundered as technical are in fact political.
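To make the wedge concrete, here is a toy traffic-routing scenario in Python. The scenario, road names, numbers, and code are invented here and are not the authors' implementation; the sketch only illustrates the structural point that an optimizer dependent on user input remains open to influence by those its output harms.

```python
from collections import defaultdict

class RoutingService:
    """Toy optimizer: recommends the road with the lowest average
    user-reported travel time."""
    def __init__(self):
        self.reports = defaultdict(list)

    def report(self, road, minutes):
        self.reports[road].append(minutes)

    def recommend(self):
        return min(self.reports,
                   key=lambda road: sum(self.reports[road]) / len(self.reports[road]))

service = RoutingService()

# Ordinary drivers' reports: the residential shortcut currently looks fastest.
for minutes in (12, 13, 11):
    service.report("residential shortcut", minutes)
for minutes in (15, 16):
    service.report("arterial road", minutes)
print(service.recommend())   # -> residential shortcut

# Residents along the shortcut, harmed by the rerouted traffic, coordinate
# reports of slow trips (input the optimizer cannot distinguish from any
# other user's) and shift the recommendation away from their street.
for _ in range(5):
    service.report("residential shortcut", 30)
print(service.recommend())   # -> arterial road
```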

Below, we speak with Seda Gürses and Bekah Overdorf, two members of the POTs team, who discuss the definition of an optimization system, the departures the POTs approach makes from the digital ethics literature, and the design and implementation of POTs in the wild.

⤷ Full Article

August 23rd, 2019

Is it impossible to be fair?

Statistical prediction is increasingly pervasive in our lives. Can it be fair?

The Allegheny Family Screening Tool is a computer program that predicts whether a child will later have to be placed into foster care. It's been used in Allegheny County, Pennsylvania, since August 2016. When a child is referred to the county as at risk of abuse or neglect, the program analyzes administrative records and then outputs a score from 1 to 20, where a higher score represents a higher risk that the child will later have to be placed into foster care. Child welfare workers use the score to help them decide whether to investigate a case further.

Travel search engines like Kayak or Google Flights predict whether a flight will go up or down in price. Farecast, which launched in 2004 and was acquired by Microsoft a few years later, was the first to offer such a service. When you look up a flight, these search engines analyze price records and then predict whether the flight's price will go up or down over some time interval, perhaps along with a measure of confidence in the prediction. People use the predictions to help them decide when to buy a ticket.

⤷ Full Article

August 5th, 2019

Where is the Artist?

COMPETING VALUES

The state of a new pedagogical field

Technology companies are coming under increased scrutiny for the ethical consequences of their work, and some have formed advisory boards or hired ethicists on staff. (Google's AI ethics board quickly disintegrated.) Another approach is to train computer scientists in ethics before they enter the labor market. But how should that training—which must combine practice and theory across disciplines—be structured, who should teach the courses, and what should they teach?

This month’s cover story of the Communications of the Association for Computing Machinery describes the Embedded EthiCS program at Harvard. (David Gray Grant, a JFI fellow since 2018, and Lily Hu, a new JFI fellow, are co-authors, along with Barbara J. Grosz, Kate Vredenburgh, Jeff Behrends, Alison Simmons, and Jim Waldo.) The article explains the advantages of their approach, wherein philosophy PhD students and postdocs teach modules in computer science classes:

"In contrast to stand-alone computer ethics or computer-and-society courses, Embedded EthiCS employs a distributed pedagogy that makes ethical reasoning an integral component of courses throughout the standard computer science curriculum. It modifies existing courses rather than requiring wholly new courses. Students learn ways to identify ethical implications of technology and to reason clearly about them while they are learning ways to develop and implement algorithms, design interactive systems, and code. Embedded EthiCS thus addresses shortcomings of stand-alone courses. Furthermore, it compensates for the reluctance of STEM faculty to teach ethics on their own by embedding philosophy graduate students and postdoctoral fellows into the teaching of computer science courses."

A future research direction is to examine "the approach's impact over the course of years, for instance, as students complete their degrees and even later in their careers."

Link to the full article.

  • Shannon Vallor and Arvind Narayanan have a free ethics module anyone can use in a CS course. View it here. A Stephanie Wykstra piece in the Nation on the state of data ethics pedagogy notes that the module has been used at 100+ universities. Link.
  • In February 2018, we wrote about Casey Fiesler’s spreadsheet of tech ethics curricula, which has gotten even more comprehensive, including sample codes of ethics and other resources. Jay Hodges’s comment is still relevant for many of the curricula: "Virtually every discipline that deals with the social world – including, among others, sociology, social work, history, women’s studies, Africana studies, Latino/a studies, urban studies, political science, economics, epidemiology, public policy, and law – addresses questions of fairness and justice in some way. Yet the knowledge accumulated by these fields gets very little attention in these syllabi." Link to that 2018 letter.
  • At MIT, JFI fellow Abby Everett Jacques teaches "Ethics of Technology." An NPR piece gives a sense of the students' experiences. Link.
⤷ Full Article