Digital Ethics

October 17th, 2019

Disparate Causes, pt. II

On the hunt for the correct counterfactual

An accurate understanding of the nature of race in our society is a prerequisite for an adequate normative theory of discrimination. If, as part one of this post suggests, limiting discrimination to only direct effects of race misunderstands the nature of living as a raced subject in a raced society, then perhaps the extension of the scope of discrimination to also include indirect effects of race would better approximate the social constructivist view of race.

Recent approaches to causal and counterfactual fairness seek fair decision procedures “achieved by correcting the variables that are descendants of the protected attribute along unfair pathways.”1 The method, thus, cancels out certain effects that are downstream of race in the diagram, thereby retaining only those path-specific effects of race that are considered fair. Despite the expanded scope of what counts as a discriminatory effect, the logic of the Path-Specific Effects method follows that of the original Pearlian causal counterfactual model of discrimination: race, as a sensitive attribute, is toggled white or black atop a causal diagram, and its effect cascades down various paths leading to the outcome variable. But, this time, the causal fairness technician does more than measure and limit the direct effect of race on the final outcome; she now also measures effects of race that are mediated by other attributes, keeping only those effects carried along paths deemed “fair.”
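
To make the mechanics concrete, here is a minimal sketch of the idea in a toy linear structural causal model. The variables (a sensitive attribute A, a mediator Z on a pathway deemed unfair, a mediator Q on a pathway deemed fair, and an outcome Y) and all coefficients are illustrative assumptions, not the model or code of the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Draw exogenous noise once so the same "individuals" appear in every counterfactual world.
u_z, u_q, u_y = rng.normal(0, 1, (3, n))

def Z(a): return 2.0 * a + u_z               # mediator on the pathway deemed unfair (e.g., a residential proxy)
def Q(a): return 0.5 * a + u_q               # mediator on the pathway deemed fair (e.g., a job-relevant skill)
def Y(q, z): return 1.0 * q + 3.0 * z + u_y  # outcome

a0, a1 = np.zeros(n), np.ones(n)

# Total effect of toggling the sensitive attribute: every descendant responds.
total = (Y(Q(a1), Z(a1)) - Y(Q(a0), Z(a0))).mean()

# Path-specific effect that keeps only the A -> Q -> Y pathway:
# A is toggled where it feeds Q, but held at its baseline where it feeds Z.
fair_only = (Y(Q(a1), Z(a0)) - Y(Q(a0), Z(a0))).mean()

print(f"total effect of toggling A: {total:.2f}")
print(f"effect retained along the 'fair' path only: {fair_only:.2f}")
```

The "correction" in this picture amounts to replacing Z's dependence on the sensitive attribute with its baseline value, so that only the pathway judged fair carries any effect of race to the outcome.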

⤷ Full Article

October 11th, 2019

Disparate Causes, pt. I

The shortcomings of causal and counterfactual thinking about racial discrimination

Legal claims of disparate impact discrimination go something like this: A company uses some system (e.g., hiring test, performance review, risk assessment tool) in a way that impacts people. Somebody sues, arguing that it has a disproportionate adverse effect on racial minorities, showing initial evidence of disparate impact. The company, in turn, defends itself by arguing that the disparate impact is justified: their system sorts people by characteristics that—though incidentally correlated with race—are relevant to its legitimate business purposes. Now, the person who brought the discrimination claim is tasked with coming up with an alternative—that is, a system that has less disparate impact but still fulfills the company’s legitimate business interest. If the plaintiff finds such an alternative, it must be adopted. If they don’t, the courts have to, in theory, decide how to trade off disparate impact against legitimate business purpose.

Much of the research in algorithmic fairness, a discipline concerned with the various discriminatory, unfair, and unjust impacts of algorithmic systems, has taken cues from this legal approach—hence, the deluge of parity-based “fairness” metrics mirroring disparate impact that have received encyclopedic treatment by computer scientists, statisticians, and the like in the past few years. Armed with intuitions closely linked with disparate impact litigation, scholars further formalized the tradeoffs between something like justice and something like business purpose—concepts that crystallized in the literature under the banners of “fairness” and “efficiency.”
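
As a rough illustration of what one such parity-based metric looks like in practice, the sketch below computes group selection rates and their ratio, the quantity behind the “four-fifths rule” often invoked in disparate impact analysis. The data, group labels, and function names are hypothetical.

```python
import numpy as np

def selection_rates(decisions: np.ndarray, group: np.ndarray) -> dict:
    """Share of positive decisions within each group."""
    return {g: float(decisions[group == g].mean()) for g in np.unique(group)}

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Lowest group selection rate divided by the highest (1.0 means perfect parity)."""
    rates = selection_rates(decisions, group)
    return min(rates.values()) / max(rates.values())

# Hypothetical binary hiring decisions for two groups of six applicants each.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0])
group = np.array(["a"] * 6 + ["b"] * 6)

print(selection_rates(decisions, group))         # group "a" is selected ~0.67 of the time, group "b" ~0.17
print(disparate_impact_ratio(decisions, group))  # 0.25, well below the 0.8 threshold commonly cited
```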

⤷ Full Article

September 26th, 2019

Counter-Optimizing the Crisis

An interview with Seda Gürses and Bekah Overdorf

Software that structures increasingly detailed aspects of contemporary life is built for optimization. These programs require a mapping of the world in a way that is computationally legible, and translating the messy world into one that makes sense to a computer is imperfect. Even in the most ideal conditions, optimization systems—constrained, more often than not, by the imperatives of profit-generating corporations—are designed to ruthlessly maximize one metric at the expense of others. When these systems are optimizing over large populations of people, some people lose out in the calculation.
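
A stripped-down illustration of that single-metric logic, with all names and numbers invented for the sketch rather than taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float              # the one metric the optimizer sees
    residential_traffic: float  # a cost borne by non-users, invisible to the objective

def optimize(candidates: list[Route]) -> Route:
    """Pick whichever candidate minimizes the single target metric."""
    return min(candidates, key=lambda r: r.minutes)

routes = [
    Route("highway", minutes=14.0, residential_traffic=0.0),
    Route("cut-through", minutes=11.0, residential_traffic=1.0),
]

best = optimize(routes)
print(best.name)                 # "cut-through": three minutes saved for the users being optimized for
print(best.residential_traffic)  # 1.0: the harm pushed onto the neighborhood never enters the objective
```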

Official channels for redress offer little help: alleviating out-group concerns is by necessity counter to the interests of the optimization system and its target customers. Like someone who lives in a flight path but has never bought a plane ticket complaining about the noise to an airline company, the collateral damage of optimization has little leverage over the system provider unless the law can be wielded against it. Beyond the time-intensive and uncertain path of traditional advocacy, what recourse is available for those who find themselves in the path of optimization?

In their 2018 paper POTs: Protective Optimization Technologies (updated version soon forthcoming at this same link), authors Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, and Seda Gürses offer some answers. Eschewing the dominant frameworks used to analyze and critique digital optimization systems, the authors offer an analysis that illuminates fundamental problems with both optimization systems and the proliferating literature that attempts to solve them.

POTs—the analytical framework and the technology—suggest that the inevitable assumptions, flaws, and rote nature of optimization systems can be exploited to produce “solutions that enable optimization subjects to defend from unwanted consequences.” Despite their overbearing nature, optimization systems typically require some degree of user input; POTs use this as a wedge for individuals and groups marginalized by the optimization system to influence its operation. In so doing, POTs find a way to restore what optimization seeks to hide, revealing that what gets laundered as technical problems are actually political ones.

Below, we speak with Seda Gürses and Bekah Overdorf, two members of the POTs team, who discuss the definition of an optimization system, the departures the POTs approach makes from the digital ethics literature, and the design and implementation of POTs in the wild.

⤷ Full Article

August 23rd, 2019

Is it impossible to be fair?

Statistical prediction is increasingly pervasive in our lives. Can it be fair?

The Allegheny Family Screening Tool is a computer program that predicts whether a child will later have to be placed into foster care. It's been used in Allegheny County, Pennsylvania, since August 2016. When a child is referred to the county as at risk of abuse or neglect, the program analyzes administrative records and then outputs a score from 1 to 20, where a higher score represents a higher risk that the child will later have to be placed into foster care. Child welfare workers use the score to help them decide whether to investigate a case further.

Travel search engines like Kayak or Google Flights predict whether a flight will go up or down in price. Farecast, which launched in 2004 and was acquired by Microsoft a few years later, was the first to offer such a service. When you look up a flight, these search engines analyze price records and then predict whether the flight's price will go up or down over some time interval, perhaps along with a measure of confidence in the prediction. People use the predictions to help them decide when to buy a ticket.

⤷ Full Article

August 1st, 2019

Decentralize What?

Can you fix political problems with new web infrastructures?

The internet’s early proliferation was steeped in cyber-utopian ideals. The circumvention of censorship and gatekeeping, digital public squares, direct democracy, revitalized civic engagement, the “global village”—these were all anticipated characteristics of the internet age, premised on the notion that digital communication would provide the necessary conditions for the world to change. In a dramatic reversal, we now associate the internet era with eroding privacy, widespread surveillance, state censorship, asymmetries of influence, and monopolies of attention—exacerbations of the exact problems it purported to fix.

Such problems are frequently understood as being problems of centralization—both infrastructural and political. If mass surveillance and censorship are problems of combined infrastructural and political centralization, then decentralization looks like a natural remedy. In the context of the internet, decentralization generally refers to peer-to-peer (p2p) technologies. In this post, I consider whether infrastructural decentralization is an effective way to counter existing regimes of political centralization. The cyber-utopian dream failed to account for the exogenous pressures that would shape the internet—the rosy narrative of infrastructural decentralization seems to be making a similar misstep.

⤷ Full Article

July 3rd, 2019

The Politics of Machine Learning, pt. II

The uses of algorithms discussed in the first part of this article vary widely, from hiring decisions and bail assignment to political campaigns and military intelligence.

Across all these applications of machine learning methods, there is a common thread: Data on individuals is used to treat different individuals differently. In the past, broadly speaking, such commercial and government activities targeted everyone in a given population more or less similarly—the same advertisements, the same prices, the same political slogans. Now, more and more, everyone gets personalized advertisements, personalized prices, and personalized political messages. New inequalities are created and new fragmentations of discourse are introduced.

Is that a problem? Well, it depends. I will discuss two types of concerns. The first type, relevant in particular to news and political messaging, is that the differentiation of messages is by itself a source of problems.

⤷ Full Article

June 27th, 2019

The Politics of Machine Learning, pt. I

Terminology like "machine learning," "artificial intelligence," "deep learning," and "neural nets" is pervasive: businesses, universities, intelligence agencies, and political parties are all anxious to maintain an edge in the use of these technologies. Statisticians might be forgiven for thinking that this hype simply reflects the success of the marketing speak of Silicon Valley entrepreneurs vying for venture capital. All these fancy new terms are just describing something statisticians have been doing for at least two centuries.

But recent years have indeed seen impressive new achievements for various prediction problems, which are finding applications in ever more consequential aspects of society: advertising, incarceration, insurance, and war are all increasingly defined by the capacity for statistical prediction. And there is a crucial thread that ties these widely disparate applications of machine learning together: the use of data on individuals to treat different individuals differently. In this two-part post, Max Kasy surveys the politics of the machine learning landscape.

⤷ Full Article

May 31st, 2019

Copyright Humanism

It's by now common wisdom that American copyright law is burdensome, excessive, and failing to promote the ideals that protection ought to. Too many things, critics argue, are subject to copyright protections, and the result is an inefficient legal morass that serves few benefits to society and has failed to keep up with the radical transformations in technology and culture of the last several decades. To reform and streamline our copyright system, the thinking goes, we need to get rid of our free-for-all regime of copyrightability and institute reasonable barriers to protection.

But what if these commentators are missing the forest for the trees, and America's frequently derided copyright regime is actually particularly well-suited to the digital age? Could copyright protections—applied universally at the moment of authorship—provide a level of autonomy that matches the democratization of authorship augured by the digital age?

⤷ Full Article

October 18th, 2018

Machine Ethics, Part One: An Introduction and a Case Study

The past few years have made abundantly clear that the artificially intelligent systems that organizations increasingly rely on to make important decisions can exhibit morally problematic behavior if not properly designed. Facebook, for instance, uses artificial intelligence to screen targeted advertisements for violations of applicable laws or its community standards. While offloading the sales process to automated systems allows Facebook to cut costs dramatically, design flaws in these systems have facilitated the spread of political misinformation, malware, hate speech, and discriminatory housing and employment ads. How can the designers of artificially intelligent systems ensure that they behave in ways that are morally acceptable—ways that show appropriate respect for the rights and interests of the humans they interact with?

The nascent field of machine ethics seeks to answer this question by conducting interdisciplinary research at the intersection of ethics and artificial intelligence. This series of posts will provide a gentle introduction to this new field, beginning with an illustrative case study taken from research I conducted last year at the Center for Artificial Intelligence in Society (CAIS). CAIS is a joint effort between the Suzanne Dworak-Peck School of Social Work and the Viterbi School of Engineering at the University of Southern California, and is devoted to “conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world.” This makes the center’s efforts part of a broader movement in applied artificial intelligence commonly known as “AI for Social Good,” the goal of which is to address pressing and hitherto intractable social problems through the application of cutting-edge techniques from the field of artificial intelligence.

⤷ Full Article