↳ Digital Ethics

November 10th, 2018

Two Figures

NEW UBI REPORTS | ELECTORAL VIOLENCE | BEYOND GDP

DISCRETION DIFFERENTIAL

On the differing ways the law conceives of privacy (and its violation)

In a 2004 YALE LAW JOURNAL article, comparative legal scholar JAMES Q. WHITMAN explores differing cultural and legal postures toward privacy. From his comparison he draws a slim taxonomy: privacy rights are founded either on dignity (throughout Western Europe) or on liberty (in the United States). The distinction—while far from perfectly neat, either historically or at present—raises questions about privacy law that scholars and legislators are now working out as they design and implement digital governance procedures. From the paper:

"If privacy is a universal human need that gives rise to a fundamental human right, why does it take such disconcertingly diverse forms? This is a hard problem for privacy advocates who want to talk about the values of ‘personhood,’ harder than they typically acknowledge. It is a hard problem because of the way they usually try to make their case: Overwhelmingly, privacy advocates rely on what moral philosophers call ‘intuitionist’ arguments. In their crude form, these sorts of arguments suppose that human beings have a direct, intuitive grasp of right and wrong—an intuitive grasp that can guide us in our ordinary ethical decisionmaking. The typical privacy article rests its case precisely on an appeal to its reader’s intuitions and anxieties about the evils of privacy violations. Imagine invasions of your privacy, the argument runs. Do they not seem like violations of your very personhood?

Continental privacy protections are, at their very core, a form of protection of a right to respect and personal dignity. The core continental privacy rights are rights to one’s image, name, and reputation, and what Germans call the right to informational self-determination—the right to control the sorts of information disclosed about oneself. They are all rights to control your public image.

By contrast, America is much more oriented to values of liberty. At its conceptual core, the American right to privacy is the right to freedom from intrusions by the state, especially in one’s own home."

Link to the paper.

  • Forthcoming in the Harvard Journal of Law & Technology, an in-depth review of the significance of the Supreme Court's June decision in Carpenter v. United States: "Carpenter holds that the police may not collect historical [cellphone location tracking data] from a cellphone provider without a warrant. This is the opinion most privacy law scholars and privacy advocates have been awaiting for decades." Link.
  • An excellent repository of scholarship on the GDPR—the new European data protection law—from the journal International Data Privacy Law. Link.
  • Danielle Citron and Daniel Solove's 2016 paper explores how US courts have dealt with legal standards of harm—anxiety or risk—in cases of personal data breaches. Link. See also Ryan Calo's 2010 article "The Boundaries of Privacy Harm." Link.
  • Khiara Bridges' 2017 book The Poverty of Privacy Rights provides a corrective to universalist claims to a right to privacy: "Poor mothers actually do not possess privacy rights. This is the book’s strong claim." Link to the book page, link to the introductory chapter.
⤷ Full Article

October 27th, 2018

The Seasons

LAW, ECON, IDEOLOGY | NEW JFI PUBLICATION

EFFICIENT DISPERSION

Applying quantitative methods to examine the spread of ideology through judicial opinions

In a recent paper, co-authors ELLIOTT ASH, DANIEL L. CHEN, and SURESH NAIDU provide a quantitative analysis of the judicial effects of the law and economics movement. Comparing attendance at seminars run by the Manne Economics Institute for Federal Judges from 1976 to 1999 against 380,000 circuit court cases and one million criminal sentencing decisions in district courts, the authors identify both the effects on judicial decision-making and the dispersion of economic language and reasoning throughout the federal judiciary.

“Economics-trained judges significantly impact U.S. judicial outcomes. They render conservative votes and verdicts, are against regulation and criminal appeals, and mete harsher criminal sentences and deterrence reasoning. When ideas move from economics into law, ideas have consequences. Economics likely changed how judges perceived the consequences of their decisions. If you teach judges that markets work, they deregulate government. If you teach judges that deterrence works, they become harsher to criminal defendants. Economics training focusing on efficiency may have crowded out other constitutional theories of interpretation. Economics training accounts for a substantial portion of the conservative shift in the federal judiciary since 1976.”

Link to the paper.
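To make the empirical design concrete, here is a minimal sketch of the kind of two-way fixed effects comparison the paper's setup suggests: judge-level outcomes observed before and after seminar attendance, with judge and year fixed effects. All data, variable names, and magnitudes below are invented for illustration; this is not the authors' actual specification.

```python
# Hypothetical sketch: effect of attending a Manne seminar on voting,
# estimated with judge and year fixed effects. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_judges, years = 200, range(1976, 2000)
df = pd.DataFrame([(j, t) for j in range(n_judges) for t in years],
                  columns=["judge", "year"])

# Half the judges attend a seminar in some year; post_manne = 1 afterward.
attendees = rng.choice(n_judges, n_judges // 2, replace=False)
attend_year = {int(j): int(rng.integers(1976, 2000)) for j in attendees}
df["post_manne"] = [
    int(j in attend_year and t >= attend_year[j])
    for j, t in zip(df["judge"], df["year"])
]

# Simulated outcome: share of conservative votes, with a built-in effect.
df["conservative_share"] = (0.50 + 0.05 * df["post_manne"]
                            + rng.normal(0, 0.10, len(df)))

# Two-way fixed effects OLS, clustered by judge.
fit = smf.ols("conservative_share ~ post_manne + C(judge) + C(year)",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["judge"]})
print(fit.params["post_manne"])  # recovers roughly the simulated 0.05
```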

  • Henry Farrell at Crooked Timber picks out some additional highlights. Link.
  • A Washington Post article from January 1980 provides some contemporaneous context on the Manne seminars. Link.
  • In a relevant 2015 paper, Pedro Bordalo, Nicola Gennaioli, and Andrei Shleifer apply salience theory to model judicial decision-making: "The context of the judicial decision, which is comparative by nature, shapes which aspects of the case stand out and draw the judge’s attention. By focusing judicial attention on such salient aspects of the case, legally irrelevant information can affect judicial decisions." Link.
⤷ Full Article

October 20th, 2018

Action Plan

ZONING FAMILIES | BIG DATA BLACKLISTING | NEW JFI PUBLICATION

WHAT IS A FAMILY?

Competing definitions of the term have vast policy implications

The formal definition of family is “blood, marriage, or adoption,” but that leaves out many possible arrangements, including families of unmarried people, foster children, co-ops, and, until 2015, gay partnerships. In the 1970s, family law became more open to “functional families” outside the formal definition, while zoning law kept to the strictly formal. Legal historian KATE REDBURN writes, “These contradictions leave critical family law doctrines unstable in thirty-two states.”

In a recent working paper, Redburn examines how these changes came to be, and looks more generally at how legal regimes exist within connected networks and influence each other despite traditional boundaries of scale (local, state, etc.) and subject (family law, zoning law):

“Viewed through a broader lens, this story might suggest lessons for law and social movements. While progressives oriented their campaigns at the state level, homeowners imbued local governance with conservative social politics in defense of their prejudices and property values. Neither movement, nor the judges adjudicating their case, nor the legislators revising state and local statutes, paid adequate attention to the interlocking nature of legal doctrines, rendering their movements less successful than they have previously appeared. Though we tend to think of legal fields as distinct regimes, ignoring the multifaceted ways that doctrines overlap, connect, and contradict each other can have perilous consequences. Their blind spot has grown to encompass millions of Americans.”

Redburn’s case study provides ample evidence that micro-level legal conflicts can uphold and alter legal understandings:

“Motivated constituencies of voters and their elected representatives can produce legal change quite out of sync with social trends. Such was the case in the zoning definition of family in the late 1960s and early 1970s. Despite social change resulting in more functional families, protective homeowners and the conservative movement successfully shifted zoning law away from the functional family approach.”

⤷ Full Article

October 18th, 2018

Machine Ethics, Part One: An Introduction and a Case Study

The past few years have made abundantly clear that the artificially intelligent systems that organizations increasingly rely on to make important decisions can exhibit morally problematic behavior if not properly designed. Facebook, for instance, uses artificial intelligence to screen targeted advertisements for violations of applicable laws or its community standards. While offloading the sales process to automated systems allows Facebook to cut costs dramatically, design flaws in these systems have facilitated the spread of political misinformation, malware, hate speech, and discriminatory housing and employment ads. How can the designers of artificially intelligent systems ensure that they behave in ways that are morally acceptable: ways that show appropriate respect for the rights and interests of the humans they interact with?

The nascent field of machine ethics seeks to answer this question by conducting interdisciplinary research at the intersection of ethics and artificial intelligence. This series of posts will provide a gentle introduction to this new field, beginning with an illustrative case study taken from research I conducted last year at the Center for Artificial Intelligence in Society (CAIS). CAIS is a joint effort between the Suzanne Dworak-Peck School of Social Work and the Viterbi School of Engineering at the University of Southern California, and is devoted to “conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world.” This makes the center’s efforts part of a broader movement in applied artificial intelligence commonly known as “AI for Social Good,” the goal of which is to address pressing and hitherto intractable social problems through the application of cutting-edge techniques from the field of artificial intelligence.

⤷ Full Article

September 22nd, 2018

Red Wall

AI LABOR CHAIN | BIOETHICS

MATERIAL UNDERSTANDING

The full resource stack needed for Amazon's Echo to "turn on the lights"

In a new project, KATE CRAWFORD and VLADAN JOLER present an "anatomical case study" of the human labor, data, and planetary resources necessary for the functioning of an Amazon Echo. A 21-part essay accompanies an anatomical map, making the case for the importance of understanding the complex resource networks that make up the "technical infrastructures" threaded through daily life:

"At this moment in the 21st century, we see a new form of extractivism that is well underway: one that reaches into the furthest corners of the biosphere and the deepest layers of human cognitive and affective being. Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with error. Yet they are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth, and knowledge are distributed.

The stack that is required to interact with an Amazon Echo goes well beyond the multi-layered 'technical stack' of data modeling, hardware, servers and networks. The full stack reaches much further into capital, labor and nature, and demands an enormous amount of each. Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch."

Link to the full essay and map.

  • More on the nuanced ethical dilemmas of digital technology: "Instead of being passive victims of (digital) technology, we create technology and the material, conceptual, or ethical environments, possibilities, or affordances for its production or use; this makes us also responsible for the space of possibilities that we create." Link.
  • As shared in our April newsletter, Tim Hwang discusses how hardware influences the progress and development of AI. Link.
⤷ Full Article

July 21st, 2018

High Noon

ACTUARIAL RISK ASSESSMENT | SOCIAL SCIENCE DATA

ALTERNATIVE ACTUARY

A history of risk assessment, and some proposed alternative methods

A 2002 paper by ERIC SILVER and LISA L. MILLER on actuarial risk assessment tools provides a history of statistical prediction in the criminal justice context, and issues cautions now central to contemporary algorithmic fairness conversations:

"Much as automobile insurance policies determine risk levels based on the shared characteristics of drivers of similar age, sex, and driving history, actuarial risk assessment tools for predicting violence or recidivism use aggregate data to estimate the likelihood that certain strata of the population will commit a violent or criminal act. 

To the extent that actuarial risk assessment helps reduce violence and recidivism, it does so not by altering offenders and the environments that produced them but by separating them from the perceived law-abiding populations. Actuarial risk assessment facilitates the development of policies that intervene in the lives of citizens with little or no narrative of purpose beyond incapacitation. The adoption of risk assessment tools may signal the abandonment of a centuries-long project of using rationality, science, and the state to improve upon the social and economic progress of individuals and society."

Link to the paper.
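To see the actuarial logic in miniature, consider the toy model below: a classifier fit on group-level regularities assigns each new individual the estimated rate of the stratum that shares their characteristics, exactly the insurance-style aggregation the quote describes. The variables, data, and coefficients are all invented.

```python
# Toy actuarial risk tool: score an individual by the estimated rate of
# their stratum (age, priors, employment). All data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
age = rng.integers(18, 70, n)
priors = rng.integers(0, 10, n)
employed = rng.integers(0, 2, n)

# Simulated outcome loosely tied to priors and age.
p = 1 / (1 + np.exp(-(-2.0 + 0.25 * priors - 0.02 * age)))
y = rng.binomial(1, p)

X = np.column_stack([age, priors, employed])
tool = LogisticRegression().fit(X, y)

# The "actuarial" step: a new individual inherits their stratum's rate,
# not an individualized judgment about them.
print(tool.predict_proba([[25, 3, 0]])[0, 1])
```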

A more recent paper, presented at FAT* in 2018 and co-authored by CHELSEA BARABAS, KARTHIK DINAKAR, JOICHI ITO, MADARS VIRZA, and JONATHAN ZITTRAIN, makes several arguments reminiscent of Silver and Miller's work. The authors argue in favor of a causal inference framework for risk assessments, aimed at answering the question of "what interventions work":

"We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it's one of purpose.… Data-driven tools provide an immense opportunity for us to pursue goals of fair punishment and future crime prevention. But this requires us to move away from merely tacking on intervenable variables to risk covariates for predictive models, and towards the use of empirically-grounded tools to help understand and respond to the underlying drivers of crime, both individually and systemically."

Link to the paper. 
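The contrast between the two frameworks can be put in one line of arithmetic: the toy tool above asks who is risky, while the approach Barabas et al. advocate asks what an intervention does. A hypothetical randomized experiment makes the second question answerable with a difference in means; everything below is invented.

```python
# Hypothetical randomized trial of a pre-trial supportive intervention.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
treated = rng.binomial(1, 0.5, n)            # random assignment
y = rng.binomial(1, 0.30 - 0.07 * treated)   # re-arrest within two years

# Under randomization, a difference in means identifies the average
# treatment effect: the "does this intervention work?" question.
ate = y[treated == 1].mean() - y[treated == 0].mean()
print(f"Estimated change in re-arrest rate: {ate:+.3f}")
```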

  • In his 2007 book Against Prediction, lawyer and theorist Bernard Harcourt provided detailed accounts and critiques of the use of actuarial methods throughout the criminal legal system. In place of prediction, Harcourt proposes a conceptual and practical alternative: randomization. From a 2005 paper on the same topic: "Instead of embracing the actuarial turn in criminal law, we should rather celebrate the virtues of the random: randomization, it turns out, is the only way to achieve a carceral population that reflects the offending population. As a form of random sampling, randomization in policing has significant positive value: it reinforces the central moral intuition in the criminal law that similarly situated individuals should have the same likelihood of being apprehended if they offend—regardless of race, ethnicity, gender or class." Link to the paper. (And link to another paper of Harcourt's in the Federal Sentencing Reporter, "Risk as a Proxy for Race.") 
  • A recent paper by Megan Stevenson assesses risk assessment tools: "Despite extensive and heated rhetoric, there is virtually no evidence on how use of this 'evidence-based' tool affects key outcomes such as incarceration rates, crime, or racial disparities. The research discussing what 'should' happen as a result of risk assessment is hypothetical and largely ignores the complexities of implementation. This Article is one of the first studies to document the impacts of risk assessment in practice." Link.
  • A compelling piece of esoterica cited in Harcourt's book: a doctoral thesis by Deborah Rachel Coen on the "probabilistic turn" in 19th century imperial Austria. Link.
⤷ Full Article

July 14th, 2018

Traveling Light

DATA OWNERSHIP BY CONSUMERS | CLIMATE AND CULTURAL CHANGE

DATA IS NONRIVAL

Considerations on data sharing and data markets 

CHARLES I. JONES and CHRISTOPHER TONETTI contribute to the “new but rapidly-growing field” known as the economics of data:

“We are particularly interested in how different property rights for data determine its use in the economy, and thus affect output, privacy, and consumer welfare. The starting point for our analysis is the observation that data is nonrival. That is, at a technological level, data is not depleted through use. Most goods in economics are rival: if a person consumes a kilogram of rice or an hour of an accountant’s time, some resource with a positive opportunity cost is used up. In contrast, existing data can be used by any number of firms or people simultaneously, without being diminished. Consider a collection of a million labeled images, the human genome, the U.S. Census, or the data generated by 10,000 cars driving 10,000 miles. Any number of firms, people, or machine learning algorithms can use this data simultaneously without reducing the amount of data available to anyone else. The key finding in our paper is that policies related to data have important economic consequences.”

After modeling a few different data-ownership possibilities, the authors conclude, “Our analysis suggests that giving the data property rights to consumers can lead to allocations that are close to optimal.” Link to the paper.
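The nonrivalry at the heart of the argument can be stated in a stylized formula (ours, for illustration, not the authors' actual model). Suppose each firm i combines a rival input x_i with the economy-wide data stock D:

```latex
% Stylized illustration of nonrival data (not Jones and Tonetti's model).
\[
  y_i = A\, x_i^{\alpha} D^{\gamma},
  \qquad \sum_i x_i \le X \quad \text{(rival: uses deplete the stock)},
\]
% while the same D enters every firm's production simultaneously.
% Wider use of D raises aggregate output without depleting it,
% which is why property-rights arrangements over data matter so much.
```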

  • Jones and Tonetti cite an influential 2015 paper by Alessandro Acquisti, Curtis R. Taylor, and Liad Wagman on “The Economics of Privacy”: “In digital economies, consumers' ability to make informed decisions about their privacy is severely hindered, because consumers are often in a position of imperfect or asymmetric information regarding when their data is collected, for what purposes, and with what consequences.” Link.
  • For more on data populi, Ben Tarnoff has a general-interest overview in Logic Magazine, including mention of the data dividend and a comparison to the Alaska Permanent Fund. Tarnoff uses the oil industry as an analogy throughout: “In the oil industry, companies often sign ‘production sharing agreements’ (PSAs) with governments. The government hires the company as a contractor to explore, develop, and produce the oil, but retains ownership of the oil itself. The company bears the cost and risk of the venture, and in exchange receives a portion of the revenue. The rest goes to the government. Production sharing agreements are particularly useful for governments that don’t have the machinery or expertise to exploit a resource themselves.” Link.
⤷ Full Article

July 7th, 2018

Quodlibet

RANDOMIZED CONTROLLED TRIALS | HIERARCHY & DESPOTISM

EVIDENCE PUZZLES

The history and politics of RCTs 

In a 2016 working paper, JUDITH GUERON recounts and evaluates the history of randomized controlled trials (RCTs) in the US, drawing on her own experience developing welfare experiments at MDRC and HHS:

“To varying degrees, the proponents of welfare experiments at MDRC and HHS shared three mutually reinforcing goals. The first was to obtain reliable and—given the long and heated controversy about welfare reform—defensible evidence of what worked and, just as importantly, what did not. Over a pivotal ten years from 1975 to 1985, these individuals became convinced that high-quality RCTs were uniquely able to produce such evidence and that there was simply no adequate alternative. Thus, their first challenge was to demonstrate feasibility: that it was ethical, legal, and possible to implement this untried—and at first blush to some people immoral—approach in diverse conditions. The other two goals sprang from their reasons for seeking rigorous evidence. They were not motivated by an abstract interest in methodology or theory; they wanted to inform policy and make government more effective and efficient. As a result, they sought to make the body of studies useful, by assuring that it addressed the most significant questions about policy and practice, and to structure the research and communicate the findings in ways that would increase the potential that they might actually be used." 

⤷ Full Article

June 23rd, 2018

Yielding Stone

FAIRNESS IN ALGORITHMIC DECISION-MAKING | ADMINISTRATIVE DATA ACCESS

VISIBLE CONSTRAINT

Including protected variables can make algorithmic decision-making more fair 

A recent paper co-authored by JON KLEINBERG, JENS LUDWIG, SENDHIL MULLAINATHAN, and ASHESH RAMBACHAN addresses algorithmic bias, countering the "large literature that tries to 'blind' the algorithm to race to avoid exacerbating existing unfairness in society":  

"This perspective about how to promote algorithmic fairness, while intuitive, is misleading and in fact may do more harm than good. We develop a simple conceptual framework that models how a social planner who cares about equity should form predictions from data that may have potential racial biases. Our primary result is exceedingly simple, yet often overlooked: a preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (such as setting a different threshold for different groups) but the estimated prediction function itself should not change. Absent legal constraints, one should include variables such as gender and race for fairness reasons.

Our argument collects together and builds on existing insights to contribute to how we should think about algorithmic fairness.… We empirically illustrate this point for the case of using predictions of college success to make admissions decisions. Using nationally representative data on college students, we underline how the inclusion of a protected variable—race in our application—not only improves predicted GPAs of admitted students (efficiency), but also can improve outcomes such as the fraction of admitted students who are black (equity).

Across a wide range of estimation approaches, objective functions, and definitions of fairness, the strategy of blinding the algorithm to race inadvertently detracts from fairness."

Read the full paper here.
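A minimal sketch of the paper's two-step logic, with invented data and thresholds: estimate the best predictor with the protected attribute included, and let equity preferences enter only at the decision stage.

```python
# Sketch of "don't blind the estimator": include the protected attribute
# when predicting, and express equity at the decision threshold instead.
# All variables, data, and thresholds here are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 20_000
group = rng.binomial(1, 0.3, n)          # protected attribute (0/1)
score = rng.normal(60 - 5 * group, 10)   # biased measured covariate
gpa = 0.03 * score + 0.15 * group + rng.normal(0, 0.3, n)

# Step 1 (prediction): include the protected attribute in the estimator.
X = np.column_stack([score, group])
pred = LinearRegression().fit(X, gpa).predict(X)

# Step 2 (decision): equity preferences enter only here, e.g. via
# group-specific admission thresholds on the prediction.
cutoff = np.where(group == 1, 1.90, 2.00)
admit = pred >= cutoff
print("Admitted share by group:",
      admit[group == 0].mean(), admit[group == 1].mean())
```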

⤷ Full Article

June 16th, 2018

Phantom Perspective

STUDENT LIST DATA | ADJUSTING FOR AUTOMATION

A new report from Fordham CLIP sheds light on the market for student list data from higher education institutions

From the paper authored by N. CAMERON RUSSELL, JOEL R. REIDENBERG, ELIZABETH MARTIN, and THOMAS NORTON of the FORDHAM CENTER ON LAW AND INFORMATION POLICY: 

“Student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services.

This information is being collected, marketed, and sold about individuals because they are students."

Drawing from publicly available sources, public records requests to educational institutions, and marketing materials sent to high school students gathered over several years, the study paints an unsettling portrait of the murky market for student list data, and makes recommendations for regulatory response:

  1. The commercial marketplace for student information should not be a subterranean market. Parents, students, and the general public should be able to reasonably know (i) the identities of student data brokers, (ii) what lists and selects they are selling, and (iii) where the data for student lists and selects derives. A model like the Fair Credit Reporting Act (FCRA) should apply to compilation, sale, and use of student data once outside of schools and FERPA protections. If data brokers are selling information on students based on stereotypes, this should be transparent and subject to parental and public scrutiny.
  2. Brokers of student data should be required to follow reasonable procedures to assure maximum possible accuracy of student data. Parents and emancipated students should be able to gain access to their student data and correct inaccuracies. Student data brokers should be obligated to notify purchasers and other downstream users when previously-transferred data is proven inaccurate and these data recipients should be required to correct the inaccuracy.
  3. Parents and emancipated students should be able to opt out of uses of student data for commercial purposes unrelated to education or military recruitment.
  4. When surveys are administered to students through schools, data practices should be transparent, students and families should be informed as to any commercial purposes of surveys before they are administered, and there should be compliance with other obligations under the Protection of Pupil Rights Amendment (PPRA).
⤷ Full Article