August 1st, 2019

Decentralize What?

Can you fix political problems with new web infrastructures?

The internet's early proliferation was steeped in cyber-utopian ideals. The circumvention of censorship and gatekeeping, digital public squares, direct democracy, revitalized civic engagement, the “global village”—these were all anticipated characteristics of the internet age, premised on the notion that digital communication would provide the necessary conditions for the world to change. In a dramatic reversal, we now associate the internet era with eroding privacy, widespread surveillance, state censorship, asymmetries of influence, and monopolies of attention—exacerbations of the exact problems it purported to fix.

Such problems are frequently understood as being problems of centralization—both infrastructural and political. If mass surveillance and censorship are problems of combined infrastructural and political centralization, then decentralization looks like a natural remedy. In the context of the internet, decentralization generally refers to peer-to-peer (p2p) technologies. In this post, I consider whether infrastructural decentralization is an effective way to counter existing regimes of political centralization. The cyber-utopian dream failed to account for the exogenous pressures that would shape the internet—the rosy narrative of infrastructural decentralization seems to be making a similar misstep.

 Full Article

July 3rd, 2019

The Politics of Machine Learning, pt. II

The uses of algorithms discussed in the first part of this article vary widely, from hiring decisions and bail assignment to political campaigns and military intelligence.

Across all these applications of machine learning methods, there is a common thread: data on individuals is used to treat different individuals differently. In the past, broadly speaking, such commercial and government activities targeted everyone in a given population more or less uniformly—the same advertisements, the same prices, the same political slogans. Increasingly, everyone gets personalized advertisements, personalized prices, and personalized political messages. New inequalities are created and new fragmentations of discourse are introduced.
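The mechanism described above, individual data driving differential treatment, can be sketched in a few lines. Everything here (the "engagement" feature, the pricing rule, the names) is invented for illustration and not drawn from the article:

```python
# Toy contrast between uniform and personalized treatment.

def uniform_price(population):
    # Everyone sees the same price, regardless of who they are.
    return {p["name"]: 10.0 for p in population}

def personalized_price(population):
    # Each person's price is tilted by their individual data. A
    # hypothetical "engagement" score stands in for the behavioral
    # features a real model would use.
    return {p["name"]: round(10.0 * (1 + 0.5 * p["engagement"]), 2)
            for p in population}

people = [
    {"name": "ana", "engagement": 0.0},
    {"name": "ben", "engagement": 0.8},
]

print(uniform_price(people))       # identical prices
print(personalized_price(people))  # prices diverge with the data
```

The same data that lets a seller tailor prices lets a campaign tailor messages; the fragmentation the post describes is this divergence repeated across a population.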

Is that a problem? Well, it depends. I will discuss two types of concerns. The first type, relevant in particular to news and political messaging, is that the differentiation of messages is by itself a source of problems.

 Full Article

June 27th, 2019

The Politics of Machine Learning, pt. I

Terminology like "machine learning," "artificial intelligence," "deep learning," and "neural nets" is pervasive: businesses, universities, intelligence agencies, and political parties are all anxious to maintain an edge in the use of these technologies. Statisticians might be forgiven for thinking that the hype simply reflects the marketing speak of Silicon Valley entrepreneurs vying for venture capital: all these fancy new terms describe something statisticians have been doing for at least two centuries.

But recent years have indeed seen impressive new achievements on various prediction problems, which are finding applications in ever more consequential aspects of society: advertising, incarceration, insurance, and war are all increasingly defined by the capacity for statistical prediction. And there is a crucial thread that ties these widely disparate applications of machine learning together: the use of data on individuals to treat different individuals differently. In this two-part post, Max Kasy surveys the politics of the machine learning landscape.
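The continuity with classical statistics can be made concrete with a minimal sketch: logistic regression, a method with nineteenth-century roots, fit by gradient ascent on a tiny invented dataset. The data, learning rate, and step count are all illustrative assumptions:

```python
import math

def predict(w, b, x):
    # Logistic (sigmoid) model: probability that y = 1 given x.
    return 1 / (1 + math.exp(-(w * x + b)))

def fit(data, steps=5000, lr=0.1):
    # Maximum-likelihood fit by stochastic gradient ascent.
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in data:
            p = predict(w, b, x)
            w += lr * (y - p) * x  # gradient of the log-likelihood in w
            b += lr * (y - p)      # gradient in b
    return w, b

# x: some individual attribute; y: the binary outcome to be predicted.
data = [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 1)]
w, b = fit(data)
print(predict(w, b, 0.5), predict(w, b, 2.5))
```

Relabel x as "browsing history" and y as "will click," and this is the skeleton of an ad-targeting model; relabel them as "case features" and "will reoffend," and it is the skeleton of a risk score.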

 Full Article

May 31st, 2019

Copyright Humanism

It's by now common wisdom that American copyright law is burdensome, excessive, and fails to promote the ideals that copyright protection ought to serve. Too many things, critics argue, are subject to copyright protections, and the result is an inefficient legal morass that offers few benefits to society and has failed to keep up with the radical transformations in technology and culture of the last several decades. To reform and streamline our copyright system, the thinking goes, we need to get rid of our free-for-all regime of copyrightability and institute reasonable barriers to protection.

But what if these commentators are missing the forest for the trees, and America's frequently derided copyright regime is actually particularly well-suited to the digital age? Could copyright protections—applied universally at the moment of authorship—provide a level of autonomy that matches the democratization of authorship augured by the digital age?

 Full Article

March 9th, 2019

Incomplete Squares

CONTEXT ALLOCATION

Expanding the frame for formalizing fairness

In the digital ethics literature, there's a consistent back-and-forth between attempts at designing algorithmic tools that promote fair outcomes in decision-making processes, and critiques that enumerate the limits of such attempts. A December paper by ANDREW SELBST, danah boyd, SORELLE FRIEDLER, SURESH VENKATASUBRAMANIAN, and JANET VERTESI—delivered at FAT* 2019—contributes to the latter genre. The authors build on insights from Science and Technology Studies and offer a list of five "traps"—Framing, Portability, Formalism, Ripple Effect, and Solutionism—that fair-ML work is susceptible to as it aims for context-aware systems design. From the paper:

"We contend that by abstracting away the social context in which these systems will be deployed, fair-ML researchers miss the broader context, including information necessary to create fairer outcomes, or even to understand fairness as a concept. Ultimately, this is because while performance metrics are properties of systems in total, technical systems are subsystems. Fairness and justice are properties of social and legal systems like employment and criminal justice, not properties of the technical tools within. To treat fairness and justice as terms that have meaningful application to technology separate from a social context is therefore to make a category error, or as we posit here, an abstraction error."

In their critique of what is left out in the formalization process, the authors argue that, by "moving decisions made by humans and human institutions within the abstraction boundary, fairness of the system can again be analyzed as an end-to-end property of the sociotechnical frame." Link to the paper.
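To make the abstraction under critique concrete, here is one common formalization of fairness, demographic parity, in a few lines. This is a generic sketch, not the authors' framework, and its narrowness is the point: the metric sees only decisions and group labels, and nothing of the surrounding social system:

```python
# Demographic parity: compare rates of favorable decisions across groups.

def positive_rate(decisions, groups, target):
    # Share of favorable (1) decisions received by one group.
    outcomes = [d for d, g in zip(decisions, groups) if g == target]
    return sum(outcomes) / len(outcomes)

def parity_gap(decisions, groups):
    # Largest difference in favorable-decision rates between any two groups.
    rates = [positive_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = favorable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(decisions, groups))   # 0.75 - 0.25 = 0.5
```

A parity gap of zero certifies "fairness" only inside the abstraction boundary; the paper's argument is that such a property of the technical subsystem says nothing about the employment or criminal justice system around it.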

  • A brand new paper by HODA HEIDARI, VEDANT NANDA, and KRISHNA GUMMADI attempts to produce fairness metrics that look beyond "allocative equality," and directly grapples with the above mentioned "ripple effect trap." The authors "propose an effort-based measure of fairness and present a data-driven framework for characterizing the long-term impact of algorithmic policies on reshaping the underlying population." Link.
  • In the footnotes to the paper by Selbst et al., a 1997 chapter by early AI researcher turned sociologist Phil Agre. In the chapter: institutional and intellectual history of early AI; sociological study of the AI field at the time; Agre’s departure from the field; discussions of developing "critical technical practice." Link.
 Full Article

February 2nd, 2019

The Summit

LEGITIMATE ASSESSMENT

Moving beyond computational questions in digital ethics research

In the ever expanding digital ethics literature, a number of researchers have been advocating a turn away from enticing technical questions—how to mathematically define fairness, for example—and towards a more expansive, foundational approach to the ethics of designing digital decision systems.

A 2018 paper by RODRIGO OCHIGAME, CHELSEA BARABAS, KARTHIK DINAKAR, MADARS VIRZA, and JOICHI ITO exemplifies this approach. The authors dissect the three most-discussed categories in the digital ethics space—fairness, interpretability, and accuracy—and argue that current approaches to these topics may unwittingly amount to a legitimation system for unjust practices. From the introduction:

“To contend with issues of fairness and interpretability, it is necessary to change the core methods and practices of machine learning. But the necessary changes go beyond those proposed by the existing literature on fair and interpretable machine learning. To date, ML researchers have generally relied on reductive understandings of fairness and interpretability, as well as a limited understanding of accuracy. This is a consequence of viewing these complex ethical, political, and epistemological issues as strictly computational problems. Fairness becomes a mathematical property of classification algorithms. Interpretability becomes the mere exposition of an algorithm as a sequence of steps or a combination of factors. Accuracy becomes a simple matter of ROC curves.

In order to deepen our understandings of fairness, interpretability, and accuracy, we should avoid reductionism and consider aspects of ML practice that are largely overlooked. While researchers devote significant attention to computational processes, they often lack rigor in other crucial aspects of ML practice. Accuracy requires close scrutiny not only of the computational processes that generate models but also of the historical processes that generate data. Interpretability requires rigorous explanations of the background assumptions of models. And any claim of fairness requires a critical evaluation of the ethical and political implications of deploying a model in a specific social context.

Ultimately, the main outcome of research on fair and interpretable machine learning might be to provide easy answers to concerns of regulatory compliance and public controversy."
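The reduction the authors name, "accuracy becomes a simple matter of ROC curves," can be shown in miniature. The sketch below (invented scores and labels) computes AUC via the standard rank-comparison identity, with no reference to how the data or labels were produced, which is exactly the blind spot the paper describes:

```python
# AUC as the probability that a random positive example is scored
# above a random negative one (ties count half).

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
print(auc(scores, labels))  # 5 of 6 positive-negative pairs ranked correctly
```

Nothing in this calculation asks whether the labels encode historical bias or whether the model should be deployed at all; that is the "limited understanding of accuracy" at issue.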

 Full Article

January 12th, 2019

Worldviews

SOFT CYBER

Another kind of cybersecurity risk: the destruction of common knowledge

In a report for the Berkman Klein Center, Henry Farrell and Bruce Schneier identify a gap in current approaches to cybersecurity: national cybersecurity officials still base their thinking on Cold War-type threats, while technologists focus on hackers. Combining both perspectives, Farrell and Schneier make a wider argument about collective knowledge in democratic systems—and the dangers of its diminishment.

From the abstract:

"We demonstrate systematic differences between how autocracies and democracies work as information systems, because they rely on different mixes of common and contested political knowledge. Stable autocracies will have common knowledge over who is in charge and their associated ideological or policy goals, but will generate contested knowledge over who the various political actors in society are, and how they might form coalitions and gain public support, so as to make it more difficult for coalitions to displace the regime. Stable democracies will have contested knowledge over who is in charge, but common knowledge over who the political actors are, and how they may form coalitions and gain public support... democracies are vulnerable to measures that 'flood' public debate and disrupt shared decentralized understandings of actors and coalitions, in ways that autocracies are not."

One compelling metaresearch point from the paper is that autocratic governments receive analysis of information trade-offs, while democratic governments do not:

"There is existing research literature on the informational trade-offs or 'dictators' dilemmas' that autocrats face, in seeking to balance between their own need for useful information and economic growth, and the risk that others can use available information to undermine their rule. There is no corresponding literature on the informational trade-offs that democracies face between desiderata like availability and stability."

Full paper available on SSRN here.

  • Farrell summarizes the work on Crooked Timber: "In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society." Many substantive comments follow. Link.
  • Jeremy Wallace, an expert on authoritarianism, weighs in on Twitter: "Insiders, inevitably, have even more information about the contours of these debates. On the other hand, there's a lot that dictators don't know--about their own regimes, the threats that they are facing, etc." Link to Wallace's work on the topic.
  • Related reading recommended by Wallace, from Daniel Little, a 2016 paper on propaganda: "Surprisingly, the government tends to pick a high level of propaganda precisely when it is ineffective." Link.
 Full Article

December 15th, 2018

Space Dance

SCIENTIFIC RETURNS

A new book examines the economic and social impacts of R&D

Last May, we highlighted a report on workforce training and technological competitiveness which outlined trends in research and development investment. The report found that despite "total U.S. R&D funding reaching an all-time high in 2015," it's shifted dramatically to the private sector: "federal funding for R&D, which goes overwhelmingly to basic scientific research, has declined steadily and is now at the lowest level since the early 1950s." This week, we take a look at the returns to these investments and discuss how best to measure and trace the ways research spending affects economic activity and policy.

In the most recent Issues in Science and Technology, IRWIN FELLER reviews Measuring the Economic Value of Research, a technical monograph that discusses how best to measure the impact and value of research on policy objectives. Notably, the book highlights UMETRICS, a unified dataset from a consortium of universities "that can be used to better inform decisions relating to the level, apportionment, human capital needs, and physical facility requirements of public investments in R&D and the returns of these investments." While it represents a big data approach to program evaluation, Feller notes that UMETRICS' strength is in the "small data, theory-driven, and exacting construction of its constituent datasets," all of which offer insight into the importance of human capital in successful R&D:

"The book’s characterization of the ways in which scientific ideas are transmitted to and constitute value to the broader economy encompasses publications and patents, but most importantly includes the employment of people trained in food safety research. This emphasis on human capital reflects a core proposition of UMETRICS, namely the 'importance of people—students, principal investigators, postdoctoral researchers, and research staff—who conduct research, create new knowledge, and transmit that knowledge into the broader economy.'

In particular, the chapters on workforce dynamics relating to employment, earnings, occupations, and early careers highlight the nuanced, disaggregated, and policy-relevant information made possible by UMETRICS. These data provide much-needed reinforcement to the historic proposition advanced by research-oriented universities that their major contribution to societal well-being—economic and beyond—is through the joint production of research and graduate education, more than patents or other metrics of technology transfer or firm formation."

The UMETRICS dataset traces the social and economic returns of research universities and allows for a larger examination of universities as sociopolitical anchors and scientific infrastructure.

 Full Article

November 10th, 2018

Two Figures

NEW UBI REPORTS | ELECTORAL VIOLENCE | BEYOND GDP

DISCRETION DIFFERENTIAL

On the varying modes of conceiving of privacy (and its violation) in the law

In a 2004 YALE LAW JOURNAL article, comparative legal scholar JAMES Q. WHITMAN explores differing cultural and legal postures toward privacy. Through his comparison, he draws a slim taxonomy: privacy rights are founded on either dignity (throughout Western Europe) or on liberty (in the United States). The distinction—while far from perfectly neat either historically or in the present—raises a number of interesting questions about privacy law that are currently being worked out as scholars and legislators move forward in the creation and implementation of digital governance procedures. From the paper:

"If privacy is a universal human need that gives rise to a fundamental human right, why does it take such disconcertingly diverse forms? This is a hard problem for privacy advocates who want to talk about the values of ‘personhood,’ harder than they typically acknowledge. It is a hard problem because of the way they usually try to make their case: Overwhelmingly, privacy advocates rely on what moral philosophers call ‘intuitionist’ arguments. In their crude form, these sorts of arguments suppose that human beings have a direct, intuitive grasp of right and wrong—an intuitive grasp that can guide us in our ordinary ethical decisionmaking. The typical privacy article rests its case precisely on an appeal to its reader’s intuitions and anxieties about the evils of privacy violations. Imagine invasions of your privacy, the argument runs. Do they not seem like violations of your very personhood?

Continental privacy protections are, at their very core, a form of protection of a right to respect and personal dignity. The core continental privacy rights are rights to one’s image, name, and reputation, and what Germans call the right to informational self-determination—the right to control the sorts of information disclosed about oneself. They are all rights to control your public image.

By contrast, America is much more oriented to values of liberty. At its conceptual core, the American right to privacy is the right to freedom from intrusions by the state, especially in one’s own home."

Link to the paper.

  • Forthcoming in the Harvard Journal of Law & Technology, an in-depth review of the significance of the Supreme Court's June decision in Carpenter v. United States: "Carpenter holds that the police may not collect historical [cellphone location tracking data] from a cellphone provider without a warrant. This is the opinion most privacy law scholars and privacy advocates have been awaiting for decades." Link.
  • An excellent repository of scholarship on the GDPR—the new European data protection law—from the journal International Data Privacy Law. Link.
  • Danielle Citron and Daniel Solove's 2016 paper explores how US courts have dealt with legal standards of harm—anxiety or risk—in cases of personal data breaches. Link. See also Ryan Calo's 2010 article "The Boundaries of Privacy Harm." Link.
  • Khiara Bridges' 2017 book The Poverty of Privacy Rights provides a corrective to universalist claims to a right to privacy: "Poor mothers actually do not possess privacy rights. This is the book’s strong claim." Link to the book page, link to the introductory chapter.
 Full Article

October 27th, 2018

The Seasons

EFFICIENT DISPERSION

Applying quantitative methods to examine the spread of ideology in judicial opinion

In a recent paper, co-authors ELLIOTT ASH, DANIEL L. CHEN, and SURESH NAIDU provide a quantitative analysis of the judicial effects of the law and economics movement. Comparing attendance at seminars run by the Manne Economics Institute for Federal Judges from 1976 to 1999 against 380,000 circuit court cases and one million criminal sentencing decisions in district courts, the authors identify both the effects on judicial decision-making and the dispersion of economic language and reasoning throughout the federal judiciary.

“Economics-trained judges significantly impact U.S. judicial outcomes. They render conservative votes and verdicts, are against regulation and criminal appeals, and mete harsher criminal sentences and deterrence reasoning. When ideas move from economics into law, ideas have consequences. Economics likely changed how judges perceived the consequences of their decisions. If you teach judges that markets work, they deregulate government. If you teach judges that deterrence works, they become harsher to criminal defendants. Economics training focusing on efficiency may have crowded out other constitutional theories of interpretation. Economics training accounts for a substantial portion of the conservative shift in the federal judiciary since 1976.”

Link to the paper.
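A toy version of the kind of text measurement the paper rests on (the authors' actual methodology is far richer; the term list and sentences below are invented for illustration): score an opinion by the relative frequency of economics vocabulary, so that a drift toward economic reasoning shows up as a rising score:

```python
# Crude proxy for "economic language" in a legal text: the share of
# words drawn from a small economics vocabulary. Term list is illustrative.
ECON_TERMS = {"efficiency", "deterrence", "costs", "incentives", "market"}

def econ_score(text):
    words = text.lower().split()
    return sum(w.strip(".,;") in ECON_TERMS for w in words) / len(words)

before = "the statute requires notice and a hearing before removal"
after_ = "efficiency and deterrence justify the market incentives here"

print(econ_score(before), econ_score(after_))
```

Tracking a score like this across hundreds of thousands of opinions, and comparing judges who did and did not attend the Manne seminars, is the basic shape of the dispersion analysis described above.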

  • Henry Farrell at Crooked Timber picks out some additional highlights. Link.
  • A Washington Post article from January 1980 provides some contemporaneous context on the Manne seminars. Link.
  • In a relevant 2015 paper, Pedro Bordalo, Nicola Gennaioli, and Andrei Shleifer apply salience theory to model judicial decision-making: "The context of the judicial decision, which is comparative by nature, shapes which aspects of the case stand out and draw the judge’s attention. By focusing judicial attention on such salient aspects of the case, legally irrelevant information can affect judicial decisions." Link.
 Full Article