↳ Digital Ethics

July 3rd, 2019

The Politics of Machine Learning, pt. II

The uses of algorithms discussed in the first part of this article vary widely: from hiring decisions to bail assignment, to political campaigns and military intelligence.

Across all these applications of machine learning methods, there is a common thread: data on individuals is used to treat different individuals differently. In the past, broadly speaking, such commercial and government activities targeted everyone in a given population more or less uniformly—the same advertisements, the same prices, the same political slogans. Increasingly, everyone now gets personalized advertisements, personalized prices, and personalized political messages. New inequalities are created, and new fragmentations of discourse are introduced.

Is that a problem? Well, it depends. I will discuss two types of concerns. The first, relevant in particular to news and political messaging, is that the differentiation of messages is itself a source of problems.

⤷ Full Article

June 27th, 2019

The Politics of Machine Learning, pt. I

Terminology like "machine learning," "artificial intelligence," "deep learning," and "neural nets" is pervasive: businesses, universities, intelligence agencies, and political parties are all anxious to maintain an edge in the use of these technologies. Statisticians might be forgiven for thinking that this hype simply reflects the marketing speak of Silicon Valley entrepreneurs vying for venture capital. After all, these fancy new terms describe something statisticians have been doing for at least two centuries.

But recent years have indeed seen impressive new achievements for various prediction problems, which are finding applications in ever more consequential aspects of society: advertising, incarceration, insurance, and war are all increasingly defined by the capacity for statistical prediction. And there is a crucial thread that ties these widely disparate applications of machine learning together: the use of data on individuals to treat different individuals differently. In this two-part post, Max Kasy surveys the politics of the machine learning landscape.

⤷ Full Article

May 31st, 2019

Copyright Humanism

It's by now common wisdom that American copyright law is burdensome, excessive, and failing to promote the ideals it ought to. Too many things, critics argue, are subject to copyright protections, and the result is an inefficient legal morass that provides little benefit to society and has failed to keep up with the radical transformations in technology and culture of the last several decades. To reform and streamline our copyright system, the thinking goes, we need to get rid of our free-for-all regime of copyrightability and institute reasonable barriers to protection.

But what if these commentators are missing the forest for the trees, and America's frequently derided copyright regime is actually particularly well-suited to the digital age? Could copyright protections—applied universally at the moment of authorship—provide a level of autonomy that matches the democratization of authorship augured by the digital age?

⤷ Full Article

October 18th, 2018

Machine Ethics, Part One: An Introduction and a Case Study

The past few years have made abundantly clear that the artificially intelligent systems that organizations increasingly rely on to make important decisions can exhibit morally problematic behavior if not properly designed. Facebook, for instance, uses artificial intelligence to screen targeted advertisements for violations of applicable laws or its community standards. While offloading the sales process to automated systems allows Facebook to cut costs dramatically, design flaws in these systems have facilitated the spread of political misinformation, malware, hate speech, and discriminatory housing and employment ads. How can the designers of artificially intelligent systems ensure that they behave in ways that are morally acceptable: ways that show appropriate respect for the rights and interests of the humans they interact with?

The nascent field of machine ethics seeks to answer this question by conducting interdisciplinary research at the intersection of ethics and artificial intelligence. This series of posts will provide a gentle introduction to this new field, beginning with an illustrative case study taken from research I conducted last year at the Center for Artificial Intelligence in Society (CAIS). CAIS is a joint effort between the Suzanne Dworak-Peck School of Social Work and the Viterbi School of Engineering at the University of Southern California, and is devoted to “conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world.” This makes the center’s efforts part of a broader movement in applied artificial intelligence commonly known as “AI for Social Good,” the goal of which is to address pressing and hitherto intractable social problems through the application of cutting-edge techniques from the field of artificial intelligence.

⤷ Full Article