↳ Digital Ethics

October 18th, 2018

Machine Ethics, Part One: An Introduction and a Case Study

The past few years have made it abundantly clear that the artificially intelligent systems that organizations increasingly rely on to make important decisions can exhibit morally problematic behavior if not properly designed. Facebook, for instance, uses artificial intelligence to screen targeted advertisements for violations of applicable laws or its community standards. While offloading the sales process to automated systems allows Facebook to cut costs dramatically, design flaws in these systems have facilitated the spread of political misinformation, malware, hate speech, and discriminatory housing and employment ads. How can the designers of artificially intelligent systems ensure that they behave in ways that are morally acceptable: ways that show appropriate respect for the rights and interests of the humans they interact with?

The nascent field of machine ethics seeks to answer this question by conducting interdisciplinary research at the intersection of ethics and artificial intelligence. This series of posts will provide a gentle introduction to this new field, beginning with an illustrative case study taken from research I conducted last year at the Center for Artificial Intelligence in Society (CAIS). CAIS is a joint effort between the Suzanne Dworak-Peck School of Social Work and the Viterbi School of Engineering at the University of Southern California, and is devoted to “conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world.” This makes the center’s efforts part of a broader movement in applied artificial intelligence commonly known as “AI for Social Good,” the goal of which is to address pressing and hitherto intractable social problems through the application of cutting-edge techniques from the field of artificial intelligence.
