↳ Algorithms

January 29th, 2020


Historicizing the Self-Evident

An interview with Lorraine Daston

Lorraine Daston has published widely in the history of science, including on probability and statistics, scientific objectivity and observation, game theory, monsters, and much else. Director at the Max Planck Institute for the History of Science since 1995 (emerita as of Spring 2019), she is the author and co-author of over a dozen books, each stunning in scope and detail, and each demonstrating her ability to make the common uncommon—to illuminate what she calls the “history of the self-evident.”

Amidst the ever-expanding reach of all varieties of quantification and algorithmic formalization, both the earliest of Daston's works (the 1988 book Classical Probability in the Enlightenment) and her most recent (an ongoing project on the history of rules) perform this task, uncovering the contingencies that swirled around the invention of mathematical probability and the rise of algorithmic rule-making.

We spoke over the phone to discuss the labor of calculation, the various emergences of formal rationality, and the importance of interdisciplinarity in the social sciences. Our conversation was edited for length and clarity.

⤷ Full Article

August 5th, 2019

Where is the Artist?

COMPETING VALUES

The state of a new pedagogical field

Technology companies are coming under increased scrutiny for the ethical consequences of their work, and some have formed advisory boards or hired ethicists on staff. (Google's AI ethics board quickly disintegrated.) Another approach is to train computer scientists in ethics before they enter the labor market. But how should that training—which must combine practice and theory across disciplines—be structured, who should teach the courses, and what should they teach?

This month’s cover story in Communications of the ACM, the flagship magazine of the Association for Computing Machinery, describes the Embedded EthiCS program at Harvard. (David Gray Grant, a JFI fellow since 2018, and Lily Hu, a new JFI fellow, are co-authors, along with Barbara J. Grosz, Kate Vredenburgh, Jeff Behrends, Alison Simmons, and Jim Waldo.) The article explains the advantages of their approach, wherein philosophy PhD students and postdocs teach modules in computer science classes:

"In contrast to stand-alone computer ethics or computer-and-society courses, Embedded EthiCS employs a distributed pedagogy that makes ethical reasoning an integral component of courses throughout the standard computer science curriculum. It modifies existing courses rather than requiring wholly new courses. Students learn ways to identify ethical implications of technology and to reason clearly about them while they are learning ways to develop and implement algorithms, design interactive systems, and code. Embedded EthiCS thus addresses shortcomings of stand-alone courses. Furthermore, it compensates for the reluctance of STEM faculty to teach ethics on their own by embedding philosophy graduate students and postdoctoral fellows into the teaching of computer science courses."

A future research direction is to examine "the approach's impact over the course of years, for instance, as students complete their degrees and even later in their careers."

Link to the full article.

  • Shannon Vallor and Arvind Narayanan have a free ethics module anyone can use in a CS course. View it here. A Stephanie Wykstra piece in the Nation on the state of data ethics pedagogy notes that the module has been used at 100+ universities. Link.
  • In February 2018, we wrote about Casey Fiesler’s spreadsheet of tech ethics curricula, which has gotten even more comprehensive, including sample codes of ethics and other resources. Jay Hodges’s comment is still relevant for many of the curricula: "Virtually every discipline that deals with the social world – including, among others, sociology, social work, history, women’s studies, Africana studies, Latino/a studies, urban studies, political science, economics, epidemiology, public policy, and law – addresses questions of fairness and justice in some way. Yet the knowledge accumulated by these fields gets very little attention in these syllabi." Link to that 2018 letter.
  • At MIT, JFI fellow Abby Everett Jacques teaches "Ethics of Technology." An NPR piece gives a sense of the students' experiences. Link.
⤷ Full Article

December 16th, 2017

Bruised Grid

HOW TO HANDLE BAD CONTENT

Two articles illustrate the state of thought on moderating user-generated content

Ben Thompson of Stratechery rounds up recent news on content moderation on Twitter, Facebook, and YouTube, and makes a recommendation:

“Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

“That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.”

Full blog post here.

The Social Capital newsletter responds:

“… If we want to really make progress towards solving these issues we need to recognize there’s not one single type of bad behavior that the internet has empowered, but rather a few dimensions of them.”

The piece goes on to describe four types of bad content. Link.

Michael comments: The discussion of content moderation (and digital curation more broadly) conspicuously ignores the possibility of algorithmic methods for analyzing and disseminating valid information, whether ethically or evidentially valid. Thompson and Social Capital default to traditional and cumbersome forms of outright censorship, rather than methods to “push” better content.
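To make that contrast concrete, here is a minimal sketch of a ranking-based alternative to binary removal. Everything in it is hypothetical (the Post fields, the scores, and the quality_weight blend are illustrative assumptions, not anything Thompson or Social Capital propose): low-quality items are demoted in the feed rather than deleted.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # hypothetical normalized engagement signal in [0, 1]
    quality: float     # hypothetical credibility score in [0, 1]


def rank_feed(posts: list[Post], quality_weight: float = 0.7) -> list[Post]:
    """Order a feed by a weighted blend of engagement and quality.

    Unlike outright removal, nothing is deleted: low-quality posts are
    demoted, so the feed "pushes" better content toward the top.
    """
    def score(post: Post) -> float:
        return (1 - quality_weight) * post.engagement + quality_weight * post.quality

    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("engagement-bait rumor", engagement=0.9, quality=0.1),
        Post("well-sourced explainer", engagement=0.4, quality=0.9),
        Post("ordinary status update", engagement=0.5, quality=0.6),
    ]
    for post in rank_feed(feed):
        print(post.text)
```

Raising quality_weight shifts the ranking from engagement-maximizing toward quality-promoting without introducing a removal decision at all; a real system would, of course, need a defensible source for the quality score, which is precisely the hard problem the pieces above debate.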

We'll be sharing more thoughts on this research area in future letters.

⤷ Full Article