October 21st, 2019

Sunset Red and Gold-- The Gondolier

DESCRIBED FUTURES

Automation fears and realities

Of the many justifications for introducing a universal basic income, automation is among the most popular. In recent years, a slew of reports and endless media coverage have raised the specter of mass "technological unemployment"—a possible future that has been taken up by basic income proponents across the political spectrum. It was even a point of argument in this week's Democratic presidential debate.

In the first of a two-part series, historian AARON BENANAV (whose work on the history of unemployment categories we shared in a previous letter) critiques the automation debates and situates them within long-term global trends. Framed as a response to what Benanav terms the "automation theorists," who maintain a sense of inevitability about the robot takeover, the paper pursues alternative explanations: declining labor demand, global deindustrialization, and manufacturing overcapacity.

From the paper:

“Automation turns out to be a constant feature of the history of capitalism. By contrast, the discourse around automation, which extrapolates from instances of technological change to a broader social theory, is not constant; it periodically recurs in modern history.

The return of automation discourse is a symptom of our era, as it was in times past: it arises when the global economy’s failure to create enough jobs causes people to question its fundamental viability. The breakdown of this market mechanism today is more extreme than at any time in the past. This is because a greater share of the world’s population than ever before depends on selling its labour or the simple products of its labour to survive, in the context of weakening global economic growth.”

Link to the paper, and link to an ungated version on the author's website.

  • David Autor's 2016 paper "Paradox of Abundance" examines the problem of its title: "technological change threatens social welfare not because it intensifies scarcity but because it augments abundance." Link.
  • A previous newsletter highlights a paper by legal scholar Brishen Rogers, which critiques automation fears in the US context by pointing to labor law and the "fissuring" of the workforce as more consequential for stagnating wages and declining job security. Link. Along the same lines, but in the European context, Zachary Parolin's recent work for the OECD measures the effects of collective bargaining agreements on wages in automatable occupations. Link.
  • Three post-debate accounts of the issue: Paul Krugman in the Times; Matt Yglesias in Vox; and Jordan Weissmann in Slate, featuring the following quote from David Autor: "If we talk about the economic trauma of the 2000s, that’s not primarily due to automation. Nobody can tell you what great invention happened in 1999 that wiped out 20 percent of manufacturing jobs."
  • For another broad view of macro trends and low-demand problems, see JW Mason's "Macroeconomic Lessons from the Past Decade." Link.

October 7th, 2019

The Balloon

DIFFERENCE ENGINE

Labor and mechanized calculation

Breathless media coverage of machine learning tools and their applications often obscures the processes that allow them to function. Time and again, services billed or understood by users as automatic are revealed to rely on undervalued, deskilled human labor.

There is rich historical precedent for the presence of these "ghosts in the machine." In a 2017 lecture, LORRAINE DASTON, Director Emerita of the Max Planck Institute for the History of Science, examines the emergence of mechanical calculation, revealing a fascinating history of the interaction between new technologies and the methods of routinizing and dividing intellectual labor that emerge alongside them.

From the introduction:

"The intertwined histories of the division of labor and mechanical intelligence neither began nor ended with this famous three-act story from pins to computers via logarithms. Long before Prony thought of applying Adam Smith’s political economy to monumental calculation projects, astronomical observatories and nautical almanacs were confronted with mountains of computations that they accomplished by the ingenious organization of work and workers. What mechanization did change was the organization of Big Calculation: integrating humans and machines dictated different algorithms, different skills, different personnel, and above all different divisions of labor. These changes in turn shaped new forms of intelligence at the interface between humans and machines."

Link to the paper version of the lecture. (And stay tuned to the Phenomenal World for our upcoming interview with Daston.)
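A technical aside for the curious: the organizational insight behind Prony's project, and later Babbage's Difference Engine, was the method of finite differences, which reduces tabulating a polynomial to repeated addition; the additions could then be parceled out to less-skilled human computers. A toy Python sketch with a made-up polynomial (Prony's actual tables were logarithms built on polynomial approximations supplied by his mathematicians):

```python
# Method of finite differences: extending a polynomial table by addition
# alone. For a degree-n polynomial the n-th differences are constant, so
# once the table is seeded, every later value follows from n additions.

def trailing_differences(values):
    """Last entry of each row of the difference table: f(xk), Δf, Δ²f, ..."""
    rows = [list(values)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[-1] for row in rows]

def extend(values, n_more):
    """Continue the table for n_more entries using additions only."""
    tail = trailing_differences(values)
    out = list(values)
    for _ in range(n_more):
        # Add each difference into the one below it, ending with a new
        # function value (the step given to the least-skilled computers).
        for i in range(len(tail) - 2, -1, -1):
            tail[i] += tail[i + 1]
        out.append(tail[0])
    return out

# f(x) = x**2 + x + 41 tabulated at x = 0..3 seeds the table.
print(extend([41, 43, 47, 53], 4))  # [41, 43, 47, 53, 61, 71, 83, 97]
```

All the mathematical skill is concentrated in producing the seed values; extending the table requires only care, which is exactly the division of intellectual labor Daston describes.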

  • A 1994 paper by Daston entitled "Enlightenment Calculations" gives specific attention to the logarithmic tables of Gaspard de Prony, which sought to demonstrate the usefulness of the newly invented metric system: "The tables marked an epoch in the history of calculation but also one in the history of intelligence and work." Link.
  • Matthew L. Jones, an historian at Columbia University, studies the history of calculation and computing. His 2016 book Reckoning with Matter: Calculating Machines, Innovation, and Thinking about Thinking from Pascal to Babbage traces the history of attempts to routinize, mechanize and apply the power of calculation. Link to the book, link to Lorraine Daston's review in Critical Inquiry.
  • Simon Schaffer's 1996 paper on the relationship between Charles Babbage's calculating engine and the contemporaneously emerging factory system. Link.
  • A syllabus prepared by Mary L. Gray and Siddharth Suri, authors of Ghost Work—a book about the "hidden" labor force behind many tech services—surveys the tech platform subcontracting labor market. Link.

September 9th, 2019

Original & Forgery

MULTIPLY EFFECT

The difficulties of causal reasoning and race

While the thorny ethical questions dogging the development and implementation of algorithmic decision systems touch on all manner of social phenomena, arguably the most widely discussed is that of racial discrimination. The watershed moment for the algorithmic ethics conversation was ProPublica's 2016 article on the COMPAS risk-scoring algorithm, and a huge number of ensuing papers in computer science, law, and related disciplines attempt to grapple with the question of algorithmic fairness by thinking through the role of race and discrimination in decision systems.

In a paper from earlier this year, ISSA KOHLER-HAUSMANN of Yale Law School examines the way that race and racial discrimination are conceived of in law and the social sciences. Challenging the premises of an array of research across disciplines, Kohler-Hausmann argues both for a reassessment of the basis of reasoning about discrimination and for a new approach grounded in a social constructivist view of race.

From the paper:

"This Article argues that animating the most common approaches to detecting discrimination in both law and social science is a model of discrimination that is, well, wrong. I term this model the 'counterfactual causal model' of race discrimination. Discrimination, on this account, is detected by measuring the 'treatment effect of race,' where treatment is conceptualized as manipulating the raced status of otherwise identical units (e.g., a person, a neighborhood, a school). Discrimination is present when an adverse outcome occurs in the world in which a unit is 'treated' by being raced—for example, black—and not in the world in which the otherwise identical unit is 'treated' by being, for example, raced white. The counterfactual model has the allure of precision and the security of seemingly obvious divisions or natural facts.

Currently, many courts, experts, and commentators approach detecting discrimination as an exercise measuring the counterfactual causal effect of race-qua-treatment, looking for complex methods to strip away confounding variables to get at a solid state of race and race alone. But what we are arguing about when we argue about whether or not statistical evidence provides proof of discrimination is precisely what we mean by the concept DISCRIMINATION."

Link to the article. And stay tuned for a forthcoming post on the Phenomenal World by JFI fellow Lily Hu that grapples with these themes.
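For readers who don't work with causal inference, the model under critique has a compact formal statement. A gloss in potential-outcomes notation (ours, not the article's):

```latex
% Potential-outcomes gloss of the "counterfactual causal model."
% Y_i(r) denotes the outcome unit i would experience were its raced
% status set to r, all else held fixed.
\[
  \text{Discrimination}
    \;=\; \mathbb{E}\bigl[\, Y_i(\text{black}) - Y_i(\text{white}) \,\bigr] \neq 0 .
\]
% Kohler-Hausmann's objection targets the manipulation itself: if race
% partly constitutes the "otherwise identical" features of the unit,
% then setting r while holding everything else fixed is not a
% well-defined intervention, and the estimand above loses its meaning.
```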

  • For an example of the logic Kohler-Hausmann is writing against, see Edmund S. Phelps' 1972 paper "The Statistical Theory of Racism and Sexism." Link.
  • A recent paper deals with the issue of causal reasoning in an epidemiological study: "If causation must be defined by intervention, and interventions on race and the whole of SES are vague or impractical, how is one to frame discussions of causation as they relate to this and other vital issues?" Link.
  • From Kohler-Hausmann's footnotes, two excellent works informing her approach: first, the canonical book Racecraft by Karen Fields and Barbara Fields; second, a 2000 article by Tukufu Zuberi, "Deracializing Social Statistics: Problems in the Quantification of Race." Link to the first, link to the second.

August 5th, 2019

Where is the Artist?

COMPETING VALUES

The state of a new pedagogical field

Technology companies are coming under increased scrutiny for the ethical consequences of their work, and some have formed advisory boards or hired ethicists on staff. (Google's AI ethics board quickly disintegrated.) Another approach is to train computer scientists in ethics before they enter the labor market. But how should that training—which must combine practice and theory across disciplines—be structured, who should teach the courses, and what should they teach?

This month’s cover story of the Communications of the Association for Computing Machinery describes the Embedded EthiCS program at Harvard. (David Gray Grant, a JFI fellow since 2018, and Lily Hu, a new JFI fellow, are co-authors, along with Barbara J. Grosz, Kate Vredenburgh, Jeff Behrends, Alison Simmons, and Jim Waldo.) The article explains the advantages of their approach, wherein philosophy PhD students and postdocs teach modules in computer science classes:

"In contrast to stand-alone computer ethics or computer-and-society courses, Embedded EthiCS employs a distributed pedagogy that makes ethical reasoning an integral component of courses throughout the standard computer science curriculum. It modifies existing courses rather than requiring wholly new courses. Students learn ways to identify ethical implications of technology and to reason clearly about them while they are learning ways to develop and implement algorithms, design interactive systems, and code. Embedded EthiCS thus addresses shortcomings of stand-alone courses. Furthermore, it compensates for the reluctance of STEM faculty to teach ethics on their own by embedding philosophy graduate students and postdoctoral fellows into the teaching of computer science courses."

A future research direction is to examine "the approach's impact over the course of years, for instance, as students complete their degrees and even later in their careers."

Link to the full article.

  • Shannon Vallor and Arvind Narayanan have a free ethics module anyone can use in a CS course. View it here. A Stephanie Wykstra piece in the Nation on the state of data ethics pedagogy notes that the module has been used at 100+ universities. Link.
  • In February 2018, we wrote about Casey Fiesler’s spreadsheet of tech ethics curricula, which has gotten even more comprehensive, including sample codes of ethics and other resources. Jay Hodges’s comment is still relevant for many of the curricula: "Virtually every discipline that deals with the social world – including, among others, sociology, social work, history, women’s studies, Africana studies, Latino/a studies, urban studies, political science, economics, epidemiology, public policy, and law – addresses questions of fairness and justice in some way. Yet the knowledge accumulated by these fields gets very little attention in these syllabi." Link to that 2018 letter.
  • At MIT, JFI fellow Abby Everett Jacques teaches "Ethics of Technology." An NPR piece gives a sense of the students' experiences. Link.

April 6th, 2019

Exploding Bowl

REMUNERATE EXPANSE

Social reproduction and basic income proposals

The most visible discourse on universal basic income focuses squarely on the labor market. Unconditional cash transfers are understood above all as a potential policy solution to wage stagnation, rising inequality, and labor displacement. This framework, pitched at rising income inequality in general, can also be read as a response to a more particular development: the decline of the family wage.

In a 2017 paper published as part of a forum on UBI in the journal Global Social Policy, PATRICIA SCHULZ discusses uncompensated care work and enumerates the ways a basic income could signal a departure from forms of social protection tied to the gendered wage and its analogs in safety net programs:

"In industrialized countries, work organization, labor legislation, and social security systems developed progressively based on the model of the male breadwinner. Therefore, as most social security systems are based on contributions linked to remunerated work, the inferior income of women, their restriction to part-time jobs, as well as the interruptions in their careers due to care responsibilities will directly impact the level of social protection they can expect in case of old age, disability, illness, and so on, as well as expose them to dependency on a partner and/or the welfare state. It remains a huge political challenge to overcome the resistance against delinking social protection and remunerated work, even when the latter tends to become more and more uncertain.

A UBI would be the continuation of previous efforts to ensure that every person has a right to basic economic security, everywhere on the planet, women as well as men."

Link to the report.

  • The 1960s-70s saw a major surge of advocacy and policy thought surrounding access to existing safety net programs, much of which was driven by the National Welfare Rights Organization. Link to NWRO chairperson Johnnie Tillmon's 1972 manifesto on welfare and women's work, which includes a call for a "guaranteed adequate income," and link to historian Felicia Kornbluh's 2007 book on the movement. Economist Toru Yamamori's research sheds light on feminist movements in the UK and Italy that posed basic income as a solution to discriminatory practices of welfare agencies. Link. (Link also to Frances Fox Piven and Richard Cloward's 1966 article on the gaps in American safety net programs and the possibility of a guaranteed income.)
  • There is much ongoing debate within feminist literature about how a UBI might impact the gender division of labor. Some theorists, including Ingrid Robeyns, caution that compensating unpaid care work risks diminishing the political will of women to advocate for more fundamental changes to their social position. Link. Others maintain that a UBI will incentivize men to play a larger role in social reproduction, thereby leveling power dynamics within heterosexual households. Link, link.
  • For a more thorough argument in favor of basic income, the late feminist economist Ailsa McKay wrote extensively on the potential impacts of the policy for gender equity and a reconfiguration of citizenship. Link to an article on basic income and social citizenship, and link to her 2005 book The Future of Social Security Policy: Women, Work, and a Citizen’s Basic Income.

January 27th, 2018

A Ha?

DISCONTINUOUS ADVANCE

A flurry of articles in December and January assesses the state of artificial intelligence

From Erik Brynjolfsson et al., optimism about productivity growth:

“Economic value lags technological advances.

“To be clear, we are optimistic about the ultimate productivity growth fueled by AI and complementary technologies. The real issue is that it takes time to implement changes in processes, skills and organizational structure to fully harness AI’s potential as a general-purpose technology (GPT). Previous GPTs include the steam engine, electricity, the internal combustion engine and computers.

“In other words, as important as specific applications of AI may be, the broader economic effects of AI, machine learning and associated new technologies stem from their characteristics as GPTs: They are pervasive, improved over time and able to spawn complementary innovations.”

Full post at The Hill here.


December 9th, 2017

Un Coup de dés

THE FUTURE OF UNDERGRADUATE EDUCATION

A new report argues that quality, not access, is the pivotal challenge for colleges and universities

From the American Academy of Arts and Sciences, a 112-page report with "practical and actionable recommendations to improve the undergraduate experience":

"Progress toward universal education has expanded most recently to colleges and universities. Today, almost 90 percent of high school graduates can expect to enroll in an undergraduate institution at some point during young adulthood and they are joined by millions of adults seeking to improve their lives. What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve."

Link to the full report. Co-authors include Gail Mellow, Sherry Lansing, Mitch Daniels, and Shirley Tilghman. ht Will, who highlights a few of the report's recommendations that stand out:

  • From page 40: "Both public and private colleges and universities as well as state policy-makers [should] work collaboratively to align learning programs and expectations across institutions and sectors, including implementing a transferable general education core, defined transfer pathway maps within popular disciplines, and transfer-focused advising systems that help students anticipate what it will take for them to transfer without losing momentum in their chosen field."
  • From page 65: "Many students, whether coming straight out of high school or adults returning later to college, face multiple social and personal challenges that can range from homelessness and food insecurity to childcare, psychological challenges, and even imprisonment. The best solutions can often emerge from building cooperation between a college and relevant social support agencies."
  • From page 72: "Experiment with and carefully assess alternatives for students to manage the financing of their college education. For example, income-share agreements allow college students to borrow from colleges or investors, which then receive a percentage of the student’s after-graduation income." (A toy repayment calculation follows this list.)
  • On a related note, see this 2016 paper from the Miller Center at the University of Virginia: "Although interest in the ISA as a concept has ebbed and flowed since Milton Friedman first proposed it in the 1950s, today it is experiencing a renaissance of sorts as new private sector partners and institutions look to make the ISA a feasible option for students. ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preferences and state needs for economic skill sets. The different ways ISAs can be structured make them highly suitable as potential solutions for many states’ education system financing problems." Link.
  • Meanwhile, Congress is working on the reauthorization of the Higher Education Act: "Much of the proposal that House Republicans released last week is controversial and likely won’t make it into the final law, but the plan provides an indication of Congressional Republicans’ priorities for the nation’s higher education system. Those priorities include limiting the federal government’s role in regulating colleges, capping graduate student borrowing, making it easier for schools to limit undergraduate borrowing — and overhauling the student loan repayment system. Many of those moves have the potential to create a larger role for private industry." Link.
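Returning to the ISA bullets above: the arithmetic is simple enough to sketch. Under hypothetical terms (ours, not the report's or the Miller Center's), repayment scales with realized income rather than with a loan balance:

```python
# Toy income-share agreement with made-up terms: $10,000 of funding in
# exchange for 5% of income over a 10-year term. Unlike a loan, there is
# no balance accruing interest; what is owed each year depends only on
# that year's income.
FUNDING = 10_000
INCOME_SHARE = 0.05
TERM_YEARS = 10

def total_repaid(annual_incomes):
    """Sum the income share over the term (payments stop when it ends)."""
    return sum(INCOME_SHARE * y for y in annual_incomes[:TERM_YEARS])

# A graduate earning $50k, growing 3% a year:
incomes = [50_000 * 1.03**t for t in range(TERM_YEARS)]
print(f"total repaid: ${total_repaid(incomes):,.0f}")      # ≈ $28,660

# A lower-earning graduate pays less for the same funding:
low_incomes = [30_000] * TERM_YEARS
print(f"total repaid: ${total_repaid(low_incomes):,.0f}")  # $15,000
```

The design choice at issue is exactly this transfer of earnings risk from the student to the funder.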

December 2nd, 2017

gesture/data

ARTIFICIAL AGENCY AND EXPLANATION

The gray box of XAI

A recent longform piece in the New York Times takes up the problem of explaining artificial intelligence. The stakes are high because of the European Union’s controversial and unclear “right-to-explanation” law, which takes effect in May 2018.

“Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.”

Full article by CLIFF KUANG here. This page provides a short overview of DARPA's XAI (Explainable Artificial Intelligence) program.
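To give a concrete sense of the genre: one common post-hoc technique, the global surrogate, trains a small interpretable model to mimic a black box's predictions and offers the readable model as an account of the opaque one. A minimal sketch with scikit-learn (our illustration; neither the Times piece nor DARPA specifies a particular method):

```python
# Global surrogate explanation: approximate an opaque model with a small
# decision tree trained on the opaque model's own predictions, then read
# the tree as an (imperfect) account of what the black box learned.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable model agrees with the opaque one.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate/black-box agreement: {fidelity:.1%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

Note that the surrogate describes the black box's behavior, not its mechanism; the gap between those two things is much of what the debate below is about.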

An interdisciplinary group addresses the problem:

"Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard."

Full article by FINALE DOSHI-VELEZ et al. here. ht Margarita. For the layperson, the most interesting part of the article may be its general overview of societal norms around explanation and explanation in the law.

Michael comments: Human cognitive systems have generated similar questions in vastly different contexts. The problem of chick-sexing (see Part 3) gave rise to a mini-literature within epistemology.

From Michael S. Moore’s book Law and Psychiatry: Rethinking the Relationship: “A full explanation in terms of reasons for action requires two premises: the major premise, specifying the agent’s desires (goals, objectives, moral beliefs, purposes, aims, wants, etc.), and the minor premise, specifying the agent’s factual beliefs about the situation he is in and his ability to achieve, through some particular action, the object of his desires.” Link. ht Margarita

  • A Medium post with an illustrated summary of some XAI techniques. Link.