➔ Phenomenal World

October 7th, 2019

The Balloon

DIFFERENCE ENGINE

Labor and mechanized calculation

Breathless media coverage of machine learning tools and their applications often obscures the processes that allow them to function. Time and again, services billed or understood by users as automatic are revealed to rely on undervalued, deskilled human labor.

There is rich historical precedent for the presence of these "ghosts in the machine." In a 2017 lecture, LORRAINE DASTON, Director Emerita of the Max Planck Institute for the History of Science, examines the emergence of mechanical calculation, revealing a fascinating history of the interaction between new technologies and the methods of routinizing and dividing intellectual labor that emerge alongside them.

From the introduction:

"The intertwined histories of the division of labor and mechanical intelligence neither began nor ended with this famous three-act story from pins to computers via logarithms. Long before Prony thought of applying Adam Smith’s political economy to monumental calculation projects, astronomical observatories and nautical almanacs were confronted with mountains of computations that they accomplished by the ingenious organization of work and workers. What mechanization did change was the organization of Big Calculation: integrating humans and machines dictated different algorithms, different skills, different personnel, and above all different divisions of labor. These changes in turn shaped new forms of intelligence at the interface between humans and machines."

Link to the paper version of the lecture. (And stay tuned to the Phenomenal World for our upcoming interview with Daston.)

  • A 1994 paper by Daston entitled "Enlightenment Calculations" gives specific attention to the logarithmic tables of Gaspard de Prony, which sought to demonstrate the usefulness of the newly-invented metric system: "The tables marked an epoch in the history of calculation but also one in the history of intelligence and work." Link.
  • Matthew L. Jones, an historian at Columbia University, studies the history of calculation and computing. His 2016 book Reckoning with Matter: Calculating Machines, Innovation, and Thinking about Thinking from Pascal to Babbage traces the history of attempts to routinize, mechanize and apply the power of calculation. Link to the book, link to Lorraine Daston's review in Critical Inquiry.
  • Simon Schaffer's 1996 paper on the relationship between Charles Babbage's calculating engine and the contemporaneously emerging factory system. Link.
  • A syllabus prepared by Mary L. Gray and Siddharth Suri, authors of Ghost Work—a book about the "hidden" labor force behind many tech services—surveys the tech platform subcontracting labor market. Link.
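The organizational trick behind Prony's tables, later mechanized in Babbage's difference engine, was the method of differences: once a skilled planner works out the leading finite differences of a polynomial, every further table entry follows by addition alone, making the work divisible among low-skilled human computers. A rough illustrative sketch of that idea (not a reconstruction of Prony's actual procedure):

```python
# Method of differences: extend a table of a degree-d polynomial using
# only addition, given d+1 seed values. The multiplications happen once,
# up front; everything after is rote, divisible addition.

def difference_table(seed_values, n_more):
    # Build columns of successive finite differences from the seeds.
    diffs = [list(seed_values)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    state = [col[-1] for col in diffs]  # latest value of each difference order
    out = list(seed_values)
    for _ in range(n_more):
        # Each new table entry is a cascade of additions, highest order first.
        for i in range(len(state) - 2, -1, -1):
            state[i] += state[i + 1]
        out.append(state[0])
    return out

# Seed with f(x) = x**2 at x = 0, 1, 2; extend by addition alone.
print(difference_table([0, 1, 4], 5))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Each pass of the inner loop is the kind of step a single computer (human or mechanical) could perform without understanding the polynomial being tabulated.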
⤷ Full Article

December 9th, 2017

Un Coup de dés

THE FUTURE OF UNDERGRADUATE EDUCATION

A new report argues that quality, not access, is the pivotal challenge for colleges and universities

From the American Academy of Arts and Sciences, a 112-page report with "practical and actionable recommendations to improve the undergraduate experience":

"Progress toward universal education has expanded most recently to colleges and universities. Today, almost 90 percent of high school graduates can expect to enroll in an undergraduate institution at some point during young adulthood and they are joined by millions of adults seeking to improve their lives. What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve."

Link to the full report. Co-authors include Gail Mellow, Sherry Lansing, Mitch Daniels, and Shirley Tilghman. ht Will, who highlights a few of the report's recommendations that stand out:

  • From page 40: "Both public and private colleges and universities as well as state policy-makers [should] work collaboratively to align learning programs and expectations across institutions and sectors, including implementing a transferable general education core, defined transfer pathway maps within popular disciplines, and transfer-focused advising systems that help students anticipate what it will take for them to transfer without losing momentum in their chosen field."
  • From page 65: "Many students, whether coming straight out of high school or adults returning later to college, face multiple social and personal challenges that can range from homelessness and food insecurity to childcare, psychological challenges, and even imprisonment. The best solutions can often emerge from building cooperation between a college and relevant social support agencies."
  • From page 72: "Experiment with and carefully assess alternatives for students to manage the financing of their college education. For example, income-share agreements allow college students to borrow from colleges or investors, which then receive a percentage of the student’s after-graduation income."
  • On a related note, see this 2016 paper from the Miller Center at the University of Virginia: "Although interest in the ISA as a concept has ebbed and flowed since Milton Friedman first proposed it in the 1950s, today it is experiencing a renaissance of sorts as new private sector partners and institutions look to make the ISA a feasible option for students. ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preferences and state needs for economic skill sets. The different ways ISAs can be structured make them highly suitable as potential solutions for many states’ education system financing problems." Link.
  • Meanwhile, Congress is working on the reauthorization of the Higher Education Act: "Much of the proposal that House Republicans released last week is controversial and likely won’t make it into the final law, but the plan provides an indication of Congressional Republicans’ priorities for the nation’s higher education system. Those priorities include limiting the federal government’s role in regulating colleges, capping graduate student borrowing, making it easier for schools to limit undergraduate borrowing — and overhauling the student loan repayment system. Many of those moves have the potential to create a larger role for private industry." Link.
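To make the ISA mechanics in the recommendations above concrete, here is a toy comparison with entirely invented figures (the 5% share, ten-year term, and incomes are hypothetical, not drawn from the report or the Miller Center paper):

```python
# Hypothetical income-share agreement: the student pays a fixed share of
# income for a fixed term, so total repayment tracks earnings rather
# than the amount borrowed. All figures are invented for illustration.

ISA_SHARE = 0.05   # 5% of annual income
ISA_TERM = 10      # years of payments

def total_isa_payments(annual_incomes):
    return sum(ISA_SHARE * income for income in annual_incomes[:ISA_TERM])

low_earner = [30_000] * 10
high_earner = [90_000] * 10

print(total_isa_payments(low_earner))   # 15000.0
print(total_isa_payments(high_earner))  # 45000.0
```

The point of the structure is visible in the outputs: unlike a fixed loan payment, the low earner's obligation shrinks with income while the investor's return comes from high earners.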
⤷ Full Article

December 2nd, 2017

gesture/data

ARTIFICIAL AGENCY AND EXPLANATION

The gray box of XAI

A recent longform piece in the New York Times identifies the problem of explaining artificial intelligence. The stakes are high because of the European Union’s controversial and unclear “right-to-explanation” law, which will become active in May 2018.

“Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.”

Full article by CLIFF KUANG here. This page provides a short overview of DARPA's XAI (Explainable Artificial Intelligence) program.

An interdisciplinary group addresses the problem:

"Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard."

Full article by FINALE DOSHI-VELEZ et al. here. ht Margarita. For the layperson, the most interesting part of the article may be its general overview of societal norms around explanation and explanation in the law.

Michael comments: Human cognitive systems have generated similar questions in vastly different contexts. The problem of chick-sexing (see Part 3) gave rise to a mini-literature within epistemology.

From Michael S. Moore’s book Law and Psychiatry: Rethinking the Relationship: “A full explanation in terms of reasons for action requires two premises: the major premise, specifying the agent’s desires (goals, objectives, moral beliefs, purposes, aims, wants, etc.), and the minor premise, specifying the agent’s factual beliefs about the situation he is in and his ability to achieve, through some particular action, the object of his desires.” Link. ht Margarita

  • A Medium post with an illustrated summary of some XAI techniques. Link.
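One common technique in the XAI toolkit surveyed by posts like the one above is permutation importance: score how much a model's accuracy drops when a single input feature is shuffled. A minimal from-scratch sketch with a made-up toy model:

```python
# Permutation importance: shuffle one feature's column and measure the
# drop in accuracy. A feature the model ignores shows zero importance.
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Toy model that only ever looks at feature 0:
model = lambda row: 1 if row[0] > 0 else 0
X = [[1, 5], [-1, 5], [1, -5], [-1, -5]] * 10
y = [model(row) for row in X]

print(permutation_importance(model, X, y, feature=0))  # large accuracy drop
print(permutation_importance(model, X, y, feature=1))  # 0.0 -- unused feature
```

This only explains what the model attends to, not why; the gap between that and a human-style reason-giving explanation is precisely what the Doshi-Velez et al. piece probes.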
⤷ Full Article

November 18th, 2017

Duchamp Wanted

PREDICTIVE JUSTICE

How to build justice into algorithmic actuarial tools

Key notions of fairness contradict each other—something of an Arrow’s Theorem for criminal justice applications of machine learning.

"Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them."

Full paper from JON KLEINBERG, SENDHIL MULLAINATHAN and MANISH RAGHAVAN here. h/t research fellow Sara, who recently presented on bias in humans, courts, and machine learning algorithms, and who was the source for all the papers in this section.

In a Twitter thread, ARVIND NARAYANAN describes the issue in more casual terms.

"Today in Fairness in Machine Learning class: a comparison of 21 (!) definitions of bias and fairness [...] In CS we're used to the idea that to make progress on a research problem as a community, we should first all agree on a definition. So 21 definitions feels like a sign of failure. Perhaps most of them are trivial variants? Surely there's one that's 'better' than the rest? The answer is no! Each defn (stat. parity, FPR balance, contextual fairness in RL...) captures something about our fairness intuitions."

Link to Narayanan’s thread.
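The tension Kleinberg et al. prove can be glimpsed in a toy example with invented numbers (a sketch of the intuition, not the paper's formal construction): a score that is perfectly calibrated within two groups with different base rates can still yield wildly unequal false-positive rates.

```python
# Two groups, each assigned its own true base rate as a score -- so the
# score is perfectly calibrated within each group by construction.
# Each entry is (score, label): label 1 = positive outcome, 0 = negative.
group_a = [(0.5, 1)] * 5 + [(0.5, 0)] * 5   # base rate 0.5
group_b = [(0.2, 1)] * 2 + [(0.2, 0)] * 8   # base rate 0.2

def positive_fraction(group):
    # Calibration check: fraction of positives among people at this score.
    return sum(y for _, y in group) / len(group)

def false_positive_rate(group, threshold=0.5):
    negatives = [(s, y) for s, y in group if y == 0]
    flagged = sum(1 for s, _ in negatives if s >= threshold)
    return flagged / len(negatives)

print(positive_fraction(group_a))    # 0.5 -- matches the score: calibrated
print(positive_fraction(group_b))    # 0.2 -- matches the score: calibrated
print(false_positive_rate(group_a))  # 1.0
print(false_positive_rate(group_b))  # 0.0
```

Both groups are scored honestly, yet every negative in group A clears the threshold and none in group B does: satisfying one fairness criterion here maximally violates another.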

Jay comments: Kleinberg et al. describe their result as choosing between conceptions of fairness. It’s not obvious, though, that this is the correct description. The criteria (calibration and balance) discussed aren’t really conceptions of fairness; rather, they’re (putative) tests of fairness. Particular questions about these tests aside, we might have a broader worry: if fairness is not an extensional property that depends upon, and only upon, the eventual judgments rendered by a predictive process, exclusive of the procedures that led to those judgments, then no extensional test will capture fairness, even if this notion is entirely unambiguous and determinate. It’s worth considering Nozick’s objection to “pattern theories” of justice for comparison, and (procedural) due process requirements in US law.

⤷ Full Article

November 11th, 2017

The Hülsenbeck Children

"A DOLL POSSESSED BY A DEMON"

Recommender systems power YouTube's controversial kids' videos

Familiar cartoon characters are placed in bizarre scenarios, sometimes by human content creators, sometimes by automated systems, for the purpose of attracting views and ad money. First, from the New York Times:

“But the app [YouTube Kids] contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms.

“In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes. Many have taken to Facebook to warn others, and share video screenshots showing moments ranging from a Claymation Spider-Man urinating on Elsa of ‘Frozen’ to Nick Jr. characters in a strip club.”

Full piece by SAPNA MAHESHWARI in the Times here.

On Medium, JAMES BRIDLE expands on the topic, and criticizes the structure of YouTube itself for incentivizing these kinds of videos, many of which have millions of views.

“These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

“While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on.”

Link to Bridle’s piece here.

⤷ Full Article

November 4th, 2017

Untitled

FEED FEEDBACK

Sociologist Zeynep Tufekci engages with Adam Mosseri, who runs the Facebook News Feed

Tufekci: “…Facebook does not ask people what they want, in the moment or any other way. It sets up structures, incentives, metrics & runs with it.”

Mosseri: “We actually ask 10s of thousands of people a day how much they want to see specific stories in the News Feed, in addition to other things.”

Tufekci: “That’s not asking your users, that’s research on your product. Imagine a Facebook whose customers are users—you’d do so much differently. I mean asking all people, in deliberate fashion, with sensible defaults—there are always defaults—even giving them choices they can change…Think of the targeting offered to advertisers—with support to make them more effective—and flip the possibilities, with users as customers. The users are offered very little in comparison. The metrics are mostly momentary and implicit. That’s a recipe to play to impulse.”

The tweets are originally from Zeynep Tufekci in response to Benedict Evans (link), but the conversation is much easier to read in Hamza Shaban’s screenshots here.

See the end of this newsletter for an extended comment from Jay.

  • On looping effects (paywall): “This chapter argues that today's understanding of causal processes in human affairs relies crucially on concepts of ‘human kinds’ which are a product of the modern social sciences, with their concern for classification, quantification, and intervention. Child abuse, homosexuality, teenage pregnancy, and multiple personality are examples of such recently established human kinds. What distinguishes human kinds from ‘natural kinds’, is that they have specific ‘looping effects’. By coming into existence through social scientists' classifications, human kinds change the people thus classified.” Link. ht Jay

THE MESO-LEVEL

Mechanisms and causes between micro and macro

Daniel Little, the philosopher of social science behind Understanding Society, has written numerous posts on the topic. Begin with this one from 2014:

“It is fairly well accepted that there are social mechanisms underlying various patterns of the social world — free-rider problems, communications networks, etc. But the examples that come readily to mind are generally specified at the level of individuals. The new institutionalists, for example, describe numerous social mechanisms that explain social outcomes; but these mechanisms typically have to do with the actions that purposive individuals take within a given set of rules and incentives.

“The question here is whether we can also make sense of the notion of a mechanism that takes place at the social level. Are there meso-level social mechanisms? (As always, it is acknowledged that social stuff depends on the actions of the actors.)”

In the post, Little defines a causal mechanism and a meso-level mechanism, then offers example research.

“…It is possible to identify a raft of social explanations in sociology that represent causal assertions of social mechanisms linking one meso-level condition to another. Here are a few examples:

  • Al Young: decreasing social isolation causes rising inter-group hostility (link)
  • Michael Mann: the presence of paramilitary organizations makes fascist mobilization more likely (link)
  • Robert Sampson: features of neighborhoods influence crime rates (link)
  • Chuck Tilly: the availability of trust networks makes political mobilization more likely (link)
  • Robert Brenner: the divided sovereignty system of French feudalism impeded agricultural modernization (link)
  • Charles Perrow: legislative control of regulatory agencies causes poor enforcement performance (link)"

More of Little’s posts on the topic are here. ht Steve Randy Waldman

⤷ Full Article

September 30th, 2019

Little Mephisto

VARIED NET

The politics of welfare in the 21st century

In his 1990 book, The Three Worlds of Welfare Capitalism (TWWC), sociologist Gøsta Esping-Andersen identified three categories of European welfare regimes: liberal, conservative, and social democratic. In Esping-Andersen's account, these welfare regimes developed according to the sorts of coalitions formed by working people: social democratic regimes are based on associations between agricultural workers and industrial socialist organizations; conservative regimes emerged through an alliance between labor organizations and religious groups; liberal regimes are ones in which strong workers movements never managed to significantly structure bargaining institutions.

The dynamic and historical account of welfare state development which TWWC proposes continues to influence our understanding of how distributional conflicts can shape political institutions. However, Esping-Andersen's categories were based on full employment and high growth—a paradigm that no longer holds. In a lesser known but more recent book, the author attempts to adjust his model to postindustrial labor markets.

From the introduction:

"This book is an attempt to come to grips with the 'new political economy' that is emerging. One premise of my analyses is that 'postindustrial' transformation is institutionally path-dependent. This means that existing institutional arrangements heavily determine national trajectories. More concretely, the divergent kinds of welfare regimes that nations built over the post-war decades have a lasting and overpowering effect on which kind of adaptation strategies can and will be pursued. Hence, we see various kinds of postindustrial societies unfolding before our eyes.

Contemporary debate has been far too focused on the state. The real crisis lies in the interaction between the composite parts that, in unison, form contemporary welfare 'regimes': labour markets, the family, and, as a third partner, the welfare state. What most commentators see as a welfare state crisis, may in reality be a crisis of the broader institutional framework that has come to regulate our political economies. Our common tendency to regard postindustrial society as a largely convergent global process impairs our analytical faculties and our ability to understand the radical shifts in government and power which have taken place in recent decades."

Link to the publisher's page.

  • Esping-Andersen's analysis rests heavily on the Polanyian notions of decommodification and double movement. In a recent book chapter, sociologist Michael Burawoy elaborates on the persisting relevance of these concepts for understanding social movements in market societies. Link.
  • Philip Manow uses the historical framework developed in TWWC to explain the success of communist parties in Southern Europe: "Conflicts between the nation-state and the Catholic church in the mono-denominational countries of Europe’s south rendered a coalition between pious farmers and the anticlerical worker’s movement unthinkable, leading to the further radicalization of the left." Link.
  • "How can the social categories which are commonly called 'middle' class be situated within a conceptual framework built around a polarized concept of class? What does it mean to be in the 'middle' of a 'relation'?" In his 2000 textbook, Class Counts, the late Erik Olin Wright develops a theoretically rich account of class relations and their relevance for understanding historical change. Link.
⤷ Full Article

September 16th, 2019

Live Airborne System

DIVIDED INTEGRATION

Immanuel Wallerstein's contributions to research in the social sciences

Two weeks ago today marked the passing of the great Immanuel Wallerstein. His work has had resounding influence across fields: from literature, to legal theory, education, development studies, and international relations. Among his foremost contributions is the four-volume Modern World-System series, which recounts the transformation of feudalism into global capitalism through the progressive incorporation of new regions into the European capitalist core. Complementing this history was world-systems theory, an analytical approach which challenged the tendency of social science research to identify simplified and direct causal relationships.

Wallerstein argued that purely economic, historical, or political analyses of society exclude more factors than they incorporate, casting doubt on both their internal and external validity. From the introduction to World Systems Analysis:

The phenomena dealt with in these separate boxes are so closely intermeshed that each presumes the other, each affects the other, each is incomprehensible without taking into account the other boxes. The separate boxes of analysis are an obstacle, not an aid, to understanding the world. Structurally, the social reality within which we live has not been the multiple national states of which we are citizens but something larger, which we call a world-system. This world-system has had many institutions—states and the interstate system, productive firms, households, classes, identity groups of all sorts—which form a matrix which permits the system to operate but at the same time stimulates both the conflicts and the contradictions which permeate it.

The world-system is a social creation, with a history, whose origins need to be explained, whose ongoing mechanisms need to be delineated, and whose inevitable terminal crisis needs to be discerned. For this reason, it is important to look anew not only at how the world in which we live works but also at how we have come to think about this world.

Link to the book's first pages.

  • "My intellectual development led me to historicize social movements, not only to better understand how they came to do the things they did, but also in order to better formulate the political options that were truly available in the present." On his website, Wallerstein reflects on the questions and contradictions that informed his life's work. Link.
  • "The Modern World-System is a theoretically ambitious work that deserves to be critically analyzed as such." Theda Skocpol's sympathetic scrutiny of the weaknesses in Wallerstein's major work, from a 1977 issue of the American Journal of Sociology. Link.
  • Wallerstein's account of feudal breakdown, which stressed external factors like increased trade, countered that of historians like Robert Brenner, who focused instead on internal factors like peasant revolts. Robert A. Denemark and Kenneth P. Thomas give an overview of the debate. Link.
⤷ Full Article

September 23rd, 2019

The One

FICTIONAL BARGAIN

The meaning and impact of the Coase Theorem

Recent years have seen a surge in scholarship that critically evaluates the origins and impact of the law and economics movement. Out of the many theoretical bedrocks of the movement, the Coase Theorem is one of the most significant. The theorem stems from Ronald Coase's 1960 paper "The Problem of Social Cost," though the name itself was coined by George Stigler in his 1966 book The Theory of Price. (Click here for a very simple primer on the theorem as it is generally taught.)
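In the spirit of that simple primer, a toy version of the theorem's logic with invented numbers: when bargaining is costless, the parties reach whichever action maximizes joint surplus, so the initial assignment of rights changes who pays whom, but not what gets done.

```python
# Coasean bargaining sketch (invented numbers): a factory's pollution
# harms a neighbor; abatement is cheaper than the harm, so abatement is
# the efficient action under either assignment of rights.

FACTORY_PROFIT = 300  # factory's profit if it operates
HARM = 200            # neighbor's damage if the factory pollutes
ABATEMENT = 100       # cost for the factory to operate without polluting

def bargained_action(factory_has_right_to_pollute):
    # With zero transaction costs, any action that leaves joint surplus
    # on the table gets bargained away; only the side payments depend on
    # who holds the right (hence the unused-looking parameter).
    joint_if_pollute = FACTORY_PROFIT - HARM     # 100
    joint_if_abate = FACTORY_PROFIT - ABATEMENT  # 200
    return "abate" if joint_if_abate > joint_if_pollute else "pollute"

print(bargained_action(True))   # abate
print(bargained_action(False))  # abate
```

If the factory holds the right, the neighbor pays it something between 100 and 200 to abate; if the neighbor holds it, the factory simply abates. Coase's own point, as the interview below suggests, was that the interesting economics begins once the obstacles to such bargains are introduced.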

A 1999 paper by STEVEN MEDEMA—an economist and historian who has written extensively on Coase and his legacy—looks at the role that the Coase theorem has played in the law and economics movement:

In spite of the often heavily ideological overtones of the Coase theorem debate, the theorem is simply a positive proposition, stating that under certain conditions a particular result will follow. Yet, the Coase theorem has been assailed from the left (as conservative dogma) and from the right (as liberal dogma); its moral, philosophical, and political underpinnings have been called into question; its logic, applicability, and empirical content have been both trashed and defended; it has been hailed as offering a new way to conceptualize law and legal culture and attacked as anathema to the traditional common law process. The present essay will attempt to explain how and why the Coase theorem quickly evolved from a debunking fiction to the basis of one of the most successful branches of applied economics in the last part of this century.

Link to the paper.

  • Ronald Coase's seminal paper "The Problem of Social Cost." Link. And an interview with Coase from 1997, in which he says: "I think the success of the Coase Theorem—because it’s discussed all over the place—is an interesting illustration of what’s wrong with economics. If you read 'The Problem of Social Cost,' it occupies perhaps four pages. It’s useful because you can show the type of contracts that would have to be made in order to have an efficient economic system. But then you have to introduce the obstacles to doing it. Then you see how the system actually works." Link.
  • A 1998 paper by Deirdre McCloskey examines the legacy of the theorem: "Something like a dozen people in the world understand that the 'Coase' theorem is not the Coase theorem. One of this select group is Ronald Coase himself, so I suspect we blessed few are right." Link.
  • A post by Steven Medema on VoxEU treats Coase's legacy and the distance between his own views and the school of thought which adopted the theorem bearing his name. Link. Another paper by Medema examines the Coase Theorem on its sixtieth anniversary. Link.
  • Robin Hahnel and Kristen Sheeran provide an "internal critique" of the theorem, arguing that even under optimal conditions—low transaction costs and well-defined property rights—it generates perverse incentives. Link.
  • A 2017 paper by Dina Waked—"Sense and Nonsense of the Economic Analysis of Tort Law"—situates the law and economics school (and its invocation of Coase) alongside earlier and more recent alternatives, including the early institutionalists. Link.
⤷ Full Article

September 9th, 2019

Original & Forgery

MULTIPLY EFFECT

The difficulties of causal reasoning and race

While the thorny ethical questions dogging the development and implementation of algorithmic decision systems touch on all manner of social phenomena, arguably the most widely discussed is that of racial discrimination. The watershed moment for the algorithmic ethics conversation was ProPublica's 2016 article on the COMPAS risk-scoring algorithm, and a huge number of ensuing papers in computer science, law, and related disciplines attempt to grapple with the question of algorithmic fairness by thinking through the role of race and discrimination in decision systems.

In a paper from earlier this year, ISSA KOHLER-HAUSMANN of Yale Law School examines the way that race and racial discrimination are conceived of in law and the social sciences. Challenging the premises of an array of research across disciplines, Kohler-Hausmann argues for both a reassessment of the basis of reasoning about discrimination, and a new approach grounded in a social constructivist view of race.

From the paper:

"This Article argues that animating the most common approaches to detecting discrimination in both law and social science is a model of discrimination that is, well, wrong. I term this model the 'counterfactual causal model' of race discrimination. Discrimination, on this account, is detected by measuring the 'treatment effect of race,' where treatment is conceptualized as manipulating the raced status of otherwise identical units (e.g., a person, a neighborhood, a school). Discrimination is present when an adverse outcome occurs in the world in which a unit is 'treated' by being raced—for example, black—and not in the world in which the otherwise identical unit is 'treated' by being, for example, raced white. The counterfactual model has the allure of precision and the security of seemingly obvious divisions or natural facts.

Currently, many courts, experts, and commentators approach detecting discrimination as an exercise measuring the counterfactual causal effect of race-qua-treatment, looking for complex methods to strip away confounding variables to get at a solid state of race and race alone. But what we are arguing about when we argue about whether or not statistical evidence provides proof of discrimination is precisely what we mean by the concept DISCRIMINATION."

Link to the article. And stay tuned for a forthcoming post on the Phenomenal World by JFI fellow Lily Hu that grapples with these themes.
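The "treatment effect of race" logic the article critiques can be stated in a few lines: in audit-study form, it reduces discrimination to an average outcome gap across matched pairs. The data below are made up, and Kohler-Hausmann's point is that this framing, not the arithmetic, is what's contested.

```python
# Counterfactual-causal framing: pair "otherwise identical" units that
# differ only in raced status, and call the mean outcome gap the
# treatment effect of race. Outcomes here are invented for illustration
# (1 = favorable decision, 0 = adverse decision).
matched_pairs = [
    # (outcome when raced one way, outcome for the matched counterpart)
    (0, 1), (1, 1), (0, 1), (0, 0), (1, 1),
]

treatment_effect = sum(b - a for a, b in matched_pairs) / len(matched_pairs)
print(treatment_effect)  # 0.4
```

The computation presumes that "raced status" can be cleanly manipulated while holding everything else fixed; on a social constructivist view of race, that presumption is exactly what fails.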

  • For an example of the logic Kohler-Hausmann is writing against, see Edmund S. Phelps' 1972 paper "The Statistical Theory of Racism and Sexism." Link.
  • A recent paper deals with the issue of causal reasoning in an epidemiological study: "If causation must be defined by intervention, and interventions on race and the whole of SES are vague or impractical, how is one to frame discussions of causation as they relate to this and other vital issues?" Link.
  • From Kohler-Hausmann's footnotes, two excellent works informing her approach: first, the canonical book Racecraft by Karen Fields and Barbara Fields; second, a 2000 article by Tukufu Zuberi, "Deracializing Social Statistics: Problems in the Quantification of Race." Link to the first, link to the second.
⤷ Full Article