The Phenomenal World

January 6th, 2018


Anonymous Power Game



Nominations from top economists, including selections by Raj Chetty, Sendhil Mullainathan, and Angus Deaton

One favorite from this excellent round-up, selected by Diane Coyle, is by Hulten and Nakamura on metrics (we previously sent her Indigo Prize paper):

Accounting for Growth in the Age of the Internet: The Importance of Output-Saving Technical Change by Charles Hulten and Leonard Nakamura

Main finding: Living standards may be growing faster than GDP growth.
Nominating economist: Diane Coyle, University of Manchester
Specialization: Economic statistics and the digital economy
Why?: “This paper tries to formalize the intuition that there is a growing gap between the standard measure of GDP, capturing economic activity, and true economic welfare and to draw out some of the implications.”

Robert Allen's "Absolute Poverty: When Necessity Displaces Desire" is another metrics-related piece on the list.

Also noteworthy, on the future of work:

Valuing Alternative Work Arrangements by Alexandre Mas and Amanda Pallais

Main finding: The average worker does not value an Uber-like ability to set their own schedule.
Nominating economist: Emily Oster, Brown University
Specialization: Health economics and research methodology
Why? “This paper looks at a question increasingly important in the current labor market: How do people value flexible work arrangements? The authors have an incredibly neat approach, using actual worker hiring to generate causal estimates of how workers value various employment setups.”

Full piece by DAN KOPF here.


December 23rd, 2017

The Year in Review

INCOME SHARE AGREEMENTS: Purdue, BFF, the national conversation

"Long discussed in college policy and financing circles, income share agreements, or ISAs, are poised to become more mainstream." That's from a September Wall Street Journal article. 2017 saw new pilots and the introduction of legislation in Congress, as well as the continued growth of Purdue's Back a Boiler program, which was covered on PBS NewsHour with JFI staff featured. Better Future Forward (incubated by JFI) was founded to originate ISAs and structure ISA pilots. Launched in February 2017 with support from the Arnold Foundation and the Lumina Foundation, BFF has formed partnerships with (and funded students through) Opportunity@Work, College Possible, and the Thurgood Marshall College Fund.

Various research organizations are tracking ISAs closely. From the American Academy of Arts and Sciences, a 112-page report with "practical and actionable recommendations to improve the undergraduate experience": "What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve." Link to the full report. See page 72 for mention of ISAs.

From the Miller Center at the University of Virginia (2016): "ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preference...

December 16th, 2017

Bruised Grid



Two articles illustrate the state of thought on moderating user-generated content

Ben Thompson of Stratechery rounds up recent news on content moderation on Twitter, Facebook, and YouTube, and makes a recommendation:

“Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

“That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.”

Full blog post here.

The Social Capital newsletter responds:

“… If we want to really make progress towards solving these issues we need to recognize there’s not one single type of bad behavior that the internet has empowered, but rather a few dimensions of them.”

The piece goes on to describe four types of bad content. Link.

Michael comments: The discussion of content moderation (and digital curation more broadly) conspicuously ignores the possibility of algorithmic methods for analyzing and disseminating (ethically or evidentiarily) valid information. Thompson and Social Capital default to traditional and cumbersome forms of outright censorship, rather than methods to “push” better content.

We'll be sharing more thoughts on this research area in future letters.


December 9th, 2017

Un Coup de dés



A new report argues that quality, not access, is the pivotal challenge for colleges and universities

From the American Academy of Arts and Sciences, a 112-page report with "practical and actionable recommendations to improve the undergraduate experience":

"Progress toward universal education has expanded most recently to colleges and universities. Today, almost 90 percent of high school graduates can expect to enroll in an undergraduate institution at some point during young adulthood and they are joined by millions of adults seeking to improve their lives. What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve."

Link to the full report. Co-authors include Gail Mellow, Sherry Lansing, Mitch Daniels, and Shirley Tilghman. ht Will, who highlights a few of the report's recommendations that stand out:

  • From page 40: "Both public and private colleges and universities as well as state policy-makers [should] work collaboratively to align learning programs and expectations across institutions and sectors, including implementing a transferable general education core, defined transfer pathway maps within popular disciplines, and transfer-focused advising systems that help students anticipate what it will take for them to transfer without losing momentum in their chosen field."
  • From page 65: "Many students, whether coming straight out of high school or adults returning later to college, face multiple social and personal challenges that can range from homelessness and food insecurity to childcare, psychological challenges, and even imprisonment. The best solutions can often emerge from building cooperation between a college and relevant social support agencies."
  • From page 72: "Experiment with and carefully assess alternatives for students to manage the financing of their college education. For example, income-share agreements allow college students to borrow from colleges or investors, which then receive a percentage of the student’s after-graduation income."
  • On a related note, see this 2016 paper from the Miller Center at the University of Virginia: "Although interest in the ISA as a concept has ebbed and flowed since Milton Friedman first proposed it in the 1950s, today it is experiencing a renaissance of sorts as new private sector partners and institutions look to make the ISA a feasible option for students. ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preferences and state needs for economic skill sets. The different ways ISAs can be structured make them highly suitable as potential solutions for many states’ education system financing problems." Link.
  • Meanwhile, Congress is working on the reauthorization of the Higher Education Act: "Much of the proposal that House Republicans released last week is controversial and likely won’t make it into the final law, but the plan provides an indication of Congressional Republicans’ priorities for the nation’s higher education system. Those priorities include limiting the federal government’s role in regulating colleges, capping graduate student borrowing, making it easier for schools to limit undergraduate borrowing — and overhauling the student loan repayment system. Many of those moves have the potential to create a larger role for private industry." Link.
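The ISA mechanism described in the report's recommendation above can be sketched numerically. Every term below (funding amount, income share, term length, floor, cap) is an illustrative assumption, not drawn from Back a Boiler or any real program:

```python
# Hypothetical ISA terms (illustrative only, not any real program's):
# a student receives funding in exchange for a fixed share of income
# over a fixed term, with an income floor and a total-payment cap.
funding = 10_000          # amount the student receives
income_share = 0.04       # 4% of annual income
term_years = 8
income_floor = 20_000     # no payment owed in years below this income
payment_cap = 2.0 * funding

def isa_payments(incomes):
    """Total paid over the term, given projected annual incomes."""
    total = 0.0
    for income in incomes[:term_years]:
        if income >= income_floor and total < payment_cap:
            total = min(total + income_share * income, payment_cap)
    return total

# A graduate earning $50k, growing 3% a year:
incomes = [50_000 * 1.03 ** t for t in range(term_years)]
print(f"total repaid: ${isa_payments(incomes):,.0f}")
```

The floor and cap are what distinguish the structure from a loan: low earners pay nothing in lean years, while high earners' obligations are bounded.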

December 2nd, 2017




The gray box of XAI

A recent longform piece in the New York Times examines the problem of explaining artificial intelligence. The stakes are high because of the European Union’s controversial and unclear “right-to-explanation” law, which takes effect in May 2018.

“Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.”

Full article by CLIFF KUANG here. This page provides a short overview of DARPA's XAI (Explainable Artificial Intelligence) program.

An interdisciplinary group addresses the problem:

"Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard."

Full article by FINALE DOSHI-VELEZ et al. here. ht Margarita. For the layperson, the most interesting part of the article may be its general overview of societal norms around explanation and explanation in the law.

Michael comments: Human cognitive systems have generated similar questions in vastly different contexts. The problem of chick-sexing (see Part 3) gave rise to a mini-literature within epistemology.

From Michael S. Moore’s book Law and Psychiatry: Rethinking the Relationship: “A full explanation in terms of reasons for action requires two premises: the major premise, specifying the agent’s desires (goals, objectives, moral beliefs, purposes, aims, wants, etc.), and the minor premise, specifying the agent’s factual beliefs about the situation he is in and his ability to achieve, through some particular action, the object of his desires.” Link. ht Margarita

  • A Medium post with an illustrated summary of some XAI techniques. Link.
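As a concrete illustration of one common XAI technique (not taken from the articles above), a LIME-style local surrogate explains a single prediction by sampling perturbations around one input, querying the opaque model, and fitting a proximity-weighted linear model whose coefficients act as local feature attributions. The black-box function and all parameters here are hypothetical stand-ins:

```python
import numpy as np

# Stand-in "black box": an arbitrary opaque scoring function.
# (Hypothetical; in practice this would be a trained model's predict.)
def black_box(X):
    z = 2 * X[:, 0] - 3 * X[:, 1] + X[:, 0] * X[:, 1]
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
x0 = np.array([1.0, 0.5])                # the instance to explain

# Sample perturbations near x0 and query the black box.
X = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(X)

# Weight samples by proximity to x0, then fit a weighted linear model.
weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.02)
A = np.hstack([np.ones((len(X), 1)), X])   # add intercept column
W = A.T * weights                          # A' diag(w), via broadcasting
coef = np.linalg.solve(W @ A, W @ y)       # weighted least squares

print("local attributions:", coef[1:])     # signs show each feature's pull
```

The surrogate doesn't open the black box; it approximates the model's behavior in a small neighborhood, which is why such explanations are local rather than global.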

November 18th, 2017

Duchamp Wanted



How to build justice into algorithmic actuarial tools

Key notions of fairness contradict each other—something of an Arrow’s Theorem for criminal justice applications of machine learning.

"Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them."

Full paper from JON KLEINBERG, SENDHIL MULLAINATHAN and MANISH RAGHAVAN here. ht research fellow Sara, who recently presented on bias in humans, courts, and machine learning algorithms, and who was the source for all the papers in this section.
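The tension the paper formalizes can be seen in a deliberately simple toy example (all data here is synthetic and illustrative): a score that is perfectly calibrated within each group still yields sharply different false positive rates when the groups' base rates differ.

```python
import numpy as np

# Synthetic toy data: two groups whose base rates of the predicted
# outcome differ (50% vs 20%). All numbers are illustrative.
rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)
base_rate = np.where(group == 0, 0.5, 0.2)
y = rng.random(n) < base_rate            # true outcomes

# A perfectly calibrated score: each person gets their group's base rate.
score = base_rate
pred = score >= 0.35                     # one shared decision threshold

# Calibration holds: among people scored s, about a fraction s have y=1.
for s in (0.5, 0.2):
    print(f"score {s}: observed rate {y[score == s].mean():.3f}")

# False-positive-rate balance fails: FPRs diverge sharply across groups.
def fpr(mask):
    negatives = (~y) & mask
    return (pred & negatives).sum() / negatives.sum()

print(f"FPR, group 0: {fpr(group == 0):.3f}")
print(f"FPR, group 1: {fpr(group == 1):.3f}")
```

The toy classifier is extreme on purpose, but the direction of the effect is exactly what the theorem generalizes: with unequal base rates, calibration and error-rate balance pull against each other.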

In a Twitter thread, ARVIND NARAYANAN describes the issue in more casual terms.

"Today in Fairness in Machine Learning class: a comparison of 21 (!) definitions of bias and fairness [...] In CS we're used to the idea that to make progress on a research problem as a community, we should first all agree on a definition. So 21 definitions feels like a sign of failure. Perhaps most of them are trivial variants? Surely there's one that's 'better' than the rest? The answer is no! Each defn (stat. parity, FPR balance, contextual fairness in RL...) captures something about our fairness intuitions."

Link to Narayanan’s thread.

Jay comments: Kleinberg et al. describe their result as choosing between conceptions of fairness. It’s not obvious, though, that this is the correct description. The criteria discussed (calibration and balance) aren’t really conceptions of fairness; rather, they’re (putative) tests of fairness. Particular questions about these tests aside, we might have a broader worry: if fairness is not an extensional property that depends upon, and only upon, the eventual judgments rendered by a predictive process, exclusive of the procedures that led to those judgments, then no extensional test will capture fairness, even if this notion is entirely unambiguous and determinate. It’s worth considering Nozick’s objection to “pattern theories” of justice for comparison, and (procedural) due process requirements in US law.


November 11th, 2017

The Hülsenbeck Children



Recommender systems power YouTube's controversial kids' videos

Familiar cartoon characters are placed in bizarre scenarios, sometimes by human content creators, sometimes by automated systems, for the purpose of attracting views and ad money. First, from the New York Times:

“But the app [YouTube Kids] contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms.

“In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes. Many have taken to Facebook to warn others, and share video screenshots showing moments ranging from a Claymation Spider-Man urinating on Elsa of ‘Frozen’ to Nick Jr. characters in a strip club.”

Full piece by SAPNA MAHESHWARI in the Times here.

On Medium, JAMES BRIDLE expands on the topic, and criticizes the structure of YouTube itself for incentivizing these kinds of videos, many of which have millions of views.

“These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

“While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on.”

Link to Bridle’s piece here.


November 4th, 2017




Sociologist Zeynep Tufekci engages with Adam Mosseri, who runs the Facebook News Feed

Tufekci: “…Facebook does not ask people what they want, in the moment or any other way. It sets up structures, incentives, metrics & runs with it.”

Mosseri: “We actually ask 10s of thousands of people a day how much they want to see specific stories in the News Feed, in addition to other things.”

Tufekci: “That’s not asking your users, that’s research on your product. Imagine a Facebook whose customers are users—you’d do so much differently. I mean asking all people, in deliberate fashion, with sensible defaults—there are always defaults—even giving them choices they can change…Think of the targeting offered to advertisers—with support to make them more effective—and flip the possibilities, with users as customers. The users are offered very little in comparison. The metrics are mostly momentary and implicit. That’s a recipe to play to impulse.”

The tweets are originally from Zeynep Tufekci in response to Benedict Evans (link), but the conversation is much easier to read in Hamza Shaban’s screenshots here.

See the end of this newsletter for an extended comment from Jay.

  • On looping effects (paywall): “This chapter argues that today's understanding of causal processes in human affairs relies crucially on concepts of ‘human kinds’ which are a product of the modern social sciences, with their concern for classification, quantification, and intervention. Child abuse, homosexuality, teenage pregnancy, and multiple personality are examples of such recently established human kinds. What distinguishes human kinds from ‘natural kinds’, is that they have specific ‘looping effects’. By coming into existence through social scientists' classifications, human kinds change the people thus classified.” Link. ht Jay


Mechanisms and causes between micro and macro

Daniel Little, the philosopher of social science behind Understanding Society, has written numerous posts on the topic. Begin with this one from 2014:

“It is fairly well accepted that there are social mechanisms underlying various patterns of the social world — free-rider problems, communications networks, etc. But the examples that come readily to mind are generally specified at the level of individuals. The new institutionalists, for example, describe numerous social mechanisms that explain social outcomes; but these mechanisms typically have to do with the actions that purposive individuals take within a given set of rules and incentives.

“The question here is whether we can also make sense of the notion of a mechanism that takes place at the social level. Are there meso-level social mechanisms? (As always, it is acknowledged that social stuff depends on the actions of the actors.)”

In the post, Little defines a causal mechanism and a meso-level mechanism, then offers example research.

“…It is possible to identify a raft of social explanations in sociology that represent causal assertions of social mechanisms linking one meso-level condition to another. Here are a few examples:

  • Al Young: decreasing social isolation causes rising inter-group hostility (link)
  • Michael Mann: the presence of paramilitary organizations makes fascist mobilization more likely (link)
  • Robert Sampson: features of neighborhoods influence crime rates (link)
  • Chuck Tilly: the availability of trust networks makes political mobilization more likely (link)
  • Robert Brenner: the divided sovereignty system of French feudalism impeded agricultural modernization (link)
  • Charles Perrow: legislative control of regulatory agencies causes poor enforcement performance (link)”

More of Little’s posts on the topic are here. ht Steve Randy Waldman
