Un Coup de dés

THE FUTURE OF UNDERGRADUATE EDUCATION

A new report argues that quality, not access, is the pivotal challenge for colleges and universities

From the American Academy of Arts and Sciences, a 112-page report with “practical and actionable recommendations to improve the undergraduate experience”:

“Progress toward universal education has expanded most recently to colleges and universities. Today, almost 90 percent of high school graduates can expect to enroll in an undergraduate institution at some point during young adulthood and they are joined by millions of adults seeking to improve their lives. What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve.”

Link to the full report. Co-authors include Gail Mellow, Sherry Lansing, Mitch Daniels, and Shirley Tilghman. ht Will, who highlights a few of the report’s recommendations that stand out:

  • From page 40: “Both public and private colleges and universities as well as state policy-makers [should] work collaboratively to align learning programs and expectations across institutions and sectors, including implementing a transferable general education core, defined transfer pathway maps within popular disciplines, and transfer-focused advising systems that help students anticipate what it will take for them to transfer without losing momentum in their chosen field.”
  • From page 65: “Many students, whether coming straight out of high school or adults returning later to college, face multiple social and personal challenges that can range from homelessness and food insecurity to childcare, psychological challenges, and even imprisonment. The best solutions can often emerge from building cooperation between a college and relevant social support agencies.”
  • From page 72: “Experiment with and carefully assess alternatives for students to manage the financing of their college education. For example, income-share agreements allow college students to borrow from colleges or investors, which then receive a percentage of the student’s after-graduation income.”
  • On a related note, see this 2016 paper from the Miller Center at the University of Virginia: “Although interest in the ISA as a concept has ebbed and flowed since Milton Friedman first proposed it in the 1950s, today it is experiencing a renaissance of sorts as new private sector partners and institutions look to make the ISA a feasible option for students. ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preferences and state needs for economic skill sets. The different ways ISAs can be structured make them highly suitable as potential solutions for many states’ education system financing problems.” Link. (A toy sketch of the repayment arithmetic follows this list.)
  • Meanwhile, Congress is working on the reauthorization of the Higher Education Act: “Much of the proposal that House Republicans released last week is controversial and likely won’t make it into the final law, but the plan provides an indication of Congressional Republicans’ priorities for the nation’s higher education system. Those priorities include limiting the federal government’s role in regulating colleges, capping graduate student borrowing, making it easier for schools to limit undergraduate borrowing — and overhauling the student loan repayment system. Many of those moves have the potential to create a larger role for private industry.” Link.
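
For a rough sense of how the income-share mechanism described in the two ISA items above works out in dollar terms, here is a toy calculation in Python. Every figure (a 6 percent income share over ten years, a hypothetical salary path) is an illustrative assumption, not a term from the Academy report or the Miller Center paper.

```python
# Toy income-share agreement (ISA) arithmetic: a funder covers costs up front
# and receives a fixed share of the graduate's income for a fixed term.
# Every number below is a hypothetical assumption, for illustration only.

def isa_payments(starting_salary, annual_growth, income_share, term_years):
    """Year-by-year ISA payments over the repayment term."""
    payments = []
    salary = starting_salary
    for _ in range(term_years):
        payments.append(salary * income_share)
        salary *= 1 + annual_growth
    return payments

payments = isa_payments(
    starting_salary=45_000,   # hypothetical first-year income
    annual_growth=0.03,       # hypothetical 3% annual raises
    income_share=0.06,        # hypothetical 6% income share
    term_years=10,            # hypothetical 10-year term
)
print(f"First-year payment: ${payments[0]:,.0f}")
print(f"Total repaid over the term: ${sum(payments):,.0f}")
# Unlike a fixed loan payment, each installment scales with income, so the
# obligation automatically shrinks in low-earning years.
```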

EXPLANATION IS HARD, EXPLANATION IS SUBTLE

Chasing Moore’s Law: Director of Research Jay Hodges gives extended comment on the burgeoning field of XAI

The rapid development of machine learning and AI raises a sci-fi specter going back at least to Frankenstein: well-intentioned geniuses use technology to create something they can neither understand nor control, and disaster ensues. So it’s not surprising that there’s an emerging sense of urgency around making AI understandable. If we can’t explain how the systems work, the thought goes, we can’t know that we’re not designing them badly, that we’re not leading them astray through hidden assumptions in our data, that we’re not training them to reflect our prejudices. And, to be sure, there are striking examples of both practical and moral failures of AI systems in Aaron Bornstein’s Nautilus article.

I think, however, that there are reasons for skepticism around the explainable AI (“XAI”) project as it stands. In particular, explanation is an astonishingly difficult concept to define. You could fill a book with attempts to define it and their refutations (people have: Wesley Salmon’s Four Decades of Scientific Explanation is a classic). It’s extraordinarily unlikely that a definition that can be operationalized algorithmically will be discovered any time soon. But this pessimism can be tempered with a higher-order skepticism: explanation is almost certainly a red herring, and not what we need to allay our fears.

First: explanation is hard. Although the notion of explanation, as an object of study, is relatively new to computer science and technology studies, it’s old hat to analytic philosophy (“analytic” philosophers are the folks who brought us formal logic, set theory, and computability theory, as opposed to “continental” philosophers like Derrida and Žižek). There’s a literature in logic, philosophy of science, philosophy of mathematics, ethics, and philosophy of action spanning more than 75 years that treats explanation as its primary focus (for some overviews see the Stanford Encyclopedia of Philosophy’s articles on scientific explanation, reasons for action, and explanation in mathematics). Perhaps the most salient lesson of this literature is that the obvious definitions just don’t work. They all meet with recalcitrant counterexamples.

Writers in XAI unfamiliar with this literature are now recreating the earliest attempts at definitions. So the Bonsai piece seems to think of explanation as a matter of “discriminat[ing] between [a] particular classification and any other classification that could have been derived” — what philosophers call counterfactual dependence. The Harvard interdisciplinary group, meanwhile, endorses both the counterfactual dependence notion (“Would changing a certain factor have changed the decision?”) and a causal notion (“what factor caused a difference in outcomes?”).
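
To make the counterfactual-dependence reading concrete, here is a minimal sketch of what such an “explanation” amounts to for a classifier: a feature explains the decision just in case nudging it would have changed the output. The model, synthetic data, and feature names below are invented for illustration and are not drawn from the Bonsai or Harvard pieces.

```python
# A minimal sketch of the counterfactual-dependence reading of "explanation":
# a feature "explains" a classification if changing it would have changed
# the decision. The model, data, and feature names are toy assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt", "late_payments"]   # hypothetical features

# Synthetic training data: approval depends mostly on income minus debt.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual_report(x, delta=1.0):
    """For each feature, check whether shifting it by +/- delta flips the decision."""
    base = model.predict([x])[0]
    for i, name in enumerate(feature_names):
        for sign in (+1, -1):
            x_cf = x.copy()
            x_cf[i] += sign * delta
            if model.predict([x_cf])[0] != base:
                print(f"Changing {name} by {sign * delta:+.1f} would flip the decision.")
                break

counterfactual_report(np.array([0.2, 0.3, 1.5]))
# Whether this sort of dependence is necessary or sufficient for explanation
# is exactly what the counterexamples below contest.
```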

But neither of these characterizations correctly captures the notion of explanation. Take the causal notion: to explain an event is to describe its causes. There are all sorts of explanations that don’t meet this definition. Think of someone defending against libel by saying: “What I said isn’t libel, because it’s true, and libelous claims are false by definition.” The truth of the claim, together with the definition of ‘libel,’ explains why the claim isn’t libel, but neither causes anything. The relationship here is constitutive, rather than causal. Likewise, there are explanations of mathematical theorems, explanations from equilibrium states, and statistical explanations that are non-causal.

Or take counterfactual dependence: the event being explained wouldn’t have happened without the explaining factor. This is both too narrow and too broad to work as a definition. Too narrow, since it misses out on the above explanations; it’s simply not true that a non-libelous statement would have been libelous had our definition of ‘libel’ been different, any more than it’s true that horses wouldn’t have had tails if our definition of ‘tail’ had been different (as Steve Yablo puts it, “how people talk doesn’t change the anatomy of horses”). Too broad, since it fails to rule out bad explanations that cite minor or irrelevant factors: say I’m driving drunk and crash; although it’s true that I wouldn’t have crashed if I’d lost my keys, failing to lose my keys doesn’t explain the crash. My drunkenness does!

My point is not to trash these authors; these sorts of well-motivated (but decidedly bad) definitions of explanation are exactly what we’d expect smart people to come up with when they’re just starting off on the project of describing explanation formally. And of course, we can try all sorts of moves, by modifying the definitions slightly, or narrowing their scope, or distinguishing between different kinds of explanation; in fact, this is exactly the game that plays out in the literature. Instead, the point is that, at least to the extent that successful development of XAI depends upon a coherent picture of explanation, those working in the field would do better by learning from the work that’s gone before, rather than trying to re-discover everything from scratch. It’s late in the game, and there are substantially more sophisticated and defensible pictures of explanation around these days (Michael Strevens, Elliott Sober, Philip Kitcher, Bas van Fraassen, Brad Skow, and Robert Batterman are some of the key figures).

Unfortunately, none of the current contenders for a good theory of explanation provides a straightforward way of producing or testing explanations in the XAI context. Fortunately, I don’t think XAI needs to implement a correct theory of explanation, because I don’t think explanation is what AI needs in order to address the moral and practical concerns that motivate the discussion. Saying why will require a bit more on explanations.

Explanations happen on multiple levels. There can be, and there almost always are, multiple concurrent good explanations for the same event. Why did Sally kick Sam? Think of the directions we can go with an explanation. Because he insulted her. Because she was angry at him. Because she wanted to cause Sam pain and predicted that kicking him would effectively do so. Because certain regions of her brain activated in certain ways, signaling her leg to move, causing her foot to impact Sam. Because her neurons fired in a particular way, causing her muscle fibers to contract. Because a vast, complex system of particles underwent a cascade of state changes that eventually ended in one of unfathomably many possible realizations of a Sally-kicks-Sam state.

We can explain the event at a micro-level, involving fundamental particles, we can explain it at the middle level of her cellular and intercellular processes, we can explain it in terms of her brain state, we can explain it in terms of her cognitive state, we can explain it in terms of her motivating and normative reasons. All these, and many others, are possible and (if, for some, only in extreme contexts) potentially appropriate responses to the why-question. The correctness of one doesn’t preclude the correctness of any of the others.

This seems to me a direct analogue of the question that XAI is trying to address. It’s not that we can’t explain how neural networks reach their conclusions. Of course we can – and on multiple levels, at that! We can explain at the level of hardware transitions (including everything from quantum interactions to high-level circuit logic behavior), at the level of the underlying code, at the level of the particular node transformations and value-passing, at the level of the abstract structures implemented by a neural network design, and more.
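
As a toy case of these levels, consider a two-layer network with hand-picked weights (an invented example, not anyone’s deployed system). We can “explain” its output by dumping every node-level value it passes along, or by a one-sentence description of the abstract function the weights implement; both explanations are available, and they sit at very different levels.

```python
# Two "levels of explanation" for the same tiny network, with hand-set weights
# (an invented example, not anyone's deployed system).
import numpy as np

# A 2-2-1 ReLU network whose weights happen to implement XOR.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def predict(x, verbose=False):
    pre = x @ W1 + b1
    hidden = np.maximum(pre, 0.0)   # ReLU
    out = hidden @ W2
    if verbose:
        # Node-level "explanation": every value passed through the network.
        print(f"input={x}, pre-activations={pre}, hidden={hidden}, output={out}")
    return out

for x in [np.array([0.0, 0.0]), np.array([0.0, 1.0]),
          np.array([1.0, 0.0]), np.array([1.0, 1.0])]:
    predict(x, verbose=True)

# Abstract-level "explanation": the second hidden unit fires only when both
# inputs are on, and its negative weight then cancels the first unit, so the
# network computes XOR. Both descriptions are true answers to the
# why-question; they simply sit at different levels.
```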

So without more clarity about what we’re seeking from explanations, it’s not at all obvious what level we’re hoping XAI explanations will sit at. It’s also not obvious what sorts of analogies with the explanations of human actions are appropriate here. The closest (in terms both of the comfort we’re seeking from such explanations and of the language in which writers seem to discuss them) would seem to be the high-level reasons for action. But there’s a pretty big problem with that: an algorithm doesn’t have reasons in the sense that Sally’s anger is a reason for her kicking Sam. At least in AI’s extant forms, it’s pure metaphor to think about such a system’s beliefs and desires, much less its commitments, motivations, and justifications.

Is there a middle-ground alternative that is appropriate to algorithms? The notion of interpretability found in some XAI writing points to one: we want to be able to articulate in clear, comprehensible terms what features are being used in what ways by the algorithm. One way to cash this out is in terms of rational reconstruction – trying to imagine a fictional rational agent making the decision, and tracking the steps and factors used by that agent as rough parallels to those of the algorithm. This is akin to the reasonable person test used in some legal contexts: rather than try to “get inside the head” of a subject, we simply imagine whether a reasonable person could or would have done the same in the same context.

Yet it’s still not clear how much we can demand along these lines. Many authors (including the Harvard group) demand that the process be determinate and consistent across contexts. Assume that it is. If an algorithm lends itself to rational reconstruction, it seems that the features to which the algorithm responds must be to some extent recognizable to humans, and the decision process by which the algorithm uses them to classify must be to some extent expressible to humans. But this presents a dichotomy. If the features perfectly reflect human-recognizable features, and the process is easily describable, the algorithm looks very much like a decision tree or similar simple model. For most of the tasks in question, these perform poorly. On the other hand, if the interpretations are themselves only vague or rough analogies, then it’s not clear how reliable such explanations are: our rational reconstructions seem more creation than recreation. There’s a trade-off between how well the algorithms work and how well they can be explained. The real question, it seems, is how to balance this trade-off.
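
One way to picture rational reconstruction, and the trade-off just described, is a global surrogate model: train an accurate but opaque model, then fit a small, human-readable model to imitate its decisions and measure how much accuracy and fidelity the readable version gives up. The sketch below is a generic illustration on synthetic data, using standard scikit-learn components; it is not a method proposed by any of the authors discussed.

```python
# A sketch of "rational reconstruction" as a global surrogate model: an opaque
# model makes the decisions, and a shallow decision tree is fit to imitate it.
# The dataset and model choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The accurate-but-opaque model.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X_train, y_train)

# The "rational reconstruction": a shallow tree trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

bb_pred = black_box.predict(X_test)
sg_pred = surrogate.predict(X_test)
print(f"Black-box accuracy:                  {accuracy_score(y_test, bb_pred):.3f}")
print(f"Surrogate accuracy:                  {accuracy_score(y_test, sg_pred):.3f}")
print(f"Fidelity (agreement with black box): {accuracy_score(bb_pred, sg_pred):.3f}")
print(export_text(surrogate))   # the human-readable "reconstruction"
# The readable tree typically gives up some accuracy and some fidelity; that is
# the trade-off described above. How faithful its story is to what the opaque
# model "really" does is exactly the open question.
```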

  • A critique of the “right to explanation” in Europe’s General Data Protection Regulation: “As the history of industries like finance and credit shows, rights to transparency do not necessarily secure substantive justice or effective remedies. We are in danger of creating a ‘meaningless transparency’ paradigm to match the already well-known ‘meaningless consent’ trope.” Link.
  • “When Malioutov presented his accurate but inscrutable neural network model to his own corporate client, he also offered them an alternative, rule-based model whose workings he could communicate in simple terms. This second, interpretable, model was less accurate than the first, but the client decided to use it anyway—despite being a mathematically sophisticated insurance company for which every percentage point of accuracy mattered. ‘They could relate to [it] more,’ Malioutov says. ‘They really value intuition highly.’” Link. ht Sara

+++

  • From the introduction to The Failed Welfare Revolution: America’s Struggle over Guaranteed Income Policy: “…The main obstacle to [guaranteed annual income] legislation was the cultural distinction that Americans draw between different categories of poor people. Put most simply, Americans have long considered some types of people, based on their perceived adherence to the work ethic, to be more worthy of government assistance than others.” More about the book here. ht Will
  • In a new report, McKinsey looks at the impact of automation on jobs. Link. Marc comments: McKinsey works, in this report, from the same set of (tenuous) assumptions as in their earlier work. However, they do start to account for automatable hours (rather than jobs), which is why they’re getting less extreme results.
  • Andrew Gelman on 80% power: “But what I’m saying right here is that, even knowing nothing about any replication crisis, without any real-world experience or cynicism or sociology or documentation or whatever you want to call it . . . it just comes down to the math. With 80% power, we’d expect to see tons and tons of p-values like 0.0005, 0.0001, 0.00005, etc. This would just be happening all the time. But it doesn’t.” Link. ht Sidhya (A quick simulation of this point appears after the list.)
  • A study on “twiplomacy” (Twitter diplomacy). Link. ht Sidhya
  • The ultimate DIY: a DIY pancreas. “To make and distribute it would violate federal regulations, and to become a company would mean dealing with those regulations. But there is no rule about launching a blueprint on the Internet. ‘So that’s what we did,’ Lewis said, ‘and that’s why we called it Open APS, which stands for open-sourced pancreas system.’” Link.
  • Oliver Morton tweets in response to alarm about solar geoengineering: “…Geoengineering is not yet a technology; it is a poorly defined conceptual space where a technology might be—a ‘technological imaginary,’ as social scientists sometimes say. Definitive statements about it are thus unwarranted. It can’t be averred to be unproblematically good, or useful. It could definitely be developed in a dangerous way. But there is no ‘it’ that can be essentially dangerous.” Link.
  • Upcoming proposals for carbon taxes/cap and trade. Link.
  • The environmental cost of the bitcoin surge: “Today, each bitcoin transaction requires the same amount of energy used to power nine homes in the U.S. for one day.” Link.
  • An analysis of 21 studies on unconditional cash transfers, with a focus on health. Link.
  • “Unfortunately, one of the things the [Library of Congress] learned was that the Twitter data overwhelmed the technical resources and capacities of the institution. By 2013, the library had to admit that a single search of just the Twitter data from 2006 to 2010 could take 24 hours. Four years later, the archive still is not available to researchers. Across the board, the reality began to sink in that these proprietary services hold volumes of data that no public institution can process. And that’s just the data itself.” Link.
  • Thinking through types of media coverage of fake news, on the Committee for the Anthropology of Science, Technology & Computing blog. Link.
  • Open borders in the news: “Kenyan President Uhuru Kenyatta announced during his inauguration last week that the East African commercial hub will now give visas on arrival to all Africans. That follows similar measures by nations including Benin and Rwanda. ‘The freer we are to travel and live with one another, the more integrated and appreciative of our diversity we will become,’ Kenyatta said.” Link.
  • For more on open borders, an Economist overview from the summer is available here.
  • “If the connection sounds fantastical it’s because it is, even to mathematicians. And for that reason, Kim long kept it to himself. ‘I was hiding it because for many years I was somewhat embarrassed by the physics connection,’ he said. ‘Number theorists are a pretty tough-minded group of people, and influences from physics sometimes make them more skeptical of the mathematics.’” Link. ht Margarita
  • From a paper titled “Parochial trust and cooperation across 17 societies”: “In a time of increasing parochialism in both domestic and international relations, our findings affirm us of the danger of the strong human universal toward parochial altruism. Yet, our findings suggest that in all societies, there exist people whose cooperation transcends group boundaries and provides a solution to combating parochialism: reputation-based indirect reciprocity.” Link.
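
To illustrate the arithmetic behind Gelman’s 80 percent power remark above, here is a quick simulation sketch. It assumes a simple two-sided z-test at alpha = 0.05, where 80 percent power corresponds to a mean test statistic of roughly 2.8 (the 1.96 critical value plus 0.84, the 80th percentile of the standard normal).

```python
# A quick simulation of Gelman's point: if studies really had 80% power at
# alpha = 0.05, very small p-values should be routine.
# Assumes a two-sided z-test; the mean of 2.8 comes from 1.96 + 0.84.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 100_000
z = rng.normal(loc=2.8, scale=1.0, size=n_studies)   # test statistics under 80% power
p = 2 * stats.norm.sf(np.abs(z))                      # two-sided p-values

print(f"Share significant at 0.05 (the power): {np.mean(p < 0.05):.2f}")
print(f"Share of p-values below 0.001:         {np.mean(p < 0.001):.2f}")
print(f"Median p-value:                        {np.median(p):.4f}")
# With genuine 80% power, roughly a third of p-values land below 0.001,
# which is why the observed scarcity of such tiny p-values is telling.
```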

Each week we highlight research from a graduate student, postdoc, or early-career professor. Send us recommendations: editorial@jainfamilyinstitute.org

Subscribe to Phenomenal World Sources, a weekly digest of recommended readings across the social sciences. See the full Sources archive.