# Machine Learning

## The Politics of Machine Learning, pt. II

The uses of algorithms discussed in the first part of this article vary widely: from hiring decisions and bail assignment to political campaigns and military intelligence.

Across all these applications of machine learning methods, there is a common thread: data on individuals is used to treat different individuals differently. In the past, broadly speaking, such commercial and government activities targeted everyone in a given population more or less similarly—the same advertisements, the same prices, the same political slogans. Now, more and more, everyone gets personalized advertisements, personalized prices, and personalized political messages. New inequalities are created and new fragmentations of discourse are introduced.

Is that a problem? Well, it depends. I will discuss two types of concerns. The first type, relevant in particular to news and political messaging, is that the differentiation of messages is by itself a source of problems.

## The Politics of Machine Learning, pt. I

Terminology like "machine learning," "artificial intelligence," "deep learning," and "neural nets" is pervasive: businesses, universities, intelligence agencies, and political parties are all anxious to maintain an edge in the use of these technologies. Statisticians might be forgiven for thinking that this hype simply reflects the marketing speak of Silicon Valley entrepreneurs vying for venture capital. All these fancy new terms are just describing something statisticians have been doing for at least two centuries.

But recent years have indeed seen impressive new achievements for various prediction problems, which are finding applications in ever more consequential aspects of society: advertising, incarceration, insurance, and war are all increasingly defined by the capacity for statistical prediction. And there is a crucial thread that ties these widely disparate applications of machine learning together: the use of data on individuals to treat different individuals differently. In this two-part post, Max Kasy surveys the politics of the machine learning landscape.

## GAP PROGRESSION

### New life in the debates over poverty measurement

In recent weeks, a familiar debate over how we understand the global poverty rate across time reappeared in mainstream op-ed pages. Sparked initially by Bill Gates tweeting out an infographic produced by Our World in Data—which visualizes massive decreases (94% to 10% of people) in global poverty over the past two hundred years—the notable discussants have been LSE anthropologist JASON HICKEL and Our World in Data researchers JOE HASELL and MAX ROSER.

Hickel published a polemical Guardian op-ed criticizing the publication of this chart, which, he argued, misrepresents the history it claims to communicate and relies on contestable and imprecise data sources to bolster its universal progress narrative, taking "the violence of colonisation and repackaging it as a happy story of progress." The responses were numerous.

Among them, a post by Hasell and Roser provided detailed descriptions of the methods and data behind their work to answer the following: "How do we know that the vast majority of the world population lived in extreme poverty just two centuries ago as this chart indicates? And how do we know that this account of falling global extreme poverty is in fact true?"

In addition to methodological arguments regarding data sources and the poverty line, Hickel's argument emphasizes the gap between poverty and the capacity to eliminate it:

"What matters, rather, is the extent of global poverty vis-à-vis our capacity to end it. As I have pointed out before, our capacity to end poverty (e.g., the cost of ending poverty as a proportion of the income of the non-poor) has increased many times faster than the proportional poverty rate has decreased. By this metric we are doing worse than ever before. Indeed, our civilization is regressing. On our existing trajectory, according to research published in the World Economic Review, it will take more than 100 years to end poverty at $1.90/day, and over 200 years to end it at $7.40/day. Let that sink in. And to get there with the existing system—in other words, without a fairer distribution of income—we will have to grow the global economy to 175 times its present size. Even if such an outlandish feat were possible, it would drive climate change and ecological breakdown to the point of undermining any gains against poverty.

It doesn’t have to be this way, of course."

Link to that post, and link to a subsequent one, which responds directly to the methods and data-use questions addressed by Hasell and Roser.
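The mechanism behind Hickel's metric can be made concrete with a small sketch. The income distributions below are entirely hypothetical, chosen only to show how the headcount poverty rate can fall slowly while the cost of ending poverty, measured as a share of non-poor income, falls much faster; they are not estimates from any dataset.

```python
# Toy illustration of the metric Hickel describes: the cost of ending
# poverty (the total poverty gap) as a share of the income of the non-poor.
# All numbers below are hypothetical, chosen only to show the mechanism.

def poverty_stats(incomes, line=1.90 * 365):
    """Return (headcount ratio, poverty gap as a share of non-poor income)."""
    poor = [y for y in incomes if y < line]
    nonpoor = [y for y in incomes if y >= line]
    headcount = len(poor) / len(incomes)
    gap = sum(line - y for y in poor)   # cost of lifting everyone to the line
    return headcount, gap / sum(nonpoor)

# Two stylized moments in time: the poverty rate halves, while growth makes
# the non-poor far richer.
then = [400] * 6 + [1_000] * 4     # annual incomes; 6 of 10 under the line
now = [400] * 3 + [20_000] * 7     # 3 of 10 under the line

h_then, c_then = poverty_stats(then)
h_now, c_now = poverty_stats(now)

print(f"headcount: {h_then:.0%} -> {h_now:.0%}")                # 60% -> 30%
print(f"cost as share of non-poor income: {c_then:.1%} -> {c_now:.2%}")
print(f"capacity grew {c_then / c_now:.0f}x; poverty fell {h_then / h_now:.0f}x")
```

On these made-up numbers, the poverty rate falls by half while the relative cost of ending poverty falls seventy-fold, which is the sense in which poverty can persist even as the capacity to end it grows.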

## ARTIFICIAL INFERENCE

### Causal reasoning and machine learning

In a recent paper titled "The Seven Pillars of Causal Reasoning with Reflections on Machine Learning", JUDEA PEARL, professor of computer science at UCLA and author of Causality, writes:

“Current machine learning systems operate, almost exclusively, in a statistical or model-free mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference tasks. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond reach of current machine learning systems and which have been accomplished using the tools of causal modeling."

The tasks include work on counterfactuals, and new approaches to handling incomplete data. Link to the paper. A vivid expression of the issue: "Unlike the rules of geometry, mechanics, optics or probabilities, the rules of cause and effect have been denied the benefits of mathematical analysis. To appreciate the extent of this denial, readers would be stunned to know that only a few decades ago scientists were unable to write down a mathematical equation for the obvious fact that 'mud does not cause rain.' Even today, only the top echelon of the scientific community can write such an equation and formally distinguish 'mud causes rain' from 'rain causes mud.'”
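The mud/rain asymmetry can be made concrete with a toy structural causal model. The sketch below is my own illustration of the idea, not code or numbers from Pearl's paper: in a model where rain causes mud, observing mud raises the probability of rain, but intervening to create mud (Pearl's do-operator) leaves rain untouched.

```python
# A minimal structural causal model (SCM) in which rain causes mud.
# Observationally, mud predicts rain; but *intervening* on mud has no
# effect on rain, while intervening on rain always produces mud.
# Toy model and probabilities are illustrative only.
import random

def sample(n=10_000, do_mud=None, do_rain=None, seed=0):
    """Sample from the SCM rain -> mud, with optional interventions."""
    rng = random.Random(seed)
    rains, muds = [], []
    for _ in range(n):
        # Structural equations: rain is exogenous; mud is caused by rain
        # (with a small chance of mud from other sources).
        rain = do_rain if do_rain is not None else int(rng.random() < 0.30)
        mud = do_mud if do_mud is not None else int(rain == 1 or rng.random() < 0.05)
        rains.append(rain)
        muds.append(mud)
    return rains, muds

# Association: seeing mud makes rain much more likely.
rains, muds = sample()
p_rain_given_mud = sum(r for r, m in zip(rains, muds) if m == 1) / sum(muds)

# Intervention: forcing mud into existence tells us nothing about rain...
rains_do_mud, _ = sample(do_mud=1)
p_rain_do_mud = sum(rains_do_mud) / len(rains_do_mud)

# ...while forcing rain always produces mud.
_, muds_do_rain = sample(do_rain=1)
p_mud_do_rain = sum(muds_do_rain) / len(muds_do_rain)

print(f"P(rain | mud)     ~ {p_rain_given_mud:.2f}")  # high: association only
print(f"P(rain | do(mud)) ~ {p_rain_do_mud:.2f}")     # ~0.30: mud does not cause rain
print(f"P(mud | do(rain)) = {p_mud_do_rain:.2f}")     # 1.00: rain causes mud
```

The asymmetry is exactly what a purely associational, "curve-fitting" learner cannot express: conditioning and intervening coincide for it, so "mud causes rain" and "rain causes mud" look identical.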

Pearl also has a new book out, co-authored by DANA MACKENZIE, in which he argues for the importance of determining cause and effect in the machine learning context. From an interview in Quanta magazine about his work and the new book:

"As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. If we want machines to reason about interventions ('What if we ban cigarettes?') and introspection ('What if I had finished high school?'), we must invoke causal models. Associations are not enough—and this is a mathematical fact, not opinion.

We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans."

Link to the interview. (And link to the book page.)

## ONTARIO FOR ALL

### Canada calculates the cost of expanding Ontario's guaranteed income to the entire nation

Canada’s Parliamentary Budget Office looks at the cost of expanding the Ontario pilot nationwide. Full report here. ht Lauren

ANDREW COYNE of the NATIONAL POST summarizes the findings (all figures are in Canadian dollars):

“The results, speculative as they are, are intriguing. The PBO puts the cost of a nationwide rollout of the Ontario program, guaranteeing every adult of working age a minimum of 16,989 CAD annually (24,027 CAD for couples), less 50 per cent of earned income—there’d also be a supplement of up to 6,000 CAD for those with a disability—at 76.0 billion CAD.
“Even that number, eye-watering as it is (the entire federal budget, for reference, is 312 billion CAD), is a long way from the 500 billion CAD estimates bandied about in some quarters.
“But that’s just the gross figure. The PBO estimates the cost of current federal support programs for people on low income (not counting children and the elderly, who already have their own guaranteed income programs) at 33 billion CAD annually. Assuming a federal basic income replaced these leaves a net cost of 43 billion CAD. That’s still a lot—one seventh of current federal spending.”
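The arithmetic behind these figures is easy to check. The sketch below follows the benefit design described in the quote, a guaranteed minimum reduced by 50 cents per dollar of earned income; the aggregate cost figures are the PBO's as quoted, not independent estimates, and the disability supplement of up to 6,000 CAD is not modeled.

```python
# Back-of-the-envelope check on the PBO figures quoted above
# (all amounts in Canadian dollars).

GUARANTEE = 16_989   # single working-age adult; 24,027 CAD for couples
CLAWBACK = 0.50      # benefit reduction per dollar of earned income

def annual_benefit(earned_income):
    """Benefit for a single adult (disability supplement not modeled)."""
    return max(0.0, GUARANTEE - CLAWBACK * earned_income)

# The benefit phases out entirely once earnings reach GUARANTEE / CLAWBACK.
breakeven = GUARANTEE / CLAWBACK             # 33,978 CAD

# Net fiscal cost: gross nationwide cost minus the federal low-income
# supports the program would replace.
gross_cost = 76.0e9
replaced_programs = 33e9
net_cost = gross_cost - replaced_programs    # 43 billion CAD

print(f"benefit at $0 earned:      {annual_benefit(0):,.0f} CAD")       # 16,989
print(f"benefit at $20,000 earned: {annual_benefit(20_000):,.0f} CAD")  # 6,989
print(f"break-even earnings:       {breakeven:,.0f} CAD")               # 33,978
print(f"net cost: {net_cost / 1e9:.0f} billion CAD")                    # 43
```

The 43 billion CAD net figure in Coyne's summary is just the 76 billion CAD gross cost less the 33 billion CAD in programs assumed to be replaced.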