October 17th, 2019

Disparate Causes, pt. II

On the hunt for the correct counterfactual

An accurate understanding of the nature of race in our society is a prerequisite for an adequate normative theory of discrimination. If, as part one of this post suggests, limiting discrimination to only direct effects of race misunderstands the nature of living as a raced subject in a raced society, then perhaps the extension of the scope of discrimination to also include indirect effects of race would better approximate the social constructivist view of race.

Recent approaches to causal and counterfactual fairness seek fair decision procedures “achieved by correcting the variables that are descendants of the protected attribute along unfair pathways.”1 The method, thus, cancels out certain effects that are downstream of race in the diagram, thereby retaining only those path-specific effects of race that are considered fair. Despite the expanded scope of what counts as a discriminatory effect, the logic of the Path-Specific Effects method follows that of the original Pearlian causal counterfactual model of discrimination: race, as a sensitive attribute, is toggled white or black atop a causal diagram, and its effect cascades down various paths leading to the outcome variable. But, this time, the causal fairness technician does more than measure and limit the direct effect of race on the final outcome; she now also measures effects of race that are mediated by other attributes, keeping only those effects carried along paths deemed “fair.”
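The path-specific idea can be made concrete in a toy model. The sketch below is purely illustrative, assuming a linear structural causal model with invented variable names and coefficients (it is not the cited method's actual implementation): race affects the outcome directly, through a mediator deemed unfair (`zip_code`), and through a mediator deemed fair (`test_score`). The correction toggles race to a baseline along the direct and unfair paths only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy linear structural causal model (names and coefficients invented):
#   race -> zip_code -> outcome    (a path deemed unfair)
#   race -> test_score -> outcome  (a path deemed fair, for the sake of example)
race = rng.binomial(1, 0.5, n)                 # 0/1 sensitive attribute
zip_code = 0.8 * race + rng.normal(0, 1, n)    # mediator on the unfair path
test_score = 0.3 * race + rng.normal(0, 1, n)  # mediator on the "fair" path

def outcome(race, zip_code, test_score):
    return 0.5 * race + 1.0 * zip_code + 1.0 * test_score

y_factual = outcome(race, zip_code, test_score)

# Path-specific correction: set race to a baseline value along the direct path
# and the unfair mediated path, but keep the "fair" mediator at its factual value.
baseline = 0
zip_corrected = zip_code - 0.8 * (race - baseline)  # regenerate the unfair mediator
y_corrected = outcome(np.full(n, baseline), zip_corrected, test_score)

# The corrected score retains only the effect of race carried by test_score.
gap = y_corrected[race == 1].mean() - y_corrected[race == 0].mean()
```

In this toy world the factual gap between groups is 0.5 + 0.8 + 0.3 = 1.6, while the corrected gap shrinks to roughly 0.3, the portion carried along the path labeled fair.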

 Full Article

October 11th, 2019

Disparate Causes, pt. I

The shortcomings of causal and counterfactual thinking about racial discrimination

Legal claims of disparate impact discrimination go something like this: A company uses some system (e.g., hiring test, performance review, risk assessment tool) in a way that impacts people. Somebody sues, arguing that it has a disproportionate adverse effect on racial minorities, showing initial evidence of disparate impact. The company, in turn, defends itself by arguing that the disparate impact is justified: their system sorts people by characteristics that—though incidentally correlated with race—are relevant to its legitimate business purposes. Now, the person who brought the discrimination claim is tasked with coming up with an alternative—that is, a system that has less disparate impact but still fulfills the company’s legitimate business interest. If the plaintiff finds such an alternative, it must be adopted. If they don’t, the courts have to, in theory, decide how to trade off disparate impact against legitimate business purpose.

Much of the research in algorithmic fairness, a discipline concerned with the various discriminatory, unfair, and unjust impacts of algorithmic systems, has taken cues from this legal approach—hence, the deluge of parity-based “fairness” metrics mirroring disparate impact that have received encyclopedic treatment by computer scientists, statisticians, and the like in the past few years. Armed with intuitions closely linked with disparate impact litigation, scholars further formalized the tradeoffs between something like justice and something like business purpose—concepts that crystallized in the literature under the banners of “fairness” and “efficiency.”
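The simplest of these parity-based metrics can be stated in a few lines. Below is a minimal sketch, with made-up data; the 0.8 threshold referenced in the comment is the "four-fifths rule" from US employment guidance, used here only as an illustrative benchmark:

```python
def disparate_impact_ratio(selected, group):
    """Ratio of the lower group's selection rate to the higher group's.

    selected: 0/1 decisions (e.g., hired or not); group: 0/1 group labels.
    """
    def rate(g):
        members = [s for s, grp in zip(selected, group) if grp == g]
        return sum(members) / len(members)
    r0, r1 = rate(0), rate(1)
    return min(r0, r1) / max(r0, r1)

# Hypothetical data: group 0 is selected at 60%, group 1 at 40%.
selected = [1] * 6 + [0] * 4 + [1] * 4 + [0] * 6
group = [0] * 10 + [1] * 10

ratio = disparate_impact_ratio(selected, group)  # 0.4 / 0.6, below the 0.8 rule of thumb
```

A ratio near 1 indicates parity in selection rates; values well below 1 are the kind of disparity that triggers the litigation pattern described above.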

 Full Article

September 12th, 2019

Money Parables

Three competing theories of money

In the past year, Modern Monetary Theory (MMT) has shifted the policy debate in a way that few heterodox schools of economic thought have in recent memory. MMT’s central notion—that nations with their own strong currencies face no inherent financial constraints—has made its way into politics and, notably, the world of finance. The last few months have brought MMT explainers from financial media outlets including Reuters, CNBC, Bloomberg, Barron’s, and Business Insider, as well as from investment analysts at Wall Street firms including Goldman Sachs, Bank of America, Fitch, Standard Chartered, and Citigroup.

Popularizing the shorthand notion that “deficits don’t matter” has been an achievement for those promulgating MMT. Yet one largely unappreciated change brought about by the MMT debates involves a somewhat subtler point: a shift in the implicit story we tell about money.

The rise of MMT poses a challenge to the mainstream commodity money story. That parable, familiar to anyone who has taken high school economics or read Adam Smith, involves an inefficient barter system that gives way to the more convenient use of some token that represents value, typically a precious metal. If government plays a role in this story, it is only to regulate money after the marketplace births it.

The MMT parable—known in the literature as chartalism—reverses the commodity money view. For chartalists, money arises through an act of law, namely the levying of a tax which requires citizens to go out and get that which pays taxes; the state comes first and markets are subsequent. As Abba Lerner puts it, money is “a creature of the state.”

 Full Article

August 23rd, 2019

Is it impossible to be fair?

Statistical prediction is increasingly pervasive in our lives. Can it be fair?

The Allegheny Family Screening Tool is a computer program that predicts whether a child will later have to be placed into foster care. It's been used in Allegheny County, Pennsylvania, since August 2016. When a child is referred to the county as at risk of abuse or neglect, the program analyzes administrative records and then outputs a score from 1 to 20, where a higher score represents a higher risk that the child will later have to be placed into foster care. Child welfare workers use the score to help them decide whether to investigate a case further.

Travel search engines like Kayak or Google Flights predict whether a flight will go up or down in price. Farecast, which launched in 2004 and was acquired by Microsoft a few years later, was the first to offer such a service. When you look up a flight, these search engines analyze price records and then predict whether the flight's price will go up or down over some time interval, perhaps along with a measure of confidence in the prediction. People use the predictions to help them decide when to buy a ticket.

 Full Article

August 1st, 2019

Decentralize What?

Can you fix political problems with new web infrastructures?

The internet's early proliferation was steeped in cyber-utopian ideals. The circumvention of censorship and gatekeeping, digital public squares, direct democracy, revitalized civic engagement, the “global village”—these were all anticipated characteristics of the internet age, premised on the notion that digital communication would provide the necessary conditions for the world to change. In a dramatic reversal, we now associate the internet era with eroding privacy, widespread surveillance, state censorship, asymmetries of influence, and monopolies of attention—exacerbations of the exact problems it purported to fix.

Such problems are frequently understood as being problems of centralization—both infrastructural and political. If mass surveillance and censorship are problems of combined infrastructural and political centralization, then decentralization looks like a natural remedy. In the context of the internet, decentralization generally refers to peer-to-peer (p2p) technologies. In this post, I consider whether infrastructural decentralization is an effective way to counter existing regimes of political centralization. The cyber-utopian dream failed to account for the exogenous pressures that would shape the internet—the rosy narrative of infrastructural decentralization seems to be making a similar misstep.

 Full Article

July 18th, 2019

Student Debt & Racial Wealth Inequality

How student debt cancellation affects the racial wealth gap

The effect of cancelling student debt on various measures of individual and group-level inequality has been a matter of controversy, especially given presidential candidates’ recent and high-profile proposals to eliminate outstanding student debt. In this work, I attempt to shed light on the policy counterfactual by analyzing the Survey of Consumer Finances for 2016, the most recent nationally representative dataset that gives a picture of the demographics of student debt.

When we test the effects of cancelling student debt on the racial wealth gap, we conclude that across all samples and all quantiles, the racial wealth gap narrows when student debt is cancelled, and it narrows further the more debt is cancelled.

With respect to the two presidential candidates’ plans, this means that the Sanders plan, which completely eliminates outstanding student debt, reduces racial wealth inequality more than does the Warren plan, which forgives only up to $50,000 of debt per borrower and phases that forgiveness out for high earners. But the difference between the two plans, as measured by the reduction in the racial wealth gap, is not large. It would be fair to say that the Warren plan achieves the vast majority of the racial wealth equity gains that the Sanders plan achieves, while leaving the student debt held by the highest-income borrowers intact.
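The shape of this comparison can be sketched on synthetic data standing in for the SCF. Everything below, the distributions, the group shares, and the debt parameters, is invented for illustration and does not reflect SCF estimates; the point is only the mechanics of comparing quantile gaps under different forgiveness caps.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Synthetic stand-in for SCF-style microdata (invented values, not SCF figures).
black = rng.binomial(1, 0.15, n).astype(bool)
assets = np.where(black, rng.lognormal(9.5, 1.5, n), rng.lognormal(11.0, 1.5, n))
has_debt = rng.binomial(1, np.where(black, 0.40, 0.25), n)
student_debt = has_debt * rng.exponential(np.where(black, 25_000, 10_000), n)

def racial_wealth_gap(net_worth, q=0.5):
    """Gap at quantile q of net worth between white and Black households."""
    return np.quantile(net_worth[~black], q) - np.quantile(net_worth[black], q)

def remaining_debt(debt, cap):
    """Debt left after forgiving up to `cap` per borrower."""
    return np.maximum(debt - cap, 0)

gap_none = racial_wealth_gap(assets - student_debt)
gap_50k = racial_wealth_gap(assets - remaining_debt(student_debt, 50_000))
gap_full = racial_wealth_gap(assets - remaining_debt(student_debt, np.inf))

# In this synthetic world, as in the post's finding, more cancellation means
# a smaller median racial wealth gap: gap_full <= gap_50k <= gap_none.
```

Because a household's net worth under a larger cap is never lower than under a smaller one, each quantile moves weakly upward as the cap grows; the gap narrows whenever the group holding more debt gains more at that quantile.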

 Full Article

July 3rd, 2019

The Politics of Machine Learning, pt. II

The uses of algorithms discussed in the first part of this article range widely, from hiring decisions and bail assignment to political campaigns and military intelligence.

Across all these applications of machine learning methods, there is a common thread: data on individuals is used to treat different individuals differently. In the past, broadly speaking, such commercial and government activities targeted everyone in a given population more or less similarly—the same advertisements, the same prices, the same political slogans. Increasingly, everyone now gets personalized advertisements, personalized prices, and personalized political messages. New inequalities are created and new fragmentations of discourse are introduced.

Is that a problem? Well, it depends. I will discuss two types of concerns. The first type, relevant in particular to news and political messaging, is that the differentiation of messages is by itself a source of problems.

 Full Article

June 27th, 2019

The Politics of Machine Learning, pt. I

Terminology like "machine learning," "artificial intelligence," "deep learning," and "neural nets" is pervasive: businesses, universities, intelligence agencies, and political parties are all anxious to maintain an edge in the use of these technologies. Statisticians might be forgiven for thinking that this hype simply reflects the success of the marketing speak of Silicon Valley entrepreneurs vying for venture capital. All these fancy new terms are just describing something statisticians have been doing for at least two centuries.

But recent years have indeed seen impressive new achievements on various prediction problems, which are finding applications in ever more consequential aspects of society: advertising, incarceration, insurance, and war are all increasingly defined by the capacity for statistical prediction. And there is a crucial thread that ties these widely disparate applications of machine learning together: the use of data on individuals to treat different individuals differently. In this two-part post, Max Kasy surveys the politics of the machine learning landscape.

 Full Article

May 31st, 2019

Copyright Humanism

It's by now common wisdom that American copyright law is burdensome, excessive, and failing to promote the ideals it was designed to serve. Too many things, critics argue, are subject to copyright protections, and the result is an inefficient legal morass that yields few benefits to society and has failed to keep up with the radical transformations in technology and culture of the last several decades. To reform and streamline our copyright system, the thinking goes, we need to get rid of our free-for-all regime of copyrightability and institute reasonable barriers to protection.

But what if these commentators are missing the forest for the trees, and America's frequently derided copyright regime is actually particularly well-suited to the digital age? Could copyright protections—applied universally at the moment of authorship—provide a level of autonomy that matches the democratization of authorship augured by the digital age?

 Full Article

March 28th, 2019

Experiments for Policy Choice

Randomized experiments have become part of the standard toolkit for policy evaluation, and are usually designed to give precise estimates of causal effects. But, in practice, their actual goal is to pick good policies. These two goals are not the same.

Is this the best way to go about things? Can we maybe make better policy choices, with smaller experimental budgets, by doing things a little differently? This is the question that Anja Sautmann and I address in our new work on “Adaptive experiments for policy choice.” If we wish to pick good policies, we should run experiments adaptively, shifting toward better policies over time. This gives us the highest chance to pick the best policy after the experiment has concluded.
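The adaptive idea can be illustrated with a small simulation. This is a sketch of one standard adaptive design, Thompson sampling over Bernoulli outcomes, not the paper's exact algorithm, and the three policies, their success rates, and the budget below are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical policies with unknown success rates (invented values).
true_rates = [0.30, 0.35, 0.50]
successes = np.ones(3)   # Beta(1, 1) priors over each policy's success rate
failures = np.ones(3)

for _ in range(5_000):   # experimental budget of 5,000 subjects
    # Draw a success rate for each policy from its posterior, then assign
    # the next subject to the policy whose draw is highest.
    draws = rng.beta(successes, failures)
    k = int(np.argmax(draws))
    outcome = rng.random() < true_rates[k]
    successes[k] += outcome
    failures[k] += 1 - outcome

# After the experiment, pick the policy with the highest posterior mean.
best = int(np.argmax(successes / (successes + failures)))
```

Because assignment shifts toward policies that are performing well, most of the budget ends up spent distinguishing the leading candidates, which is what matters if the goal is choosing a policy rather than estimating every effect precisely.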

 Full Article