↳ Race

May 22nd, 2020

Municipal Bonds, Race, and the American City

An interview with Destin Jenkins

The rapid and expansive action taken by the Fed over the past two months in response to the coronavirus crisis has muddied the distinction between monetary and fiscal policy. In particular, its Municipal Liquidity Facility (MLF) provides a path for financing emergency spending by local governments. In some optimistic accounts, MLF-backed investment has the capacity to dramatically reduce the geographical, income, and racial inequalities that have widened in recent decades. But to do so, the MLF must explicitly prioritize investment in these communities.

In a recent article for the Washington Post, UChicago Professor Destin Jenkins argued the historical case. In the aftermath of WWII, a municipal bond market that valued white, middle-class consumption diverted investment outside of cities and into the suburbs. Federal housing officials, mortgage bankers, and real estate agents profited from the construction of debt-financed highways, shopping malls, schools, and parks for this upwardly mobile demographic. Cities took on billions in debt, but their black, brown, and immigrant populations saw few of the benefits. Jenkins argues that the history of municipal debt is intimately tied to the history of racial disparity in American cities, and that interventions in the politics of bond markets could enable municipalities "to avoid the punitive credit ratings that devalue certain regions or populations over others."

We spoke to Jenkins last fall about his research, which focuses on the history of racial capitalism and its consequences for democracy and inequality in the United States. His forthcoming book, The Bonds of Inequality, examines the role of municipal finance in growing American cities and widening the racial wealth gap. Jenkins is the Neubauer Family Assistant Professor of History at the University of Chicago. You can follow him here.

⤷ Full Article

February 27th, 2020

The Economics of Race

On the neoclassical and stratification theories of race

Black America has had less wealth, less income, less education, and poorer health than white America for as long as records have been kept. To account for this disparity, economists have advanced three explanations: genetic, cultural, and structural. While the first of these had mostly fallen out of favor among social scientists by the mid-20th century (until a worrying revival in recent decades), the latter two have been adopted by somewhat distinct research communities that frequently collide. According to the cultural theory, racial disparities are the result of social capital deficits. This is the view that has been most widely adopted by the mainstream of the economics profession, and I refer to it as the neoclassical economics of race. By contrast, the structural theory argues that racial disparities in socioeconomic outcomes are created and maintained over time by American institutions, which privilege white Americans at the expense of Black Americans. This view is known as stratification economics, and, as I argue here, it offers a more accurate and empirically sound explanation for racial disparities in America than its counterpart. The neoclassical and stratification approaches disagree over the causes of and remedies for racial disparities in socioeconomic outcomes and differ substantially in their understanding of income, education, wealth, and health.

⤷ Full Article

January 30th, 2020

The Long History of Algorithmic Fairness

Fair algorithms from the seventeenth century to the present

As national and regional governments form expert commissions to regulate “automated decision-making,” a new corporate-sponsored field of research proposes to formalize the elusive ideal of “fairness” as a mathematical property of algorithms and especially of their outputs. Computer scientists, economists, lawyers, lobbyists, and policy reformers wish to hammer out, in advance or in place of regulation, algorithmic redefinitions of “fairness” and such legal categories as “discrimination,” “disparate impact,” and “equal opportunity.”

But general aspirations to fair algorithms have a long history. In these notes, I recount some past attempts to answer questions of fairness through the use of algorithms. My purpose is not to be exhaustive or completist, but instead to suggest some major transformations in those attempts, pointing along the way to scholarship that has informed my account.

⤷ Full Article

October 17th, 2019

Disparate Causes, pt. II

On the hunt for the correct counterfactual

An accurate understanding of the nature of race in our society is a prerequisite for an adequate normative theory of discrimination. If, as part one of this post suggests, limiting discrimination to only direct effects of race misunderstands the nature of living as a raced subject in a raced society, then perhaps the extension of the scope of discrimination to also include indirect effects of race would better approximate the social constructivist view of race.

Recent approaches to causal and counterfactual fairness seek fair decision procedures “achieved by correcting the variables that are descendants of the protected attribute along unfair pathways.”1 The method thus cancels out certain effects that are downstream of race in the diagram, thereby retaining only those path-specific effects of race that are considered fair. Despite the expanded scope of what counts as a discriminatory effect, the logic of the Path-Specific Effects method follows that of the original Pearlian causal counterfactual model of discrimination: race, as a sensitive attribute, is toggled white or black atop a causal diagram, and its effect cascades down various paths leading to the outcome variable. But, this time, the causal fairness technician does more than measure and limit the direct effect of race on the final outcome; she now also measures effects of race that are mediated by other attributes, keeping only those effects carried along paths deemed “fair.”
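The path-specific logic described above can be made concrete with a toy linear structural causal model. Everything in this sketch is an invented illustration (the variables, coefficients, and the choice of which path counts as "fair" are all assumptions), not the method or data of any cited paper:

```python
# Toy linear structural causal model: race R -> education E -> outcome Y,
# plus a direct path R -> Y. All functional forms and numbers are invented.

def education(r):
    # E is a descendant of the protected attribute R (0 or 1)
    return 1.0 - 0.4 * r

def outcome(r, e):
    # Y depends on R directly and on E, a mediator
    return 0.5 * e - 0.3 * r

# Total effect of toggling R from 0 to 1: all paths active.
total = outcome(1, education(1)) - outcome(0, education(0))

# Path-specific effect along the direct path only (deemed "unfair" here):
# toggle R in Y's direct argument while holding the mediator at its R=0 value.
direct = outcome(1, education(0)) - outcome(0, education(0))

# A "corrected" decision cancels the unfair direct effect, retaining only
# the effect carried through E (the path deemed "fair" in this example).
indirect = total - direct

print(total, direct, indirect)
```

In the linear case the total effect decomposes exactly into the direct and mediated pieces; the fairness "correction" amounts to subtracting the path-specific effects labeled unfair.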

⤷ Full Article

October 11th, 2019

Disparate Causes, pt. I

The shortcomings of causal and counterfactual thinking about racial discrimination

Legal claims of disparate impact discrimination go something like this: A company uses some system (e.g., hiring test, performance review, risk assessment tool) in a way that impacts people. Somebody sues, arguing that it has a disproportionate adverse effect on racial minorities, showing initial evidence of disparate impact. The company, in turn, defends itself by arguing that the disparate impact is justified: their system sorts people by characteristics that—though incidentally correlated with race—are relevant to its legitimate business purposes. Now, the person who brought the discrimination claim is tasked with coming up with an alternative—that is, a system that has less disparate impact and still fulfills the company’s legitimate business interest. If the plaintiff finds such an alternative, it must be adopted. If they don’t, the courts have to, in theory, decide how to trade off between disparate impact and legitimate business purpose.

Much of the research in algorithmic fairness, a discipline concerned with the various discriminatory, unfair, and unjust impacts of algorithmic systems, has taken cues from this legal approach—hence, the deluge of parity-based “fairness” metrics mirroring disparate impact that have received encyclopedic treatment by computer scientists, statisticians, and the like in the past few years. Armed with intuitions closely linked with disparate impact litigation, scholars further formalized the tradeoffs between something like justice and something like business purpose—concepts that crystallized in the literature under the banners of “fairness” and “efficiency.”
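One of the parity-based metrics mentioned above can be sketched in a few lines. This is a minimal, invented illustration of demographic parity, the metric most directly mirroring disparate impact doctrine; the data, group labels, and thresholds are assumptions, not drawn from any cited study:

```python
# Demographic parity: compare favorable-decision rates across groups
# defined by a protected attribute. Data here is invented for illustration.

def selection_rate(decisions):
    # fraction of favorable decisions (1 = e.g. hired, 0 = rejected)
    return sum(decisions) / len(decisions)

group_a = [1, 0, 1, 1, 0]  # selection rate 3/5
group_b = [0, 0, 1, 0, 0]  # selection rate 1/5

# Two common ways to express the disparity:
parity_gap = selection_rate(group_a) - selection_rate(group_b)
impact_ratio = selection_rate(group_b) / selection_rate(group_a)

print(parity_gap, impact_ratio)
```

Here the gap is 0.4 and the ratio is 1/3, which would fail the "four-fifths rule" heuristic often used as initial evidence of disparate impact; the literature's "fairness versus efficiency" tradeoff arises when shrinking such gaps conflicts with the decision rule's stated business objective.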

⤷ Full Article

September 9th, 2019

MULTIPLY EFFECT

The difficulties of causal reasoning and race

While the thorny ethical questions dogging the development and implementation of algorithmic decision systems touch on all manner of social phenomena, arguably the most widely discussed is that of racial discrimination. The watershed moment for the algorithmic ethics conversation was ProPublica's 2016 article on the COMPAS risk-scoring algorithm, and a huge number of ensuing papers in computer science, law, and related disciplines attempt to grapple with the question of algorithmic fairness by thinking through the role of race and discrimination in decision systems.

In a paper from earlier this year, ISSA KOHLER-HAUSMANN of Yale Law School examines the way that race and racial discrimination are conceived of in law and the social sciences. Challenging the premises of an array of research across disciplines, Kohler-Hausmann argues for both a reassessment of the basis of reasoning about discrimination, and a new approach grounded in a social constructivist view of race.

From the paper:

"This Article argues that animating the most common approaches to detecting discrimination in both law and social science is a model of discrimination that is, well, wrong. I term this model the 'counterfactual causal model' of race discrimination. Discrimination, on this account, is detected by measuring the 'treatment effect of race,' where treatment is conceptualized as manipulating the raced status of otherwise identical units (e.g., a person, a neighborhood, a school). Discrimination is present when an adverse outcome occurs in the world in which a unit is 'treated' by being raced—for example, black—and not in the world in which the otherwise identical unit is 'treated' by being, for example, raced white. The counterfactual model has the allure of precision and the security of seemingly obvious divisions or natural facts.

Currently, many courts, experts, and commentators approach detecting discrimination as an exercise measuring the counterfactual causal effect of race-qua-treatment, looking for complex methods to strip away confounding variables to get at a solid state of race and race alone. But what we are arguing about when we argue about whether or not statistical evidence provides proof of discrimination is precisely what we mean by the concept DISCRIMINATION."
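The "counterfactual causal model" the paper critiques can be rendered mechanically. This sketch exists only to show the logic Kohler-Hausmann objects to; the decision rule, thresholds, and unit are hypothetical, not her example:

```python
# The counterfactual model of discrimination: race treated as an attribute
# toggled on an otherwise identical unit, with discrimination read off as
# any difference between the two counterfactual outcomes.
# The rule and numbers below are invented for illustration.

def decision(raced_black, credit_score):
    # a hypothetical decision rule with a direct race effect baked in
    threshold = 700 if raced_black else 650
    return credit_score >= threshold

unit = {"credit_score": 680}  # the "otherwise identical" unit

outcome_white = decision(raced_black=False, **unit)
outcome_black = decision(raced_black=True, **unit)

discrimination_detected = outcome_white != outcome_black
print(discrimination_detected)  # True: toggling race alone flips the outcome
```

The constructivist objection is precisely that this toggle is incoherent: race is not a switch on an otherwise identical unit but is constitutive of the unit's other attributes, so "stripping away confounders" strips away part of what race is.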

Link to the article. And stay tuned for a forthcoming post on the Phenomenal World by JFI fellow Lily Hu that grapples with these themes.

  • For an example of the logic Kohler-Hausmann is writing against, see Edmund S. Phelps' 1972 paper "The Statistical Theory of Racism and Sexism." Link.
  • A recent paper deals with the issue of causal reasoning in an epidemiological study: "If causation must be defined by intervention, and interventions on race and the whole of SES are vague or impractical, how is one to frame discussions of causation as they relate to this and other vital issues?" Link.
  • From Kohler-Hausmann's footnotes, two excellent works informing her approach: first, the canonical book Racecraft by Karen Fields and Barbara Fields; second, a 2000 article by Tukufu Zuberi, "Deracializing Social Statistics: Problems in the Quantification of Race." Link to the first, link to the second.
⤷ Full Article