Phenomenal World

November 15th, 2019

Phenomenal Works: Beth Popp Berman

On knowledge, institutions, and social policy

Editor's Note: This is the second post in a new series, Phenomenal Works, in which we invite our favorite researchers to share notable readings with us. We'll be publishing new editions every two weeks, alongside our regular output of interviews and analysis. Sign up for our newsletter to stay up to date with every post.

Beth Popp Berman is a sociologist whose research focuses on the history of knowledge, organizations, and public policymaking. Her first book, Creating the Market University: How Academic Science Became an Economic Engine, examines the transformation of American academia from a partially noncommercial institution into an innovation-oriented, entrepreneurial university. Popp Berman's forthcoming book, Thinking Like an Economist: How Economics Became the Language of U.S. Public Policy, charts how a style of economic reasoning pioneered among a small group of DoD technocrats became institutionalized at the core of the policy process—and examines the fundamental consequences for political decision-making.

Popp Berman's selections reflect the concerns of her own work, illuminating how and why certain forms of knowledge come to be produced, and how they are put to use in the construction of policy and institutions.

⤷ Full Article

November 7th, 2019

Collective Ownership in the Green New Deal

What rural electrification can teach us about a just transition

This year, we once again shattered the record for atmospheric carbon concentration, and witnessed a series of devastating setbacks in US climate policy—from attempts to waive state protections against pipelines to wholesale attacks on climate science. Against this discouraging backdrop, one idea has inspired hope: the “Green New Deal,” a bold vision for addressing both the climate crisis and the crushing inequalities of our economy by transitioning to renewable energy and generating up to 10 million well-paid jobs in the process. It’s an exciting notion, and it’s gaining traction—top Democratic presidential candidates have all released plans for climate action that engage directly with the Green New Deal. According to the Yale Program on Climate Change Communication, as of May 2019, the Green New Deal had the support of 96% of liberal Democrats, 88% of moderate Democrats, 64% of moderate Republicans, and 32% of conservative Republicans. In order to succeed, however, a Green New Deal must prioritize projects that are owned and controlled by frontline communities.

Whose power lines? Our power lines!

Efforts to electrify the rural South during the New Deal present a useful case study for understanding the impact of ownership models on policy success. Until the mid-1930s, 9 out of 10 Southern households had no access to electricity, and local economies remained largely agricultural. Southern communities were characterized by low literacy rates and a weak relationship to the cash nexus, distancing them from the federal government both culturally and materially. They were also economically destitute—a series of droughts throughout the 1920s led to a proliferation of foreclosures and tenant farming. With the initial purpose of promoting employment in the area, the Roosevelt administration launched the Rural Electrification Administration in 1935. The Rural Electrification Act of 1936 sought to extend electrical distribution, first by establishing low-interest loans to fund private utility companies. The utility companies turned them down: private shareholders had little reason to invest in sparsely populated and impoverished counties whose residents could not be relied upon to pay for service; private investors lacked the incentive to fund electrification for the communities who needed it most.

⤷ Full Article

October 31st, 2019

Phenomenal Works: Leah Stokes

Editor's Note: This is the first post in a new series, Phenomenal Works, in which we invite our favorite researchers to share notable readings with us. We'll be publishing new editions every two weeks, alongside our regular output of interviews and analysis. Sign up for our newsletter to stay up to date with every post.

Leah Stokes is an Assistant Professor in the Department of Political Science at the University of California, Santa Barbara. Her research spans representation and public opinion, voting behavior, and environmental and energy politics. Her forthcoming book, Short Circuiting Policy, examines the role of interest groups in weakening environmental protection policy across the United States. Her academic work has been published widely in top journals, her journalism and opinion writing has appeared in the New York Times and The Washington Post, among other outlets, and she is frequently cited in news media of all kinds. You can follow her on Twitter here.

Stokes's work is indispensable for anyone who wants to understand the politics of energy policy—and think through possible ways forward in the climate crisis. Below, her selection of must-read research.

⤷ Full Article

October 24th, 2019

Exploitation, Cooperation, and Distributive Justice

An interview with John Roemer

Throughout his career, John Roemer's work has been uniquely situated between the fields of microeconomics, game theory, philosophy, and political science. His research uses the tools of classical economics to analyze dynamics typically thought to be outside the scope of economics, from notions of fairness and morality to the possibility of overcoming capitalist social relations. In doing so, it defends those tools against charges that they can’t describe the behaviors we see, while rendering vital social questions digestible for disciplines that rarely engage them.

Roemer is perhaps best known for his contributions to theories of distributive justice. Within the field of moral philosophy, he is one of a handful of scholars who have sought to formalize distributive theories in order to compare their merits. To moral philosophers, he argues that outright dismissal of consequentialist theories of justice, and their replacement by complicated deontological models, is a mistake. And to the world of economics, he posits that economic theory cannot be divorced from moral philosophy—that the emphasis on reaching equilibrium itself necessarily carries moral assumptions.

⤷ Full Article

October 17th, 2019

Disparate Causes, pt. II

On the hunt for the correct counterfactual

An accurate understanding of the nature of race in our society is a prerequisite for an adequate normative theory of discrimination. If, as part one of this post suggests, limiting discrimination to only direct effects of race misunderstands the nature of living as a raced subject in a raced society, then perhaps the extension of the scope of discrimination to also include indirect effects of race would better approximate the social constructivist view of race.

Recent approaches to causal and counterfactual fairness seek fair decision procedures “achieved by correcting the variables that are descendants of the protected attribute along unfair pathways.”1 The method thus cancels out certain effects that are downstream of race in the diagram, retaining only those path-specific effects of race that are considered fair. Despite the expanded scope of what counts as a discriminatory effect, the logic of the Path-Specific Effects method follows that of the original Pearlian causal counterfactual model of discrimination: race, as a sensitive attribute, is toggled white or black atop a causal diagram, and its effect cascades down various paths leading to the outcome variable. But this time, the causal fairness technician does more than measure and limit the direct effect of race on the final outcome; she now also measures effects of race that are mediated by other attributes, keeping only those effects carried along paths deemed “fair.”
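
To make the toggling logic concrete, here is a minimal sketch over a toy structural causal model. The variables, coefficients, and the designation of which pathway counts as “fair” are all invented for illustration—real methods operate on learned causal models, not hand-coded ones:

```python
# A toy structural causal model: sensitive attribute A feeds two
# mediators, one on a pathway deemed "unfair" and one deemed "fair,"
# which jointly determine the outcome Y. All names and coefficients
# here are hypothetical illustrations.

def mediator_unfair(a):      # e.g., a proxy on a pathway judged unfair
    return 2.0 * a

def mediator_fair(a):        # e.g., an attribute on a pathway judged fair
    return 1.0 * a

def outcome(m_unfair, m_fair):
    return 0.5 * m_unfair + 0.8 * m_fair

a0, a1 = 0, 1  # the sensitive attribute, toggled atop the diagram

# Total effect: toggle A everywhere and compare outcomes.
total = (outcome(mediator_unfair(a1), mediator_fair(a1))
         - outcome(mediator_unfair(a0), mediator_fair(a0)))

# Path-specific effect: toggle A only along the "fair" path, holding
# the unfair mediator at its baseline value--this is the quantity a
# path-specific method would retain.
fair_only = (outcome(mediator_unfair(a0), mediator_fair(a1))
             - outcome(mediator_unfair(a0), mediator_fair(a0)))

print(f"total effect of A on Y:       {total:.2f}")      # 1.80
print(f"effect along fair path only:  {fair_only:.2f}")  # 0.80
```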

⤷ Full Article

October 11th, 2019

Disparate Causes, pt. I

The shortcomings of causal and counterfactual thinking about racial discrimination

Legal claims of disparate impact discrimination go something like this: A company uses some system (e.g., hiring test, performance review, risk assessment tool) in a way that impacts people. Somebody sues, arguing that it has a disproportionate adverse effect on racial minorities, showing initial evidence of disparate impact. The company, in turn, defends itself by arguing that the disparate impact is justified: its system sorts people by characteristics that—though incidentally correlated with race—are relevant to its legitimate business purposes. Now the person who brought the discrimination claim is tasked with coming up with an alternative—that is, a system that has less disparate impact but still fulfills the company’s legitimate business interest. If the plaintiff finds such an alternative, it must be adopted. If they don’t, the courts have to, in theory, decide how to trade off disparate impact against legitimate business purpose.

Much of the research in algorithmic fairness, a discipline concerned with the various discriminatory, unfair, and unjust impacts of algorithmic systems, has taken cues from this legal approach—hence, the deluge of parity-based “fairness” metrics mirroring disparate impact that have received encyclopedic treatment by computer scientists, statisticians, and the like in the past few years. Armed with intuitions closely linked with disparate impact litigation, scholars further formalized the tradeoffs between something like justice and something like business purpose—concepts that crystallized in the literature under the banners of “fairness” and “efficiency.”
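
As a concrete example of the parity-based approach, the sketch below computes the disparate impact ratio behind the “four-fifths rule” heuristic used in US employment-discrimination guidelines. The groups and outcomes are invented for illustration:

```python
# A toy illustration of one parity-based metric: the disparate impact
# ratio, checked informally against the "four-fifths rule." The data
# below is invented.

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# 1 = favorable decision (e.g., passed the hiring test), 0 = unfavorable.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # hypothetical majority group
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # hypothetical minority group

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.60

if ratio < 0.8:
    print("fails the four-fifths heuristic: "
          "initial evidence of disparate impact")
```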

⤷ Full Article

September 26th, 2019

Counter-Optimizing the Crisis

An interview with Seda Gürses and Bekah Overdorf

Software that structures increasingly detailed aspects of contemporary life is built for optimization. These programs require a mapping of the world that is computationally legible, and translating the messy world into terms that make sense to a computer is an imperfect process. Even under the most ideal conditions, optimization systems—constrained, more often than not, by the imperatives of profit-generating corporations—are designed to ruthlessly maximize one metric at the expense of others. When these systems optimize over large populations of people, some people lose out in the calculation.

Official channels for redress offer little help: alleviating out-group concerns is by necessity counter to the interests of the optimization system and its target customers. Like someone who lives in a flight path but has never bought a plane ticket complaining about the noise to an airline company, the collateral damage of optimization has little leverage over the system provider unless the law can be wielded against it. Beyond the time-intensive and uncertain path of traditional advocacy, what recourse is available for those who find themselves in the path of optimization?

In their 2018 paper POTs: Protective Optimization Technologies (updated version soon forthcoming at this same link), authors Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, and Seda Gürses offer some answers. Eschewing the dominant frameworks used to analyze and critique digital optimization systems, the authors offer an analysis that illuminates fundamental problems with both optimization systems and the proliferating literature that attempts to solve them.

POTs—the analytical framework and the technology—suggest that the inevitable assumptions, flaws, and rote nature of optimization systems can be exploited to produce “solutions that enable optimization subjects to defend from unwanted consequences.” Despite their overbearing nature, optimization systems typically require some degree of user input; POTs use this as a wedge for individuals and groups marginalized by the optimization system to influence its operation. In so doing, POTs find a way to restore what optimization seeks to hide, revealing that what gets laundered as technical problems are actually political ones.
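
To give a stylized sense of how user input can serve as such a wedge—this is an invented example, not the authors’ implementation—consider a router that sends through-traffic down whichever road reports the lowest travel time, and residents who coordinate their reports to push traffic back to the highway:

```python
# A deliberately stylized sketch, not the POTs authors' implementation.
# A routing system minimizes one metric (reported travel time); residents
# of a side street feed it coordinated reports to divert through-traffic.

reported_minutes = {"highway": 12.0, "side_street": 9.0}

def route(reports):
    # The optimization system: pick whichever road minimizes the metric.
    return min(reports, key=reports.get)

print("before:", route(reported_minutes))   # -> side_street

# The protective intervention: optimization subjects supply exactly the
# input the system is designed to consume. Weights here are hypothetical.
resident_reports = [30.0] * 40              # coordinated resident reports
n_prior = 100                               # weight of the existing data
new_avg = (reported_minutes["side_street"] * n_prior
           + sum(resident_reports)) / (n_prior + len(resident_reports))
reported_minutes["side_street"] = new_avg   # now 15.0 minutes

print("after: ", route(reported_minutes))   # -> highway
```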

Below, we speak with Seda Gürses and Bekah Overdorf, two members of the POTs team, who discuss the definition of an optimization system, the departures the POTs approach makes from the digital ethics literature, and the design and implementation of POTs in the wild.

⤷ Full Article

September 12th, 2019

Money Parables

Three competing theories of money

In the past year, Modern Monetary Theory (MMT) has shifted the policy debate in a way that few heterodox schools of economic thought have in recent memory. MMT’s central notion—that nations with their own strong currencies face no inherent financial constraints—has made its way into politics and, notably, the world of finance. The last few months have brought MMT explainers from financial media outlets including Reuters, CNBC, Bloomberg, Barron’s, and Business Insider, as well as from investment analysts at Wall Street firms including Goldman Sachs, Bank of America, Fitch, Standard Chartered and Citigroup.

Popularizing the shorthand notion that “deficits don’t matter” has been an achievement for those promulgating MMT. Yet one largely unappreciated change brought about by the MMT debates involves a somewhat subtler point: a shift in the implicit story we tell about money.

The rise of MMT poses a challenge to the mainstream commodity money story. That parable, familiar to anyone who has taken high school economics or read Adam Smith, involves an inefficient barter system that gives way to the more convenient use of some token that represents value, typically a precious metal. If government plays a role in this story, it is only to regulate money after the marketplace births it.

The MMT parable—known in the literature as chartalism—reverses the commodity money view. For chartalists, money arises through an act of law, namely the levying of a tax which requires citizens to go out and get that which pays taxes; the state comes first and markets are subsequent. As Abba Lerner puts it, money is “a creature of the state.”

⤷ Full Article

August 23rd, 2019

Is it impossible to be fair?

Statistical prediction is increasingly pervasive in our lives. Can it be fair?

The Allegheny Family Screening Tool is a computer program that predicts whether a child will later have to be placed into foster care. It's been used in Allegheny County, Pennsylvania, since August 2016. When a child is referred to the county as at risk of abuse or neglect, the program analyzes administrative records and then outputs a score from 1 to 20, where a higher score represents a higher risk that the child will later have to be placed into foster care. Child welfare workers use the score to help them decide whether to investigate a case further.

Travel search engines like Kayak or Google Flights predict whether a flight will go up or down in price. Farecast, which launched in 2004 and was acquired by Microsoft a few years later, was the first to offer such a service. When you look up a flight, these search engines analyze price records and then predict whether the flight's price will go up or down over some time interval, perhaps along with a measure of confidence in the prediction. People use the predictions to help them decide when to buy a ticket.

⤷ Full Article

August 8th, 2019

Networks, Weak Ties, and Thresholds

An interview with Mark Granovetter

Few living scholars have had the influence of Mark Granovetter. In a career spanning almost 50 years, his seminal contributions to his own field of sociology have spread to shape research in economics, computer science, and even epidemiology.

Granovetter is most widely known for his early contributions to social network analysis—in particular his 1973 article, “The Strength of Weak Ties.” In that paper, Granovetter demonstrated that, because of the way social networks evolve, “weak ties” between people often form bridges between clusters of more strongly connected individuals and thus serve as important conduits of novel information. This surprising finding has proven to have important and enduring implications for a diverse range of fields. The paper remains one of the most cited social science articles of all time.
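
The bridge logic is easy to see in a few lines of code—a minimal sketch using the networkx library, with an invented six-person graph in which two tightly knit triads are joined by a single weak tie:

```python
# A toy illustration of Granovetter's bridge idea, using networkx.
# The graph is invented: two clusters of strong ties, joined by one
# weak tie that is the sole conduit between them.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c")])  # cluster 1
G.add_edges_from([("x", "y"), ("y", "z"), ("x", "z")])  # cluster 2
G.add_edge("c", "x")                                    # the weak tie

print(nx.has_path(G, "a", "z"))   # True: novel information can travel
G.remove_edge("c", "x")           # cut the bridge
print(nx.has_path(G, "a", "z"))   # False: the clusters are isolated
```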

⤷ Full Article