Phenomenal World

July 7th, 2018

Quodlibet

EVIDENCE PUZZLES

The history and politics of RCTs 

In a 2016 working paper, JUDITH GUERON recounts and evaluates the history of randomized controlled trials (RCTs) in the US, drawing on her own experience developing welfare experiments at MDRC and HHS:

“To varying degrees, the proponents of welfare experiments at MDRC and HHS shared three mutually reinforcing goals. The first was to obtain reliable and—given the long and heated controversy about welfare reform—defensible evidence of what worked and, just as importantly, what did not. Over a pivotal ten years from 1975 to 1985, these individuals became convinced that high-quality RCTs were uniquely able to produce such evidence and that there was simply no adequate alternative. Thus, their first challenge was to demonstrate feasibility: that it was ethical, legal, and possible to implement this untried—and at first blush to some people immoral—approach in diverse conditions. The other two goals sprang from their reasons for seeking rigorous evidence. They were not motivated by an abstract interest in methodology or theory; they wanted to inform policy and make government more effective and efficient. As a result, they sought to make the body of studies useful, by assuring that it addressed the most significant questions about policy and practice, and to structure the research and communicate the findings in ways that would increase the potential that they might actually be used." 

⤷ Full Article

June 30th, 2018

The Duel

CLIMATE PREDICTION MARKET

How to link a carbon tax to climate forecasting

A 2011 paper by SHI-LING HSU suggests a way of using a carbon tax to generate more accurate predictions of future climate conditions:

“The market for tradable permits to emit in the future is essentially a prediction market for climate outcomes. And yet, unlike prediction markets that have been operated or proposed thus far, this prediction market for climate outcomes operates against the backdrop of an actual and substantial tax liability. Whereas prediction markets have heretofore largely involved only recreational trading, this prediction market will operate against a regulatory backdrop and thus will provide much stronger incentives for traders to acquire and trade on information.”

Link to the full paper.

A 2018 paper by GARY LUCAS and FELIX MORMANN suggests using similar predictions for climate policies beyond carbon taxes:

“We explain how both the federal and state governments could use prediction markets to help resolve high-profile controversies, such as how best to allocate subsidies to promote clean technology innovation and which policy strategy promises the greatest reduction in carbon emissions.” 

Link to their paper.

  • In 2016, a group of researchers modeled the way that information would converge in a climate prediction market, and found “market participation causes most traders to converge quickly toward believing the ‘true’ climate model, suggesting that a climate market could be useful for building public consensus.” Link. (A toy simulation in this spirit appears after this list.)
  • Tyler Cowen wrote about Hsu’s paper in 2011: “I think of such fine-tuning as a misguided approach. Is there such a good ‘basket’ measure of climate outcomes with sufficiently low short-term volatility?” Link.
  • A 2017 paper by Michael Thicke makes a similar point about prediction models for science generally: “Prediction markets for science could be uninformative or deceptive because scientific predictions are often long-term, while prediction markets perform best for short-term questions.” Link.
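
To make the convergence mechanism concrete, here is a minimal toy simulation, not the 2016 authors' model: traders start with scattered beliefs about the probability of a climate outcome, the market price averages those beliefs, and each round traders nudge their beliefs toward the price and toward realized outcomes. All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_P = 0.7            # true per-period probability of the climate outcome
N_TRADERS = 100
N_ROUNDS = 500
PRICE_PULL = 0.1        # how strongly traders update toward the market price

# Traders start with scattered beliefs about TRUE_P.
beliefs = rng.uniform(0.0, 1.0, N_TRADERS)

for t in range(1, N_ROUNDS + 1):
    price = beliefs.mean()                      # price aggregates all beliefs
    outcome = float(rng.random() < TRUE_P)      # outcome is realized and paid out
    beliefs += PRICE_PULL * (price - beliefs)   # learn from other traders via price
    beliefs += (outcome - beliefs) / (t + 1)    # running average of realized outcomes

print(f"market price after {N_ROUNDS} rounds: {beliefs.mean():.3f} (truth: {TRUE_P})")
print(f"spread of trader beliefs: {beliefs.std():.4f}")
```

Run it a few times with different seeds: the price settles near the true probability while the spread of trader beliefs collapses, which is the consensus-building effect the modeling study describes.
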
⤷ Full Article

June 23rd, 2018

Yielding Stone

VISIBLE CONSTRAINT

Including protected variables can make algorithmic decision-making more fair 

A recent paper co-authored by JON KLEINBERG, JENS LUDWIG, SENDHIL MULLAINATHAN, and ASHESH RAMBACHAN addresses algorithmic bias, countering the "large literature that tries to 'blind' the algorithm to race to avoid exacerbating existing unfairness in society":  

"This perspective about how to promote algorithmic fairness, while intuitive, is misleading and in fact may do more harm than good. We develop a simple conceptual framework that models how a social planner who cares about equity should form predictions from data that may have potential racial biases. Our primary result is exceedingly simple, yet often overlooked: a preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (such as setting a different threshold for different groups) but the estimated prediction function itself should not change. Absent legal constraints, one should include variables such as gender and race for fairness reasons.

Our argument collects together and builds on existing insights to contribute to how we should think about algorithmic fairness.… We empirically illustrate this point for the case of using predictions of college success to make admissions decisions. Using nationally representative data on college students, we underline how the inclusion of a protected variable—race in our application—not only improves predicted GPAs of admitted students (efficiency), but also can improve outcomes such as the fraction of admitted students who are black (equity).

Across a wide range of estimation approaches, objective functions, and definitions of fairness, the strategy of blinding the algorithm to race inadvertently detracts from fairness."

Read the full paper here.
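
The paper's central distinction, estimate one prediction function but let equity enter only at the decision stage, is easy to see in a small sketch. The data, model, and thresholds below are all invented for illustration; the paper's result is far more general.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Synthetic data, invented for illustration: a binary protected attribute and a
# test score that understates true ability for group 1.
group = rng.integers(0, 2, n)
ability = rng.normal(0.0, 1.0, n)
score = ability - 0.3 * group
success = ability + rng.normal(0.0, 1.0, n) > 0   # "college success"

# One estimator, *including* the protected variable, per the paper's argument.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, success)
p_success = model.predict_proba(X)[:, 1]

# Equity preferences enter only at the decision stage, as group-specific
# thresholds; the fitted prediction function itself is never altered.
thresholds = np.where(group == 1, 0.5, 0.6)       # illustrative planner choice
admit = p_success > thresholds

for g in (0, 1):
    m = group == g
    print(f"group {g}: admission rate {admit[m].mean():.1%}, "
          f"mean predicted success among admitted {p_success[m & admit].mean():.2f}")
```

Note the division of labor: the protected variable improves the prediction function, while fairness preferences show up only in the thresholds applied afterward.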

⤷ Full Article

June 16th, 2018

Phantom Perspective

ROLL CALL 

A new report from Fordham CLIP sheds light on the market for student list data from higher education institutions

From the paper authored by N. CAMERON RUSSELL, JOEL R. REIDENBERG, ELIZABETH MARTIN, and THOMAS NORTON of the FORDHAM CENTER ON LAW AND INFORMATION POLICY: 

“Student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services.

This information is being collected, marketed, and sold about individuals because they are students."

Drawing from publicly available sources, public records requests to educational institutions, and marketing materials sent to high school students gathered over several years, the study paints an unsettling portrait of the murky market for student list data, and makes recommendations for regulatory response:

  1. “The commercial marketplace for student information should not be a subterranean market. Parents, students, and the general public should be able to reasonably know (i) the identities of student data brokers, (ii) what lists and selects they are selling, and (iii) where the data for student lists and selects derives. A model like the Fair Credit Reporting Act (FCRA) should apply to compilation, sale, and use of student data once outside of schools and FERPA protections. If data brokers are selling information on students based on stereotypes, this should be transparent and subject to parental and public scrutiny.
  2. Brokers of student data should be required to follow reasonable procedures to assure maximum possible accuracy of student data. Parents and emancipated students should be able to gain access to their student data and correct inaccuracies. Student data brokers should be obligated to notify purchasers and other downstream users when previously-transferred data is proven inaccurate and these data recipients should be required to correct the inaccuracy.
  3. Parents and emancipated students should be able to opt out of uses of student data for commercial purposes unrelated to education or military recruitment.
  4. When surveys are administered to students through schools, data practices should be transparent, students and families should be informed as to any commercial purposes of surveys before they are administered, and there should be compliance with other obligations under the Protection of Pupil Rights Amendment (PPRA)."
⤷ Full Article

June 9th, 2018

Ego

PAVEMENT, NURSING, MISSILES

Algorithm Tips, a compilation of "potentially newsworthy algorithms" for journalists and researchers

DANIEL TRIELLI, JENNIFER STARK, and NICK DIAKOPOULOS of Northwestern’s Computational Journalism Lab created this searchable, non-comprehensive list of algorithms in use at the federal, state, and local levels. The “Methodology” page explains the data-scraping process, then the criteria for inclusion:

“We formulated questions to evaluate the potential newsworthiness of each algorithm:

Can this algorithm have a negative impact if used inappropriately?
Can this algorithm raise controversy if adopted?
Is the application of this algorithm surprising?
Does this algorithm privilege or harm a specific subset of people?
Does the algorithm have the potential of affecting a large population or section of the economy?

If the answers for any of these questions were 'yes', the algorithm could be included on the list."

Link. The list includes a huge range of applications, from a Forest Service algorithmic ranking of invasive plants, to an intelligence project meant to discover “significant societal events” from public data—and pavement, nursing, and missiles too.

  • Nick Diakopoulos also wrote a guide for journalists on investigating algorithms: “Auditing algorithms is not for the faint of heart. Information deficits limit an auditor’s ability to sometimes even know where to start, what to ask for, how to interpret results, and how to explain the patterns they’re seeing in an algorithm’s behavior. There is also the challenge of knowing and defining what’s expected of an algorithm, and how those expectations may vary across contexts.” Link.
  • The guide is a chapter from the upcoming Data Journalism Handbook. One of the partner organizations behind the guide has a website of advice and stories from the data-reporting trenches, such as this piece on trying to figure out prescription drug deaths: “The FDA literally found three different ways to spell ASCII. This was a sign of future surprises.”
⤷ Full Article

June 2nd, 2018

Even With Closed Eyes

ARTIFICIAL INFERENCE

Causal reasoning and machine learning 

In a recent paper titled "The Seven Pillars of Causal Reasoning with Reflections on Machine Learning", JUDEA PEARL, professor of computer science at UCLA and author of Causality, writes:

“Current machine learning systems operate, almost exclusively, in a statistical or model-free mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference tasks. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond reach of current machine learning systems and which have been accomplished using the tools of causal modeling." 

The tasks include work on counterfactuals and new approaches to handling incomplete data. Link to the paper. A vivid expression of the issue: "Unlike the rules of geometry, mechanics, optics or probabilities, the rules of cause and effect have been denied the benefits of mathematical analysis. To appreciate the extent of this denial, readers would be stunned to know that only a few decades ago scientists were unable to write down a mathematical equation for the obvious fact that 'mud does not cause rain.' Even today, only the top echelon of the scientific community can write such an equation and formally distinguish 'mud causes rain' from 'rain causes mud.'”
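
The mud-and-rain asymmetry can be written down as a two-variable structural causal model. The sketch below is a minimal illustration of that idea, not Pearl's code, and the probabilities are invented. An intervention, do(), replaces a variable's structural equation rather than conditioning on its value, and the asymmetry falls out: conditioning on mud raises the probability of rain, but intervening on mud leaves it untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

def simulate(do_rain=None, do_mud=None):
    """Sample from the SCM rain -> mud; do_* overrides a structural equation."""
    if do_rain is None:
        rain = rng.random(N) < 0.3                     # exogenous: P(rain) = 0.3
    else:
        rain = np.full(N, do_rain, dtype=bool)
    if do_mud is None:
        # Structural equation: rain makes mud very likely; mud is rare otherwise.
        mud = np.where(rain, rng.random(N) < 0.9, rng.random(N) < 0.1)
    else:
        mud = np.full(N, do_mud, dtype=bool)
    return rain, mud

rain, mud = simulate()
print(f"P(rain)             = {rain.mean():.2f}")
print(f"P(rain | mud)       = {rain[mud].mean():.2f}")  # association runs both ways

_, mud_do = simulate(do_rain=True)
print(f"P(mud | do(rain=1)) = {mud_do.mean():.2f}")     # high: rain causes mud

rain_do, _ = simulate(do_mud=True)
print(f"P(rain | do(mud=1)) = {rain_do.mean():.2f}")    # = P(rain): mud does not cause rain
```

Observationally, seeing mud makes rain more likely; under intervention, pouring mud on the ground leaves P(rain) at 0.3. That is the distinction purely associational learners cannot express.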

Pearl also has a new book out, co-authored with DANA MACKENZIE, in which he argues for the importance of determining cause and effect in the machine learning context. From an interview in Quanta magazine about his work and the new book:

"As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. If we want machines to reason about interventions ('What if we ban cigarettes?') and introspection ('What if I had finished high school?'), we must invoke causal models. Associations are not enough—and this is a mathematical fact, not opinion.

We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans."

Link to the interview. (And link to the book page.)

⤷ Full Article

May 26th, 2018

Correction of the Lines

SHOCK-LEVEL-ZERO

Jobs guarantees vs. basic income

In a characteristically lengthy and thorough post, SCOTT ALEXANDER of SLATE STAR CODEX argues for a basic income over a jobs guarantee, in dialogue with a post by SIMON SARRIS.

Here's how Alexander addresses the claim that “studies of UBI haven’t been very good, so we can’t know if it works”:

“If we can’t 100% believe the results of small studies – and I agree that we can’t – our two options are to give up and never do anything that hasn’t already been done, or to occasionally take the leap towards larger studies. I think basic income is promising enough that we need to pursue the second. Sarris has already suggested he won’t trust anything that’s less than permanent and widespread, so let’s do an experiment that’s permanent and widespread.”

Link to the full piece on Slate Star Codex.

For another angle on the same question, MARTIN RAVALLION recently published a paper at the CENTER FOR GLOBAL DEVELOPMENT looking at employment guarantees and income guarantees primarily in India:

“The paper has pointed to evidence for India suggesting that the country’s Employment Guarantee Schemes have been less cost effective in reducing current poverty through the earnings gains to workers than one would expect from even untargeted transfers, as in a UBI. This calculation could switch in favor of workfare schemes if they can produce assets of value (directly or indirectly) to poor people, though the evidence is mixed on this aspect of the schemes so far in India.”

Ravallion takes a nuanced view of arguments for the right to work and the right to income, as well as the constraints of implementation, and concludes, "The key point is that, in some settings, less effort at fine targeting may well prove to be more cost-effective in assuring economic freedom from material deprivation." Full study available here. ht Sidhya

⤷ Full Article

May 12th, 2018

Lay of the Land

LABOR-LEISURE TRADE-OFF

A new paper on the labor effects of cash transfers

SARAH BAIRD, DAVID MCKENZIE, and BERK OZLER of the WORLD BANK review a variety of cash transfer studies, both governmental and non-governmental, in low- and middle-income countries. Cash transfers aren’t shown to have the negative effects on work that some fear:

"The basic economic model of labor supply has a very clear prediction of what should be expected when an adult receives an unexpected cash windfall: they should work less and earn less. This intuition underlies concerns that many types of cash transfers, ranging from government benefits to migrant remittances, will undermine work ethics and make recipients lazy.

Overall, cash transfers that are made without an explicit employment focus (such as conditional and unconditional cash transfers and remittances) tend to result in little to no change in adult labor. The main exceptions are transfers to the elderly and some refugees, who reduce work. In contrast, transfers made for job search assistance or business start-up tend to increase adult labor supply and earnings, with the likely main channels being the alleviation of liquidity and risk constraints."

Link to the working paper. Table 2, which covers the channels through which cash impacts labor, is especially worth a read; many studies on cash transfers don’t go into this level of detail.
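
The "very clear prediction" in the opening of the quote is the textbook income effect, which takes one line to derive in the standard Cobb-Douglas setup: with utility u(c, l) = a·ln(c) + (1 − a)·ln(l), wage w, time endowment T, and unearned income y, optimal hours are h* = aT − (1 − a)y/w, falling linearly in the transfer. A quick sketch with made-up parameters:

```python
ALPHA, WAGE, T = 0.6, 15.0, 100.0   # preference weight, hourly wage, monthly time budget

def optimal_hours(y):
    """Closed-form Cobb-Douglas labor supply, clipped at zero: h* = aT - (1-a)y/w."""
    return max(ALPHA * T - (1 - ALPHA) * y / WAGE, 0.0)

for y in (0, 500, 1000, 2000):      # unearned income, e.g. a monthly cash transfer
    print(f"transfer {y:5d}: optimal hours = {optimal_hours(y):5.1f}")
```

The survey's point is that observed labor responses to real transfers are far smaller than this frictionless benchmark would suggest.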

  • A study on a large-scale unconditional cash transfer in Iran: "With the exception of youth, who have weak ties to the labor market, we find no evidence that cash transfers reduced labor supply, while service sector workers appear to have increased their hours of work, perhaps because some used transfers to expand their business." Link.
  • Continuing the analysis of Haushofer and Shapiro’s controversial results from a cash transfer study in Kenya, Josh Rosenberg at GiveDirectly has, at the end of his overview, some thoughtful questions for continuing research: "Is our cost-effectiveness model using a reasonable framework for estimating recipients’ standard of living over time?… GiveDirectly provides large, one-time transfers whereas many government cash transfers provide smaller ongoing support to poor families. How should we apply new literature on other kinds of cash programs to our estimates of the effects of GiveDirectly?" Link.
⤷ Full Article

October 28th, 2019

Radius

RELATIVE DUTIES

The origins of American tax policy

Tax reform is at the forefront of contemporary policy debate. US citizens pay taxes at lower rates than their European counterparts, and a growing number of researchers agree that progressive taxes on wealth and income have the potential to rectify inequality. The historically less progressive nature of American tax policy is commonly explained as a product of the colonies' early opposition to "taxation without representation," as well as the large population of immigrants, the absence of traditional aristocracy, and the ubiquity of "country party republican" ideology that characterized the country's formation.

In an essay accompanying the publication of her 2006 book, historian ROBIN EINHORN introduces a new factor into the debate: the impact of domestic politics around slavery on early American state-building. From the piece:

"Americans are right to think that our anti-tax and anti-government attitudes have deep historical roots. Our mistake is to dig for them in Boston. We should be digging in Virginia and South Carolina rather than in Massachusetts or Pennsylvania, because the origins of these attitudes have more to do with the history of American slavery than the history of American freedom. In 1776, Congress was talking about slavery because its members were framing a national government for the new nation—what would become the Articles of Confederation. Trying to figure out how to count the population to distribute tax burdens to the various states, the members inevitably faced the problem of whether to count the population of enslaved African Americans. Since slaves were 4% of the population in the North and 37% of the population in the South, this decision would have a huge impact on the tax burdens of the white taxpayers of the northern and southern states.

Slaveholders developed three solutions to this general problem. First, they tried to guarantee that they dominated the legislative process by manipulating the representation rules. Second, they demanded weak governments that would make few of the decisions that provoked discussions of slavery. Third, they insisted on constraining the tax power through constitutional limitations on its use. Yet the real slaveholder victory lay in a fourth strategy—persuading the nonslaveholding majorities that the weak government and constitutionally restrained tax power actually were in the interests of the nonslaveholders themselves. Slaveholders persuaded many of their contemporaries that expansions of slavery are expansions of 'liberty,' constitutional limitations on democratic self-government are defenses of 'equal rights,' and the power of slaveholding elites is the power of the 'common man.' In the topsy-turvy political world we have inherited from the age of slavery, the power of the majority to decide how to tax became the power of an alien 'government' to oppress 'the people.'"
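
The stakes of the counting question are easy to quantify with the population shares Einhorn cites (enslaved people were 4% of the northern and 37% of the southern population). The equal regional totals in the sketch below are an invented simplification, purely for illustration:

```python
# Illustrative only: assume equal total populations of 100 in each region and
# apportion a national tax by counted population.
NORTH_TOTAL = SOUTH_TOTAL = 100.0
NORTH_ENSLAVED = 0.04 * NORTH_TOTAL      # 4% of the northern population
SOUTH_ENSLAVED = 0.37 * SOUTH_TOTAL      # 37% of the southern population

for label, weight in [("slaves fully counted", 1.0),
                      ("three-fifths compromise", 3 / 5),
                      ("slaves not counted", 0.0)]:
    north = NORTH_TOTAL - NORTH_ENSLAVED + weight * NORTH_ENSLAVED
    south = SOUTH_TOTAL - SOUTH_ENSLAVED + weight * SOUTH_ENSLAVED
    print(f"{label:24s}: South pays {south / (north + south):.1%} of the tax")
```

Under these assumptions, whether enslaved people were counted swings the South's share of a population-apportioned tax by roughly ten percentage points, which is the kind of stake behind the fight Einhorn describes.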

Link to the essay, and link to a 2000 academic article by Einhorn which presents the argument in greater historical detail.

  • "The growth in cash transactions was critical to the evolution of the modern income tax. Because the market's cash nexus permitted more and more individuals to derive a greater portion of their income and wealth from the sale of their labor services, lawmakers were able to more easily measure and tap the growing tax base. Consequently, the national tax structure began to shift away from a reliance on indirect levies, namely import duties and excise taxes on alcohol and tobacco, toward more direct and graduated taxes on income and wealth transfers." Ajay Mehrotra looks at the economic developments behind the passage of the 16th Amendment in 1913. Link.
  • In a new paper, Lucy Barnes links tax progressivity to the strength of capital-labor coalitions in European countries prior to World War I. Link.
  • A 2017 paper by Raymond Fisman, Keith Gladstone, Ilyana Kuziemko, and Suresh Naidu offers the first ever evidence on the taxation preferences of US citizens, finding that Americans are more likely to support taxes on wealth than on savings. Link. See also this 2016 paper by Naidu, Felipe González, and Guillermo Marshall on the role of slave property rights in promoting early American economic development. Link.
⤷ Full Article

May 5th, 2018

Aesthetic Integration

POSTAL OPTION 

Renewed interest in an old model 

Last week we linked to the widely publicized news that SENATOR KIRSTEN GILLIBRAND would be pushing legislation to reintroduce government-run commercial banking through the United States Postal Service.

Link to the announcement, and link to Gillibrand's Twitter thread on the plan.

In a 2014 article for the HARVARD LAW REVIEW, law professor and postal banking advocate MEHRSA BARADARAN describes the context that makes postal banking an appealing solution: 

“Credit unions, S&Ls, and Morris Banks were alternatives to mainstream banks, but they were all supported and subsidized by the federal government through targeted regulation and deposit insurance protection.

Banking forms homogenized in the 1970s and 1980s, leaving little room for variation in institutional or regulatory design. Eventually, each of these institutions drifted from their initial mission of serving the poor and began to look more like commercial banks, even competing with them for ever-shrinking profit margins.

The result now is essentially two forms of banks: regulated mainstream banks that seek maximum profit for their shareholders by serving the needs of the wealthy and middle class, and unregulated fringe banks that seek maximum profits for their shareholders by serving the banking and credit needs of the poor. What is missing from the American banking landscape for the first time in almost a century is a government-sponsored bank whose main purpose is to meet the needs of the poor."

⤷ Full Article