The Phenomenal World

August 18th, 2018

House Fronts

CASH TRANSFERS IN IRAN | PERCEPTIONS OF WELFARE

COMPENSATION TREATMENT

In Iran, cash transfers don't reduce labor supply

A new study examines the effects of Iran's changeover from energy subsidies to cash transfers. From the abstract, by DJAVAD SALEHI-ISFAHANI and MOHAMMAD H. MOSTAFAVI-DEHZOOEI of the ECONOMIC RESEARCH FORUM:

“This paper examines the impact of a national cash transfer program on labor supply in Iran. [...] We find no evidence that cash transfers reduced labor supply, in terms of hours worked or labor force participation. To the contrary, we find positive effects on the labor supply of women and self-employed men.”

Most recent version here. The ungated working paper is available here.

  • Another paper co-authored by Salehi-Isfahani further details the energy subsidies program and the role that cash transfers played in the reforms, with a specific focus on differences in take-up. Link.
  • We’ve previously shared work from Damon Jones and Ioana Marinescu on the Alaska Permanent Fund dividend, which found that “a universal and permanent cash transfer does not significantly decrease aggregate employment.” Link.
  • In other Basic Income news, petitions and protests are being organized in response to the cancellation of the Ontario pilot.
⤷ Full Article

August 11th, 2018

Constellation

AUTOMATION AND CAPITAL STOCK | METARESEARCH

ALCHEMIST STOCK

Automation, employment, and capital investment

At his blog STUMBLING AND MUMBLING, CHRIS DILLOW discusses recent reporting on fears of rapid automation in the United Kingdom:

"'More than six million workers are worried their jobs could be replaced by machines over the next decade' says the Guardian. This raises a longstanding paradox – that, especially in the UK, the robot economy is much more discussed than observed.

What I mean is that the last few years have seen pretty much the exact opposite of this. Employment has grown nicely whilst capital spending has been sluggish. The ONS says that 'annual growth in gross fixed capital formation has been slowing consistently since 2014.' And the OECD reports that the UK has one of the lowest usages of industrial robots in the western world.

My chart, taken from the Bank of England and ONS, puts this into historic context. It shows that the gap between the growth of the non-dwellings capital stock and employment growth has been lower in recent years than at any time since 1945. The time to worry about machines taking people’s jobs was the 60s and 70s, not today.… If we looked only at the macro data, we’d fear that people are taking robots' jobs – not vice versa."

Link to the post.

⤷ Full Article

August 4th, 2018

The Great Abundance

CARBON TAXES | WORLD TRADE DATABASE

ENERGY BOOM

A new carbon tax proposal and a big new carbon tax research report

Representative Carlos Curbelo (R-FL) introduced a carbon tax bill in the House last week (though it is “sure to fail” under the current government, it is unusual to see a carbon tax proposed by a Republican). According to Reuters, “Curbelo said the tax would generate $700 billion in revenue over a decade for infrastructure investments.” A detailed analysis is available from the Center on Global Energy Policy at Columbia SIPA, which launched a Carbon Tax Initiative this year.

For a broader look at carbon taxes, earlier this month the Columbia initiative published a significant four-part series on the “economic, energy, and environmental implications of federal carbon taxes” (press release here).

The overview covers impacts on energy sources:

“The effects of a carbon tax on prices are largest for energy produced by coal, followed by oil, then natural gas, due to the difference in carbon intensity of each fuel. Every additional dollar per ton of the carbon tax increases prices at the pump by slightly more than one cent per gallon for gasoline and slightly less than one cent per gallon for diesel.”

And examines a few possible revenue uses:

“How the carbon tax revenue is used is the major differentiating factor in distributional outcomes. A carbon tax policy can be progressive, regressive, or neither.”

Overview here. Link to report on energy and environmental implications; link to report on distributional implications; link to report on implications for the economy and household welfare.
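
For intuition on the pass-through figures quoted above, here is a minimal back-of-the-envelope sketch in Python. The combustion-only emissions factors (roughly 8.9 kg CO2 per gallon of gasoline and 10.2 kg per gallon of diesel) are common approximations assumed for illustration, not figures from the Columbia report; exact pass-through depends on which upstream emissions the report includes.

```python
# Back-of-the-envelope pass-through of a carbon tax to pump prices.
# The emissions factors below are common combustion-only approximations
# (kg CO2 per gallon), assumed for illustration -- not the report's figures.
EMISSIONS_KG_PER_GALLON = {
    "gasoline": 8.9,
    "diesel": 10.2,
}

def pump_price_increase_cents(fuel: str, tax_usd_per_ton: float) -> float:
    """Cents per gallon added by a tax of tax_usd_per_ton dollars per metric ton of CO2."""
    usd_per_kg = tax_usd_per_ton / 1000.0        # $/metric ton -> $/kg
    usd_per_gallon = usd_per_kg * EMISSIONS_KG_PER_GALLON[fuel]
    return usd_per_gallon * 100.0                # dollars -> cents

for fuel in EMISSIONS_KG_PER_GALLON:
    # Roughly one cent per gallon for each additional dollar per ton of tax.
    print(fuel, round(pump_price_increase_cents(fuel, tax_usd_per_ton=1.0), 2))
```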

⤷ Full Article

July 28th, 2018

Jetty

QUANTITATIVE ECONOMIC HISTORY | BEHAVIORAL ECON POLICY | CHINA IN THE 20TH CENTURY

BANKING AS ART

On the history of economists in central banks 

A recent paper by FRANÇOIS CLAVEAU and JÉRÉMIE DION applies quantitative methods to the historical study of central banks, demonstrating the transition of central banking from an "esoteric art" to a science, the growth of economics research within central banking institutions, and the corresponding rise in the dominance of central banks in the field of monetary economics. From the paper: 

"We study one type of organization, central banks, and its changing relationship with economic science. Our results point unambiguously toward a growing dominance of central banks in the specialized field of monetary economics. Central banks have swelling research armies, they publish a growing share of the articles in specialized scholarly journals, and these articles tend to have more impact today than the articles produced outside central banks."

Link to the paper, which contains a vivid 1929 dialogue between Keynes and Sir Ernest Musgrave Harvey of the Bank of England, who asserts, "It is a dangerous thing to start giving reasons." 

h/t to the always-excellent Beatrice Cherrier, who highlighted this work in a brief thread that includes several visualizations, among them a chart of the publishing rate of central bank researchers.

  • Via both Cherrier and the paper, a brief Economist article on the crucial significance of the central banking conference in Jackson Hole, hosted by the Federal Reserve Bank of Kansas City: "Davos for central bankers." Link. (And link to an official history of the conference.) 
  • Another paper co-authored by Claveau looks at the history of specialties in economics, using quantitative methods to map the importance of sets of ideas through time. "Among our results, especially noteworthy are (1) the clear-cut existence of ten families of specialties, (2) the disappearance in the late 1970s of a specialty focused on general economic theory, (3) the dispersal of the econometrics-centered specialty in the early 1990s and the ensuing importance of specific econometric methods for the identity of many specialties since the 1990s, and (4) the low level of specialization of individual economists throughout the period in contrast to physicists as early as the late 1960s." Link
⤷ Full Article

July 21st, 2018

High Noon

ACTUARIAL RISK ASSESSMENT | SOCIAL SCIENCE DATA

ALTERNATIVE ACTUARY

History of risk assessment, and some proposed alternative methods

A 2002 paper by ERIC SILVER and LISA L. MILLER on actuarial risk assessment tools provides a history of statistical prediction in the criminal justice context, and issues cautions that are now central to contemporary conversations about algorithmic fairness:

"Much as automobile insurance policies determine risk levels based on the shared characteristics of drivers of similar age, sex, and driving history, actuarial risk assessment tools for predicting violence or recidivism use aggregate data to estimate the likelihood that certain strata of the population will commit a violent or criminal act. 

To the extent that actuarial risk assessment helps reduce violence and recidivism, it does so not by altering offenders and the environments that produced them but by separating them from the perceived law-abiding populations. Actuarial risk assessment facilitates the development of policies that intervene in the lives of citizens with little or no narrative of purpose beyond incapacitation. The adoption of risk assessment tools may signal the abandonment of a centuries-long project of using rationality, science, and the state to improve upon the social and economic progress of individuals and society."

Link to the paper.
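
To make the "aggregate strata" logic concrete, here is a deliberately simplified sketch on invented data: past cases are grouped by shared characteristics, and a new case is scored with the observed base rate of its stratum. It illustrates the general mechanism Silver and Miller describe, not any particular instrument in use.

```python
# Simplified strata-based (actuarial) risk scoring: group past cases by shared
# characteristics and assign new cases the base rate of their stratum.
# Records and field names are invented for illustration.
from collections import defaultdict

def fit_strata_rates(records, keys):
    """records: iterable of dicts with the stratum keys plus a 'recidivated' flag (0/1)."""
    counts = defaultdict(lambda: [0, 0])          # stratum -> [recidivists, total]
    for r in records:
        stratum = tuple(r[k] for k in keys)
        counts[stratum][0] += r["recidivated"]
        counts[stratum][1] += 1
    return {s: n_pos / n for s, (n_pos, n) in counts.items()}

history = [
    {"age_band": "18-25", "prior_offenses": "2+", "recidivated": 1},
    {"age_band": "18-25", "prior_offenses": "2+", "recidivated": 0},
    {"age_band": "40+",   "prior_offenses": "0",  "recidivated": 0},
]
rates = fit_strata_rates(history, keys=("age_band", "prior_offenses"))
# A new individual is scored by the base rate of the stratum they fall into,
# not by anything specific to them -- the point the authors emphasize.
print(rates[("18-25", "2+")])   # 0.5
```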

A more recent paper, presented at FAT* in 2018 and co-authored by CHELSEA BARABAS, KARTHIK DINAKAR, JOICHI ITO, MADARS VIRZA, and JONATHAN ZITTRAIN, makes several arguments reminiscent of Silver and Miller's work. The authors argue for a causal inference framework for risk assessment, aimed at answering the question of what interventions work:

"We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it's one of purpose.… Data-driven tools provide an immense opportunity for us to pursue goals of fair punishment and future crime prevention. But this requires us to move away from merely tacking on intervenable variables to risk covariates for predictive models, and towards the use of empirically-grounded tools to help understand and respond to the underlying drivers of crime, both individually and systemically."

Link to the paper. 

  • In his 2007 book Against Prediction, lawyer and theorist Bernard Harcourt provided detailed accounts and critiques of the use of actuarial methods throughout the criminal legal system. In place of prediction, Harcourt proposes a conceptual and practical alternative: randomization. From a 2005 paper on the same topic: "Instead of embracing the actuarial turn in criminal law, we should rather celebrate the virtues of the random: randomization, it turns out, is the only way to achieve a carceral population that reflects the offending population. As a form of random sampling, randomization in policing has significant positive value: it reinforces the central moral intuition in the criminal law that similarly situated individuals should have the same likelihood of being apprehended if they offend—regardless of race, ethnicity, gender or class." Link to the paper. (And link to another paper of Harcourt's in the Federal Sentencing Reporter, "Risk as a Proxy for Race.") 
  • A recent paper by Megan Stevenson assesses risk assessment tools: "Despite extensive and heated rhetoric, there is virtually no evidence on how use of this 'evidence-based' tool affects key outcomes such as incarceration rates, crime, or racial disparities. The research discussing what 'should' happen as a result of risk assessment is hypothetical and largely ignores the complexities of implementation. This Article is one of the first studies to document the impacts of risk assessment in practice." Link
  • A compelling piece of esoterica cited in Harcourt's book: a doctoral thesis by Deborah Rachel Coen on the "probabilistic turn" in 19th century imperial Austria. Link.
⤷ Full Article

July 14th, 2018

Traveling Light

DATA OWNERSHIP BY CONSUMERS | CLIMATE AND CULTURAL CHANGE

DATA IS NONRIVAL

Considerations on data sharing and data markets 

CHARLES I. JONES and CHRISTOPHER TONETTI contribute to the “new but rapidly-growing field” known as the economics of data:

“We are particularly interested in how different property rights for data determine its use in the economy, and thus affect output, privacy, and consumer welfare. The starting point for our analysis is the observation that data is nonrival. That is, at a technological level, data is not depleted through use. Most goods in economics are rival: if a person consumes a kilogram of rice or an hour of an accountant’s time, some resource with a positive opportunity cost is used up. In contrast, existing data can be used by any number of firms or people simultaneously, without being diminished. Consider a collection of a million labeled images, the human genome, the U.S. Census, or the data generated by 10,000 cars driving 10,000 miles. Any number of firms, people, or machine learning algorithms can use this data simultaneously without reducing the amount of data available to anyone else. The key finding in our paper is that policies related to data have important economic consequences.”

After modeling a few different data-ownership possibilities, the authors conclude, “Our analysis suggests that giving the data property rights to consumers can lead to allocations that are close to optimal.” Link to the paper.

  • Jones and Tonetti cite an influential 2015 paper by Alessandro Acquisti, Curtis R. Taylor, and Liad Wagman on “The Economics of Privacy”: “In digital economies, consumers' ability to make informed decisions about their privacy is severely hindered, because consumers are often in a position of imperfect or asymmetric information regarding when their data is collected, for what purposes, and with what consequences.” Link.
  • For more on data populi, Ben Tarnoff has a general-interest overview in Logic Magazine, including mention of the data dividend and a comparison to the Alaska Permanent Fund. Tarnoff uses the oil industry as an analogy throughout: “In the oil industry, companies often sign ‘production sharing agreements’ (PSAs) with governments. The government hires the company as a contractor to explore, develop, and produce the oil, but retains ownership of the oil itself. The company bears the cost and risk of the venture, and in exchange receives a portion of the revenue. The rest goes to the government. Production sharing agreements are particularly useful for governments that don’t have the machinery or expertise to exploit a resource themselves.” Link.
⤷ Full Article

July 7th, 2018

Quodlibet

RANDOMIZED CONTROLLED TRIALS | HIERARCHY & DESPOTISM

EVIDENCE PUZZLES

The history and politics of RCTs 

In a 2016 working paper, JUDITH GUERON recounts and evaluates the history of randomized controlled trials (RCTs) in the US, drawing on her own experience developing welfare experiments at MDRC and HHS:

“To varying degrees, the proponents of welfare experiments at MDRC and HHS shared three mutually reinforcing goals. The first was to obtain reliable and—given the long and heated controversy about welfare reform—defensible evidence of what worked and, just as importantly, what did not. Over a pivotal ten years from 1975 to 1985, these individuals became convinced that high-quality RCTs were uniquely able to produce such evidence and that there was simply no adequate alternative. Thus, their first challenge was to demonstrate feasibility: that it was ethical, legal, and possible to implement this untried—and at first blush to some people immoral—approach in diverse conditions. The other two goals sprang from their reasons for seeking rigorous evidence. They were not motivated by an abstract interest in methodology or theory; they wanted to inform policy and make government more effective and efficient. As a result, they sought to make the body of studies useful, by assuring that it addressed the most significant questions about policy and practice, and to structure the research and communicate the findings in ways that would increase the potential that they might actually be used." 

⤷ Full Article

June 30th, 2018

The Duel

CARBON DIVIDENDS | SECTORAL BARGAINING

CLIMATE PREDICTION MARKET

A 2011 paper by SHI-LING HSU suggests a way of using a carbon tax to generate more accurate predictions of future climate conditions:

“The market for tradable permits to emit in the future is essentially a prediction market for climate outcomes. And yet, unlike prediction markets that have been operated or proposed thus far, this prediction market for climate outcomes operates against the backdrop of an actual and substantial tax liability. Whereas prediction markets have heretofore largely involved only recreational trading, this prediction market will operate against a regulatory backdrop and thus will provide much stronger incentives for traders to acquire and trade on information.”

 Link to the full paper.

A 2018 paper by GARY LUCAS and FELIX MORMANN suggests using similar predictions for climate policies beyond carbon taxes:

“We explain how both the federal and state governments could use prediction markets to help resolve high-profile controversies, such as how best to allocate subsidies to promote clean technology innovation and which policy strategy promises the greatest reduction in carbon emissions.” 

Link to their paper.

  • In 2016, a group of researchers modeled the way that information would converge in a climate prediction market, and found “market participation causes most traders to converge quickly toward believing the ‘true’ climate model, suggesting that a climate market could be useful for building public consensus.” Link.
  • Tyler Cowen wrote about Hsu’s paper in 2011: “I think of such fine-tuning as a misguided approach. Is there such a good ‘basket’ measure of climate outcomes with sufficiently low short-term volatility?” Link.
  • A 2017 paper by Michael Thicke makes a similar point about prediction models for science generally: “Prediction markets for science could be uninformative or deceptive because scientific predictions are often long-term, while prediction markets perform best for short-term questions.” Link.
⤷ Full Article

June 23rd, 2018

Yielding Stone

FAIRNESS IN ALGORITHMIC DECISION-MAKING | ADMINISTRATIVE DATA ACCESS

VISIBLE CONSTRAINT

Including protected variables can make algorithmic decision-making more fair 

A recent paper co-authored by JON KLEINBERG, JENS LUDWIG, SENDHIL MULLAINATHAN, and ASHESH RAMBACHAN addresses algorithmic bias, countering the "large literature that tries to 'blind' the algorithm to race to avoid exacerbating existing unfairness in society":  

"This perspective about how to promote algorithmic fairness, while intuitive, is misleading and in fact may do more harm than good. We develop a simple conceptual framework that models how a social planner who cares about equity should form predictions from data that may have potential racial biases. Our primary result is exceedingly simple, yet often overlooked: a preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (such as setting a different threshold for different groups) but the estimated prediction function itself should not change. Absent legal constraints, one should include variables such as gender and race for fairness reasons.

Our argument collects together and builds on existing insights to contribute to how we should think about algorithmic fairness.… We empirically illustrate this point for the case of using predictions of college success to make admissions decisions. Using nationally representative data on college students, we underline how the inclusion of a protected variable—race in our application—not only improves predicted GPAs of admitted students (efficiency), but also can improve outcomes such as the fraction of admitted students who are black (equity).

Across a wide range of estimation approaches, objective functions, and definitions of fairness, the strategy of blinding the algorithm to race inadvertently detracts from fairness."

Read the full paper here.
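
As a minimal sketch of the paper's prescription on synthetic data (not the authors' empirical setup or estimator): fit the best predictor available, protected attribute included, and let equity preferences enter only at the decision stage, for example through group-specific thresholds.

```python
# Synthetic illustration: fairness preferences change how predictions are
# used (thresholds), not the estimator itself. Data and coefficients are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                    # protected attribute (0/1)
test_score = rng.normal(0.0, 1.0, n)
gpa = 3.0 + 0.4 * test_score + 0.2 * group + rng.normal(0.0, 0.3, n)

# Step 1: the estimator is unchanged by equity preferences --
# include all predictive variables, the protected attribute among them.
X = np.column_stack([test_score, group])
predicted_gpa = LinearRegression().fit(X, gpa).predict(X)

# Step 2: equity enters in how predictions are used, e.g. via
# group-specific admission thresholds (illustrative values).
cutoffs = np.where(group == 1, 3.0, 3.2)
admit = predicted_gpa >= cutoffs
print({g: float(admit[group == g].mean()) for g in (0, 1)})
```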

⤷ Full Article

June 16th, 2018

Phantom Perspective

STUDENT LIST DATA | ADJUSTING FOR AUTOMATION

A new report from Fordham CLIP sheds light on the market for student list data in higher education

From the paper authored by N. CAMERON RUSSELL, JOEL R. REIDENBERG, ELIZABETH MARTIN, and THOMAS NORTON of the FORDHAM CENTER ON LAW AND INFORMATION POLICY: 

“Student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services.

This information is being collected, marketed, and sold about individuals because they are students."

Drawing from publicly-available sources, public records requests from educational institutions, and marketing materials sent to high school students gathered over several years, the study paints an unsettling portrait of the murky market for student list data, and makes recommendations for regulatory response: 

  1. The commercial marketplace for student information should not be a subterranean market. Parents, students, and the general public should be able to reasonably know (i) the identities of student data brokers, (ii) what lists and selects they are selling, and (iii) where the data for student lists and selects derives. A model like the Fair Credit Reporting Act (FCRA) should apply to compilation, sale, and use of student data once outside of schools and FERPA protections. If data brokers are selling information on students based on stereotypes, this should be transparent and subject to parental and public scrutiny.
  2. Brokers of student data should be required to follow reasonable procedures to assure maximum possible accuracy of student data. Parents and emancipated students should be able to gain access to their student data and correct inaccuracies. Student data brokers should be obligated to notify purchasers and other downstream users when previously-transferred data is proven inaccurate and these data recipients should be required to correct the inaccuracy.
  3. Parents and emancipated students should be able to opt out of uses of student data for commercial purposes unrelated to education or military recruitment.
  4. When surveys are administered to students through schools, data practices should be transparent, students and families should be informed as to any commercial purposes of surveys before they are administered, and there should be compliance with other obligations under the Protection of Pupil Rights Amendment (PPRA).
⤷ Full Article