Phenomenal World

October 22nd, 2018


About the Phenomenal World

The Phenomenal World is a new publication featuring research, analysis, and commentary on applied social science. We chose this name for our blog because we hope to publish work that addresses the social world in all its apparent complexity.

Our contributors are economists, philosophers, social scientists, data scientists, and policy researchers. You’ll find posts on metaresearch; basic income, welfare and the commonwealth; digital ethics; education; economic history; and evolving institutions. We also post our weekly newsletter, a roundup of recommended reading from across the social sciences. Posts are wide-ranging in subject matter, length, and style.

The Phenomenal World is managed by staff of the Jain Family Institute, an applied research organization that works to bring just research and policy from theoretical conception to actual implementation in society. We welcome submissions. Please see our About page for more information on submitting, and for the sign-up form for our newsletter.

Thank you for reading.

⤷ Full Article

February 16th, 2019

Cup and Ring

UBI SUPPORT ACROSS COUNTRIES | MONITORING ICE CAPS | FOOD AND CITIZENSHIP

GAP PROGRESSION

New life in the debates over poverty measurement

In recent weeks, a familiar debate over how we understand the global poverty rate across time reappeared in mainstream op-ed pages. The debate was sparked by Bill Gates tweeting out an infographic produced by Our World in Data, which visualizes a massive decrease in global poverty (from 94% to 10% of the world population) over the past two hundred years; the notable discussants have been LSE anthropologist JASON HICKEL and Our World in Data researchers JOE HASELL and MAX ROSER.

Hickel published a polemical Guardian op-ed criticizing the publication of this chart, which, he argued, misrepresents the history it claims to communicate and relies on contestable and imprecise data sources to bolster its universal progress narrative, taking "the violence of colonisation and repackaging it as a happy story of progress." The responses were numerous.

Among them, a post by Hasell and Roser provided detailed descriptions of the methods and data behind their work to answer the following: "How do we know that the vast majority of the world population lived in extreme poverty just two centuries ago as this chart indicates? And how do we know that this account of falling global extreme poverty is in fact true?"

In addition to methodological arguments regarding data sources and the poverty line, Hickel's argument emphasizes the gap between poverty and the capacity to eliminate it:

"What matters, rather, is the extent of global poverty vis-à-vis our capacity to end it. As I have pointed out before, our capacity to end poverty (e.g., the cost of ending poverty as a proportion of the income of the non-poor) has increased many times faster than the proportional poverty rate has decreased. By this metric we are doing worse than ever before. Indeed, our civilization is regressing. On our existing trajectory, according to research published in the World Economic Review, it will take more than 100 years to end poverty at $1.90/day, and over 200 years to end it at $7.4/day. Let that sink in. And to get there with the existing system—in other words, without a fairer distribution of income—we will have to grow the global economy to 175 times its present size. Even if such an outlandish feat were possible, it would drive climate change and ecological breakdown to the point of undermining any gains against poverty.

It doesn’t have to be this way, of course."

Link to that post, and link to a subsequent one, which responds directly to the methods and data-use questions addressed by Hasell and Roser.
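
To make Hickel's "capacity" framing concrete, one plausible reading of his metric is the aggregate poverty gap (the total shortfall of the poor below the poverty line) expressed as a share of the total income of the non-poor. Below is a minimal sketch in Python; the incomes, the two "periods," and the function names are invented for illustration and are not Hickel's code or data.

```python
# Two ways to track poverty over time:
# (1) the headcount rate: the share of people below the poverty line;
# (2) one plausible reading of Hickel's "capacity" metric: the aggregate
#     shortfall below the line as a share of the non-poor's total income.
# All numbers are hypothetical.

def headcount_rate(incomes, line):
    """Share of people whose income falls below the poverty line."""
    return sum(1 for y in incomes if y < line) / len(incomes)

def gap_share_of_nonpoor_income(incomes, line):
    """Total shortfall below the line, divided by total income of the non-poor."""
    shortfall = sum(line - y for y in incomes if y < line)
    nonpoor_income = sum(y for y in incomes if y >= line)
    return shortfall / nonpoor_income

poverty_line = 1.90  # dollars per day, as in the debate above

# Hypothetical daily incomes for two periods.
period_1 = [1.00, 1.20, 1.50, 2.50, 3.00, 5.00]
period_2 = [1.40, 1.60, 2.50, 8.00, 15.00, 40.00]

for label, incomes in [("period 1", period_1), ("period 2", period_2)]:
    print(label,
          f"headcount = {headcount_rate(incomes, poverty_line):.0%},",
          f"gap / non-poor income = {gap_share_of_nonpoor_income(incomes, poverty_line):.1%}")

# In this toy example the headcount rate falls from 50% to 33%, while the
# cost of closing the gap relative to non-poor income falls from roughly
# 19% to roughly 1%. Capacity to end poverty grows much faster than the
# poverty rate falls, which is the divergence Hickel's argument turns on.
```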

⤷ Full Article

February 9th, 2019

Overboard

UBI IN CHICAGO | FOUR-DAY WORK WEEK | RELIGIOUS CONVERSION AND SES

HETEROGENEOUS GAP

In search of a more just model for higher education financing

This week, we delve into the persisting inequalities of our higher education system. Since Winston, Hill, and Boyd's 2005 finding that only 10% of students at elite universities came from families in the bottom 40% of the income distribution, universities across the board have revived efforts to diversify their student bodies.

The idea that there's a need for greater socioeconomic diversity in higher education is largely uncontroversial, particularly amid growing evidence of the higher earnings potential of college graduates. There is far less consensus, however, over which policies are best suited to addressing this gap. ROSINGER, BELASCO, and HEARN, writing in the Journal of Higher Education, examine the impact of both means-tested and universal policies that replace student loans with grants in financial aid packages. The impact of these policies on socioeconomic diversity is somewhat counterintuitive:

"We found that colleges offering universal discounts experienced increased enrollment among middle-class students. Our study indicates universal no-loan policies represent one strategy to increase access and affordability for the middle-class in the elite reaches of higher education. The study also, however, raises questions about the policies’ effectiveness in addressing access for low-income students and efficiency in targeting aid."

Link to the full paper.

  • For more on the potential for universities to facilitate both the entrenchment and supersession of generational inequalities, see the groundbreaking 2017 paper by Chetty et al. The authors used fourteen years of federal income tax data to construct mobility report cards for nearly 2,000 colleges, provoking a range of new literature in the field. Their findings: "The colleges that have the highest bottom-to-top-quintile mobility rates – i.e., those that offer both high success rates and low-income access – are typically mid-tier public institutions. For instance, many campuses of the City University of New York (CUNY), certain California State colleges, and several campuses in the University of Texas system have mobility rates above 6%… Elite private (Ivy-Plus) colleges have an average mobility rate of 2.2%." Link to the paper, as well as the digitization of its results, courtesy of the New York Times. (A sketch of how the mobility rate is computed follows this list.)
  • Drawing on "Mobility Report Cards," a recent paper by Bloome, Dyer, and Zhou finds that parental income has become less predictive of adult income, offsetting inter-generational income persistence resulting from education. Link.
  • Anna Manzoni and Jessi Streib find that wage gaps between first- and continuing-generation college students are not caused by the institutions they attend, the grades they earn, or the subjects they study: "Our decomposition analysis shows that the uneven distribution of students into labor market sectors, occupations, hours worked, and urban locations is more responsible for the wage gap than the distribution of students into and within educational institutions." Link.
  • A book on the trials and tribulations of building and maintaining the "Harvard of the proletariat": Anthony Picciano and Chet Jordan on the history of the CUNY system. Link.
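
The "mobility rate" quoted above combines two quantities: access (the share of a college's students with parents in the bottom income quintile) and success (the share of those students who reach the top quintile as adults). A minimal sketch follows; the two colleges and their numbers are invented and only loosely echo the magnitudes in the quote, so this illustrates the definition rather than reproducing the paper's results.

```python
# Sketch of the mobility rate in Chetty et al.'s "Mobility Report Cards":
# the share of a college's students who both come from the bottom parental
# income quintile (access) and reach the top quintile as adults (success).
# The colleges and figures below are invented for illustration.

def mobility_rate(access, success):
    """access = P(parent in bottom quintile);
    success = P(child in top quintile | parent in bottom quintile)."""
    return access * success

hypothetical_colleges = {
    # name: (access, success)
    "Mid-tier public A": (0.18, 0.35),  # broad low-income access, solid outcomes
    "Elite private B":   (0.04, 0.60),  # strong outcomes, very little access
}

for name, (access, success) in hypothetical_colleges.items():
    print(f"{name}: mobility rate = {mobility_rate(access, success):.1%}")

# Yields 6.3% and 2.4%: a high success rate cannot make up for very low
# access, which is why mid-tier public institutions top the rankings.
```
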
⤷ Full Article

February 2nd, 2019

The Summit

A YEAR IN UBI | MEASURING UNIVERSITIES | MEDIEVAL BEESWAX

LEGITIMATE ASSESSMENT

Moving beyond computational questions in digital ethics research

In the ever-expanding digital ethics literature, a number of researchers have been advocating a turn away from enticing technical questions—how to mathematically define fairness, for example—and towards a more expansive, foundational approach to the ethics of designing digital decision systems.

A 2018 paper by RODRIGO OCHIGAME, CHELSEA BARABAS, KARTHIK DINAKAR, MADARS VIRZA, and JOICHI ITO is exemplary of this turn. The authors dissect the three most-discussed categories in the digital ethics space—fairness, interpretability, and accuracy—and argue that current approaches to these topics may unwittingly amount to a legitimation system for unjust practices. From the introduction:

“To contend with issues of fairness and interpretability, it is necessary to change the core methods and practices of machine learning. But the necessary changes go beyond those proposed by the existing literature on fair and interpretable machine learning. To date, ML researchers have generally relied on reductive understandings of fairness and interpretability, as well as a limited understanding of accuracy. This is a consequence of viewing these complex ethical, political, and epistemological issues as strictly computational problems. Fairness becomes a mathematical property of classification algorithms. Interpretability becomes the mere exposition of an algorithm as a sequence of steps or a combination of factors. Accuracy becomes a simple matter of ROC curves.

In order to deepen our understandings of fairness, interpretability, and accuracy, we should avoid reductionism and consider aspects of ML practice that are largely overlooked. While researchers devote significant attention to computational processes, they often lack rigor in other crucial aspects of ML practice. Accuracy requires close scrutiny not only of the computational processes that generate models but also of the historical processes that generate data. Interpretability requires rigorous explanations of the background assumptions of models. And any claim of fairness requires a critical evaluation of the ethical and political implications of deploying a model in a specific social context.

Ultimately, the main outcome of research on fair and interpretable machine learning might be to provide easy answers to concerns of regulatory compliance and public controversy."
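
For readers unfamiliar with the computational framings the authors criticize, the sketch below shows what they look like in practice: fairness reduced to a single group-parity statistic and accuracy reduced to an ROC-style score. The scores, labels, and groups are invented; the point is only to make the "reductive" metrics concrete, not to endorse them or to reproduce anything from the paper.

```python
# The reductive framings Ochigame et al. describe, made concrete:
# fairness as a parity statistic over a classifier's outputs, and
# accuracy as the area under the ROC curve. All data are invented.

def demographic_parity_gap(scores, groups, threshold=0.5):
    """Difference in positive-classification rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [s >= threshold for s, gr in zip(scores, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    a, b = sorted(rates)
    return rates[a] - rates[b]

def roc_auc(scores, labels):
    """Probability that a random positive outranks a random negative,
    which equals the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   1,    0,   0]
groups = ["a", "b", "a", "a", "b", "b",  "a", "b"]

print("demographic parity gap:", demographic_parity_gap(scores, groups))
print("ROC AUC:", roc_auc(scores, labels))

# Neither number says anything about how the underlying data were produced
# or what deploying the model would do in a particular institution, which
# is the gap in rigor the paper points to.
```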

⤷ Full Article

January 26th, 2019

Bone Mobile

LOCAL CRIME | POLARIZATION | USELESS JOBS

LONG RUNNING

Learning about long-term effects of interventions, and designing interventions to facilitate the long view

A new paper from the Center for Effective Global Action at Berkeley surveys a topic important to our researchers here at JFI: the question of long-run effects of interventions. In our literature review of cash transfer studies, we identified the need for more work beyond the bounds of a short-term randomized controlled trial. This is especially crucial for basic income, which is among those policies intended to be permanent.

The authors of the new Berkeley report, Adrien Bouguen, Yue Huang, Michael Kremer, and Edward Miguel, note that it’s a particularly apt moment for this kind of work: “Given the large numbers of RCTs launched in the 2000’s, every year that goes by means that more and more RCT studies are ‘aging into’ a phase where the assessment of long-run impacts becomes possible.”

The report includes a summary of what we know about long-run impacts so far:

"Section 2 summarizes and evaluates the growing body of evidence from RCTs on the long-term impacts of international development interventions, and find most (though not all) provide evidence for positive and meaningful effects on individual economic productivity and living standards. Most of these studies examine existing cash transfer, child health, or education interventions, and shed light on important theoretical questions such as the existence of poverty traps (Bandiera et al., 2018) and returns to human capital investments in the long term."

Also notable is the last section, which contains considerations for study design, "lessons from our experience in conducting long-term tracking studies, as well as innovative data approaches." Link to the full paper.
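
The basic exercise the report has in mind can be stated simply: re-survey the original treatment and control arms of an RCT years after the intervention and compare outcomes again. The minimal sketch below uses invented follow-up data and a plain difference in means; it is not the report's data or methodology, and real long-run studies must contend with attrition and tracking, which is why those design considerations matter.

```python
# Estimating short- and long-run effects from an RCT: compare mean outcomes
# between the original treatment and control arms at each follow-up wave.
# All outcome values below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def difference_in_means(treated, control):
    """Simple unadjusted estimate of the average treatment effect."""
    return mean(treated) - mean(control)

# Hypothetical outcomes (say, monthly consumption) for the same arms,
# measured 2 years and 10 years after a cash transfer.
short_run = {"treated": [110, 95, 130, 120, 105], "control": [90, 85, 100, 95, 92]}
long_run  = {"treated": [140, 100, 150, 125, 118], "control": [120, 98, 135, 110, 105]}

for label, wave in [("2-year follow-up", short_run), ("10-year follow-up", long_run)]:
    effect = difference_in_means(wave["treated"], wave["control"])
    print(f"{label}: effect estimate = {effect:.1f}")

# Whether an early gain persists, grows, or fades is exactly what long-run
# tracking is meant to reveal; differential attrition between waves is the
# main practical threat such designs have to manage.
```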

  • In his paper "When are Cash Transfers Transformative?," Bruce Wydick also notes the need for long-run analysis: "Whether or not these positive impacts have long-term transformative effects—and under what conditions—is a question that is less settled and remains an active subject of research." The rest of the paper is of interest as well, including Wydick's five factors that tend to signal that a cash transfer will be transformative. Link.
  • For more on the rising popularity of RCTs, a 2016 paper by major RCT influencers Banerjee, Duflo, and Kremer quantifies that growth and discusses the impact of RCTs. Link. Here’s the PowerPoint version of that paper. David McKenzie at the World Bank responds to the paper, disputing some of its claims. Link.
⤷ Full Article

January 19th, 2019

Self-Portrait

SOVEREIGN PERMANENCE | MINIMUM WAGE RESEARCH | AI EDUCATION

PLENARY CONFIDENCE

A look at China's social credit system

In a recent newsletter, we noted a spate of reporting drawing attention to the authoritarianism of China's growing Social Credit System. This week, we are sharing a paper by YU-JIE CHEN, CHING-FU LIN, and HAN-WEI LIU that casts light on the details of the program's workings, corrects common misconceptions, proposes some likely and disturbing future scenarios, and offers a useful frame for understanding the significant shift it is bringing about in Chinese governance.

"A new mode of governance is emerging with the rise of China’s 'Social Credit System' (shehui xinyong zhidu) or SCS. The SCS is an unusual, comprehensive governance regime designed to tackle challenges that are commonly seen as a result of China’s 'trustless' society that has featured official corruption, business scandals and other fraudulent activities. The operation of the SCS relies on a number of important and distinctive features—information gathering, information sharing, labeling, and credit sanctions—which together constitute four essential elements of the system.

In our view, the regime of the SCS reflects what we call the 'rule of trust,' which has significant implications for the legal system and social control in China. We define the 'rule of trust' as a governance mode that imposes arbitrary restrictions—loosely defined and broadly interpreted trust-related rules—to condition, shape, and compel the behavior of the governed subjects… The 'rule of trust' is in fact undermining 'rule of law.'

In the context of governance, the unbounded notion of 'trust' and the unrestrained development of technology are a dangerous combination."

Link to the paper.

⤷ Full Article

January 12th, 2019

Worldviews

SOFT CYBER

Another kind of cybersecurity risk: the destruction of common knowledge

In a report for the Berkman Klein Center, Henry Farrell and Bruce Schneier identify a gap in current approaches to cybersecurity: national cybersecurity officials still frame threats in Cold War terms, while technologists focus on hackers. Drawing on both perspectives, Farrell and Schneier make a wider argument about collective knowledge in democratic systems and the dangers of its diminishment.

From the abstract:

"We demonstrate systematic differences between how autocracies and democracies work as information systems, because they rely on different mixes of common and contested political knowledge. Stable autocracies will have common knowledge over who is in charge and their associated ideological or policy goals, but will generate contested knowledge over who the various political actors in society are, and how they might form coalitions and gain public support, so as to make it more difficult for coalitions to displace the regime. Stable democracies will have contested knowledge over who is in charge, but common knowledge over who the political actors are, and how they may form coalitions and gain public support... democracies are vulnerable to measures that 'flood' public debate and disrupt shared decentralized understandings of actors and coalitions, in ways that autocracies are not."

One compelling metaresearch point from the paper is that the informational trade-offs facing autocratic governments have been analyzed at length, while those facing democratic governments have not:

"There is existing research literature on the informational trade-offs or 'dictators' dilemmas' that autocrats face, in seeking to balance between their own need for useful information and economic growth, and the risk that others can use available information to undermine their rule. There is no corresponding literature on the informational trade-offs that democracies face between desiderata like availability and stability."

Full paper available on SSRN here.

  • Farrell summarizes the work on Crooked Timber: "In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society." Many substantive comments follow. Link.
  • Jeremy Wallace, an expert on authoritarianism, weighs in on Twitter: "Insiders, inevitably, have even more information about the contours of these debates. On the other hand, there's a lot that dictators don't know--about their own regimes, the threats that they are facing, etc." Link to Wallace's work on the topic.
  • Related reading recommended by Wallace, from Daniel Little, a 2016 paper on propaganda: "Surprisingly, the government tends to pick a high level of propaganda precisely when it is ineffective." Link.
⤷ Full Article

January 5th, 2019

Aunt Eliza

PROCESS INTEGRATION

Bringing evidence to bear on policy

Happy 2019. We’re beginning with a report from Evidence in Practice, a project from the Yale School of Management. The report focuses on how to integrate rigorously researched evidence with policy and practice, with an emphasis on international development. It enumerates the numerous stakeholders involved in research and policymaking, along with their distinct needs and priorities: funders, researchers, intermediaries, policymakers, and implementers each receive consideration. One of the strengths of the report is its quotations from dozens of interviews across these groups, which give a sense of the messy, at times frustrating, always collaborative business of effecting change in the world. As to the question of what works:

"The most successful examples of evidence integration lessen the distinction between evidence generation and application, and focus on designing approaches that simultaneously generate (different types of) rigorous evidence and develop an iterative process for integrating evidence into practice. These projects turn the need to negotiate evidence generation and integration into an asset rather than a roadblock. In that sense, the best examples of evidence integration resulted from programs with robust, explicit learning and evidence sharing agendas. This commitment to learning opens the door for different types of linkages and information flows across stakeholders to share experiences, perspectives, and insights with the explicit (and non-threatening) goal of learning."

Another key point is that academic researchers and implementers have different definitions of evidence: Academics have a "tendency to think of evidence as abstract, 'universal' knowledge, while implementers have learned that knowledge is always and necessarily enacted and situated in practice, where few universal principles seem to hold across multiple complex contexts."

Full report, by Rodrigo Canales et al., here.

  • In October, Ruth Mayne, Duncan Green, Irene Guijt, Martin Walsh, Richard English & Paul Cairney published a paper detailing Oxfam's experience with promoting research-uptake in the policy sphere: "Academic studies of the politics of evidence-based policymaking suggest that policymaking can never be 'evidence based' (Cairney, 2016). At best, it is evidence-informed." Link.

  • At the World Bank’s Development Impact Blog, David Evans summarized the Oxfam paper into eight key points, including: "Great research is informed by engaging with people outside of our academic circles. We learn from people and policymakers (and people in other disciplines) what big new/unsolved problems are out there, and how institutions (formal and informal) really work." Link. ht Tim Ogden

⤷ Full Article

December 22nd, 2018

Frohes Fest | Year in Review

The JFI Letter has grown and morphed over the past twelve months; thank you to our readers for opening, skimming, clicking, and writing us every week. We'll be offline until January 5. In the meantime, here's a list of our favorite spotlights from last year and a list of favorite researchers to watch in the next.

⤷ Full Article

December 15th, 2018

Space Dance

SCIENTIFIC RETURNS

A new book examines the economic and social impacts of R&D

Last May, we highlighted a report on workforce training and technological competitiveness which outlined trends in research and development investment. The report found that despite "total U.S. R&D funding reaching an all-time high in 2015," the funding mix has shifted dramatically toward the private sector: "federal funding for R&D, which goes overwhelmingly to basic scientific research, has declined steadily and is now at the lowest level since the early 1950s." This week, we take a look at the returns to these investments and discuss how best to measure and trace the ways research spending affects economic activity and policy.

In the most recent Issues in Science and Technology, IRWIN FELLER reviews Measuring the Economic Value of Research, a technical monograph that discusses how best to measure the impact and value of research on policy objectives. Notably, the book highlights UMETRICS, a unified dataset from a consortium of universities "that can be used to better inform decisions relating to the level, apportionment, human capital needs, and physical facility requirements of public investments in R&D and the returns of these investments." While it represents a big data approach to program evaluation, Feller notes that UMETRICS' strength is in the "small data, theory-driven, and exacting construction of its constituent datasets," all of which offer insight into the importance of human capital in successful R&D:

"The book’s characterization of the ways in which scientific ideas are transmitted to and constitute value to the broader economy encompasses publications and patents, but most importantly includes the employment of people trained in food safety research. This emphasis on human capital reflects a core proposition of UMETRICS, namely the 'importance of people—students, principal investigators, postdoctoral researchers, and research staff—who conduct research, create new knowledge, and transmit that knowledge into the broader economy.'

In particular, the chapters on workforce dynamics relating to employment, earnings, occupations, and early careers highlight the nuanced, disaggregated, and policy-relevant information made possible by UMETRICS. These data provide much-needed reinforcement to the historic proposition advanced by research-oriented universities that their major contribution to societal well-being—economic and beyond—is through the joint production of research and graduate education, more than patents or other metrics of technology transfer or firm formation."

The UMETRICS dataset traces the social and economic returns of research universities and allows for a larger examination of universities as sociopolitical anchors and scientific infrastructure.

⤷ Full Article