↳ Metaresearch

March 9th, 2020

Flanked by Two Dolphins

SYSTEM CIRCULATE

An ecosocial theory of disease

The correlation between health, income, and wealth is widely recognized in contemporary research and policy circles. This broadly social understanding of public health outcomes has its origins in a theoretical tradition dating back to the 1970s and 80s, in which scholars began to embed medical research within a political and economic framework.

In a 2001 paper, epidemiologist NANCY KRIEGER seeks to strengthen the theoretical foundations of epidemiological research by linking them back to biological study.

From the paper:

"If social epidemiologists are to gain clarity on causes of and barriers to reducing social inequalities in health, adequate theory is a necessity. Grappling with notions of causation raises issues of accountability and agency: simply invoking abstract notions like 'society' and disembodied 'genes' will not suffice. Instead, the central question becomes who and what is responsible for population patterns of health, disease, and well-being, as manifested in present, past and changing social inequalities in health?

Arising in part as a critique of proliferating theories that emphasize individuals' responsibility to choose healthy lifestyles, the political economy of health school explicitly addresses economic and political determinants of health and disease, including structural barriers to people living healthy lives. Yet, despite its invaluable contributions to identifying social determinants of population health, a political economy of health perspective affords few principles for investigating what these determinants are determining. I propose a theory that conceptualizes changing population patterns of health, disease and well-being in relation to each level of biological, ecological and social organization (e.g. cell, organ, organism/ individual, family, community, population, society, ecosystem). Unlike prior causal frameworks—whether of a triangle connecting 'host', 'agent' and 'environment', or a 'chain of causes' arrayed along a scale of biological organization, from 'society' to 'molecular and submolecular particles'—this framework is multidimensional and dynamic and allows us to elucidate population patterns of health, disease and well-being as biological expressions of social relations—potentially generating new knowledge and new grounds for action."

Link to the piece.

  • Krieger's 1994 article takes a closer look at epidemiological causal frameworks, questioning the adequacy of multiple causation. And her 2012 paper asks: "Who or what is a population?" and articulates the analytical significance of this definition for epidemiological research. Link and link.
  • "Disease epidemics are as much markers of modern civilization as they are threats to it." In NLR, Rob and Rodrick Wallace consider how the development of the global economy has altered the spread of epidemics, taking the 2014 Ebola outbreak as a case study. Link.
  • Samuel S. Myers and Jonathan A. Patz argue that climate change constitutes the "greatest public health challenge humanity has faced." Link.
  • A history of epidemics in the Roman Empire, from 27 BC to AD 476, by François Retief and Louise Cilliers. Link. And a 1987 book by Ann Bowman Jannetta analyzes the impact of disease on institutional development in early modern Japan. Link.
⤷ Full Article

March 2nd, 2020

Honeysuckle

CLEAR MEANS

Evaluating evidence-based policy

Over the past two decades, "evidence-based policy" has come to define the common sense of researchers and policymakers around the world. But while attempts have been made to create formalization schemes for the ranking of evidence for policy, a gulf remains between rhetoric about evidence-based policy and applied theories for its development.

In a 2011 paper, philosophers of science NANCY CARTWRIGHT and JACOB STEGENGA lay out a "theory of evidence for use," discussing the role of causal counterfactuals, INUS conditions, and mechanisms in producing evidence—and how all this matters for its evaluators.

From the paper:

"Truth is a good thing. But it doesn’t take one very far. Suppose we have at our disposal the entire encyclopaedia of unified science containing all the true claims there are. Which facts from the encyclopaedia do we bring to the table for policy deliberation? Among all the true facts, we want on the table as evidence only those that are relevant to the policy. And given a collection of relevant true facts we want to know how to assess whether the policy will be effective in light of them. How are we supposed to make these decisions? That is the problem from the user’s point of view and that is the problem of focus here.

We propose three principles. First, policy effectiveness claims are really causal counterfactuals and the proper evaluation of a causal counterfactual requires a causal model that (i) lays out the causes that will operate and (ii) tells what they produce in combination. Second, causes are INUS conditions, so it is important to review both the different causal complexes that will affect the result (the different pies) and the different components (slices) that are necessary to act together within each complex (or pie) if the targeted result is to be achieved. Third, a good answer to the question ‘How will the policy variable produce the effect’ can help elicit the set of auxiliary factors that must be in place along with the policy variable if the policy variable is to operate successfully."

Link to the paper.
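The causal-pie reasoning in the excerpt above can be made concrete with a minimal sketch. In the INUS model, each "pie" is a causal complex whose "slices" must all be present for the effect to occur; the effect occurs if at least one pie is complete. The pie and slice names below are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of the "causal pie" (INUS) model: each pie is a set of
# component causes (slices); the effect occurs if every slice of at
# least one pie is present.

def effect_occurs(present_factors, causal_pies):
    """Return True if any causal complex (pie) is complete."""
    return any(pie <= present_factors for pie in causal_pies)

# Two hypothetical causal complexes for a policy outcome:
pies = [
    {"policy_variable", "funding", "local_buyin"},  # pie 1
    {"market_incentive", "infrastructure"},         # pie 2
]

# The policy variable alone is insufficient...
print(effect_occurs({"policy_variable"}, pies))  # False
# ...but together with its auxiliary factors the complex is complete:
print(effect_occurs({"policy_variable", "funding", "local_buyin"}, pies))  # True
```

This mirrors the paper's third principle: identifying the auxiliary factors (the other slices of the pie) that must be in place for the policy variable to operate successfully.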

  • Cartwright has written extensively on evidence and its uses. See: her 2012 book Evidence Based Policy: A Practical Guide to Doing it Better; her 2011 paper in The Lancet on RCTs and effectiveness; and her 2016 co-authored monograph on child safety, featuring applications of the above reasoning.
  • For further introduction to the philosophical underpinnings of Cartwright's applied work, and the relationship between theories of causality and evidence, see her 2015 paper "Single Case Causes: What is Evidence and Why." Link. And also: "Causal claims: warranting them and using them." Link.
  • Obliquely related, see this illuminating discussion of causality in the context of reasoning about discrimination in machine learning and the law, by JFI fellow and Harvard PhD Candidate Lily Hu and Yale Law School Professor Issa Kohler-Hausmann: "What's Sex Got To Do With Machine Learning?" Link.
  • A 2017 paper by Abhijit Banerjee et al: "A Theory of Experimenters," which models "experimenters as ambiguity-averse decision-makers, who make trade-offs between subjective expected performance and robustness. This framework accounts for experimenters' preference for randomization, and clarifies the circumstances in which randomization is optimal: when the available sample size is large enough or robustness is an important concern." Link.
⤷ Full Article

November 14th, 2019

Phenomenal Works: Beth Popp Berman

On knowledge, institutions, and social policy

Editor's Note: This is the second post in a new series, Phenomenal Works, in which we invite our favorite researchers to share notable readings with us. We'll be publishing new editions every two weeks, around our regular output of interviews and analysis. Sign up for our newsletter to stay up to date with every post.

Beth Popp Berman is a sociologist whose research focuses on the history of knowledge, organizations, and public policymaking. Her first book, Creating the Market University: How Academic Science Became an Economic Engine, examines the transformation of American academia from a partially noncommercial institution to an innovation-oriented entrepreneurial university. Popp Berman's forthcoming book, Thinking Like an Economist: How Economics Became the Language of U.S. Public Policy, charts how a style of economic reasoning pioneered among a small group of DoD technocrats became institutionalized at the core of the policy process—and its fundamental consequences for political decision-making.

Popp Berman's selections reflect the import of her own work, illuminating how and why certain forms of knowledge came to be produced, and how they are put to use in the construction of policy and institutions.

⤷ Full Article

March 28th, 2019

Experiments for Policy Choice

Randomized experiments have become part of the standard toolkit for policy evaluation, and are usually designed to give precise estimates of causal effects. But, in practice, their actual goal is to pick good policies. These two goals are not the same.

Is this the best way to go about things? Can we maybe make better policy choices, with smaller experimental budgets, by doing things a little differently? This is the question that Anja Sautmann and I address in our new work on “Adaptive experiments for policy choice.” If we wish to pick good policies, we should run experiments adaptively, shifting toward better policies over time. This gives us the highest chance to pick the best policy after the experiment has concluded.
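The idea of shifting assignment toward better policies over time can be sketched with a simple bandit-style procedure. The sketch below uses Thompson sampling as one illustrative adaptive rule (the paper develops its own assignment algorithm, which differs); the three arm success rates are invented for illustration.

```python
# Sketch of an adaptive experiment for policy choice: assignment
# probabilities shift toward better-performing policy arms over time,
# and the best-looking policy is picked when the experiment ends.
import random

random.seed(0)
true_success = [0.3, 0.5, 0.7]  # unknown effectiveness of 3 candidate policies
successes = [0, 0, 0]
failures = [0, 0, 0]

for _ in range(1000):
    # Draw a plausible success rate for each arm from its Beta posterior
    draws = [random.betavariate(successes[i] + 1, failures[i] + 1)
             for i in range(3)]
    arm = draws.index(max(draws))            # assign the next unit adaptively
    if random.random() < true_success[arm]:  # observe the unit's outcome
        successes[arm] += 1
    else:
        failures[arm] += 1

# After the experiment, choose the policy with the best observed rate
rates = [s / (s + f) if s + f else 0.0 for s, f in zip(successes, failures)]
best = rates.index(max(rates))
print("chosen policy:", best)
```

Because the adaptive rule concentrates sample on promising arms, it typically identifies the best policy with a smaller budget than an equal-split design, at the cost of less precise effect estimates for the arms it abandons.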

⤷ Full Article

March 19th, 2019

Ideology in AP Economics

When the media talks about ideological indoctrination in education, it is usually assumed to refer to liberal arts professors pushing their liberal agenda. Less discussed is the very different strain of ideology found in economics. The normative import is harder to spot here, as economics presents itself as a science: it provides an empirical study of the economy, just as mechanical engineering provides an empirical study of certain physical structures. When economists offer advice on matters of policy, it’s taken to be normatively neutral expert testimony, on a par with the advice of engineers on bridge construction. However, tools from the philosophy of explanation, in particular the work of Alan Garfinkel, show how explanations that appear purely empirical can in fact carry significant normative assumptions. With this, we will uncover the ideology embedded in economics.

More specifically, we’ll look at the ideology embedded in the foundations of traditional economics—as found in a typical introductory microeconomics class. Economics as a whole is diverse and sprawling, such that no single ideology could possibly be attributed to the entire discipline, and many specialized fields avoid the criticisms I make here. Even so, if there are ideological assumptions in a standard introductory course, this is of great significance.

⤷ Full Article

January 26th, 2019

Bone Mobile

LONG RUNNING

Learning about long-term effects of interventions, and designing interventions to facilitate the long view

A new paper from the Center for Effective Global Action at Berkeley surveys a topic important to our researchers here at JFI: the question of long-run effects of interventions. In our literature review of cash transfer studies, we identified the need for more work beyond the bounds of a short-term randomized controlled trial. This is especially crucial for basic income, which is among those policies intended to be permanent.

The authors of the new Berkeley report, Adrien Bouguen, Yue Huang, Michael Kremer, and Edward Miguel, note that it’s a particularly apt moment for this kind of work: “Given the large numbers of RCTs launched in the 2000’s, every year that goes by means that more and more RCT studies are ‘aging into’ a phase where the assessment of long-run impacts becomes possible.”

The report includes a summary of what we know about long-run impacts so far:

"Section 2 summarizes and evaluates the growing body of evidence from RCTs on the long-term impacts of international development interventions, and find most (though not all) provide evidence for positive and meaningful effects on individual economic productivity and living standards. Most of these studies examine existing cash transfer, child health, or education interventions, and shed light on important theoretical questions such as the existence of poverty traps (Bandiera et al., 2018) and returns to human capital investments in the long term."

Also notable is the last section, which contains considerations for study design, "lessons from our experience in conducting long-term tracking studies, as well as innovative data approaches." Link to the full paper.

  • In his paper "When are Cash Transfers Transformative?," Bruce Wydick also notes the need for long-run analysis: "Whether or not these positive impacts have long-term transformative effects—and under what conditions—is a question that is less settled and remains an active subject of research." The rest of the paper is of interest as well, including Wydick's five factors that tend to signal that a cash transfer will be transformative. Link.
  • For more on the rising popularity of RCTs, a 2016 paper by major RCT influencers Banerjee, Duflo, and Kremer quantifies that growth and discusses the impact of RCTs. Link. Here’s the PowerPoint version of that paper. David McKenzie at the World Bank responds to the paper, disputing some of its claims. Link.
⤷ Full Article

January 12th, 2019

Worldviews

SOFT CYBER

Another kind of cybersecurity risk: the destruction of common knowledge

In a report for the Berkman Klein Center, Henry Farrell and Bruce Schneier identify a gap in current approaches to cybersecurity: national security officials still base their thinking on Cold War-type threats, while technologists focus on hackers. Combining both perspectives, Farrell and Schneier make a wider argument about collective knowledge in democratic systems—and the dangers of its diminishment.

From the abstract:

"We demonstrate systematic differences between how autocracies and democracies work as information systems, because they rely on different mixes of common and contested political knowledge. Stable autocracies will have common knowledge over who is in charge and their associated ideological or policy goals, but will generate contested knowledge over who the various political actors in society are, and how they might form coalitions and gain public support, so as to make it more difficult for coalitions to displace the regime. Stable democracies will have contested knowledge over who is in charge, but common knowledge over who the political actors are, and how they may form coalitions and gain public support... democracies are vulnerable to measures that 'flood' public debate and disrupt shared decentralized understandings of actors and coalitions, in ways that autocracies are not."

One compelling metaresearch point from the paper is that autocratic governments receive analysis of information trade-offs, while democratic governments do not:

"There is existing research literature on the informational trade-offs or 'dictators' dilemmas' that autocrats face, in seeking to balance between their own need for useful information and economic growth, and the risk that others can use available information to undermine their rule. There is no corresponding literature on the informational trade-offs that democracies face between desiderata like availability and stability."

Full paper available on SSRN here.

  • Farrell summarizes the work on Crooked Timber: "In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society." Many substantive comments follow. Link.
  • Jeremy Wallace, an expert on authoritarianism, weighs in on Twitter: "Insiders, inevitably, have even more information about the contours of these debates. On the other hand, there's a lot that dictators don't know--about their own regimes, the threats that they are facing, etc." Link to Wallace's work on the topic.
  • Related reading recommended by Wallace, from Daniel Little, a 2016 paper on propaganda: "Surprisingly, the government tends to pick a high level of propaganda precisely when it is ineffective." Link.
⤷ Full Article

January 5th, 2019

Aunt Eliza

PROCESS INTEGRATION

Bringing evidence to bear on policy

Happy 2019. We’re beginning with a report from Evidence in Practice, a project from the Yale School of Management. The report focuses on how to integrate rigorously researched evidence with policy and practice, with an emphasis on international development. The numerous stakeholders involved in research and policymaking are enumerated, along with their needs and priorities: funders, researchers, intermediaries, policymakers, and implementers each receive consideration. One of the strengths of the report is its quotations from dozens of interviews across these groups, which give a sense of the messy, at times frustrating, always collaborative business of effecting change in the world. As to the question of what works:

⤷ Full Article

December 15th, 2018

Space Dance

SCIENTIFIC RETURNS

A new book examines the economic and social impacts of R&D

Last May, we highlighted a report on workforce training and technological competitiveness which outlined trends in research and development investment. The report found that despite "total U.S. R&D funding reaching an all-time high in 2015," it's shifted dramatically to the private sector: "federal funding for R&D, which goes overwhelmingly to basic scientific research, has declined steadily and is now at the lowest level since the early 1950s." This week, we take a look at the returns to these investments and discuss how best to measure and trace the ways research spending affects economic activity and policy.

In the most recent Issues in Science and Technology, IRWIN FELLER reviews Measuring the Economic Value of Research, a technical monograph that discusses how best to measure the impact and value of research on policy objectives. Notably, the book highlights UMETRICS, a unified dataset from a consortium of universities "that can be used to better inform decisions relating to the level, apportionment, human capital needs, and physical facility requirements of public investments in R&D and the returns of these investments." While it represents a big data approach to program evaluation, Feller notes that UMETRICS' strength is in the "small data, theory-driven, and exacting construction of its constituent datasets," all of which offer insight into the importance of human capital in successful R&D:

"The book’s characterization of the ways in which scientific ideas are transmitted to and constitute value to the broader economy encompasses publications and patents, but most importantly includes the employment of people trained in food safety research. This emphasis on human capital reflects a core proposition of UMETRICS, namely the 'importance of people—students, principal investigators, postdoctoral researchers, and research staff—who conduct research, create new knowledge, and transmit that knowledge into the broader economy.'

In particular, the chapters on workforce dynamics relating to employment, earnings, occupations, and early careers highlight the nuanced, disaggregated, and policy-relevant information made possible by UMETRICS. These data provide much-needed reinforcement to the historic proposition advanced by research-oriented universities that their major contribution to societal well-being—economic and beyond—is through the joint production of research and graduate education, more than patents or other metrics of technology transfer or firm formation."

The UMETRICS dataset traces the social and economic returns of research universities and allows for a larger examination of universities as sociopolitical anchors and scientific infrastructure.

⤷ Full Article

November 17th, 2018

Poetry Machine

PLACE-BASED SUBSIDIES | UBERLAND | HISTORY OF QUANTIFICATION

STAGNANT INFLUENCE

The inefficiency of lobbying

A few weeks ago, we spotlighted work by Elliott Ash et al. on the startling influence of the Manne economics seminars in shaping judicial decision-making. This week we’re looking at an industry that seems extremely influential but, conversely, is frequently ineffectual: lobbying.

In a 2009 book, "Lobbying and Policy Change: Who Wins, Who Loses, and Why," FRANK R. BAUMGARTNER et al. take an unprecedentedly thorough look at lobbying in Washington, scrutinizing "ninety-eight randomly selected policy issues in which interest groups were involved and then followed those issues across two Congresses." What they find is complexity and gridlock:

"Since we followed our issues for four years, we know a lot about what eventually occurred (if anything did). In fact, as we outline in the chapters to come, for the majority of our issues, little happened.

If what they are supposed to be doing is producing change, interest groups are a surprisingly ineffectual lot. A focus by the media and many academics on explaining political change or sensational examples of lobbying success obscures the fact that lobbyists often toil with little success in gaining attention to their causes or they meet such opposition to their efforts that the resulting battle leads to a stalemate.

Of course, many lobbyists are active because their organizations benefit from the status quo and they want to make sure that it stays in place. We will show that one of the best single predictors of success in the lobbying game is not how much money an organization has on its side, but simply whether it is attempting to protect the policy that is already in place."

Preview on Google Books here.

⤷ Full Article