↳ Methodology

February 3rd, 2020

Winter Night

VARIABLE DEPENDENCE

Debating the merits of large- and small-N studies

Sample size does more than dictate the appropriate methodology for a given study; theorists of social science have long pointed out that the number of cases considered also shapes the sorts of questions researchers can ask and the structure of their causal claims.

A 2003 paper by PETER HALL takes these debates further. In the context of comparative political science, Hall argues that the sort of methods researchers use should be consistent with their beliefs about the nature of historical development. From the paper:

"Ontology is crucial to methodology because the appropriateness of a particular set of methods for a given problem turns on assumptions about the nature of the causal relations they are meant to discover. It makes little sense to apply methods designed to establish the presence of functional relationships, for instance, if we confront a world in which causal relationships are not functional. To be valid, the methodologies used in a field must be congruent with its prevailing ontologies. There has been a postwar trend in comparative politics toward statistical methods, based preeminently on the standard regression model. Over the same period, the ontologies of the field have moved in a different direction: toward theories, such as those based on path dependence or strategic interaction, whose conceptions of the causal structures underlying outcomes are at odds with the assumptions required for standard regression techniques.

The types of regression analyses commonly used to study comparative politics provide valid support for causal inferences only if the causal relations they are examining meet a rigorous set of assumptions. In general, this method assumes unit homogeneity, which is to say that, other things being equal, a change in the value of a causal variable x will produce a corresponding change in the value of the outcome variable y of the same magnitude across all the cases. It assumes no systematic correlation between the causal variables included in the analysis and other causal variables. And most regression analyses assume that there is no reciprocal causation, that is, that the causal variables are unaffected by the dependent variable. The problem is that the world may not have this causal structure.

Small-N comparison is therefore far more useful for assessing causal theories than conventional understandings of the 'comparative method' imply. Precisely because such research designs cover small numbers of cases, the researcher can investigate causal processes in each of them in detail, thereby assessing the relevant theories against especially diverse kinds of observations. Reconceptualized in these terms, the comparative method emerges not as a poor substitute for statistical analysis, but as a distinctive approach that offers a much richer set of observations, especially about causal processes, than statistical analyses normally allow."

Link to the piece.
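
Hall's point about regression assumptions is easy to see in a toy simulation. The sketch below is our own illustration, not drawn from the paper: it generates data that violate two of the assumptions he lists (an omitted variable correlated with the regressor, and reciprocal causation between x and y) and shows that a simple regression of y on x overstates the true effect, set here to 1.0, in both cases.

```python
# A toy simulation (our own, not from Hall's paper) of two violations of
# the regression assumptions he describes. The true effect of x on y is 1.0.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def ols_slope(x, y):
    """Slope from a simple bivariate regression of y on x."""
    x_c = x - x.mean()
    return np.dot(x_c, y - y.mean()) / np.dot(x_c, x_c)

# Violation 1: an omitted variable z drives both x and y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)
print("omitted-variable bias:", round(ols_slope(x, y), 2))   # roughly 2.0, not 1.0

# Violation 2: reciprocal causation, with y feeding back into x.
e_x, e_y = rng.normal(size=n), rng.normal(size=n)
x = (0.5 * e_y + e_x) / (1 - 0.5 * 1.0)   # solves x = 0.5*y + e_x, y = 1.0*x + e_y
y = 1.0 * x + e_y
print("simultaneity bias:", round(ols_slope(x, y), 2))        # roughly 1.2, not 1.0
```

In both cases the regression runs without complaint; nothing in the output signals that the causal structure of the data violates the model's assumptions, which is Hall's point about congruence between ontology and methodology.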

  • "Except for probabilistic situations that approach 1 or 0 (in other words, those that are almost deterministic), studies based on a small number of cases have difficulty in evaluating probabilistic theories." Stanley Lieberson's 1991 overview of the causal assumptions inherent to small-N studies. Link.
  • Theda Skocpol and Margaret Somers on "The Uses of Comparative History in Macrosocial Inquiry." Link.
  • Jean Lachapelle, Lucan A. Way, and Steven Levitsky use small-N process tracing to "examine the role of the coercive apparatus in responding to crises triggered by mass anti-regime protest in Iran and Egypt." Link. Andrey V. Korotayev, Leonid M. Issaev, Sergey Yu. Malkov and Alisa R. Shishkina present a quantitative analysis of destabilization factors in 19 countries during the Arab Spring. Link.
⤷ Full Article

August 26th, 2019

Summer in Brabant

INTEMPERATE OBJECTIVITY

On the pressures of policy-relevant climate science

Without any “evidence of fraud, malfeasance or deliberate deception or manipulation,” or any promotion of inaccurate views, how can bias enter a scientific assessment? In their new book, Discerning Experts, Michael Oppenheimer, Naomi Oreskes, Dale Jamieson, et al explore the pattern of underestimation of the true consequences of climate change.

Climate change's impacts are uncertain, and predictions about them are difficult to make. Taking an ethnographic approach, Discerning Experts shows how those difficulties, coupled with the nature of public discourse and the pressures that come when research is destined to be discussed and used in policy, have tilted climate assessments toward optimistic and overly cautious conclusions.

In a summary of their book, Oreskes et al explain three reasons for the tilt:

“The combination of … three factors—the push for univocality, the belief that conservatism is socially and politically protective, and the reluctance to make estimates at all when the available data are contradictory—can lead to ‘least common denominator’ results—minimalist conclusions that are weak or incomplete.”

These tendencies, according to the authors, pertain to the applied research context. The academic context is different: “The reward structure of academic life leans toward criticism and dissent; the demands of assessment push toward agreement.” Link to a summary essay in Scientific American. Link to the book.

  • In an interview, Michael Oppenheimer elaborates on other elements that skew the assessments, including the selection of authors and the presentation of the resulting information. Link.
  • In a review of the book, Gary Yohe reflects on his own experience working on major climate assessments, such as the IPCC’s. Link.
  • A David Roberts post from 2018 finds another case of overly cautious climate science: models of the economic effects of climate change may be much more moderate than models of the physical effects. To remedy this, “We need models that negatively weigh uncertainty, properly account for tipping points, incorporate more robust and current technology cost data, better differentiate sectors outside electricity, rigorously price energy efficiency, and include the social and health benefits of decarbonization.” Link.
  • Tangentially related: carbon tax or green investment? It’s worth considering not just all possible policy options but also their optimal interactions. A paper by Julie Rozenberg, Adrien Vogt-Schilb, and Stephane Hallegatte concludes, “Optimal carbon price minimizes the discounted social cost of the transition to clean capital, but imposes immediate private costs that disproportionately affect the current owners of polluting capital, in particular in the form of stranded assets.” Link to a summary which contains a link to the unpaywalled paper.
⤷ Full Article

July 22nd, 2019

...Höhere Wesen befehlen

PHENOMENAL WORLD

Blog highlights

At the Phenomenal World, we have been publishing pieces covering a wide range of topics, many of which share common ground with this newsletter. Below, in no particular order, is a round-up of some recent work in case you missed it.

Be on the lookout for upcoming posts over the next months—including work on counterfactual fairness by Lily Hu; an interview with scholar Destin Jenkins on race and municipal finance; an examination of the philosophy of Neyman-Pearson testing by Cosmo Grant; and a piece on UBI in the 1970s by Nikita Shepard—and subscribe to the Phenomenal World newsletter to get new posts directly in your inbox.

As always, thank you for reading.

  • Max Kasy discusses the standard of social science experimentation—randomized controlled trials—and proposes, in a new working paper with his colleague Anja Sautmann, a new method for designing experiments that lead to the optimal policy choice. Link.
  • Amanda Page-Hoongrajok reviews James Crotty's new book, Keynes Against Capitalism. Page-Hoongrajok discusses Keynes's thought, Crotty's interventions, and the relevance of these discussions for the current macroeconomic environment. Link.
  • Owen Davis surveys the monopsony literature, dispelling some persistent misunderstandings and clarifying its significance for the state of current economics research. Link.
  • Maya Adereth interviews the legendary and influential political scientist Adam Przeworski. In an expansive conversation, Przeworski discusses his intellectual trajectory, his experience and observations around Allende's government in Chile, the neoliberal turn, and the future of popular politics. Link.
  • Greg Keenan examines the history of copyright formalities in the United States and Europe, arguing that the frequently derided US copyright regime is, in fact, well suited for the digital age. Link.
  • Hana Beach interviews basic income scholar Almaz Zelleke on the neglected history of feminist welfare rights activists' campaigns for unconditional cash transfers, the complex relationship between advocacy and policy, and the current drive towards UBI. Link.
⤷ Full Article

July 8th, 2019

Model of a Cabin

SELECTED MOBILITY

Examining the college premium

Higher education is widely understood to be a major driver of intergenerational mobility in the United States. Despite the clear (and growing) inequalities between and within colleges, it remains the case that higher education reduces the impact that parental class position has on a graduate's life outcomes.

In an intriguing paper, Harvard sociologist XIANG ZHOU scrutinizes the implied causal relationship between college completion and intergenerational mobility. Specifically, Zhou uses a novel weighting method "to directly examine whether and to what extent a college degree moderates the influence of parental income" net of selection effects, seeking to distinguish between the "equalization" and "selection" hypotheses of higher education's impact on intergenerational mobility.

From the paper:

"Three decades have passed since Hout’s (1988) discovery that intergenerational mobility is higher among college graduates than among people with lower levels of education. In light of this finding, many researchers have portrayed a college degree as 'the great equalizer' that levels the playing field, and hypothesized that an expansion in postsecondary education could promote mobility because more people would benefit from the high mobility experienced by college graduates. Yet this line of reasoning rests on the implicit assumption that the 'college premium' in intergenerational mobility reflects a genuine 'meritocratic' effect of postsecondary education, an assumption that has rarely, if ever, been rigorously tested.

In fact, to the extent that college graduates from low and moderate-income families are more selected on such individual attributes as ability and motivation than those from high-income families, the high mobility observed among bachelor’s degree holders may simply reflect varying degrees of selectivity of college graduates from different family backgrounds."

In sum, Zhou finds that the "selection" hypothesis carries more weight than the "equalization" hypothesis. One implication of this finding is that "simply expanding the pool of college graduates is unlikely to boost intergenerational income mobility in the US." Link to the paper.
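
Zhou's estimator is more elaborate than anything shown here, but the intuition behind using weights to separate selection from equalization can be sketched in a few lines. In the hypothetical simulation below (all functional forms and coefficients are invented for illustration), college completion depends on both parental income and a pre-college ability term, while the true parental-income gradient in earnings is identical for graduates and non-graduates. The gradient nonetheless looks flatter among graduates, purely because of who selects into completion; reweighting graduates by the inverse of their completion probability recovers the population gradient.

```python
# Hypothetical illustration of a weighting intuition; Zhou's actual estimator
# differs, and every number here is invented for the example.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

parent_inc = rng.normal(size=n)   # standardized parental income
ability = rng.normal(size=n)      # pre-college attribute

# Selection into college completion depends on both traits.
p_college = 1 / (1 + np.exp(-(0.8 * parent_inc + 1.2 * ability)))
college = rng.random(n) < p_college

# Outcome: the parental-income gradient (0.5) is the same for everyone,
# so any "college premium" in mobility here is pure selection.
earnings = 0.5 * parent_inc + 0.7 * ability + rng.normal(size=n)

def slope(x, y, w=None):
    """(Weighted) slope from a bivariate regression of y on x."""
    w = np.ones_like(x) if w is None else w
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

print("gradient, population:", round(slope(parent_inc, earnings), 2))
print("gradient, graduates: ", round(slope(parent_inc[college], earnings[college]), 2))

# Weight graduates by the inverse of their (here, known) completion
# probability; a real analysis would have to estimate it.
w = 1 / p_college[college]
print("gradient, reweighted:", round(slope(parent_inc[college], earnings[college], w), 2))
```

In this construction the appearance of equalization is produced entirely by selection, since college changes nothing about how parental income maps to earnings; Zhou's question is how much of the observed premium survives once that kind of selection is weighted out.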

  • A 2011 paper by Michael Bastedo and Ozan Jaquette looks at the stratification dynamics affecting low-income students within higher ed. Link. A paper from the same year by Martha Bailey and Susan Dynarski surveys the state of inequality in postsecondary education. Link.
  • An op-ed by E. Tammy Kim in the Times argues for higher education as a public good. Link.
  • Marshall Steinbaum and Julie Margetta Morgan's 2018 paper examines the student debt crisis in the broader context of labor market trends: "[The problem with] reliance on the college earnings premium [as a measure of success] is that it focuses primarily on the individual benefit of educational attainment, implying that college is worthwhile as long as individuals are making more than they would have otherwise. But in the context of public investment in higher education, we need to know not only how individuals are faring but also how investments in higher education are affecting our workforce and the economy as a whole." Link.
⤷ Full Article

July 1st, 2019

Quandary

HOW RESEARCH AFFECTS POLICY

Results from Brazil

How can evidence inform the decisions of policymakers? What value do policymakers ascribe to academic research? In January, we highlighted Yale's Evidence in Practice project, which emphasizes the divergence between policymakers' needs and researchers' goals. Other work describes the complexity of getting evidence into policy. A new study by JONAS HJORT, DIANA MOREIRA, GAUTAM RAO, and JUAN FRANCISCO SANTINI surprises with the simplicity of its results: policymakers in Brazilian cities and towns are willing to pay for evidence, and willing to implement a low-cost, evidence-based policy (mailing reminder letters to taxpayers). The lack of uptake may stem more from a lack of information than a lack of interest: "Our findings make clear that it is not the case, for example, that counterfactual policies' effectiveness is widely known 'on the ground,' nor that political leaders are uninterested in, unconvinced by, or unable to act on new research information."

From the abstract:

"In one experiment, we find that mayors and other municipal officials are willing to pay to learn the results of impact evaluations, and update their beliefs when informed of the findings. They value larger-sample studies more, while not distinguishing on average between studies conducted in rich and poor countries. In a second experiment, we find that informing mayors about research on a simple and effective policy (reminder letters for taxpayers) increases the probability that their municipality implements the policy by 10 percentage points. In sum, we provide direct evidence that providing research information to political leaders can lead to policy change. Information frictions may thus help explain failures to adopt effective policies."

Link to the paper.

  • New work from Larry Orr et al addresses the question of how to take evidence from one place (or several places) and make it useful to another. "[We provide] the first empirical evidence of the ability to use multisite evaluations to predict impacts in individual localities—i.e., the ability of 'evidence‐based policy' to improve local policy." Link.
  • Cited within the Hjort et al. paper is research from Eva Vivalt and Aidan Coville on how policymakers update their prior beliefs when presented with new evidence. "We find evidence of 'variance neglect,' a bias similar to extension neglect in which confidence intervals are ignored. We also find evidence of asymmetric updating on good news relative to one’s prior beliefs. Together, these results mean that policymakers might be biased towards those interventions with a greater dispersion of results." Link. A toy illustration of variance neglect appears after this list.
  • From David Evans at CGDev: "'The fact that giving people information does not, by itself, change how they act is one of the most firmly established in social science.' So stated a recent op-ed in the Washington Post. That’s not true. Here are ten examples where simply providing information changed behavior." Link. ht The Weekly faiV.
  • For another iteration of the question of translating evidence into policy, see our February letter on randomized controlled trials. Link.
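
As a rough illustration of the "variance neglect" finding mentioned above (our own toy numbers, not Vivalt and Coville's model), compare a precision-weighted Bayesian update with an update that treats every study's point estimate as equally informative. The noisier intervention looks better to the variance-neglecting updater, while the Bayesian posterior barely moves.

```python
# Toy contrast between precision-weighted updating and "variance neglect"
# (ignoring standard errors). All numbers are invented for illustration.

def bayes_mean(prior_mean, prior_var, estimate, se):
    """Normal-normal update: the estimate is weighted by its precision."""
    w = (1 / se**2) / (1 / se**2 + 1 / prior_var)
    return w * estimate + (1 - w) * prior_mean

def neglect_mean(prior_mean, estimate, weight=0.5):
    """Same update with a fixed weight, regardless of the standard error."""
    return weight * estimate + (1 - weight) * prior_mean

prior_mean, prior_var = 0.0, 0.1**2

# Intervention A: modest effect, precisely estimated.
# Intervention B: large effect, very noisy estimate.
for name, est, se in [("A", 0.10, 0.02), ("B", 0.50, 0.40)]:
    print(name,
          "bayesian:", round(bayes_mean(prior_mean, prior_var, est, se), 3),
          "variance-neglect:", round(neglect_mean(prior_mean, est), 3))
```

Ranking interventions by the variance-neglecting update favors B, the intervention with the more dispersed evidence, which is the direction of bias the authors describe.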
⤷ Full Article