Evaluating evidence-based policy
Over the past two decades, "evidence-based policy" has come to define the common sense of researchers and policymakers around the world. But while attempts have been made to create formal schemes for ranking evidence for policy, a gulf remains between rhetoric about evidence-based policy and applied theories for its development.
In a 2011 paper, philosophers of science NANCY CARTWRIGHT and JACOB STEGENGA lay out a "theory of evidence for use," discussing the role of causal counterfactuals, INUS conditions, and mechanisms in producing evidence—and how all this matters for its evaluators.
From the paper:
"Truth is a good thing. But it doesn’t take one very far. Suppose we have at our disposal the entire encyclopaedia of unified science containing all the true claims there are. Which facts from the encyclopaedia do we bring to the table for policy deliberation? Among all the true facts, we want on the table as evidence only those that are relevant to the policy. And given a collection of relevant true facts we want to know how to assess whether the policy will be effective in light of them. How are we supposed to make these decisions? That is the problem from the user’s point of view and that is the problem of focus here.
We propose three principles. First, policy effectiveness claims are really causal counterfactuals and the proper evaluation of a causal counterfactual requires a causal model that (i) lays out the causes that will operate and (ii) tells what they produce in combination. Second, causes are INUS conditions, so it is important to review both the different causal complexes that will affect the result (the different pies) and the different components (slices) that are necessary to act together within each complex (or pie) if the targeted result is to be achieved. Third, a good answer to the question ‘How will the policy variable produce the effect’ can help elicit the set of auxiliary factors that must be in place along with the policy variable if the policy variable is to operate successfully."
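The INUS framing in the second principle (causes as "Insufficient but Necessary parts of Unnecessary but Sufficient" conditions, per Mackie) can be illustrated with a toy Boolean model. The factors A through E below are hypothetical, not drawn from the paper:

```python
def effect(a, b, c, d, e):
    """Toy causal model with two 'pies' (hypothetical factors).
    Pie 1 = A and B and C; Pie 2 = D and E.
    The effect occurs if either pie is complete."""
    return (a and b and c) or (d and e)

# A is an INUS condition for the effect:
# Insufficient on its own -- A alone does not produce the effect.
assert effect(True, False, False, False, False) is False
# Necessary within its pie -- without A, pie 1 cannot complete.
assert effect(False, True, True, False, False) is False
# Part of a sufficient complex -- the full pie produces the effect...
assert effect(True, True, True, False, False) is True
# ...which is nonetheless unnecessary: the other pie works without it.
assert effect(False, False, False, True, True) is True
```

On this picture, evaluating a policy means checking not just that the policy variable (a slice) is present, but that the auxiliary factors completing its pie are in place too.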
Link to the paper.
- Cartwright has written extensively on evidence and its uses. See: her 2012 book Evidence-Based Policy: A Practical Guide to Doing It Better; her 2011 paper in The Lancet on RCTs and effectiveness; and her 2016 co-authored monograph on child safety, featuring applications of the above reasoning.
- For further introduction to the philosophical underpinnings of Cartwright's applied work, and the relationship between theories of causality and evidence, see her 2015 paper "Single Case Causes: What is Evidence and Why." Link. And also: "Causal claims: warranting them and using them." Link.
- Obliquely related, see this illuminating discussion of causality in the context of reasoning about discrimination in machine learning and the law, by JFI fellow and Harvard PhD Candidate Lily Hu and Yale Law School Professor Issa Kohler-Hausmann: "What's Sex Got To Do With Machine Learning?" Link.
- A 2017 paper by Abhijit Banerjee et al: "A Theory of Experimenters," which models "experimenters as ambiguity-averse decision-makers, who make trade-offs between subjective expected performance and robustness. This framework accounts for experimenters' preference for randomization, and clarifies the circumstances in which randomization is optimal: when the available sample size is large enough or robustness is an important concern." Link.