Ben Weidmann and Luke Miratrix. 2020. “Lurking inferential monsters? Quantifying selection bias in evaluations of school programs.” Journal of Policy Analysis and Management.
Abstract: This study examines whether unobserved factors substantially bias education evaluations that rely on the Conditional Independence Assumption. We add 14 new within-study comparisons to the literature, all from primary schools in England. Across these 14 studies, we generate 42 estimates of selection bias using a simple approach to observational analysis. A meta-analysis of these estimates suggests that the distribution of underlying bias is centered around zero. The mean absolute value of estimated bias is 0.03σ, and none of the 42 estimates are larger than 0.11σ. Results are similar for math, reading, and writing outcomes. Overall, we find no evidence of substantial selection bias due to unobserved characteristics. These findings may not generalize easily to other settings or to more radical educational interventions, but they do suggest that non-experimental approaches could play a greater role than they currently do in generating reliable causal evidence for school education.
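To make the within-study logic concrete, here is a minimal sketch using simulated data: the selection-bias estimate is the gap between a covariate-adjusted observational estimate and the experimental benchmark, expressed in σ units. The data-generating process, variable names, and the simple OLS adjustment are illustrative assumptions, not the authors' estimation pipeline.

```python
# Hypothetical sketch (simulated data, illustrative names): a within-study
# comparison estimates selection bias as the gap between a covariate-adjusted
# observational estimate and the experimental benchmark, in sigma units.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)  # observed covariate, e.g. prior attainment

# Experimental arm: randomized treatment with a true effect of 0.10 sigma.
z_rct = rng.integers(0, 2, size=n)
y_rct = 0.10 * z_rct + 0.5 * x + rng.normal(size=n)

# Observational arm: selection into treatment depends only on the observed
# covariate, so the Conditional Independence Assumption holds by design.
z_obs = rng.binomial(1, 1 / (1 + np.exp(-x)))
y_obs = 0.10 * z_obs + 0.5 * x + rng.normal(size=n)

def adjusted_effect(y, z, x):
    """OLS coefficient on z, adjusting for x, scaled to outcome SD units."""
    design = sm.add_constant(np.column_stack([z, x]))
    return sm.OLS(y, design).fit().params[1] / y.std()

benchmark = adjusted_effect(y_rct, z_rct, x)  # experimental benchmark
cia_based = adjusted_effect(y_obs, z_obs, x)  # CIA-based observational estimate
print(f"estimated selection bias: {cia_based - benchmark:+.3f} sigma")
```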
R. Mozer, L. Miratrix, A.R. Kaufman, and L.J. Anastasopoulos. 2020. “Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality.” Political Analysis 28 (4).
Abstract: Matching for causal inference is a well-studied problem, but standard methods fail when the units to match are text documents: the high-dimensional and rich nature of the data renders exact matching infeasible, causes propensity scores to produce incomparable matches, and makes assessing match quality difficult. In this paper, we characterize a framework for matching text documents that decomposes existing methods into (1) the choice of text representation and (2) the choice of distance metric. We investigate how different choices within this framework affect both the quantity and quality of matches identified through a systematic multifactor evaluation experiment using human subjects. Altogether, we evaluate over 100 unique text-matching methods along with 5 comparison methods taken from the literature. Our experimental results identify methods that generate matches with higher subjective match quality than current state-of-the-art techniques. We enhance the precision of these results by developing a predictive model to estimate the match quality of pairs of text documents as a function of our various distance scores. This model, which we find successfully mimics human judgment, also allows for approximate and unsupervised evaluation of new procedures in our context. We then employ the identified best method to illustrate the utility of text matching in two applications. First, we engage with a substantive debate in the study of media bias by using text matching to control for topic selection when comparing news articles from thirteen news sources. We then show how conditioning on text data leads to more precise causal inferences in an observational study examining the effects of a medical intervention.
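As a rough illustration of the paper's two-part decomposition, the sketch below uses TF-IDF as the text representation and cosine distance as the metric (via scikit-learn's TfidfVectorizer and cosine_distances), then greedily pairs documents across two groups. The toy corpus, caliper, and matching rule are illustrative assumptions, not the specific methods evaluated in the paper.

```python
# Hypothetical sketch of the framework's two choices: TF-IDF as the text
# representation and cosine distance as the metric, with a greedy one-to-one
# match. The toy corpus, caliper, and matching rule are illustrative
# assumptions, not the methods evaluated in the paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

treated = ["the senate passed the budget bill on friday",
           "the court ruled on the election dispute"]
control = ["the house debated the budget plan on monday",
           "the judges heard the election dispute appeal",
           "the local team won the championship game"]

# Choice (1): text representation, fit on the pooled corpus.
vec = TfidfVectorizer().fit(treated + control)
T, C = vec.transform(treated), vec.transform(control)

# Choice (2): distance metric between all treated/control pairs.
D = cosine_distances(T, C)

# Greedy one-to-one matching, best-matched treated documents first,
# discarding pairs beyond an (arbitrary) caliper on the distance.
caliper = 0.95
used = set()
for i in np.argsort(D.min(axis=1)):
    j = next((j for j in np.argsort(D[i]) if j not in used), None)
    if j is not None and D[i, j] <= caliper:
        used.add(j)
        print(f"matched ({D[i, j]:.2f}): {treated[i]!r} <-> {control[j]!r}")
```

Swapping in a different representation (e.g., topic proportions) or metric only changes the two marked steps, which is the point of the framework.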