Abstract
Social science has developed an expansive design-based toolkit for causal inference, but the assumptions that undergird standard approaches often fail in applied settings. Recent developments in automated partial identification offer an alternative: researchers can learn as much as possible in these imperfect settings by bounding unidentified quantities of interest while transparently acknowledging the limitations of their data and design. In this paper, we develop several new techniques within this framework, including approaches to uncertainty quantification and covariate adjustment, extensions to continuous variables, and methods for interpreting why bounds are narrow or wide. We then replicate and extend published studies spanning a range of causal research designs to demonstrate how this approach provides a deeper understanding of the robustness of empirical results, even allowing key assumptions to be falsified in some cases. Finally, we use this approach to update findings in the literature on racial bias in policing, demonstrating the substantive contributions this new technology makes possible.