Doing Great at Estimating CATE? On the Neglected Assumptions in Benchmark Comparisons of Treatment Effect Estimators

07/28/2021
by Alicia Curth, et al.

The machine learning toolbox for the estimation of heterogeneous treatment effects from observational data is expanding rapidly, yet many of its algorithms have been evaluated only on a very limited set of semi-synthetic benchmark datasets. In this paper, we show that even in arguably the simplest setting – estimation under ignorability assumptions – the results of such empirical evaluations can be misleading if (i) the assumptions underlying the data-generating mechanisms in benchmark datasets and (ii) their interplay with baseline algorithms are inadequately discussed. We consider in detail two popular machine learning benchmark datasets for the evaluation of heterogeneous treatment effect estimators – the IHDP and ACIC2016 datasets. We identify problems with their current use and highlight that the inherent characteristics of the benchmark datasets favor some algorithms over others – a fact that is rarely acknowledged but of immense relevance for the interpretation of empirical results. We close by discussing implications and possible next steps.
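For readers outside the causal-inference literature, a brief note on the estimand may help: the conditional average treatment effect (CATE) targeted by the estimators discussed above is standardly written, in Neyman–Rubin potential-outcomes notation (the notation is assumed here; the abstract itself introduces none), as

    \tau(x) = \mathbb{E}\left[\, Y(1) - Y(0) \mid X = x \,\right],

where Y(1) and Y(0) denote the potential outcomes under treatment and control, and X are pre-treatment covariates. The "ignorability assumptions" referred to above are, in their usual form, unconfoundedness and overlap with respect to a treatment indicator W,

    (Y(0), Y(1)) \perp\!\!\!\perp W \mid X, \qquad 0 < \mathbb{P}(W = 1 \mid X = x) < 1,

under which the CATE is identified from observational data as a contrast of conditional outcome means,

    \tau(x) = \mathbb{E}[\, Y \mid W = 1, X = x \,] - \mathbb{E}[\, Y \mid W = 0, X = x \,].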
