Approximately valid probabilistic inference on a class of statistical functionals
Existing frameworks for probabilistic inference assume that the inferential target is a parameter of the posited statistical model. In machine learning applications, however, there often is no statistical model, so the quantity of interest is not a model parameter but a statistical functional. In this paper, we develop a generalized inferential model framework for cases in which this functional is a risk minimizer or the solution to an estimating equation. We construct a data-dependent possibility measure for uncertainty quantification and inference, computed via the bootstrap. We then prove that this new generalized inferential model provides approximately valid inference, in the sense that the plausibility values assigned to hypotheses about the unknowns are asymptotically well-calibrated in a frequentist sense. Among other things, this implies that confidence regions for the underlying functional derived from our new generalized inferential model are approximately valid. The method is shown to perform well in classical examples, including quantile regression, and in a personalized medicine application.
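To make the bootstrap-based construction concrete, here is a minimal illustrative sketch, not the paper's actual method: the target functional is the median (the risk minimizer under absolute-error loss), and a consonant plausibility contour is built from the bootstrap distribution of the estimator via pl(θ) = 1 − |2F*(θ) − 1|, where F* is the bootstrap CDF. All names and the specific contour formula are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)   # observed sample (no model assumed)
theta_hat = np.median(x)                       # median = risk minimizer of absolute loss

# Bootstrap replicates of the estimator
B = 2000
boot = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])

def plausibility(theta):
    """Illustrative consonant plausibility contour from the bootstrap CDF."""
    F = np.mean(boot <= theta)                 # bootstrap CDF of the estimator at theta
    return 1.0 - abs(2.0 * F - 1.0)            # peaks near theta_hat, decays in the tails

# The (1 - alpha) plausibility region {theta : pl(theta) >= alpha}
# coincides here with the bootstrap percentile interval.
alpha = 0.05
lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
```

In this toy construction, plausibility is highest near the point estimate and small for hypotheses far from it, so thresholding the contour at level alpha yields an approximate confidence region, mirroring the calibration property the abstract describes.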