Direct Gibbs posterior inference on risk minimizers: construction, concentration, and calibration

03/17/2022
by Ryan Martin, et al.

Real-world problems, often couched as machine learning applications, involve quantities of interest that have meaning independent of any statistical model. To avoid potential model misspecification bias, and to keep the problem formulation from becoming more complicated than necessary, a direct, model-free approach is desirable. The traditional Bayesian framework relies on a model for the data-generating process, so it would seem that direct, model-free, posterior-probabilistic inference is out of reach. Fortunately, likelihood functions are not the only means of linking data to quantities of interest. Loss functions provide an alternative link, where the quantity of interest is defined, or at least can be defined, as a minimizer of the corresponding risk, i.e., the expected loss. In this case, one can obtain what is commonly referred to as a Gibbs posterior distribution by using the empirical risk function directly in place of a likelihood. This manuscript explores the Gibbs posterior construction, its asymptotic concentration properties, and the frequentist calibration of its credible regions. By being free of the constraints of model specification, Gibbs posteriors open new opportunities for probabilistic inference in modern statistical learning problems.
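Concretely, the Gibbs posterior takes the form π_n(θ) ∝ exp{−ω n R_n(θ)} π(θ), where R_n is the empirical risk and ω > 0 is a learning rate. The sketch below is a minimal illustration of this construction, not the paper's implementation: it samples a Gibbs posterior for a quantile, defined as the minimizer of the expected check loss, via random-walk Metropolis under a flat prior. The function names, the proposal step size, and the fixed learning rate `omega` are illustrative assumptions; in practice ω would be calibrated, e.g., so that credible regions attain their nominal frequentist coverage, as the manuscript discusses.

```python
import numpy as np

def check_loss(theta, x, tau=0.5):
    """Check (pinball) loss whose risk minimizer is the tau-quantile."""
    u = x - theta
    return np.mean(u * (tau - (u < 0)))

def gibbs_log_posterior(theta, data, omega, tau=0.5):
    """Unnormalized log Gibbs posterior: -omega * n * empirical risk,
    with a flat prior on theta (the prior term is constant)."""
    n = len(data)
    return -omega * n * check_loss(theta, data, tau)

def sample_gibbs_posterior(data, omega=1.0, tau=0.5, n_draws=5000, step=0.5, seed=0):
    """Random-walk Metropolis targeting the Gibbs posterior.
    omega is held fixed here for illustration only."""
    rng = np.random.default_rng(seed)
    theta = np.quantile(data, tau)  # start at the empirical risk minimizer
    logp = gibbs_log_posterior(theta, data, omega, tau)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        prop = theta + step * rng.standard_normal()
        logp_prop = gibbs_log_posterior(prop, data, omega, tau)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        draws[i] = theta
    return draws

# Example: inference on the median of skewed data, no likelihood required.
data = np.random.default_rng(1).exponential(scale=2.0, size=200)
draws = sample_gibbs_posterior(data, omega=1.0, tau=0.5)
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"posterior mean: {draws.mean():.3f}, 95% credible interval: ({lo:.3f}, {hi:.3f})")
```

Note that only the loss function links the data to θ; no data-generating model is specified anywhere, which is the point of the construction.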
