
Minimum Excess Risk in Bayesian Learning

by Aolin Xu et al.

We analyze the best achievable performance of Bayesian learning under generative models by defining and upper-bounding the minimum excess risk (MER): the gap between the minimum expected loss attainable by learning from data and the minimum expected loss that could be achieved if the model realization were known. The definition of MER provides a principled way to define different notions of uncertainty in Bayesian learning, including the aleatoric uncertainty and the minimum epistemic uncertainty. Two methods for deriving upper bounds on the MER are presented. The first method, generally suitable for Bayesian learning with a parametric generative model, upper-bounds the MER by the conditional mutual information between the model parameters and the quantity being predicted given the observed data. It allows us to quantify the rate at which the MER decays to zero as more data becomes available. The second method, particularly suitable for Bayesian learning with a parametric predictive model, relates the MER to the deviation of the posterior predictive distribution from the true predictive model, and further to the minimum estimation error of the model parameters from data. It explicitly shows how the uncertainty in model parameter estimation translates to the MER and to the final prediction uncertainty. We also extend the definition and analysis of MER to the setting with multiple parametric model families and the setting with nonparametric models. Throughout the discussion we draw comparisons between the MER in Bayesian learning and the excess risk in frequentist learning.
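The abstract's first bounding method can be made concrete in a toy model. The sketch below (an illustration, not the paper's own experiment) uses an assumed setup: labels Y drawn as Bernoulli(W) with a uniform prior on the parameter W, and a dataset D of n i.i.d. draws. Under log-loss, the minimum Bayesian risk is the conditional entropy H(Y|D) of the posterior predictive, the risk with W known is H(Y|W), and their gap, the MER, is exactly the conditional mutual information I(W; Y | D) that the first method uses as an upper bound. With a uniform prior the computation is exact: the success count k is uniform on {0, ..., n} and the posterior predictive probability is the Laplace rule (k+1)/(n+2).

```python
from math import comb, log

def entropy(p):
    """Binary entropy in nats; h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log(p) - (1 - p) * log(1 - p)

def mer_log_loss(n):
    """MER under log-loss for Y|W ~ Bernoulli(W), W ~ Uniform(0,1),
    after observing n i.i.d. draws: MER = I(W; Y | D_n)
    = H(Y | D_n) - H(Y | W)."""
    # With a uniform prior, the success count k is uniform on {0, ..., n},
    # and the posterior predictive is (k + 1) / (n + 2) (Laplace rule).
    h_y_given_d = sum(entropy((k + 1) / (n + 2)) for k in range(n + 1)) / (n + 1)
    # H(Y|W) = E[h(W)] = 2 * int_0^1 (-w ln w) dw = 1/2 nat for uniform W.
    h_y_given_w = 0.5
    return h_y_given_d - h_y_given_w

for n in [0, 1, 5, 20, 100]:
    print(f"n={n:4d}  MER = {mer_log_loss(n):.5f} nats")
```

Running this shows the MER shrinking monotonically toward zero as n grows, a numerical counterpart of the decay rate the abstract's first method quantifies: with more data, the remaining epistemic gap between learning from data and knowing W vanishes.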



Rate-Distortion Analysis of Minimum Excess Risk in Bayesian Learning

Minimum Excess Risk (MER) in Bayesian learning is defined as the differe...

Information-Theoretic Analysis of Epistemic Uncertainty in Bayesian Meta-learning

The overall predictive uncertainty of a trained predictor can be decompo...

Rate-Distortion Bounds on Bayes Risk in Supervised Learning

We present an information-theoretic framework for bounding the number of...

Analytic Mutual Information in Bayesian Neural Networks

Bayesian neural networks have successfully designed and optimized a robu...

An Information-Theoretic Analysis of Bayesian Reinforcement Learning

Building on the framework introduced by Xu and Raginsky [1] for supervis...

Expert Elicitation and Data Noise Learning for Material Flow Analysis using Bayesian Inference

Bayesian inference allows the transparent communication of uncertainty i...

Bayes in Wonderland! Predictive supervised classification inference hits unpredictability

The marginal Bayesian predictive classifiers (mBpc) as opposed to the si...