Bayesian Cross Validation and WAIC for Predictive Prior Design in Regular Asymptotic Theory

03/27/2015
by Sumio Watanabe

Prior design is one of the most important problems in both statistics and machine learning. Cross validation (CV) and the widely applicable information criterion (WAIC) are predictive measures of Bayesian estimation; however, it has been difficult to apply them to finding the optimal prior, because their mathematical properties in prior evaluation have been unknown and the region of hyperparameters is too wide to be searched exhaustively. In this paper, we derive a new formula that clarifies the theoretical relation among CV, WAIC, and the generalization loss, and by which the optimal hyperparameter can be found directly. The formula clarifies three facts about predictive prior design. First, CV and WAIC have the same second-order asymptotic expansion, hence they are asymptotically equivalent as optimizers of the hyperparameter. Second, the hyperparameter that minimizes CV or WAIC asymptotically minimizes the average generalization loss, but not the random generalization loss. Third, by using the mathematical relation between priors, the variances of the hyperparameters optimized by CV and WAIC can be made smaller at small computational cost. We also show that the hyperparameter optimized by DIC or the marginal likelihood does not, in general, minimize either the average or the random generalization loss.
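To make the quantities in the abstract concrete, here is a minimal sketch of WAIC and importance-sampling leave-one-out CV computed from posterior samples, used to choose a prior hyperparameter. The model (observations y_i ~ N(mu, 1) with conjugate prior mu ~ N(0, 1/lambda)), the grid of lambda values, and the sample sizes are illustrative assumptions, not taken from the paper; only the WAIC and IS-CV formulas follow Watanabe's standard definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def waic_and_cv(y, lam, n_samples=20000, rng=rng):
    """WAIC and importance-sampling LOO-CV for y_i ~ N(mu, 1), mu ~ N(0, 1/lam).

    The posterior of mu is conjugate: N(sum(y)/(n + lam), 1/(n + lam)).
    """
    n = len(y)
    post_var = 1.0 / (n + lam)
    post_mean = y.sum() * post_var
    mu = rng.normal(post_mean, np.sqrt(post_var), size=n_samples)

    # log p(y_i | mu_s), shape (n_samples, n)
    logp = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu[:, None]) ** 2

    # Training loss: T_n = -(1/n) sum_i log E_post[p(y_i | mu)]
    T = -np.log(np.exp(logp).mean(axis=0)).mean()
    # Functional variance term: V_n/n = (1/n) sum_i Var_post[log p(y_i | mu)]
    V = logp.var(axis=0).mean()
    waic = T + V

    # Importance-sampling LOO-CV: (1/n) sum_i log E_post[1 / p(y_i | mu)]
    cv = np.log(np.exp(-logp).mean(axis=0)).mean()
    return waic, cv

# Scan a hyperparameter grid and pick the minimizer of each criterion.
y = rng.normal(0.5, 1.0, size=50)
lams = np.linspace(0.1, 10.0, 20)
scores = [waic_and_cv(y, lam) for lam in lams]
best_waic = lams[np.argmin([s[0] for s in scores])]
best_cv = lams[np.argmin([s[1] for s in scores])]
```

In this regular, low-dimensional setting the two criteria nearly coincide for every lambda and typically select the same grid point, which is consistent with the paper's claim that CV and WAIC share the same second-order asymptotic expansion and are asymptotically equivalent as optimizers of the hyperparameter.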

