General Bayesian L^2 calibration of mathematical models

03/01/2021
by Antony M. Overstall, et al.

A general Bayesian method for L^2 calibration of a mathematical model is presented. General Bayesian inference starts with the specification of a loss function; the log-likelihood in Bayes' theorem is then replaced by the negative of this loss. While the minimiser of the loss function is unchanged by, for example, multiplying the loss by a constant, the same is not true of the resulting general posterior distribution. To address this problem in the context of L^2 calibration of mathematical models, different automatic scalings of the general Bayesian posterior are proposed. These are based on equating asymptotic properties of the general Bayesian posterior and the minimiser of the L^2 loss.
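The scaling issue described above can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it calibrates a hypothetical toy model eta(x, theta) = theta * x with an empirical L^2 loss, forms the general Bayesian posterior proportional to prior(theta) * exp(-w * loss(theta)) on a grid, and compares two choices of the loss scaling w. The mode (the loss minimiser under a flat prior) is the same for both, but the spread of the posterior is not, which is exactly why an automatic choice of scaling matters.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): calibrate the model
# eta(x, theta) = theta * x to noisy data generated with true slope 2.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

def l2_loss(theta):
    """Empirical L^2 loss between model output and observations."""
    return np.sum((y - theta * x) ** 2)

# General Bayesian posterior on a grid: density proportional to
# prior(theta) * exp(-w * loss(theta)), where w > 0 scales the loss.
theta = np.linspace(0.0, 4.0, 2001)
dtheta = theta[1] - theta[0]
log_prior = np.zeros_like(theta)  # flat prior over the grid

def general_posterior(w):
    log_post = log_prior - w * np.array([l2_loss(t) for t in theta])
    log_post -= log_post.max()        # stabilise before exponentiating
    post = np.exp(log_post)
    return post / (post.sum() * dtheta)  # normalise to a density on the grid

post_a = general_posterior(w=1.0)
post_b = general_posterior(w=10.0)

# The loss minimiser (posterior mode under a flat prior) is unchanged by w ...
mode_a = theta[np.argmax(post_a)]
mode_b = theta[np.argmax(post_b)]

def post_sd(post):
    """Posterior standard deviation via grid quadrature."""
    mean = np.sum(theta * post) * dtheta
    return np.sqrt(np.sum((theta - mean) ** 2 * post) * dtheta)

# ... but the posterior itself is not: a larger w concentrates it.
print(mode_a, mode_b, post_sd(post_a), post_sd(post_b))
```

Here w plays the role of the scaling that the paper proposes to choose automatically, by matching asymptotic properties of the general posterior to those of the L^2-loss minimiser; the grid, prior, and toy model above are illustrative assumptions only.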
