How to use KL-divergence to construct conjugate priors, with well-defined non-informative limits, for the multivariate Gaussian

09/15/2021
by Niko Brümmer, et al.

The Wishart distribution is the standard conjugate prior for the precision of the multivariate Gaussian likelihood when the mean is known, while the normal-Wishart can be used when the mean is also unknown. It is, however, not obvious how to assign values to the hyperparameters of these distributions. In particular, when forming non-informative limits of these distributions, the shape (or degrees-of-freedom) parameter of the Wishart must be handled with care. The intuitive solution of directly interpreting the shape as a pseudocount and letting it go to zero, as proposed by some authors, violates the restrictions on the shape parameter. We show how to use the scaled KL-divergence between multivariate Gaussians as an energy function to construct Wishart and normal-Wishart conjugate priors. When these distributions are used as informative priors, their salient feature is the mode, while the KL scaling factor serves as the pseudocount. The scaling factor can be taken to zero in the limit, giving non-informative priors that do not violate the restrictions on the Wishart shape parameter. This limit is non-informative in the sense that the posterior mode is identical to the maximum-likelihood estimate of the parameters of the Gaussian.
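To make the construction concrete, here is a minimal sketch of the known-mean case; it is not necessarily the paper's exact parametrization, and it assumes the energy is the KL divergence from the Gaussian parametrized by the prior mode $\tilde\Lambda$ to the Gaussian parametrized by the free precision $\Lambda$, scaled by a pseudocount $\nu > 0$. For a $d$-dimensional Gaussian with known mean $\mu$:

\[
D(\tilde\Lambda \,\|\, \Lambda)
  = \mathrm{KL}\!\left( \mathcal{N}(\mu, \tilde\Lambda^{-1}) \,\big\|\, \mathcal{N}(\mu, \Lambda^{-1}) \right)
  = \tfrac12\left[ \operatorname{tr}(\Lambda\tilde\Lambda^{-1}) - \log\det(\Lambda\tilde\Lambda^{-1}) - d \right],
\]
\[
p(\Lambda) \propto \exp\!\bigl(-\nu\, D(\tilde\Lambda \,\|\, \Lambda)\bigr)
  \propto \det(\Lambda)^{\nu/2} \exp\!\left(-\tfrac{\nu}{2}\operatorname{tr}(\tilde\Lambda^{-1}\Lambda)\right),
\]

which is a Wishart density $\mathcal{W}(\Lambda \mid V, n)$ with scale $V = \tilde\Lambda/\nu$ and shape $n = \nu + d + 1$. Its mode, $(n - d - 1)V = \tilde\Lambda$, equals the chosen prior mode for every $\nu > 0$, and as $\nu \to 0$ the shape tends to $n = d + 1$, which still satisfies the Wishart constraint $n > d - 1$; this illustrates how the KL scaling factor can play the role of the pseudocount without pushing the shape parameter into an invalid range.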
