Comparison between Suitable Priors for Additive Bayesian Networks

by Gilles Kratzer, et al.
Universität Zürich
University of Geneva

Additive Bayesian networks (ABNs) are graphical models that extend the usual Bayesian generalized linear model to multiple dependent variables through a factorisation of the joint probability distribution of the underlying variables. When fitting an ABN model, the choice of prior for the parameters is of crucial importance. If an inadequate prior, such as one that is too weakly informative, is used, data separation and data sparsity lead to issues in the model selection process. In this work, a simulation study comparing two weakly informative priors and one strongly informative prior is presented. The first weakly informative prior is a zero-mean Gaussian with a large variance, as currently implemented in the R package abn. The second is a Student's t-distribution prior, specifically designed for logistic regression. Finally, the strongly informative prior is again Gaussian, with mean equal to the true parameter value and a small variance. We compare the impact of these priors on the accuracy of the learned additive Bayesian network as a function of different parameters. We also design a simulation study to illustrate Lindley's paradox arising from the prior choice. We conclude by highlighting the good performance of the informative Student's t-prior and the limited impact of Lindley's paradox. Finally, suggestions for further developments are provided.
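The behaviour the abstract describes can be illustrated outside the ABN setting. The following is a minimal sketch, not taken from the paper: it fits a one-parameter logistic regression on completely separated toy data and compares MAP estimates under three priors analogous to those above. The specific hyperparameters (variance 100, a t-prior with 7 degrees of freedom and scale 2.5, and a strong Gaussian centred at an assumed "true" slope of 2) are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

# Completely separated toy data: y = 1 exactly when x > 0, so the
# maximum-likelihood slope diverges to infinity and only the prior
# keeps the MAP estimate finite.
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
y = (x > 0).astype(float)

def neg_log_lik(beta):
    """Negative log-likelihood of a no-intercept logistic regression."""
    z = beta * x
    return np.sum(np.logaddexp(0.0, z) - y * z)

def map_estimate(neg_log_prior):
    """MAP slope: minimise negative log-likelihood + negative log-prior."""
    res = minimize(lambda b: neg_log_lik(b[0]) + neg_log_prior(b[0]),
                   x0=[0.0])
    return res.x[0]

# Weakly informative: zero-mean Gaussian with large variance (sd = 10).
b_weak = map_estimate(lambda b: 0.5 * b**2 / 100.0)
# Student's t prior (illustrative df = 7, scale = 2.5).
b_t = map_estimate(lambda b: -student_t.logpdf(b, df=7, scale=2.5))
# Strongly informative: Gaussian centred at an assumed true slope of 2,
# with small variance (sd = 0.5).
b_info = map_estimate(lambda b: 0.5 * (b - 2.0)**2 / 0.25)

print(f"weak Gaussian: {b_weak:.2f}, Student t: {b_t:.2f}, "
      f"informative Gaussian: {b_info:.2f}")
```

Under separation, the weak Gaussian prior lets the slope drift to a large value, the heavier-tailed but tighter Student's t prior pulls it back towards a moderate one, and the strong prior keeps it near its centre, which is the qualitative contrast the simulation study examines.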




