Comparison between Suitable Priors for Additive Bayesian Networks

09/18/2018
by Gilles Kratzer, et al.

Additive Bayesian networks (ABNs) are graphical models that extend the usual Bayesian generalized linear model to multiple dependent variables through the factorisation of the joint probability distribution of the underlying variables. When fitting an ABN model, the choice of prior for the parameters is of crucial importance: if an inadequate prior is used, such as one that is too weakly informative, data separation and data sparsity lead to issues in the model selection process. In this work, a simulation study comparing two weakly informative priors and one strongly informative prior is presented. The first weakly informative prior is a zero-mean Gaussian with a large variance, as currently implemented in the R-package abn. The second is a Student's t-distribution prior specifically designed for logistic regressions. Finally, the strongly informative prior is again Gaussian, with mean equal to the true parameter value and a small variance. We compare the impact of these priors on the accuracy of the learned additive Bayesian network as a function of different parameters, and we design a simulation study to illustrate Lindley's paradox arising from the prior choice. We conclude by highlighting the good performance of the informative Student's t-prior and the limited impact of Lindley's paradox. Finally, suggestions for further developments are provided.
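The abstract's point about overly diffuse priors can be made concrete with Lindley's paradox. The following base-R sketch is an illustrative example, not taken from the paper: it tests H0: theta = 0 against H1: theta ~ N(0, tau^2) for a normal mean with known variance, and shows that as the prior variance tau^2 grows, the Bayes factor increasingly favours H0 even though the frequentist p-value stays small. All numerical values (n, sigma, ybar, the grid of tau) are arbitrary choices for the demonstration.

```r
## Illustrative sketch of Lindley's paradox (assumed example, not from the paper).
## Model: y_i ~ N(theta, sigma^2), H0: theta = 0 vs H1: theta ~ N(0, tau^2).

n     <- 100          # sample size (arbitrary)
sigma <- 1            # known sampling standard deviation (arbitrary)
ybar  <- 0.25         # observed sample mean, i.e. z = 2.5 (arbitrary)

se <- sigma / sqrt(n) # standard error of the sample mean

## Two-sided p-value under H0
p_value <- 2 * pnorm(-abs(ybar / se))

## Bayes factor BF01 = m0(ybar) / m1(ybar): the marginal distribution of ybar
## is N(0, se^2) under H0 and N(0, tau^2 + se^2) under H1.
bf01 <- function(tau) {
  dnorm(ybar, mean = 0, sd = se) /
    dnorm(ybar, mean = 0, sd = sqrt(tau^2 + se^2))
}

taus <- c(0.1, 1, 10, 100)   # increasingly diffuse priors on theta
data.frame(tau = taus,
           BF01 = sapply(taus, bf01),
           p_value = p_value)
## As tau grows, BF01 grows without bound: the very diffuse prior pushes the
## Bayesian model comparison towards H0, although the p-value stays ~0.012.
```

This is the same mechanism by which a too weakly informative parameter prior can distort score-based structure learning in ABNs: overly flat priors penalise richer models in the marginal likelihood, so arcs supported by the data may be dropped during model selection.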

