Overpruning in Variational Bayesian Neural Networks

01/18/2018
by Brian Trippe, et al.

The motivations for using variational inference (VI) in neural networks differ significantly from those in latent variable models. This has a counter-intuitive consequence: more expressive variational approximations can yield significantly worse predictions than less expressive families. In this work we make two contributions. First, we identify a cause of this performance gap: variational over-pruning. Second, we introduce a theoretically grounded explanation for this phenomenon. Our perspective sheds light on several related published results and provides intuition into the design of effective variational approximations for neural networks.
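To fix notation for readers skimming only this page, here is a brief sketch of the standard variational objective for a Bayesian neural network, written with an illustrative mean-field Gaussian posterior and standard normal prior; these modelling choices are assumptions made for exposition, not the paper's own setup.

    % ELBO maximized by VI over weights w, given data D and variational parameters \phi
    \mathcal{L}(\phi) = \mathbb{E}_{q_\phi(w)}\left[\log p(\mathcal{D} \mid w)\right]
                        - \mathrm{KL}\left(q_\phi(w) \,\|\, p(w)\right)

    % With q_\phi(w) = \prod_i \mathcal{N}(w_i \mid \mu_i, \sigma_i^2) and p(w_i) = \mathcal{N}(0, 1),
    % the KL penalty decomposes weight by weight:
    \mathrm{KL}\left(q_\phi \,\|\, p\right)
      = \sum_i \tfrac{1}{2}\left(\mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1\right)

Under this factorization, setting a hidden unit's weight factors back to the prior (\mu_i = 0, \sigma_i = 1) zeroes that unit's contribution to the KL term, so whenever the expected log-likelihood is insensitive to the unit the optimizer is free to switch it off. This is one common account of the variational over-pruning referred to above.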

Related research

05/29/2017
Implicit Variational Inference with Kernel Density Ratio Fitting
Recent progress in variational inference has paid much attention to the ...

10/09/2018
Fixing Variational Bayes: Deterministic Variational Inference for Bayesian Neural Networks
Bayesian neural networks (BNNs) hold great promise as a flexible and pri...

11/07/2015
Hierarchical Variational Models
Black box variational inference allows researchers to easily prototype a...

12/07/2022
Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning
Network pruning is a promising way to generate light but accurate models...

07/10/2018
Latent Alignment and Variational Attention
Neural attention has become central to many state-of-the-art models in n...

06/10/2022
PAVI: Plate-Amortized Variational Inference
Given some observed data and a probabilistic generative model, Bayesian ...

11/10/2014
Deep Exponential Families
We describe deep exponential families (DEFs), a class of latent variable...
