Implicit Weight Uncertainty in Neural Networks

11/03/2017
by Nick Pawlowski, et al.

We interpret HyperNetworks within the framework of variational inference with implicit distributions. Our method, Bayes by Hypernet, can model a richer variational distribution than previous methods. Experiments show that it achieves comparable predictive performance on the MNIST classification task while providing higher predictive uncertainties than MC-Dropout and regular maximum likelihood training.
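
To make "implicit" concrete: instead of positing an explicit variational density over the weights (e.g. a factorised Gaussian), a hypernetwork transforms Gaussian noise into weight samples, so the variational distribution is defined only through sampling. The sketch below is a minimal illustration of that idea, not the authors' implementation; the class name HyperLinear, the two-layer generator, and noise_dim are hypothetical choices, and the sketch omits the variational training objective, which for implicit distributions requires estimating an intractable KL term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    """Linear layer whose weights are drawn from an implicit distribution.

    A small hypernetwork maps noise z ~ N(0, I) to a full set of layer
    parameters, so q(w) is defined by pushing noise through the generator
    rather than by an explicit density.
    """

    def __init__(self, in_features: int, out_features: int, noise_dim: int = 16):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.noise_dim = noise_dim
        n_params = in_features * out_features + out_features  # weights + bias
        # Hypothetical generator architecture; the paper's design differs.
        self.hyper = nn.Sequential(
            nn.Linear(noise_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_params),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Draw a fresh weight sample on every forward pass.
        z = torch.randn(self.noise_dim, device=x.device)
        params = self.hyper(z)
        split = self.in_features * self.out_features
        w = params[:split].view(self.out_features, self.in_features)
        b = params[split:]
        return F.linear(x, w, b)
```

Under these assumptions, predictive uncertainty is obtained the same way as in other sampling-based Bayesian neural networks: run several stochastic forward passes and average the predicted class probabilities.

```python
# Monte Carlo prediction: average the softmax over 20 weight samples.
model = nn.Sequential(HyperLinear(784, 256), nn.ReLU(), HyperLinear(256, 10))
x = torch.randn(32, 784)  # dummy batch of flattened MNIST images
with torch.no_grad():
    probs = torch.stack([model(x).softmax(-1) for _ in range(20)]).mean(0)
```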

Related research

08/06/2018 · Unbiased Implicit Variational Inference
We develop unbiased implicit variational inference (UIVI), a method that...

02/28/2018 · Predictive Uncertainty Estimation via Prior Networks
Estimating uncertainty is important to improving the safety of AI system...

08/07/2020 · Investigating maximum likelihood based training of infinite mixtures for uncertainty quantification
Uncertainty quantification in neural networks gained a lot of attention ...

10/08/2021 · Is MC Dropout Bayesian?
MC Dropout is a mainstream "free lunch" method in medical imaging for ap...

05/20/2015 · Weight Uncertainty in Neural Networks
We introduce a new, efficient, principled and backpropagation-compatible...

12/05/2016 · Known Unknowns: Uncertainty Quality in Bayesian Neural Networks
We evaluate the uncertainty quality in neural networks using anomaly det...

06/02/2023 · Resource-Efficient Federated Hyperdimensional Computing
In conventional federated hyperdimensional computing (HDC), training lar...
