Weight Uncertainty in Neural Networks

05/20/2015
by Charles Blundell, et al.

We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.
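To make the objective concrete, below is a minimal, illustrative sketch of a Bayes by Backprop-style training step for a single Bayesian linear layer, assuming PyTorch. The class name, layer sizes, Gaussian prior scale, toy regression data, and optimiser settings are hypothetical choices for illustration, not the paper's experimental setup. Each forward pass samples weights with the reparameterisation trick, and training minimises the variational free energy: the KL divergence from the weight posterior to the prior plus the expected negative log-likelihood of the data.

```python
# Minimal sketch of Bayes by Backprop for one Bayesian linear layer (assumes PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer with a factorised Gaussian posterior over its weights."""

    def __init__(self, in_features, out_features, prior_std=1.0):
        super().__init__()
        # Variational parameters theta = (mu, rho); sigma = softplus(rho) keeps sigma > 0.
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))
        self.prior = torch.distributions.Normal(0.0, prior_std)

    def forward(self, x):
        # Reparameterisation: w = mu + sigma * eps with eps ~ N(0, 1), so gradients
        # of the objective reach (mu, rho) through ordinary backpropagation.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)

        # Single-sample Monte Carlo estimate of KL[q(w | theta) || P(w)].
        q_w = torch.distributions.Normal(self.w_mu, w_sigma)
        q_b = torch.distributions.Normal(self.b_mu, b_sigma)
        self.kl = ((q_w.log_prob(w) - self.prior.log_prob(w)).sum()
                   + (q_b.log_prob(b) - self.prior.log_prob(b)).sum())
        return F.linear(x, w, b)


# Toy 1-D regression data (hypothetical) and a loop minimising the variational free
# energy: KL(posterior || prior) + expected negative log-likelihood of the data.
torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 64).unsqueeze(1)
y = 2.0 * x + 0.1 * torch.randn_like(x)

layer = BayesianLinear(1, 1)
optimiser = torch.optim.Adam(layer.parameters(), lr=1e-2)

for step in range(500):
    optimiser.zero_grad()
    prediction = layer(x)
    nll = F.mse_loss(prediction, y, reduction="sum")  # Gaussian likelihood up to a constant
    loss = layer.kl + nll                             # the compression cost being minimised
    loss.backward()
    optimiser.step()
```

At prediction time, averaging the outputs over several sampled weight draws gives the kind of predictive uncertainty the abstract describes using for non-linear regression and for exploration in reinforcement learning.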


Related research

- Variational Neural Networks (07/04/2022): Bayesian Neural Networks (BNNs) provide a tool to estimate the uncertain...
- Learning Invariant Weights in Neural Networks (02/25/2022): Assumptions about invariances or symmetries in data can significantly in...
- Implicit Weight Uncertainty in Neural Networks (11/03/2017): We interpret HyperNetworks within the framework of variational inference...
- Improved uncertainty quantification for neural networks with Bayesian last layer (02/21/2023): Uncertainty quantification is an essential task in machine learning - a ...
- Known Unknowns: Uncertainty Quality in Bayesian Neural Networks (12/05/2016): We evaluate the uncertainty quality in neural networks using anomaly det...
- Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam (06/13/2018): Uncertainty computation in deep learning is essential to design robust a...
- Classification of fNIRS Data Under Uncertainty: A Bayesian Neural Network Approach (01/18/2021): Functional Near-Infrared Spectroscopy (fNIRS) is a non-invasive form of ...
