Dropout Inference in Bayesian Neural Networks with Alpha-divergences

03/08/2017
by Yingzhen Li, et al.

To obtain uncertainty estimates with real-world Bayesian deep learning models, practical inference approximations are needed. Dropout variational inference (VI), for example, has been used for machine vision and medical applications, but VI can severely underestimate model uncertainty. Alpha-divergences are alternatives to VI's KL objective that avoid this underestimation. They are hard to use in practice, however: existing techniques only support Gaussian approximating distributions and require radical changes to existing models, which limits their use for practitioners. We propose a re-parametrisation of the alpha-divergence objectives, deriving a simple inference technique which, together with dropout, can be easily implemented with existing models by simply changing the model's loss. We demonstrate improved uncertainty estimates and accuracy compared to VI in dropout networks. We study our model's epistemic uncertainty far away from the data using adversarial images, showing that these can be distinguished from non-adversarial images by examining the model's uncertainty.
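To make the "change the loss" idea concrete, here is a minimal sketch (not the authors' reference implementation) of how an alpha-divergence dropout loss of this kind can be written in PyTorch: dropout stays active, the loss runs K stochastic forward passes, and the per-sample log-likelihoods are combined with a log-mean-exp tempered by alpha. The model architecture, K, alpha, and the weight-decay constant are illustrative assumptions.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class DropoutMLP(nn.Module):
    """Small classifier whose dropout stays stochastic, so every forward
    pass corresponds to one sampled set of weights (MC dropout)."""

    def __init__(self, d_in, d_hidden, n_classes, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, n_classes)
        self.p_drop = p_drop

    def forward(self, x):
        h = F.relu(self.fc1(x))
        h = F.dropout(h, p=self.p_drop, training=True)  # dropout always on
        return self.fc2(h)  # class logits


def alpha_dropout_loss(model, x, y, alpha=0.5, k=10, weight_decay=1e-4):
    """Monte Carlo estimate of an alpha-divergence dropout objective:
    -(1/alpha) * log-mean-exp over k dropout masks of alpha * log p(y|x, mask),
    averaged over the batch, plus an L2 term acting as regulariser."""
    per_mask_log_lik = []
    for _ in range(k):  # k independent dropout masks
        log_probs = F.log_softmax(model(x), dim=-1)           # (batch, classes)
        per_mask_log_lik.append(
            log_probs.gather(1, y.unsqueeze(1)).squeeze(1))   # (batch,)
    log_lik = torch.stack(per_mask_log_lik, dim=0)            # (k, batch)

    # log-mean-exp over the k dropout samples, tempered by alpha
    log_mean_exp = torch.logsumexp(alpha * log_lik, dim=0) - math.log(k)
    data_term = -(log_mean_exp / alpha).mean()

    l2_term = sum((p ** 2).sum() for p in model.parameters())
    return data_term + weight_decay * l2_term
```

As alpha tends to zero the tempered log-mean-exp reduces to the average log-likelihood over the K dropout samples, recovering the usual dropout VI objective, so a loss of this form can drop into an existing training loop in place of the standard cross-entropy term.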

Related research

11/12/2017 · Alpha-Divergences in Variational Dropout
We investigate the use of alternative divergences to Kullback-Leibler (K...

05/22/2017 · Concrete Dropout
Dropout is used as a practical tool to obtain uncertainty estimates in l...

07/31/2020 · Learning the Distribution: A Unified Distillation Paradigm for Fast Uncertainty Estimation in Computer Vision
Calibrated estimates of uncertainty are critical for many real-world com...

03/06/2021 · Contextual Dropout: An Efficient Sample-Dependent Dropout Module
Dropout has been demonstrated as a simple and effective module to not on...

03/21/2019 · Empirical confidence estimates for classification by deep neural networks
How well can we estimate the probability that the classification, C(f(x)...

11/05/2016 · Robustly representing inferential uncertainty in deep neural networks through sampling
As deep neural networks (DNNs) are applied to increasingly challenging p...

09/05/2022 · Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training
The quantification of uncertainty is important for the adoption of machi...
