Deep Probabilistic Ensembles: Approximate Variational Inference through KL Regularization

11/06/2018
by Kashyap Chitta, et al.

In this paper, we introduce Deep Probabilistic Ensembles (DPEs), a scalable technique that uses a regularized ensemble to approximate a deep Bayesian Neural Network (BNN). We do so by incorporating a KL divergence penalty term into the training objective of an ensemble, derived from the evidence lower bound used in variational inference. We evaluate the uncertainty estimates obtained from our models for active learning on visual classification, consistently outperforming baselines and existing approaches.
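To make the idea concrete, below is a minimal PyTorch sketch of one way such a KL penalty could be wired into ensemble training (this is an illustration, not the authors' released implementation): corresponding weights across the E ensemble members are treated as samples from a factorized Gaussian variational posterior, and their empirical mean and variance enter a closed-form KL term against a zero-mean Gaussian prior. The names kl_regularizer, training_step, beta, and prior_std are hypothetical, and the members are assumed to share an architecture.

```python
# Hypothetical sketch of a KL-regularized ensemble objective (not the paper's code).
import torch
import torch.nn as nn

def kl_regularizer(ensemble, prior_std=1.0):
    """Treat each weight position across ensemble members as samples from a
    Gaussian q(w) and penalize KL(q(w) || N(0, prior_std^2)), summed over weights."""
    kl = 0.0
    # Iterate over corresponding parameters of all ensemble members.
    for params in zip(*[m.parameters() for m in ensemble]):
        w = torch.stack(params)                      # shape: (E, ...)
        mu = w.mean(dim=0)                           # empirical mean across members
        var = w.var(dim=0, unbiased=False) + 1e-8    # empirical variance (stabilized)
        # Closed-form KL between N(mu, var) and N(0, prior_std^2).
        kl = kl + 0.5 * torch.sum(
            (var + mu ** 2) / prior_std ** 2
            - 1.0
            - torch.log(var / prior_std ** 2)
        )
    return kl

def training_step(ensemble, x, y, beta=1e-4):
    """Average classification loss over members plus the weighted KL penalty."""
    ce = nn.CrossEntropyLoss()
    task_loss = sum(ce(m(x), y) for m in ensemble) / len(ensemble)
    return task_loss + beta * kl_regularizer(ensemble)
```

In an ELBO-style objective the penalty weight would roughly scale as one over the number of training examples; here beta is simply left as a free trade-off hyperparameter.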


Related research

11/08/2018 - Large-Scale Visual Active Learning with Deep Probabilistic Ensembles
Annotating the right data for training deep neural networks is an import...

11/12/2017 - Alpha-Divergences in Variational Dropout
We investigate the use of alternative divergences to Kullback-Leibler (K...

02/01/2020 - Interpreting a Penalty as the Influence of a Bayesian Prior
In machine learning, it is common to optimize the parameters of a probab...

05/24/2023 - A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods
We establish the first mathematically rigorous link between Bayesian, va...

07/10/2023 - Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference
We provide a rigorous analysis of training by variational inference (VI)...

10/29/2018 - Variational Inference with Tail-adaptive f-Divergence
Variational inference with α-divergences has been widely used in modern ...

12/08/2018 - Adaptive and Calibrated Ensemble Learning with Dependent Tail-free Process
Ensemble learning is a mainstay in modern data science practice. Convent...
