Rapid Risk Minimization with Bayesian Models Through Deep Learning Approximation

03/29/2021
by Mathias Löwe, et al.

In this paper, we introduce a novel combination of Bayesian Models (BMs) and Neural Networks (NNs) for making predictions with minimum expected risk. Our approach combines the best of both worlds: the data efficiency and interpretability of a BM with the speed of a NN. For a BM, making predictions with the lowest expected loss requires integrating over the posterior distribution. When exact inference of the posterior predictive distribution is intractable, approximation methods are typically applied, e.g., Monte Carlo (MC) simulation. More samples yield higher accuracy, but at the expense of increased computational cost. Our approach removes the need for iterative MC simulation on the CPU at prediction time. In brief, it works by fitting a NN to synthetic data generated using the BM. A single feed-forward pass of the NN then yields a set of point-wise approximations to the BM's posterior predictive distribution for a given observation. We achieve risk-minimized predictions significantly faster than standard methods, with negligible loss on the test dataset. We combine this approach with Active Learning (AL) to minimize the amount of data required for fitting the NN, iteratively labeling more data in regions where the NN's predictive uncertainty is high.
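The amortization idea described in the abstract — replace an iterative MC estimate of the posterior predictive with a single NN forward pass, where the NN is trained on synthetic (input, MC estimate) pairs generated from the BM — can be sketched as follows. This is a minimal illustration, not the paper's method: the one-weight conjugate Bayesian linear regression, the tiny hand-rolled network, and all hyperparameters are assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy Bayesian model: y = w*x + noise, Gaussian prior on the single weight w ---
sigma2, tau2 = 0.25, 1.0                       # noise variance, prior variance
x_obs = rng.uniform(-2, 2, 40)
y_obs = 1.5 * x_obs + rng.normal(0, np.sqrt(sigma2), 40)

# Conjugate Gaussian posterior over w (closed form for this toy model)
post_var = 1.0 / (x_obs @ x_obs / sigma2 + 1.0 / tau2)
post_mean = post_var * (x_obs @ y_obs) / sigma2

def mc_predictive_mean(x, n_samples=2000):
    """Monte Carlo estimate of the posterior predictive mean at input x."""
    w = rng.normal(post_mean, np.sqrt(post_var), n_samples)
    return float(np.mean(w * x))

# --- Synthetic training set from the BM: inputs paired with MC estimates ---
xs = np.linspace(-3, 3, 200)
ys = np.array([mc_predictive_mean(x) for x in xs])

# --- Small NN fit to the synthetic data (amortizes the MC simulation) ---
H = 16
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x[:, None] @ W1 + b1)
    return (h @ W2 + b2).ravel(), h

lr = 0.05
for _ in range(10000):                          # full-batch gradient descent on MSE
    pred, h = forward(xs)
    err = pred - ys
    gW2 = h.T @ err[:, None] / len(xs); gb2 = err.mean(keepdims=True)
    dh = err[:, None] @ W2.T * (1 - h ** 2)     # backprop through tanh
    gW1 = xs[:, None].T @ dh / len(xs); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Prediction time: one feed-forward pass replaces the 2000-sample MC loop
nn_pred = forward(np.array([1.0]))[0][0]
mc_pred = mc_predictive_mean(1.0)
print(nn_pred, mc_pred)
```

The Active Learning extension mentioned in the abstract would then iterate this loop: query new inputs where the NN is most uncertain (e.g., via an ensemble's disagreement), label them with fresh MC estimates from the BM, and refit.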

