Decomposition of Uncertainty for Active Learning and Reliable Reinforcement Learning in Stochastic Systems

10/19/2017
by Stefan Depeweg, et al.

Bayesian neural networks (BNNs) with latent variables are probabilistic models that can automatically identify complex stochastic patterns in data. In these models, we study a decomposition of predictive uncertainty into its epistemic and aleatoric components. We show how such a decomposition arises naturally in a Bayesian active learning scenario, and we develop a new objective for reliable reinforcement learning (RL) that accounts for both epistemic and aleatoric risk. Our experiments illustrate the usefulness of the resulting decomposition in active learning and reliable RL.
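One common way to make this decomposition concrete is the law of total variance: averaging over the posterior on network weights, the predictive variance splits into an aleatoric term (the expected noise under each weight sample) and an epistemic term (the disagreement between weight samples). The sketch below is a minimal Monte Carlo illustration of that general idea, not the authors' code; the callables `sample_posterior_weights`, `sample_latent`, and `predict` are hypothetical placeholders for a concrete BNN with latent input noise.

```python
import numpy as np

def decompose_uncertainty(x, sample_posterior_weights, sample_latent, predict,
                          n_weight_samples=50, n_latent_samples=50):
    """Monte Carlo estimate of the variance decomposition
        Var[y] = E_w[Var[y | w]]  (aleatoric)  +  Var_w[E[y | w]]  (epistemic)
    for a BNN whose output depends on weights w and a latent noise variable z.
    All callables are placeholders for a concrete model."""
    means, variances = [], []
    for _ in range(n_weight_samples):
        w = sample_posterior_weights()                 # one draw from the approximate posterior q(w)
        ys = np.array([predict(x, sample_latent(), w)  # predictions under different latent draws
                       for _ in range(n_latent_samples)])
        means.append(ys.mean())                        # estimate of E[y | w]
        variances.append(ys.var())                     # estimate of Var[y | w]
    aleatoric = float(np.mean(variances))              # expected within-weight-sample noise
    epistemic = float(np.var(means))                   # spread of predictive means across weight draws
    return aleatoric + epistemic, epistemic, aleatoric
```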

Related research

06/26/2017 · Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables
Bayesian neural networks (BNNs) with latent variables are probabilistic ...

08/31/2019 · Epistemic Uncertainty Sampling
Various strategies for active learning have been proposed in the machine...

12/10/2017 · Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks
We derive a novel sensitivity analysis of input variables for predictive...

03/14/2022 · Uncertainty Estimation for Language Reward Models
Language models can learn a range of capabilities from unsupervised trai...

02/17/2022 · Efficient and Reliable Probabilistic Interactive Learning with Structured Outputs
In this position paper, we study interactive learning for structured out...

06/05/2021 · Accelerating Stochastic Simulation with Interactive Neural Processes
Stochastic simulations such as large-scale, spatiotemporal, age-structur...
