Decomposition of Uncertainty for Active Learning and Reliable Reinforcement Learning in Stochastic Systems

by Stefan Depeweg, et al.

Bayesian neural networks (BNNs) with latent variables are probabilistic models that can automatically identify complex stochastic patterns in data. In these models we study a decomposition of predictive uncertainty into its epistemic and aleatoric components. We show how such a decomposition arises naturally in a Bayesian active learning scenario, and we develop a new objective for reliable reinforcement learning (RL) with both an epistemic and an aleatoric risk element. Our experiments illustrate the usefulness of the resulting decomposition in active learning and reliable RL.
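The decomposition the abstract refers to can be illustrated with the law of total variance: the total predictive variance splits into the average noise variance under posterior draws (aleatoric) plus the variance of the predictive means across draws (epistemic). Below is a minimal NumPy sketch of this idea; the random per-draw means and variances stand in for the outputs of actual BNN posterior samples and are purely illustrative, not the paper's exact entropy-based formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for M posterior weight draws of a BNN, each
# producing a predictive mean and variance for N test inputs.
M, N = 10, 5
means = rng.normal(0.0, 1.0, size=(M, N))       # per-draw predictive means
variances = rng.uniform(0.1, 0.5, size=(M, N))  # per-draw noise variances

# Law of total variance:
#   Var(y) = E_w[Var(y | w)]  (aleatoric)  +  Var_w(E[y | w])  (epistemic)
aleatoric = variances.mean(axis=0)  # average noise variance across draws
epistemic = means.var(axis=0)       # disagreement between the draws' means
total = aleatoric + epistemic

print("aleatoric:", aleatoric)
print("epistemic:", epistemic)
print("total:    ", total)
```

In an active learning loop one would then query the inputs with the highest epistemic term, since only that component can be reduced by gathering more data; the aleatoric term reflects irreducible noise in the system.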


Related articles:
- Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables
- Epistemic Uncertainty Sampling
- Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks
- Uncertainty Estimation for Language Reward Models
- Efficient and Reliable Probabilistic Interactive Learning with Structured Outputs
- Accelerating Stochastic Simulation with Interactive Neural Processes
