Interpretable Dynamics Models for Data-Efficient Reinforcement Learning

07/10/2019
by Markus Kaiser, et al.

In this paper, we present a Bayesian view on model-based reinforcement learning. We use expert knowledge to impose structure on the transition model and present an efficient learning scheme based on variational inference. We apply this scheme to a heteroskedastic and bimodal benchmark problem, compare our results to NFQ, and show that our approach yields human-interpretable insight into the underlying dynamics while also improving data efficiency.
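To see why a bimodal benchmark is a useful stress test for transition models, the toy sketch below (not the paper's benchmark or method; all names and dynamics here are illustrative assumptions) generates heteroskedastic, bimodal 1-D transitions and compares a single-Gaussian transition model against a two-component Gaussian mixture fitted by EM, a simple stand-in for the structured multimodal models the paper learns with variational inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark: from state x, the system jumps by +1 or -1
# with equal probability (bimodal), with state-dependent noise
# (heteroskedastic). This is illustrative, not the paper's benchmark.
x = rng.uniform(0.0, 5.0, size=2000)
mode = rng.integers(0, 2, size=x.size)            # which branch fires
noise = rng.normal(0.0, 0.1 + 0.1 * x)            # noise grows with x
delta = np.where(mode == 1, 1.0, -1.0) + noise    # observed x' - x

def gauss_logpdf(y, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

# Baseline: a single Gaussian transition model (maximum likelihood).
single_ll = gauss_logpdf(delta, delta.mean(), delta.var()).mean()

# Two-component Gaussian mixture fitted by EM.
mu = np.array([-0.5, 0.5])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each transition.
    logp = gauss_logpdf(delta[:, None], mu, var) + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means, and variances.
    nk = r.sum(axis=0)
    mu = (r * delta[:, None]).sum(axis=0) / nk
    var = (r * (delta[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / delta.size

mix_ll = np.log(np.exp(gauss_logpdf(delta[:, None], mu, var)) @ pi).mean()
print(f"single Gaussian avg log-lik: {single_ll:.3f}")
print(f"two-component mixture avg log-lik: {mix_ll:.3f}")
```

The mixture recovers the two transition modes near +1 and -1 and assigns substantially higher likelihood to the data than the unimodal baseline, which averages the two branches into a mean near zero that the system never visits.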


research
05/01/2017

Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning

In this paper we study how to learn stochastic, multimodal transition dy...
research
09/15/2023

A Bayesian Approach to Robust Inverse Reinforcement Learning

We consider a Bayesian approach to offline model-based inverse reinforce...
research
09/19/2018

Interpretable Reinforcement Learning with Ensemble Methods

We propose to use boosted regression trees as a way to compute human-int...
research
10/19/2016

Particle Swarm Optimization for Generating Interpretable Fuzzy Reinforcement Learning Policies

Fuzzy controllers are efficient and interpretable system controllers for...
research
03/13/2017

Reinforcement Learning for Transition-Based Mention Detection

This paper describes an application of reinforcement learning to the men...
research
04/05/2013

Model-based Bayesian Reinforcement Learning for Dialogue Management

Reinforcement learning methods are increasingly used to optimise dialogu...
research
07/22/2023

On-Robot Bayesian Reinforcement Learning for POMDPs

Robot learning is often difficult due to the expense of gathering data. ...
