An Analysis of Abstracted Model-Based Reinforcement Learning

08/30/2022
by Rolf A. N. Starre, et al.

Many methods for model-based reinforcement learning (MBRL) provide guarantees for both the accuracy of the Markov decision process (MDP) model they can deliver and the learning efficiency. At the same time, state abstraction techniques allow for a reduction of the size of an MDP while maintaining a bounded loss with respect to the original problem. It may come as a surprise, therefore, that no such guarantees are available when combining both techniques, i.e., where MBRL merely observes abstract states. Our theoretical analysis shows that abstraction can introduce a dependence between samples collected online (e.g., in the real world), which means that most results for MBRL cannot be directly extended to this setting. The new results in this work show that concentration inequalities for martingales can be used to overcome this problem and allow the results of algorithms such as R-MAX to be extended to the setting with abstraction. This yields the first performance guarantees for Abstracted RL: model-based reinforcement learning with an abstracted model.
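To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of an R-MAX-style learner that only observes abstract states. The abstraction function `phi`, the known-ness threshold `M`, and the optimistic reward `R_MAX` are all assumptions chosen for illustration; the key point is that samples from different ground states are pooled per abstract state, which is the source of the sample dependence analysed in the paper.

```python
from collections import defaultdict

M = 5          # assumed known-ness threshold: visits before a pair counts as "known"
R_MAX = 1.0    # optimistic reward assigned to unknown state-action pairs

def phi(s):
    """Hypothetical abstraction: map a ground state to an abstract state (here: parity)."""
    return s % 2

class AbstractedRMax:
    """Tabular R-MAX-style model learner that only sees abstract states phi(s)."""
    def __init__(self, actions):
        self.actions = actions
        self.counts = defaultdict(int)        # (phi(s), a) -> visit count
        self.reward_sum = defaultdict(float)  # (phi(s), a) -> accumulated reward
        self.next_counts = defaultdict(lambda: defaultdict(int))  # transition counts

    def update(self, s, a, r, s_next):
        # Samples from all ground states mapping to phi(s) are pooled here;
        # this pooling introduces dependence between the collected samples.
        key = (phi(s), a)
        self.counts[key] += 1
        self.reward_sum[key] += r
        self.next_counts[key][phi(s_next)] += 1

    def known(self, s, a):
        return self.counts[(phi(s), a)] >= M

    def estimated_reward(self, s, a):
        key = (phi(s), a)
        if not self.known(s, a):
            return R_MAX  # optimism in the face of uncertainty
        return self.reward_sum[key] / self.counts[key]
```

Note that two distinct ground states with the same abstraction (e.g. `s = 0` and `s = 2` under the parity `phi` above) contribute to the same counters, so the empirical model for one ground state depends on trajectories visiting the other.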


Related research:

- Delay-Aware Model-Based Reinforcement Learning for Continuous Control (05/11/2020)
- Predictable MDP Abstraction for Unsupervised Model-Based RL (02/08/2023)
- Model-based Reinforcement Learning: A Survey (06/30/2020)
- Value Memory Graph: A Graph-Structured World Model for Offline Reinforcement Learning (06/09/2022)
- TOMA: Topological Map Abstraction for Reinforcement Learning (05/11/2020)
- Filter-Based Abstractions with Correctness Guarantees for Planning under Uncertainty (03/03/2021)
- Learning discrete state abstractions with deep variational inference (03/09/2020)
