Combined Reinforcement Learning via Abstract Representations

09/12/2018
by Vincent François-Lavet, et al.

In the quest for efficient and robust reinforcement learning methods, both model-free and model-based approaches offer advantages. In this paper we propose a new way of explicitly bridging both approaches via a shared low-dimensional learned encoding of the environment, meant to capture summarizing abstractions. We show that the modularity brought by this approach leads to good generalization while being computationally efficient, with planning happening in a smaller latent state space. In addition, this approach recovers a sufficient low-dimensional representation of the environment, which opens up new strategies for interpretable AI, exploration and transfer learning.
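The idea of a shared low-dimensional encoding feeding both model-free and model-based components can be illustrated with a minimal sketch. Everything below is hypothetical: the class name `CRARSketch`, the dimensions, the linear/tanh parameterization, and the one-step lookahead are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class CRARSketch:
    """Hypothetical sketch of a shared-encoder agent: one encoder maps raw
    observations to a low-dimensional abstract state, which feeds both a
    model-free Q-head and a model-based transition/reward model."""

    def __init__(self, obs_dim=16, abstract_dim=3, n_actions=4):
        self.n_actions = n_actions
        # Shared encoder e: observation -> low-dimensional abstract state x
        self.W_enc = rng.normal(0, 0.1, (abstract_dim, obs_dim))
        # Model-free head: Q-values over actions, computed from x
        self.W_q = rng.normal(0, 0.1, (n_actions, abstract_dim))
        # Model-based heads: latent transition and reward, one matrix/vector per action
        self.W_t = rng.normal(0, 0.1, (n_actions, abstract_dim, abstract_dim))
        self.W_r = rng.normal(0, 0.1, (n_actions, abstract_dim))

    def encode(self, obs):
        # Abstract state x = e(s); tanh keeps the latent space bounded
        return np.tanh(self.W_enc @ obs)

    def q_values(self, x):
        return self.W_q @ x

    def predict_next(self, x, a):
        # Residual transition model operating entirely in the latent space
        return x + self.W_t[a] @ x

    def predict_reward(self, x, a):
        return float(self.W_r[a] @ x)

    def plan_one_step(self, obs, gamma=0.95):
        """Combine both components: score each action by predicted reward plus
        discounted max Q-value at the predicted next abstract state, i.e. a
        one-step lookahead performed in the small latent space."""
        x = self.encode(obs)
        scores = [
            self.predict_reward(x, a) + gamma * self.q_values(self.predict_next(x, a)).max()
            for a in range(self.n_actions)
        ]
        return int(np.argmax(scores))
```

Because planning happens on the abstract state rather than raw observations, each lookahead step here costs only a few small matrix products, which is the computational advantage the abstract refers to.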


