Efficient RL via Disentangled Environment and Agent Representations

09/05/2023
by Kevin Gmelin, et al.

Agents that are aware of the separation between themselves and their environments can leverage this understanding to form effective representations of visual input. We propose an approach for learning such structured representations for RL algorithms, using visual knowledge of the agent, such as its shape or mask, which is often inexpensive to obtain. This knowledge is incorporated into the RL objective via a simple auxiliary loss. We show that our method, Structured Environment-Agent Representations (SEAR), outperforms state-of-the-art model-free approaches on 18 challenging visual simulation environments spanning 5 different robots. Website at https://sear-rl.github.io/
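The abstract describes combining the standard RL objective with an auxiliary loss driven by the agent's visual mask. The paper's exact formulation is not given here, so the following is a minimal sketch under assumptions: a binary cross-entropy mask-reconstruction term (a common choice for such auxiliary objectives) added to the RL loss with a hypothetical weight `aux_weight`. The function names and the weight value are illustrative, not taken from the paper.

```python
import numpy as np

def mask_bce(pred, target, eps=1e-7):
    """Binary cross-entropy between a predicted agent mask and the
    ground-truth mask (both arrays of values in [0, 1])."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def total_loss(rl_loss, pred_mask, true_mask, aux_weight=0.1):
    """RL objective plus the weighted auxiliary mask term."""
    return rl_loss + aux_weight * mask_bce(pred_mask, true_mask)

# Toy example: a 4x4 ground-truth agent mask and an uninformative prediction.
true_mask = np.zeros((4, 4))
true_mask[1:3, 1:3] = 1.0
pred_mask = np.full((4, 4), 0.5)  # predicts 0.5 everywhere -> BCE = ln(2)

loss = total_loss(rl_loss=1.0, pred_mask=pred_mask, true_mask=true_mask)
```

In practice the mask would be predicted by a decoder head on the agent-specific part of the representation, so that gradients from this term encourage the encoder to separate agent features from environment features.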

Related research:
- Conditional Mutual Information for Disentangled Representations in Reinforcement Learning (05/23/2023)
- Terrain RL Simulator (04/17/2018)
- Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning (07/12/2022)
- Shaping Belief States with Generative Environment Models for RL (06/21/2019)
- Deep Surrogate Assisted Generation of Environments (06/09/2022)
- Visual Reaction: Learning to Play Catch with Your Drone (12/04/2019)
- Unsupervised Visual Attention and Invariance for Reinforcement Learning (04/07/2021)
