Learning Representations for Pixel-based Control: What Matters and Why?

11/15/2021
by Manan Tomar, et al.

Learning representations for pixel-based control has recently garnered significant attention in reinforcement learning. A wide range of methods have been proposed to enable efficient learning, achieving sample complexities similar to those in the full-state setting. However, moving beyond carefully curated pixel datasets (centered crop, appropriate lighting, clear background, etc.) remains challenging. In this paper, we take a first step toward addressing this challenge by adopting a more difficult setting that incorporates background distractors. We present a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentations, no world-model learning, and no contrastive learning. We then analyze when and why previously proposed methods are likely to fail, or to reduce to the same performance as the baseline, in this harder setting, and why we should think carefully before extending such methods beyond well-curated environments. Our results show that finer categorization of benchmarks by characteristics such as reward density, planning horizon, and the presence of task-irrelevant components is crucial for evaluating algorithms. Based on these observations, we propose different metrics to consider when evaluating an algorithm on benchmark tasks. We hope such a data-centric view can motivate researchers to rethink representation learning when investigating how best to apply RL to real-world tasks.
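To make the "no-frills baseline" idea concrete, the following is a minimal sketch of an agent whose representation is shaped only by the control (TD) objective, with no contrastive, reconstruction, augmentation, or metric-based auxiliary losses. Everything here is an illustrative assumption, not the paper's implementation: a tiny linear-ReLU encoder stands in for a CNN, observations are random vectors standing in for flattened pixels, and the update is a hand-written one-step Q-learning gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: flattened "pixel" observations and a discrete action space.
OBS_DIM, FEAT_DIM, N_ACTIONS, GAMMA, LR = 64, 16, 4, 0.99, 1e-2

# Encoder and Q-head weights (a linear-ReLU encoder stands in for a CNN).
W_enc = rng.normal(scale=0.1, size=(OBS_DIM, FEAT_DIM))
W_q = rng.normal(scale=0.1, size=(FEAT_DIM, N_ACTIONS))

def q_values(obs):
    """Q-values from the learned representation; no auxiliary heads."""
    return np.maximum(obs @ W_enc, 0.0) @ W_q  # ReLU features -> Q-values

def td_update(obs, act, rew, next_obs):
    """One gradient step on the squared TD error; the encoder is trained
    end-to-end by this RL loss alone."""
    global W_enc, W_q
    feat = np.maximum(obs @ W_enc, 0.0)
    q = feat @ W_q
    target = rew + GAMMA * q_values(next_obs).max()
    err = q[act] - target
    # Backprop by hand through the Q-head and the encoder.
    grad_q = np.zeros_like(W_q)
    grad_q[:, act] = err * feat
    grad_feat = err * W_q[:, act] * (feat > 0)
    W_q -= LR * grad_q
    W_enc -= LR * np.outer(obs, grad_feat)
    return err ** 2

# Run on random transitions just to show the training loop executes.
losses = [td_update(rng.normal(size=OBS_DIM), rng.integers(N_ACTIONS),
                    rng.normal(), rng.normal(size=OBS_DIM))
          for _ in range(200)]
```

The point of the sketch is what is absent: every gradient flowing into `W_enc` originates in the TD error, which is the sense in which such a baseline learns representations "for free" from the control objective.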

Related research

10/31/2022
Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information
Learning to control an agent from data collected offline in a rich pixel...

03/07/2023
Sample-efficient Real-time Planning with Curiosity Cross-Entropy Method and Contrastive Learning
Model-based reinforcement learning (MBRL) with real-time planning has sh...

01/07/2021
The Distracting Control Suite – A Challenging Benchmark for Reinforcement Learning from Pixels
Robots have to face challenging perceptual settings, including changes i...

10/27/2021
DreamerPro: Reconstruction-Free Model-Based Reinforcement Learning with Prototypical Representations
Top-performing Model-Based Reinforcement Learning (MBRL) agents, such as...

02/22/2021
Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning
Meta-learning for offline reinforcement learning (OMRL) is an understudi...

08/31/2023
RePo: Resilient Model-Based Reinforcement Learning by Regularizing Posterior Predictability
Visual model-based RL methods typically encode image observations into l...

11/20/2022
Joint Embedding Predictive Architectures Focus on Slow Features
Many common methods for learning a world model for pixel-based environme...
