An Investigation into Pre-Training Object-Centric Representations for Reinforcement Learning

02/09/2023 · by Jaesik Yoon, et al.

Unsupervised object-centric representation (OCR) learning has recently drawn attention as a new paradigm of visual representation learning, owing to its potential as an effective pre-training technique for various downstream tasks in terms of sample efficiency, systematic generalization, and reasoning. Although image-based reinforcement learning (RL) is one of the most important and most frequently cited of these downstream tasks, the benefit of OCR in RL has surprisingly not been investigated systematically thus far. Instead, most evaluations have focused on rather indirect metrics such as segmentation quality and object property prediction accuracy. In this paper, we empirically investigate the effectiveness of OCR pre-training for image-based RL. For systematic evaluation, we introduce a simple object-centric visual RL benchmark and conduct experiments to answer questions such as “Does OCR pre-training improve performance on object-centric tasks?” and “Can OCR pre-training help with out-of-distribution generalization?”. Our results provide empirical evidence of the benefits of OCR pre-training for RL, as well as its potential limitations in certain scenarios. This study also examines critical practical aspects of incorporating OCR pre-training in RL, including performance in visually complex environments and the choice of pooling layer for aggregating the object representations.
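The pooling step mentioned above can be illustrated with a minimal sketch. OCR encoders typically emit a set of per-object slot vectors, which must be aggregated into a single input for the RL policy; the function and parameter names below (`pool_slots`, `query`) are illustrative assumptions, not the paper's actual implementation, and the two modes (mean vs. attention pooling) are generic options rather than the specific layers the paper compares.

```python
import numpy as np

def pool_slots(slots, mode="mean", query=None):
    """Aggregate per-object slot vectors into one policy input.

    slots: array of shape (num_slots, slot_dim) from an OCR encoder.
    mode:  "mean" for an order-invariant average, or "attention" to
           weight slots by their similarity to a learned `query` vector.
    """
    if mode == "mean":
        # order-invariant average over object slots
        return slots.mean(axis=0)
    # attention pooling: a query vector scores each slot, then a
    # softmax over scores produces a weighted sum of slots
    scores = slots @ query                      # (num_slots,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over slots
    return weights @ slots                      # (slot_dim,)
```

Both modes are permutation-invariant over slots, which matters because object slots generally carry no canonical ordering; which aggregation works best for RL is exactly the kind of question the paper's experiments address.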


