
The Atari Grand Challenge Dataset

by   Vitaly Kurin, et al.

Recent progress in Reinforcement Learning (RL), fueled by its combination with Deep Learning, has enabled impressive results in learning to interact with complex virtual environments, yet real-world applications of RL are still scarce. A key limitation is data efficiency, with current state-of-the-art approaches requiring millions of training samples. A promising way to tackle this problem is to augment RL with learning from human demonstrations. However, human demonstration data is not yet readily available, which hinders progress in this direction. The present work addresses this problem as follows. We (i) collect and describe a large dataset of human Atari 2600 replays -- the largest and most diverse such dataset publicly released to date, (ii) illustrate an example use of this dataset by analyzing the relation between demonstration quality and imitation learning performance, and (iii) outline possible research directions that are opened up by our work.
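The imitation learning setup mentioned in (ii) can be illustrated with behavioral cloning: fit a policy to maximize the likelihood of the actions humans chose in the recorded replays. The sketch below is a minimal, hypothetical version using a softmax-regression policy on synthetic stand-in data (real Atari demonstrations would provide frames and discrete joystick actions); the dimensions, data, and optimizer are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for Atari demonstration data: state features and chosen actions.
n_samples, state_dim, n_actions = 512, 16, 4  # hypothetical sizes

states = rng.normal(size=(n_samples, state_dim))
true_w = rng.normal(size=(state_dim, n_actions))
# Synthetic "human" actions: noisy argmax of a hidden linear scorer.
actions = np.argmax(
    states @ true_w + rng.normal(scale=0.1, size=(n_samples, n_actions)), axis=1
)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(w):
    """Mean negative log-likelihood of the demonstrated actions."""
    p = softmax(states @ w)
    return -np.log(p[np.arange(n_samples), actions]).mean()

# Behavioral cloning = cross-entropy minimization via gradient descent.
w = np.zeros((state_dim, n_actions))
lr = 0.5
onehot = np.eye(n_actions)[actions]
for _ in range(200):
    p = softmax(states @ w)
    grad = states.T @ (p - onehot) / n_samples
    w -= lr * grad

loss_before = nll(np.zeros((state_dim, n_actions)))
loss_after = nll(w)
print(loss_before, loss_after)  # training should reduce the cloning loss
```

Demonstration quality matters in this setup because the policy can only be as good as the action labels it imitates: noisy or low-scoring replays directly degrade the learned behavior.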



Code Repositories


Code for 'The Grand Atari Challenge dataset' paper
