Rotation, Translation, and Cropping for Zero-Shot Generalization

01/27/2020
by Chang Ye, et al.

Deep Reinforcement Learning (DRL) has shown impressive performance on domains with visual inputs, in particular various games. However, the agent is usually trained on a fixed environment, e.g. a fixed number of levels. A growing body of evidence suggests that these trained models fail to generalize to even slight variations of the environments they were trained on. This paper advances the hypothesis that the lack of generalization is partly due to the input representation, and explores how rotation, cropping, and translation could increase generality. We show that a cropped, translated, and rotated observation yields better generalization on unseen levels of a two-dimensional arcade game. The generality of the agent is evaluated on a set of human-designed levels.
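The abstract's three observation transforms can be illustrated with a minimal sketch. The function below crops a window centered on the agent (which also translates the agent to a fixed position in the observation) and rotates the crop so the agent's heading is canonical. The function name, the grid encoding, and the parameters are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def transform_observation(obs, agent_pos, agent_dir, crop_size=9):
    """Crop, translate, and rotate a grid observation.

    obs:       (H, W) array encoding the level (assumed encoding).
    agent_pos: (row, col) of the agent.
    agent_dir: 0..3, the agent's heading in 90-degree steps.
    Returns a (crop_size, crop_size) egocentric observation.
    """
    half = crop_size // 2
    # Pad so the crop never falls outside the grid; centering the
    # crop on the agent is the "translation" step.
    padded = np.pad(obs, half, mode="constant", constant_values=0)
    r, c = agent_pos[0] + half, agent_pos[1] + half
    crop = padded[r - half : r + half + 1, c - half : c + half + 1]
    # Rotate so the agent always "faces" the same direction.
    return np.rot90(crop, k=agent_dir)
```

For example, an agent at (2, 2) in a 5x5 level with `crop_size=5` sees the whole level with itself at the center; the same function on a larger level would expose only the local 5x5 neighborhood, independent of where the agent is and which way it faces.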


