Sample-Efficient Reinforcement Learning through Transfer and Architectural Priors

01/07/2018
by Benjamin Spector, et al.

Recent work in deep reinforcement learning has allowed algorithms to learn complex tasks such as Atari 2600 games directly from the reward provided by the game, but these algorithms presently require millions of training steps to learn, making them approximately five orders of magnitude slower than humans. One reason is that humans build robust shared representations applicable to collections of problems, which makes it much easier to assimilate new variants. This paper first introduces the idea of automatically generated game sets to aid transfer learning research, and then demonstrates the utility of shared representations by showing that models can benefit substantially from the incorporation of relevant architectural priors. This technique affords a remarkable 50x positive transfer on a toy problem set.
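The core idea of a shared representation reused across game variants can be sketched in a few lines. This is a minimal illustration, not the paper's code: the class names, the linear feature map, and the per-variant heads are all hypothetical, standing in for whatever encoder and task-specific layers a real agent would learn.

```python
# Hypothetical sketch of the shared-representation idea: one encoder is
# reused across game variants, and each variant adds only a small,
# cheap-to-learn task-specific head.

class SharedEncoder:
    """Feature map shared by all task variants (the architectural prior)."""
    def __init__(self, weights):
        self.weights = weights  # learned once, reused everywhere

    def encode(self, observation):
        # Toy linear features: dot product of the observation with each row.
        return [sum(w_i * x_i for w_i, x_i in zip(row, observation))
                for row in self.weights]


class TaskHead:
    """Per-variant head; only this part is learned for a new game variant."""
    def __init__(self, encoder, head_weights):
        self.encoder = encoder          # shared, not copied
        self.head_weights = head_weights

    def value(self, observation):
        features = self.encoder.encode(observation)
        return sum(h * f for h, f in zip(self.head_weights, features))


# One encoder shared across two game variants:
encoder = SharedEncoder(weights=[[1.0, 0.0], [0.0, 1.0]])
variant_a = TaskHead(encoder, head_weights=[1.0, 2.0])
variant_b = TaskHead(encoder, head_weights=[-1.0, 1.0])

print(variant_a.value([3.0, 4.0]))  # 1*3 + 2*4 = 11.0
print(variant_b.value([3.0, 4.0]))  # -1*3 + 1*4 = 1.0
```

Because both variants hold a reference to the same encoder, features learned on one game transfer to the other for free; only the small head must be fit per variant, which is the source of the sample-efficiency gain the abstract describes.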


Related research

09/10/2018 · Keep it stupid simple
Deep reinforcement learning can match and exceed human performance, but ...

09/18/2016 · Playing FPS Games with Deep Reinforcement Learning
Advances in deep reinforcement learning have allowed autonomous agents t...

10/27/2020 · Behavior Priors for Efficient Reinforcement Learning
As we deploy reinforcement learning agents to solve increasingly challen...

09/14/2017 · Shared Learning: Enhancing Reinforcement in Q-Ensembles
Deep Reinforcement Learning has been able to achieve amazing successes i...

07/13/2017 · Distral: Robust Multitask Reinforcement Learning
Most deep reinforcement learning algorithms are data inefficient in comp...

09/18/2016 · Towards Deep Symbolic Reinforcement Learning
Deep reinforcement learning (DRL) brings the power of deep neural networ...
