Some Considerations on Learning to Explore via Meta-Reinforcement Learning

03/03/2018
by Bradly C. Stadie, et al.

We consider the problem of exploration in meta-reinforcement learning. We propose two new meta-RL algorithms, E-MAML and E-RL^2, and evaluate them on a novel environment we call `Krazy World' and on a set of maze environments. We show that E-MAML and E-RL^2 deliver better performance on tasks where exploration is important.
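E-MAML extends the MAML-style inner/outer optimization loop by crediting the pre-update (exploratory) sampling distribution. As a minimal, hedged sketch of just the inner/outer structure that E-MAML builds on — using an illustrative quadratic task family rather than the paper's policy-gradient RL setup, with all function names being assumptions of this sketch — the meta-gradient can be computed analytically:

```python
import numpy as np

# Toy MAML-style meta-learning on a family of quadratic tasks.
# Illustrative only: E-MAML itself is a policy-gradient method and adds
# an exploration term; this sketch shows only the inner/outer loop.

def task_loss(theta, target):
    # Per-task loss: squared distance to the task's target.
    return 0.5 * np.sum((theta - target) ** 2)

def task_grad(theta, target):
    # Gradient of the quadratic task loss.
    return theta - target

def maml_meta_grad(theta, targets, inner_lr=0.1):
    # Exact meta-gradient for the quadratic case:
    # adapted = theta - inner_lr * (theta - t), so
    # dL(adapted)/dtheta = (1 - inner_lr) * (adapted - t).
    g = np.zeros_like(theta)
    for t in targets:
        adapted = theta - inner_lr * task_grad(theta, t)
        g += (1 - inner_lr) * (adapted - t)
    return g / len(targets)

rng = np.random.default_rng(0)
theta = rng.normal(size=2)
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Outer loop: move the initialization so one inner step adapts well.
for _ in range(200):
    theta -= 0.5 * maml_meta_grad(theta, targets)

# After meta-training, a single inner-loop step on a task lowers its loss.
adapted = theta - 0.1 * task_grad(theta, targets[0])
```

For this symmetric task family the meta-trained initialization converges to the mean of the targets, and one inner step from there reduces the loss on either task; E-MAML's contribution is to additionally differentiate through how the pre-update policy gathers the data used for that inner step.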

