MAME: Model-Agnostic Meta-Exploration

by Swaminathan Gurumurthy, et al.
Carnegie Mellon University

Meta-reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. In such settings, developing efficient exploration strategies capable of finding the most useful samples becomes critical. Existing approaches add auxiliary objectives to promote exploration by the pre-update policy; however, this makes adaptation within a few gradient steps difficult, because the pre-update (exploration) and post-update (exploitation) policies are often quite different. Instead, we propose to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We show that using self-supervised or supervised learning objectives for adaptation allows for more efficient inner-loop updates, and we demonstrate the superior performance of our model compared to prior work in this domain.
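The core idea in the abstract, a separate learnable exploration policy whose samples feed a supervised inner-loop adaptation, with an outer loop that meta-trains both the exploitation policy's initialisation and the exploration parameters on post-update performance, can be sketched on a toy 1-D regression "task distribution". Everything below (the fixed probe points, the single exploration scale parameter, the finite-difference meta-gradient) is an illustrative assumption for the sketch, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed probe directions; the learned exploration scale decides how far to probe.
BASE_POINTS = np.array([-1.5, -0.5, 0.5, 1.5])
X_TEST = np.linspace(-1.0, 1.0, 5)  # held-out points for post-update evaluation


def sample_task():
    """Each task is a slope w for the regression target y = w * x."""
    return rng.uniform(-2.0, 2.0)


def inner_grad(theta, w, xs):
    """Gradient of the supervised adaptation loss mean((theta*x - w*x)^2)."""
    return np.mean(2.0 * (theta * xs - w * xs) * xs)


def adapt(theta, w, explore_scale, alpha=0.1, steps=5):
    """Inner loop: the exploration policy chooses where to sample, then the
    exploitation policy takes a few supervised gradient steps on those samples."""
    xs = explore_scale * BASE_POINTS  # exploration policy (deterministic sketch)
    for _ in range(steps):
        theta = theta - alpha * inner_grad(theta, w, xs)
    return theta


def meta_objective(theta0, explore_scale, tasks):
    """Post-update (exploitation) loss, averaged over tasks."""
    losses = []
    for w in tasks:
        theta_adapted = adapt(theta0, w, explore_scale)
        losses.append(np.mean((theta_adapted * X_TEST - w * X_TEST) ** 2))
    return float(np.mean(losses))


def train(theta0=0.0, scale=0.3, lr=0.05, iters=50, eps=1e-3):
    """Outer loop: finite-difference meta-gradients update BOTH the policy
    initialisation theta0 and the exploration parameter `scale`."""
    tasks = [sample_task() for _ in range(10)]
    loss_before = meta_objective(theta0, scale, tasks)
    for _ in range(iters):
        g_scale = (meta_objective(theta0, scale + eps, tasks)
                   - meta_objective(theta0, scale - eps, tasks)) / (2 * eps)
        g_theta = (meta_objective(theta0 + eps, scale, tasks)
                   - meta_objective(theta0 - eps, scale, tasks)) / (2 * eps)
        scale -= lr * g_scale
        theta0 -= lr * g_theta
    return theta0, scale, loss_before, meta_objective(theta0, scale, tasks)
```

Because timid exploration (a small scale) yields poorly conditioned inner updates, meta-training pushes the exploration scale up until a few inner gradient steps suffice, which is the decoupling of exploration and exploitation the abstract argues for.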




Related research:

- Meta-Reinforcement Learning of Structured Exploration Strategies
- Learn to Effectively Explore in Context-Based Meta-RL
- Learning Efficient and Effective Exploration Policies with Counterfactual Meta Policy
- Meta Reinforcement Learning with Distribution of Exploration Parameters Learned by Evolution Strategies
- Meta Reinforcement Learning for Sim-to-real Domain Adaptation
- Learning Exploration Strategies to Solve Real-World Marble Runs
- Efficient Exploration via State Marginal Matching
