Adversarial Intrinsic Motivation for Reinforcement Learning

05/27/2021
by Ishan Durugkar, et al.

Learning with an objective to minimize the mismatch with a reference distribution has been shown to be useful for generative modeling and imitation learning. In this paper, we investigate whether one such objective, the Wasserstein-1 distance between a policy's state visitation distribution and a target distribution, can be utilized effectively for reinforcement learning (RL) tasks. Specifically, this paper focuses on goal-conditioned reinforcement learning where the idealized (unachievable) target distribution has full measure at the goal. We introduce a quasimetric specific to Markov Decision Processes (MDPs), and show that the policy that minimizes the Wasserstein-1 distance of its state visitation distribution to this target distribution under this quasimetric is the policy that reaches the goal in as few steps as possible. Our approach, termed Adversarial Intrinsic Motivation (AIM), estimates this Wasserstein-1 distance through its dual objective and uses it to compute a supplemental reward function. Our experiments show that this reward function changes smoothly with respect to transitions in the MDP and assists the agent in learning. Additionally, we combine AIM with Hindsight Experience Replay (HER) and show that the resulting algorithm accelerates learning significantly on several simulated robotics tasks when compared to HER with a sparse positive reward at the goal state.
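To make the dual objective concrete: by Kantorovich-Rubinstein duality, W_1(rho_pi, rho_g) = sup over 1-Lipschitz f of E_{s ~ rho_g}[f(s)] - E_{s ~ rho_pi}[f(s)], and because the target distribution places all its mass at the goal, the first term reduces to f(g). The sketch below is a minimal, illustrative PyTorch rendering of this idea, not the authors' implementation: PotentialNet, potential_loss, and aim_reward are hypothetical names, goals are assumed to live in the same space as states, and a soft per-transition penalty stands in for the paper's quasimetric Lipschitz constraint.

import torch
import torch.nn as nn

class PotentialNet(nn.Module):
    # Scalar potential f(s, g) over state-goal pairs (illustrative name).
    def __init__(self, state_dim, goal_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1)).squeeze(-1)

def potential_loss(f, s, s_next, g, penalty_weight=10.0):
    # Dual objective: raise f at the goal, lower it on visited states.
    # The target distribution is a point mass at g, so its expectation is
    # f(g, g) (assumes goals share the state space). The penalty softly
    # enforces |f(s') - f(s)| <= 1 on observed transitions, a stand-in for
    # the Lipschitz constraint under the paper's MDP quasimetric.
    dual_value = f(g, g).mean() - f(s_next, g).mean()
    lipschitz = torch.relu((f(s_next, g) - f(s, g)).abs() - 1.0).pow(2).mean()
    return -dual_value + penalty_weight * lipschitz  # minimize = ascend the dual

def aim_reward(f, s, s_next, g):
    # Supplemental reward: increase in potential along the transition.
    with torch.no_grad():
        return f(s_next, g) - f(s, g)

In an off-policy loop, the potential would be updated on minibatches of replayed transitions and aim_reward added to the environment reward before each policy update; when combined with HER, relabeled goals simply pass through the g argument.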

Related Research

06/05/2020: Wasserstein Distance guided Adversarial Imitation Learning with Reward Shape Exploration
The generative adversarial imitation learning (GAIL) has provided an adv...

02/04/2021: Hybrid Adversarial Inverse Reinforcement Learning
In this paper, we investigate the problem of the inverse reinforcement l...

04/11/2021: Learn Goal-Conditioned Policy with Intrinsic Motivation for Deep Reinforcement Learning
It is of significance for an agent to learn a widely applicable and gene...

07/16/2023: Magnetic Field-Based Reward Shaping for Goal-Conditioned Reinforcement Learning
Goal-conditioned reinforcement learning (RL) is an interesting extension...

09/02/2022: TarGF: Learning Target Gradient Field for Object Rearrangement
Object Rearrangement is to move objects from an initial state to a goal ...

06/04/2020: Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion
We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO...

09/26/2022: Understanding Hindsight Goal Relabeling Requires Rethinking Divergence Minimization
Hindsight goal relabeling has become a foundational technique for multi-...
