Rethinking Learning Dynamics in RL using Adversarial Networks

01/27/2022
by   Ramnath Kumar, et al.

We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill-embedding space. Our approach is grounded in the intuition that nothing makes you learn better than a coevolving adversary. The main contribution of our work is an adversarial training regime for reinforcement learning built on an entropy-regularized policy gradient formulation. We also adapt existing measures of causal attribution to draw insights from the learned skills. Our experiments demonstrate that the adversarial process leads to better exploration of multiple solutions and to an understanding of the minimum number of distinct skills necessary to solve a given set of tasks.
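To make the entropy-regularized policy gradient component concrete, here is a minimal sketch of the gradient for a single softmax policy update. This is an illustrative reconstruction, not the authors' implementation: the function name, the single-sample setting, and the entropy weight `beta` are assumptions. The entropy bonus term is what encourages the exploration of multiple solutions that the abstract refers to.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_regularized_pg(logits, action, advantage, beta=0.01):
    """Gradient (w.r.t. the logits) of the single-sample objective
        J = A * log pi(a|s) + beta * H(pi),
    where H(pi) = -sum_i p_i log p_i is the policy entropy.
    A hypothetical helper for illustration only."""
    probs = softmax(logits)
    # d/dlogits of log pi(a): one-hot(a) - probs (softmax log-likelihood gradient)
    one_hot = np.zeros_like(probs)
    one_hot[action] = 1.0
    grad_logpi = one_hot - probs
    # d/dlogits of the entropy: dH/dl_j = -p_j * (log p_j + H),
    # obtained from dp_i/dl_j = p_i * (delta_ij - p_j)
    entropy = -np.sum(probs * np.log(probs))
    grad_entropy = -probs * (np.log(probs) + entropy)
    return advantage * grad_logpi + beta * grad_entropy
```

Note that at a uniform policy the entropy term is already maximal, so its gradient vanishes and only the advantage-weighted likelihood term drives the update; an adversary that keeps shifting the advantages would keep this term active throughout training.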


Related Research

06/11/2015  Bootstrapping Skills
The monolithic approach to policy representation in Markov Decision Proc...

10/21/2019  Adversarial Skill Networks: Unsupervised Robot Skill Learning from Video
Key challenges for the deployment of reinforcement learning (RL) agents ...

05/10/2021  Adaptive Policy Transfer in Reinforcement Learning
Efficient and robust policy transfer remains a key challenge for reinfor...

06/13/2019  Sub-policy Adaptation for Hierarchical Reinforcement Learning
Hierarchical Reinforcement Learning is a promising approach to long-hori...

09/24/2022  Accelerating Reinforcement Learning for Autonomous Driving using Task-Agnostic and Ego-Centric Motion Skills
Efficient and effective exploration in continuous space is a central pro...

11/20/2018  Model Learning for Look-ahead Exploration in Continuous Control
We propose an exploration method that incorporates look-ahead search ove...

06/27/2012  Learning Parameterized Skills
We introduce a method for constructing skills capable of solving tasks d...
