Adversarial Exploration Strategy for Self-Supervised Imitation Learning

by Zhang-Wei Hong, et al.

We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, and its objective is to maximize the error of the latter. The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former. In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples. We further propose a reward structure that ensures the DRL agent collects only moderately hard samples, rather than overly hard ones that would prevent the inverse dynamics model from imitating effectively. We evaluate our method on several OpenAI Gym robotic arm and hand manipulation tasks against a number of baseline models. Experimental results show that our method is comparable to an agent trained directly with expert demonstrations, and superior to the other baselines, even without any human priors.
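The core mechanism above can be sketched in a few lines: the agent's reward is the inverse dynamics model's prediction error, zeroed out beyond a threshold so only moderately hard samples are rewarded, while the inverse model is trained on the samples the agent generates. The sketch below is a minimal toy illustration, not the paper's implementation: the environment, the linear inverse model, the random exploratory policy, and the threshold value `delta` are all hypothetical stand-ins.

```python
import numpy as np

def adversarial_reward(action, predicted_action, delta=0.5):
    """Agent's reward: the inverse dynamics model's squared prediction
    error, zeroed when the sample is overly hard (error >= delta), so
    the inverse model is only fed samples it can still learn from."""
    error = float(np.sum((np.asarray(action) - np.asarray(predicted_action)) ** 2))
    return error if error < delta else 0.0

# Toy 1-D environment (hypothetical): next_state = state + action.
# A linear inverse model a_hat = w . [s, s'] can represent the true
# inverse dynamics a = s' - s exactly, with w = [-1, 1].
rng = np.random.default_rng(0)
w = np.zeros(2)

for step in range(2000):
    s = rng.uniform(-1, 1)
    a = rng.uniform(-1, 1)              # random stand-in for the DRL policy
    s_next = s + a                      # environment transition
    a_hat = w @ np.array([s, s_next])   # inverse model's action prediction
    r = adversarial_reward(a, a_hat)    # reward the agent would receive
    # Inverse model: SGD step on the squared prediction error.
    grad = 2.0 * (a_hat - a) * np.array([s, s_next])
    w -= 0.05 * grad

print(np.round(w, 2))  # approaches [-1, 1]
```

In the full method, the random action above is replaced by a DRL policy trained to maximize `adversarial_reward`, which is what drives the agent toward states the inverse model has not yet mastered; the clipping at `delta` is what keeps that pressure from producing unlearnable samples.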


Reinforced Imitation Learning by Free Energy Principle

Reinforcement Learning (RL) requires a large amount of exploration espec...

Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps

Human learning and intelligence work differently from the supervised pat...

Saliency Prediction on Omnidirectional Images with Generative Adversarial Imitation Learning

When watching omnidirectional images (ODIs), subjects can access differe...

Playing hard exploration games by watching YouTube

Deep reinforcement learning methods traditionally struggle with tasks wh...

Cut-and-Approximate: 3D Shape Reconstruction from Planar Cross-sections with Deep Reinforcement Learning

Current methods for 3D object reconstruction from a set of planar cross-...

Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation

In recent years, domains such as natural language processing and image r...
