Imitation Learning with Recurrent Neural Networks

07/18/2016
by Khanh Nguyen, et al.

We present a novel view that unifies two frameworks for solving sequential prediction problems: learning to search (L2S) and recurrent neural networks (RNN). We point out equivalences between elements of the two frameworks. By complementing what is missing from one framework compared to the other, we introduce a more advanced imitation learning framework that, on one hand, augments L2S's notion of search space and, on the other hand, enhances RNNs' training procedure to be more robust to compounding errors arising from training on highly correlated examples.
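To make the connection concrete, the sketch below shows one way an imitation-style training step for an RNN decoder might look, where the input at each time step is drawn either from the expert (the gold previous token) or from the model's own prediction. This is only an illustrative assumption in the spirit of L2S/DAgger-style roll-ins and scheduled sampling, not the authors' exact algorithm; the `Seq2SeqDecoder` model and `imitation_training_step` helper are hypothetical names, and PyTorch is assumed.

```python
import random
import torch
import torch.nn as nn

class Seq2SeqDecoder(nn.Module):
    """Toy RNN decoder used to illustrate mixed oracle/model roll-ins."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cell = nn.GRUCell(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def step(self, prev_token, hidden):
        # One decoding step: embed the previous token, update the state,
        # and produce logits over the vocabulary.
        hidden = self.cell(self.embed(prev_token), hidden)
        return self.out(hidden), hidden

def imitation_training_step(model, optimizer, target_seq, hidden, oracle_prob=0.5):
    """One training step in which each input token comes either from the
    oracle (teacher forcing) or from the model's own prediction, exposing
    the learner to its own state distribution and thereby mitigating
    compounding errors (hypothetical sketch, not the paper's exact method)."""
    loss_fn = nn.CrossEntropyLoss()
    loss = 0.0
    prev = target_seq[:, 0]  # assume position 0 holds a start token
    for t in range(1, target_seq.size(1)):
        logits, hidden = model.step(prev, hidden)
        loss = loss + loss_fn(logits, target_seq[:, t])
        if random.random() < oracle_prob:
            prev = target_seq[:, t]                # follow the expert trajectory
        else:
            prev = logits.argmax(dim=-1).detach()  # follow the learned policy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Annealing `oracle_prob` from 1.0 toward 0.0 over training would gradually shift the learner from expert roll-ins to its own roll-ins, which is one common way such schemes trade off stability against robustness to its own mistakes.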


