Imitating Latent Policies from Observation

05/21/2018
by Ashley D. Edwards, et al.

We describe a novel approach to imitation learning that infers latent policies directly from state observations. We introduce a method that characterizes the causal effects of unknown actions on observations while simultaneously predicting their likelihood. We then outline an action alignment procedure that leverages a small number of environment interactions to determine a mapping between latent and real-world actions. We show that this corrected labeling can be used for imitating the observed behavior, even though no expert actions are given. We evaluate our approach within classic control and photo-realistic visual environments and demonstrate that it performs well when compared to standard approaches.
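The three-stage pipeline in the abstract (learn the effects of unknown latent actions from state-only demonstrations, learn a latent policy over them, then align latent actions to real ones with a few environment interactions) can be illustrated with a deliberately simplified sketch. Everything below is an illustrative assumption, not the paper's actual architecture: a toy 2-D patrol environment, a k-means-style clustering of transition effects in place of the learned latent forward model, and a nearest-demo-state lookup in place of the learned latent policy network.

```python
import numpy as np

# Toy gridworld (illustrative, not from the paper): 2-D states and four
# real actions that each move one unit along an axis.
EFFECTS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

def expert_demo(side=4, laps=3):
    """State-only demonstration: the expert patrols a square perimeter,
    so all four action effects appear, but no action labels are recorded."""
    seq = [0] * side + [2] * side + [1] * side + [3] * side  # right, up, left, down
    s = np.zeros(2)
    states = [s.copy()]
    for a in seq * laps:
        s = s + EFFECTS[a]
        states.append(s.copy())
    return np.array(states)

demo = expert_demo()
deltas = demo[1:] - demo[:-1]          # observed effects of unknown expert actions

# Stage 1 -- latent dynamics: cluster transition effects into K latent actions
# (a k-means stand-in for the paper's learned forward model).
K = 4
centers = np.unique(deltas, axis=0)    # deterministic toy: exactly K distinct effects
for _ in range(10):
    assign = np.argmin(((deltas[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([deltas[assign == z].mean(0) for z in range(K)])

# Stage 2 -- latent policy: return the latent action the expert took at the
# nearest demonstrated state (a lookup stand-in for a learned classifier).
def latent_policy(s):
    i = np.argmin(((demo[:-1] - s) ** 2).sum(-1))
    return int(assign[i])

# Stage 3 -- action alignment: a handful of environment interactions with
# known real actions map each latent action back to a real one.
latent_to_real = {}
for a, effect in enumerate(EFFECTS):
    s = np.array([7.0, -3.0])          # arbitrary probe state
    s_next = s + effect                # one real environment step
    z = int(np.argmin(((centers - (s_next - s)) ** 2).sum(-1)))
    latent_to_real[z] = a

def imitate(s):
    """Real-action policy recovered without any expert action labels."""
    return latent_to_real[latent_policy(s)]
```

In the paper the dynamics model and latent policy are neural networks trained jointly from pixels; the sketch only shows why a small number of labeled interactions suffices for alignment: each latent action's predicted effect is matched against the effect of one real action.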

Related research

07/08/2021 · Imitation by Predicting Observations
Imitation learning enables agents to reuse and adapt the hard-won expert...

05/22/2022 · Chain of Thought Imitation with Procedure Cloning
Imitation learning aims to extract high-performance policies from logged...

08/20/2023 · Karma: Adaptive Video Streaming via Causal Sequence Modeling
Optimal adaptive bitrate (ABR) decision depends on a comprehensive chara...

07/29/2023 · Initial State Interventions for Deconfounded Imitation Learning
Imitation learning suffers from causal confusion. This phenomenon occurs...

05/28/2019 · Causal Confusion in Imitation Learning
Behavioral cloning reduces policy learning to supervised learning by tra...

01/16/2013 · Building a Stochastic Dynamic Model of Application Use
Many intelligent user interfaces employ application and user models to d...

11/10/2018 · Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction
We propose an approach for mapping natural language instructions and raw...
