
Provable Representation Learning for Imitation Learning via Bi-level Optimization

by Sanjeev Arora, et al.

A common strategy in modern learning systems is to learn a representation that is useful for many tasks, a.k.a. representation learning. We study this strategy in the imitation learning setting for Markov decision processes (MDPs) where multiple experts' trajectories are available. We formulate representation learning as a bi-level optimization problem, where the "outer" optimization tries to learn the joint representation and the "inner" optimization encodes the imitation learning setup and tries to learn task-specific parameters. We instantiate this framework for two imitation learning settings: behavior cloning and learning from observations alone. Theoretically, we show using our framework that representation learning can provide sample-complexity benefits for imitation learning in both settings. We also provide proof-of-concept experiments to verify our theory.
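The outer/inner structure described above can be sketched in a toy form. The code below is a minimal illustration, not the paper's algorithm: it assumes a linear shared representation, reduces behavior cloning to least-squares regression on expert actions, and solves the bi-level problem by alternating the inner fit (per-task heads) with the outer fit (shared representation). All variable names and the synthetic data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-task behavior-cloning data: T tasks share a true
# k-dimensional representation B_true; each task t has its own head w_t,
# so its expert policy is theta_t = B_true @ w_t.
d, k, T, n = 20, 3, 5, 200
B_true = rng.normal(size=(d, k))
W_true = rng.normal(size=(k, T))
X = [rng.normal(size=(n, d)) for t in range(T)]        # expert states
Y = [X[t] @ B_true @ W_true[:, t] for t in range(T)]   # expert actions

B = rng.normal(size=(d, k))  # shared representation ("outer" variable)

for it in range(100):
    # Inner problem: given the representation B, each task's head is a
    # least-squares fit of expert actions (behavior cloning as regression).
    Ws = [np.linalg.lstsq(X[t] @ B, Y[t], rcond=None)[0] for t in range(T)]
    # Outer problem: refit the shared B against all tasks jointly.
    # X_t @ B @ w_t is linear in B.ravel() with coefficients kron(X_t, w_t).
    M = np.vstack([np.kron(X[t], Ws[t]) for t in range(T)])
    B = np.linalg.lstsq(M, np.concatenate(Y), rcond=None)[0].reshape(d, k)

# Pooled imitation loss across tasks after alternating optimization.
loss = sum(np.mean((X[t] @ B @ Ws[t] - Y[t]) ** 2) for t in range(T)) / T
```

Because the data are noiseless and realizable, the alternating scheme drives the pooled loss toward zero; the sample-complexity benefit the paper analyzes comes from the T tasks sharing the estimation burden of B, while each inner problem only fits k parameters.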


An Empirical Investigation of Representation Learning for Imitation

Imitation learning often needs a large demonstration set in order to han...

Provable Representation Learning for Imitation with Contrastive Fourier Features

In imitation learning, it is common to learn a behavior policy to match ...

Provably Efficient Third-Person Imitation from Offline Observation

Domain adaptation in imitation learning represents an essential step tow...

Travel the Same Path: A Novel TSP Solving Strategy

In this paper, we provide a novel strategy for solving Traveling Salesma...

Learning Contracting Vector Fields For Stable Imitation Learning

We propose a new non-parametric framework for learning incrementally sta...

Domain-Adversarial and -Conditional State Space Model for Imitation Learning

State representation learning (SRL) in partially observable Markov decis...

Learning Belief Representations for Imitation Learning in POMDPs

We consider the problem of imitation learning from expert demonstrations...