Case-Based Inverse Reinforcement Learning Using Temporal Coherence

by Jonas Nüßlein, et al.

Providing expert trajectories in the context of Imitation Learning is often expensive and time-consuming. The goal must therefore be to create algorithms that require as little expert data as possible. In this paper we present an algorithm that imitates the expert's higher-level strategy rather than just imitating the expert at the action level, which we hypothesize requires less expert data and makes training more stable. As a prior, we assume that the higher-level strategy is to reach an unknown target state area, which we hypothesize is a valid prior for many domains in Reinforcement Learning. The target state area is unknown, but since the expert has demonstrated how to reach it, the agent tries to reach states similar to those of the expert. Building on the idea of Temporal Coherence, our algorithm trains a neural network to predict whether two states are similar, in the sense that they may occur close in time. During inference, the agent compares its current state for similarity with expert states stored in a Case Base. The results show that our approach can still learn a near-optimal policy in settings with very little expert data, where algorithms that try to imitate the expert at the action level can no longer do so.
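The core mechanism described above can be sketched in a few lines. This is a minimal illustration under simplifying assumptions, not the authors' implementation: it uses low-dimensional vector states and replaces the paper's neural network with a simple logistic model on pair features. All names here (`features`, `train_similarity`, `case_based_reward`) are hypothetical. State pairs sampled close in time from the expert trajectory serve as positive "similar" examples, distant pairs as negatives, and the learned similarity to the nearest expert state in the Case Base then acts as a reward signal for the agent.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(s1, s2):
    # Pair representation: element-wise absolute difference of the two states.
    return np.abs(s1 - s2)

def train_similarity(trajectory, n_pairs=2000, window=3, lr=0.1, epochs=200):
    """Train a logistic model to predict whether two states from the expert
    trajectory occur close in time (Temporal Coherence)."""
    T, d = trajectory.shape
    X, y = [], []
    for _ in range(n_pairs):
        i = rng.integers(0, T)
        if rng.random() < 0.5:
            # Positive pair: second state within `window` time steps.
            j = int(np.clip(i + rng.integers(-window, window + 1), 0, T - 1))
        else:
            # Random pair; label by its actual temporal distance.
            j = int(rng.integers(0, T))
        X.append(features(trajectory[i], trajectory[j]))
        y.append(1.0 if abs(i - j) <= window else 0.0)
    X, y = np.array(X), np.array(y)

    # Plain gradient descent on the logistic (cross-entropy) loss.
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def similarity(s1, s2, w, b):
    # Predicted probability that s1 and s2 could occur close in time.
    return 1.0 / (1.0 + np.exp(-(features(s1, s2) @ w + b)))

def case_based_reward(state, case_base, w, b):
    # Reward = highest similarity to any expert state in the Case Base.
    return max(similarity(state, s, w, b) for s in case_base)
```

As a toy check, with an expert trajectory moving steadily through a 1-D state space, a state near the expert's visited states should receive a higher case-based reward than a state far outside them; a real application would use a neural similarity network and the full training scheme from the paper.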



