Inspiration Learning through Preferences

by Nir Baram et al.

Current imitation learning techniques are restrictive because they require the agent and the expert to share the same action space. However, agents that act differently from the expert can often solve the task just as well. For example, a person lifting a box can be imitated by a ceiling-mounted robot or by a desktop robotic arm. In both cases the end goal of lifting the box is achieved, possibly through different strategies. We denote this setup Inspiration Learning: knowledge transfer between agents that operate in different action spaces. Since state-action expert demonstrations can no longer be used directly, Inspiration Learning requires novel methods to guide the agent toward the end goal. In this work, we rely on ideas from Preference-based Reinforcement Learning (PbRL) to design Advantage Actor-Critic algorithms for solving Inspiration Learning tasks. Unlike classic actor-critic architectures, the critic we use consists of two parts: (a) a state-value estimate, as in common actor-critic algorithms, and (b) a single-step reward function derived from an expert/agent classifier. We show that our method extends the current imitation framework to new settings, including continuous-to-discrete action imitation as well as primitive-to-macro action imitation.
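The two-part critic described above can be sketched as follows. This is a minimal, hypothetical illustration (function names, shapes, and the exact reward form are assumptions, not the paper's implementation): the classifier output D(s) in (0, 1) estimates whether a state came from the expert, the derived log-ratio reward rewards expert-like states, and the one-step advantage combines that reward with a learned state-value estimate.

```python
import numpy as np

def classifier_reward(d_prob, eps=1e-8):
    """Single-step reward from an expert/agent classifier output D in (0, 1).
    Illustrative log-ratio form: positive when the classifier believes the
    state came from the expert, negative otherwise. (Assumed form, not
    necessarily the paper's exact choice.)"""
    return np.log(d_prob + eps) - np.log(1.0 - d_prob + eps)

def one_step_advantage(d_prob, v_s, v_next, gamma=0.99):
    """One-step advantage for the actor update: the classifier-derived
    reward replaces the environment reward, while v_s and v_next come from
    the ordinary state-value part of the critic."""
    return classifier_reward(d_prob) + gamma * v_next - v_s
```

For a state the classifier finds uninformative (D = 0.5), the derived reward is zero and the advantage reduces to the usual temporal-difference term; states judged expert-like (D close to 1) receive a positive bonus, steering the agent toward expert-like outcomes even though its actions differ from the expert's.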

