Diagnosing and Augmenting Feature Representations in Correctional Inverse Reinforcement Learning

04/11/2023
by Inês Lourenço, et al.

Robots have become increasingly adept at performing tasks for humans by learning from their feedback, but they still often suffer from model misalignment due to missing or incorrectly learned features. When the features the robot needs to perform its task are missing or do not generalize well to new settings, the robot cannot learn the task the human wants and, worse, may learn a completely different and undesired behavior. Prior work shows how the robot can detect when its representation is missing a feature and then ask the human to teach it the new feature; however, these works do not differentiate between features that are completely missing and those that exist but do not generalize to new environments. In the latter case, the robot would detect misalignment and simply learn a new feature, leading to an arbitrarily growing feature representation that can, in turn, lead to spurious correlations and incorrect learning down the line. In this work, we separate the two sources of misalignment: we propose a framework for determining whether a feature the robot needs is incorrectly learned and does not generalize to new environment setups, or is entirely missing from the robot's representation. Once the source of error is detected, we show how the human can initiate the realignment process: if the feature is missing, we follow prior work on learning new features; if the feature exists but does not generalize, we use data augmentation to expand its training data and thereby complete the correction. We demonstrate the proposed approach in experiments with a simulated 7DoF robot manipulator and physical human corrections.
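
The abstract describes a two-step procedure: first diagnose whether a needed feature is misgeneralizing or entirely missing, then realign either by augmenting the feature's training data or by learning a new feature. The sketch below illustrates that control flow under assumptions of our own (a linear reward model, a least-squares residual as the misalignment signal, and placeholder `augment`/`learn_new` helpers); it is a hypothetical illustration, not the authors' implementation.

```python
# Hypothetical sketch of the diagnose-then-realign loop described in the abstract.
# The linear reward model, the residual-based misalignment test, and the helper
# names (`augment`, `learn_new`) are illustrative assumptions, not the paper's code.
import numpy as np

def residual(Phi, corrections):
    """Least-squares residual: how much of the human's correction signal the
    feature matrix Phi (rows = states, columns = features) cannot explain."""
    w, *_ = np.linalg.lstsq(Phi, corrections, rcond=None)
    return float(np.mean((Phi @ w - corrections) ** 2))

def diagnose_and_realign(Phi, corrections, augment, learn_new, tol=1e-2):
    """If the current features explain the corrections, the model is aligned.
    Otherwise, re-fit the existing features on augmented data; if that closes
    the gap the feature was misgeneralizing, else it is treated as missing and
    a new feature is learned from the human's input (as in prior work)."""
    if residual(Phi, corrections) <= tol:
        return "aligned", Phi
    Phi_aug = augment(Phi)                      # stand-in for data augmentation
    if residual(Phi_aug, corrections) <= tol:
        return "misgeneralizing", Phi_aug
    return "missing", learn_new(Phi, corrections)  # stand-in for feature learning

# Toy usage: the robot's feature has the wrong functional form (misgeneralizing),
# while the human's corrections are driven by its squared version.
rng = np.random.default_rng(0)
states = rng.uniform(0, 1, size=(50, 1))
corrections = 2.0 * states[:, 0] ** 2            # simulated human correction signal

Phi = states                                     # robot's current feature(s)
augment = lambda P: np.hstack([P, P ** 2])       # hypothetical augmentation
learn_new = lambda P, c: np.hstack([P, c[:, None]])  # hypothetical new feature

label, Phi_new = diagnose_and_realign(Phi, corrections, augment, learn_new)
print(label)  # -> "misgeneralizing" in this toy setup
```

In this toy setup, re-fitting the existing feature on augmented data already explains the corrections, so the diagnosis is "misgeneralizing"; a correction signal outside the span of the augmented features would instead trigger the "missing" branch and the call to `learn_new`.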
