Learning from Physical Human Feedback: An Object-Centric One-Shot Adaptation Method

03/09/2022
by   Alvin Shek, et al.

For robots to be effectively deployed in novel environments and tasks, they must be able to understand the feedback expressed by humans during intervention. Such feedback can either correct undesirable behavior or indicate additional preferences. Existing methods either require repeated episodes of interaction or assume prior known reward features, both of which are data-inefficient and transfer poorly to new tasks. We relax these assumptions by describing human tasks in terms of object-centric sub-tasks and interpreting physical interventions in relation to specific objects. Our method, Object Preference Adaptation (OPA), is composed of two key stages: 1) pre-training a base policy to produce a wide variety of behaviors, and 2) online-updating only certain weights in the model according to human feedback. The key to our fast yet simple adaptation is that the general interaction dynamics between agents and objects are fixed, and only object-specific preferences are updated. Our adaptation occurs online, requires only one human intervention (one-shot), and produces new behaviors never seen during training. Trained on cheap synthetic data instead of expensive human demonstrations, our policy demonstrates impressive adaptation to human perturbations on challenging, realistic tasks in our user study. Videos, code, and supplementary material are provided.
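The two-stage idea in the abstract, fixing shared agent-object interaction dynamics and updating only object-specific preference weights from a single correction, can be illustrated with a minimal sketch. This is not the authors' implementation: the class, its parameter split, and the normalized gradient update below are all assumptions made for illustration.

```python
import numpy as np

class ObjectCentricPolicy:
    """Hypothetical sketch: a policy whose action is a preference-weighted
    sum of per-object interaction features. W_dyn plays the role of the
    pre-trained, frozen interaction dynamics; `pref` holds the
    object-specific preference weights that adaptation may change."""

    def __init__(self, n_objects, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_dyn = rng.normal(size=(dim, dim))  # frozen after pre-training
        self.pref = np.zeros((n_objects, dim))    # per-object preferences

    def action(self, obj_feats):
        # obj_feats: (n_objects, dim) features of each object w.r.t. the agent
        per_obj = obj_feats @ self.W_dyn.T        # shared interaction dynamics
        return (self.pref * per_obj).sum(axis=0)  # preference-weighted sum

    def adapt(self, obj_feats, corrected_action, lr=0.5, steps=20):
        """One-shot adaptation: regress only `pref` so the policy reproduces
        the human-corrected action; W_dyn is never touched."""
        per_obj = obj_feats @ self.W_dyn.T        # fixed during adaptation
        scale = np.sum(per_obj**2, axis=0) + 1e-8  # per-dim normalizer
        for _ in range(steps):
            err = (self.pref * per_obj).sum(axis=0) - corrected_action
            # normalized gradient step on the preference weights only
            self.pref -= lr * per_obj * err / scale
```

Because the dynamics term is held fixed, fitting the preferences is a small linear regression that converges in a handful of steps, which mirrors why updating only object-specific weights can be fast enough for online, one-shot use.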


Related research

07/07/2022 - Unified Learning from Demonstrations, Corrections, and Preferences during Physical Human-Robot Interaction
Humans can leverage physical interaction to teach robot arms. This physi...

11/20/2022 - Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation
Learning new task-specific skills from a few trials is a fundamental cha...

02/27/2023 - Active Reward Learning from Online Preferences
Robot policies need to adapt to human preferences and/or new environment...

03/10/2022 - PLATO: Predicting Latent Affordances Through Object-Centric Play
Constructing a diverse repertoire of manipulation skills in a scalable f...

03/14/2021 - Meta Preference Learning for Fast User Adaptation in Human-Supervisory Multi-Robot Deployments
As multi-robot systems (MRS) are widely used in various tasks such as na...

07/12/2023 - Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation
Policies often fail due to distribution shift - changes in the state and...

02/05/2022 - ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning
Building assistive interfaces for controlling robots through arbitrary, ...
