Towards Online Learning from Corrective Demonstrations

10/02/2018
by   Reymundo A. Gutierrez, et al.

Robots operating in real-world human environments will likely encounter task execution failures. To address this, we would like to allow co-present humans to refine the robot's task model as errors are encountered. Existing approaches to task model modification require reasoning over the entire dataset and model, limiting the rate of corrective updates. We introduce the State-Indexed Task Updates (SITU) algorithm to efficiently incorporate corrective demonstrations into an existing task model by iteratively making local updates that only require reasoning over a small subset of the model. In future work, we will evaluate this approach with a user study.
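The core idea in the abstract, that corrective updates touch only a small, state-indexed subset of the task model rather than the whole dataset, can be sketched in Python. This is a minimal illustrative sketch, not the paper's actual SITU formulation: the discretization, the dictionary-based model, and the incremental-mean update rule are all assumptions introduced here.

```python
# Hypothetical sketch of a state-indexed task model: corrective
# demonstrations only modify the entries for states they visit,
# so an update's cost scales with the demonstration length,
# not with the size of the full dataset or model.
from collections import defaultdict

def state_index(state, resolution=1.0):
    """Discretize a continuous state into a hashable index (assumption)."""
    return tuple(round(s / resolution) for s in state)

class StateIndexedTaskModel:
    def __init__(self):
        self.actions = {}              # index -> running mean of actions
        self.counts = defaultdict(int) # index -> number of samples seen

    def update(self, demonstration):
        """Incorporate one corrective demonstration of (state, action) pairs.

        Only the locally indexed entries are touched; the rest of the
        model is left as-is, which is the property the abstract highlights.
        """
        for state, action in demonstration:
            idx = state_index(state)
            n = self.counts[idx]
            prev = self.actions.get(idx, 0.0)
            # Incremental mean: an O(1) local update per visited state.
            self.actions[idx] = (prev * n + action) / (n + 1)
            self.counts[idx] += 1

    def predict(self, state):
        """Return the model's action for this state, or None if unseen."""
        return self.actions.get(state_index(state))
```

For example, after an initial demonstration a corrective one that revisits only one state changes only that state's entry, leaving the rest of the model untouched.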
