Learning Online from Corrective Feedback: A Meta-Algorithm for Robotics

04/02/2021
by Matthew Schmittle, et al.

A key challenge in Imitation Learning (IL) is that optimal state-action demonstrations are difficult for the teacher to provide. For example, in robotics, providing kinesthetic demonstrations on a robotic manipulator requires the teacher to control multiple degrees of freedom at once. This requirement of optimal state-action demonstrations limits the space of problems where the teacher can provide quality feedback. As an alternative, the teacher can provide corrective feedback such as preferences or rewards. Prior work has designed algorithms that learn from specific types of noisy feedback, but across teachers and tasks different forms of feedback may be required. We propose instead that learning across a diversity of scenarios requires learning from a variety of feedback. Our key insight is that the teacher's cost function is latent, so a stream of feedback can be modeled as a stream of loss functions; any online learning algorithm can then be used to minimize the sum of these losses. With this insight, we can learn from diverse feedback that is only weakly correlated with the teacher's true cost function. We unify prior work into a general corrective feedback meta-algorithm and show that, regardless of the feedback type, we obtain the same regret bounds. We demonstrate our approach by learning to perform a household navigation task on a robotic racecar platform. Our results show that our approach learns quickly from a variety of noisy feedback.
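The reduction described in the abstract is simple enough to sketch. Below is a minimal illustration (not the authors' released code) of mapping heterogeneous feedback onto an online loss stream: pairwise preferences and scalar rewards are each converted into a convex loss on a linear cost model, and online gradient descent serves as the no-regret learner. All function names, feedback encodings, and step sizes here are illustrative assumptions.

```python
import numpy as np

def preference_grad(theta, phi_better, phi_worse):
    # Hinge loss on a pairwise preference: the preferred trajectory's
    # features should score at least as well under theta as the
    # rejected trajectory's features.
    violated = theta @ phi_worse - theta @ phi_better > 0.0
    return (phi_worse - phi_better) if violated else np.zeros_like(theta)

def scalar_reward_grad(theta, phi, reward):
    # Squared loss tying theta's score for a trajectory to a noisy
    # scalar reward label from the teacher.
    return 2.0 * (theta @ phi - reward) * phi

def learn_from_feedback(feedback_stream, dim, lr=0.5):
    """Online gradient descent over the induced loss stream.

    feedback_stream yields ('pref', phi_better, phi_worse) or
    ('reward', phi, r); any no-regret online learner could be
    substituted for the update below.
    """
    theta = np.zeros(dim)
    for t, event in enumerate(feedback_stream, start=1):
        if event[0] == "pref":
            g = preference_grad(theta, event[1], event[2])
        else:
            g = scalar_reward_grad(theta, event[1], event[2])
        theta -= (lr / np.sqrt(t)) * g  # decaying steps give O(sqrt(T)) regret
    return theta

# Example: a mixed stream of preference and reward feedback over
# hypothetical 3-D trajectory features.
events = [
    ("pref", np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
    ("reward", np.array([0.5, 0.5, 0.0]), 0.4),
    ("pref", np.array([1.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])),
]
theta = learn_from_feedback(events, dim=3)
```

Because each feedback type only needs to induce a loss whose minimizer correlates with the teacher's latent cost, new feedback modalities can be added by defining one more loss, without changing the learner.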

Related research

Watch, Try, Learn: Meta-Learning from Demonstrations and Reward (06/07/2019)
Imitation learning allows agents to learn complex behaviors from demonst...

Bridging Action Space Mismatch in Learning from Demonstrations (04/07/2023)
Learning from demonstrations (LfD) methods guide learning agents to a de...

The Expertise Problem: Learning from Specialized Feedback (11/12/2022)
Reinforcement learning from human feedback (RLHF) is a powerful techniqu...

Learning Preferences for Manipulation Tasks from Online Coactive Feedback (01/05/2016)
We consider the problem of learning preferences over trajectories for mo...

Yes, this Way! Learning to Ground Referring Expressions into Actions with Intra-episodic Feedback from Supportive Teachers (05/22/2023)
The ability to pick up on language signals in an ongoing interaction is ...

Learning Manner of Execution from Partial Corrections (02/07/2023)
Some actions must be executed in different ways depending on the context...

Combined Task and Action Learning from Human Demonstrations for Mobile Manipulation Applications (08/25/2019)
Learning from demonstrations is a promising paradigm for transferring kn...
