StROL: Stabilized and Robust Online Learning from Humans

08/19/2023
by Shaunak A. Mehta, et al.

Today's robots can learn the human's reward function online, during the current interaction. This real-time learning requires fast but approximate learning rules; when the human's behavior is noisy or suboptimal, today's approximations can result in unstable robot learning. Accordingly, in this paper we seek to enhance the robustness and convergence properties of gradient descent learning rules when inferring the human's reward parameters. We model the robot's learning algorithm as a dynamical system over the human preference parameters, where the human's true (but unknown) preferences are the equilibrium point. This enables us to perform Lyapunov stability analysis to derive the conditions under which the robot's learning dynamics converge. Our proposed algorithm (StROL) takes advantage of these stability conditions offline to modify the original learning dynamics: we introduce a corrective term that expands the basins of attraction around likely human rewards. In practice, our modified learning rule can correctly infer what the human is trying to convey, even when the human is noisy, biased, and suboptimal. Across simulations and a user study we find that StROL results in a more accurate estimate and less regret than state-of-the-art approaches for online reward learning. See videos here: https://youtu.be/uDGpkvJnY8g
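For intuition, the sketch below illustrates the general idea in Python. It is not the paper's exact formulation: the linear reward model, the learning rate, the set of likely_rewards, and the form of the corrective term are all illustrative assumptions. The nominal online update is an approximate gradient step on the reward parameters; a corrective term (chosen offline in the spirit of StROL's stability analysis) pulls the estimate toward likely human rewards, enlarging their basins of attraction so that noisy or suboptimal human input does not destabilize learning.

    import numpy as np

    # Illustrative sketch only: online reward learning viewed as a
    # discrete-time dynamical system over the human's preference
    # parameters theta. The true preference theta_star should be an
    # equilibrium point of the learning dynamics.

    def nominal_update(theta, features, human_feedback, lr=0.1):
        """One approximate gradient step: move theta toward explaining
        the observed human feedback (placeholder linear-reward model)."""
        predicted = features @ theta
        error = human_feedback - predicted
        return theta + lr * error * features

    def corrective_term(theta, likely_rewards, strength=0.05):
        """Hypothetical corrective term: pull theta toward the nearest
        likely reward, expanding that reward's basin of attraction."""
        nearest = min(likely_rewards, key=lambda r: np.linalg.norm(theta - r))
        return strength * (nearest - theta)

    def modified_update(theta, features, human_feedback, likely_rewards):
        """Modified learning dynamics: nominal gradient step plus correction."""
        step = nominal_update(theta, features, human_feedback)
        return step + corrective_term(theta, likely_rewards)

    # Toy usage: a noisy human whose true preference is theta_star.
    rng = np.random.default_rng(0)
    theta_star = np.array([1.0, -0.5])
    likely_rewards = [theta_star, np.array([-1.0, 0.5])]
    theta = np.zeros(2)
    for _ in range(200):
        x = rng.normal(size=2)                       # task features
        y = x @ theta_star + rng.normal(scale=0.5)   # noisy human feedback
        theta = modified_update(theta, x, y, likely_rewards)
    print("estimate:", theta)

In this toy setting the correction simply biases the estimate toward the candidate rewards; in StROL the corrective term is instead derived offline from Lyapunov stability conditions on the learning dynamics.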


Related research

07/07/2022 - Unified Learning from Demonstrations, Corrections, and Preferences during Physical Human-Robot Interaction
Humans can leverage physical interaction to teach robot arms. This physi...

05/16/2023 - Reward Learning with Intractable Normalizing Functions
Robots can learn to imitate humans by inferring what the human is optimi...

07/06/2021 - Physical Interaction as Communication: Learning Robot Objectives Online from Human Corrections
When a robot performs a task next to a human, physical interaction is in...

03/23/2022 - RILI: Robustly Influencing Latent Intent
When robots interact with human partners, often these partners change th...

03/09/2021 - Analyzing Human Models that Adapt Online
Predictive human models often need to adapt their parameters online from...

11/11/2020 - I Know What You Meant: Learning Human Objectives by (Under)estimating Their Choice Set
Assistive robots have the potential to help people perform everyday task...

10/10/2019 - Asking Easy Questions: A User-Friendly Approach to Active Reward Learning
Robots can learn the right reward function by querying a human expert. E...
