Shared Autonomy with Learned Latent Actions

05/07/2020
by Hong Jun Jeon, et al.

Assistive robots enable people with disabilities to conduct everyday tasks on their own. However, these tasks can be complex, containing both coarse reaching motions and fine-grained manipulation. For example, when eating, the user must not only move to the correct food item, but also precisely manipulate that food in different ways (e.g., cutting, stabbing, scooping). Shared autonomy methods make robot teleoperation safer and more precise by arbitrating user inputs with robot controls. However, prior work has focused mainly on the high-level task of reaching a goal from a discrete set, while largely ignoring manipulation of objects at that goal. Meanwhile, dimensionality reduction techniques for teleoperation map useful high-dimensional robot actions into an intuitive low-dimensional controller, but it is unclear whether these methods can achieve the requisite precision for tasks like eating. Our insight is that, by combining intuitive embeddings from learned latent actions with robotic assistance from shared autonomy, we can enable precise assistive manipulation. In this work, we adapt learned latent actions to shared autonomy by proposing a new model structure that changes the meaning of the human's input based on the robot's confidence in the goal. We show convergence bounds on the robot's distance to the most likely goal, and develop a training procedure to learn a controller that can move between goals even in the presence of shared autonomy. We evaluate our method in simulations and an eating user study.
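To make the arbitration idea concrete, here is a minimal sketch of confidence-weighted blending between a decoded latent action and goal-directed assistance. The toy linear decoder, the distance-based belief over goals, the blending rule, and all function names are illustrative assumptions for this sketch, not the paper's actual model or training procedure.

```python
import numpy as np

# Candidate goals, e.g., positions of two food items on a plate (illustrative).
GOALS = np.array([[0.4, 0.1, 0.2],
                  [0.6, -0.2, 0.25]])

def decode(z, W):
    """Toy linear stand-in for a learned latent-action decoder: maps a scalar
    joystick input z to a change in end-effector position. A real learned
    decoder would also condition on the robot's state and goal belief."""
    return z * W

def goal_belief(state, goals, beta=5.0):
    """Softmax of negative distance to each goal, used here as a simple
    confidence measure; the paper's inference may differ."""
    logits = -beta * np.linalg.norm(goals - state, axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def shared_autonomy_step(state, z, goals, W, gain=0.2):
    """Blend the decoded human action with assistance toward the most likely
    goal, weighted by the robot's confidence in that goal."""
    belief = goal_belief(state, goals)
    g_star = goals[np.argmax(belief)]
    confidence = belief.max()
    a_human = decode(z, W)
    a_assist = gain * (g_star - state)
    return (1.0 - confidence) * a_human + confidence * a_assist

# Example rollout with a constant joystick input: as the robot nears a goal,
# its confidence grows and assistance dominates the blended action.
state = np.zeros(3)
W = np.array([0.1, 0.0, 0.05])
for _ in range(20):
    state = state + shared_autonomy_step(state, z=1.0, goals=GOALS, W=W)
print("final end-effector position:", state)
```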


Related research

Learning Latent Actions to Control Assistive Robots (07/06/2021)
Assistive robot arms enable people with disabilities to conduct everyday...

Controlling Assistive Robots with Learned Latent Actions (09/20/2019)
Assistive robots enable users with disabilities to perform everyday task...

Learning Visually Guided Latent Actions for Assistive Teleoperation (05/02/2021)
It is challenging for humans – particularly those living with physical d...

Discovering Synergies for Robot Manipulation with Multi-Task Reinforcement Learning (10/04/2021)
Controlling robotic manipulators with high-dimensional action spaces for...

LILA: Language-Informed Latent Actions (11/05/2021)
We introduce Language-Informed Latent Actions (LILA), a framework for le...

"No, to the Right" – Online Language Corrections for Robotic Manipulation via Shared Autonomy (01/06/2023)
Systems for language-guided human-robot interaction must satisfy two key...

To Stir or Not to Stir: Online Estimation of Liquid Properties for Pouring Actions (04/04/2019)
Our brains are able to exploit coarse physical models of fluids to solve...
