Learning Visually Guided Latent Actions for Assistive Teleoperation

05/02/2021
by Siddharth Karamcheti, et al.

It is challenging for humans, particularly those living with physical disabilities, to control high-dimensional, dexterous robots. Prior work explores learning embedding functions that map a human's low-dimensional inputs (e.g., from a joystick) to complex, high-dimensional robot actions for assistive teleoperation; a central problem, however, is that there are many more high-dimensional actions than available low-dimensional inputs. To extract the correct action and maximally assist their human controller, robots must reason over their context: pressing a joystick down while interacting with a coffee cup, for example, indicates a different action than pressing it down while interacting with a knife. In this work, we develop assistive robots that condition their latent embeddings on visual inputs. We explore a spectrum of visual encoders and show that incorporating object detectors pretrained on small amounts of cheap, easy-to-collect structured data enables the robot to i) accurately and robustly recognize the current context and ii) generalize control embeddings to new objects and tasks. In user studies with a high-dimensional physical robot arm, participants leverage this approach to perform new tasks with unseen objects. Our results indicate that structured visual representations improve few-shot performance and are subjectively preferred by users.
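To make the conditioning mechanism concrete, below is a minimal sketch of a visually conditioned latent-action decoder, assuming a PyTorch-style implementation. The class name VisualLatentDecoder, the dimensions, and the stand-in detector features are illustrative assumptions, not the authors' released code: the idea is only that a low-dimensional control input is decoded into a high-dimensional action jointly with a visual context vector.

```python
# Hypothetical sketch of a visually conditioned latent-action decoder:
# a low-dimensional joystick input z is decoded into a high-DoF robot
# action, conditioned on a visual context vector (e.g., pooled features
# from a pretrained object detector).
import torch
import torch.nn as nn

class VisualLatentDecoder(nn.Module):
    def __init__(self, latent_dim=2, context_dim=32, action_dim=7, hidden=64):
        super().__init__()
        # Fuse the low-dimensional control input with the visual context.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),  # e.g., 7 joint velocities
        )

    def forward(self, z, context):
        # z: (batch, latent_dim) joystick input; context: (batch, context_dim)
        return self.net(torch.cat([z, context], dim=-1))

# Usage: the same 2-DoF "joystick down" input decodes to different
# 7-DoF actions depending on the detected object (cup vs. knife).
decoder = VisualLatentDecoder()
z = torch.tensor([[0.0, -1.0]])      # "joystick down"
cup_context = torch.randn(1, 32)     # stand-in detector features
action = decoder(z, cup_context)     # (1, 7) robot action
```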

