Communicating Robot Conventions through Shared Autonomy

02/22/2022
by Ananth Jonnavittula et al.

When humans control robot arms, these robots often need to infer the human's desired task. Prior research on assistive teleoperation and shared autonomy explores how robots can determine the desired task based on the human's joystick inputs. To perform this inference, the robot relies on an internal mapping between joystick inputs and discrete tasks: e.g., pressing the joystick left indicates that the human wants a plate, while pressing the joystick right indicates a cup. This approach works well once the human understands how the robot interprets their inputs, but inexperienced users still have to learn these mappings through trial and error. Here we recognize that the robot's mapping between tasks and inputs is a convention. There are multiple, equally efficient conventions that the robot could use. Rather than passively waiting for the human to discover the robot's convention, we introduce a shared autonomy approach where the robot actively reveals its chosen convention. Across repeated interactions, the robot intervenes and exaggerates the arm's motion to demonstrate more efficient inputs while also assisting with the current task. We compare this approach to a state-of-the-art baseline, in which users must identify the convention on their own, and to written instructions. Our user study results indicate that modifying the robot's behavior to reveal its convention outperforms the baselines and reduces the amount of time that humans spend controlling the robot. See videos of our user study here: https://youtu.be/jROTVOp469I
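To make the inference step concrete, here is a minimal sketch of how convention-based task inference and assistance could be wired together. This is not the paper's implementation: the task names, the CONVENTION mapping, the rationality parameter beta, and the confidence-weighted blending rule are all illustrative assumptions, and the paper's key communication step (exaggerating the arm's motion to reveal the convention) is not modeled here.

```python
import numpy as np

# Illustrative sketch only (not the authors' code): the robot keeps a belief
# over discrete tasks and updates it from 2-D joystick inputs, assuming a
# convention that maps each task to a canonical input direction.

# Hypothetical convention: task -> joystick direction the robot expects.
CONVENTION = {
    "plate": np.array([-1.0, 0.0]),  # press left for the plate
    "cup":   np.array([+1.0, 0.0]),  # press right for the cup
}

def update_belief(belief, joystick, beta=5.0):
    """Bayesian update: joystick inputs aligned with a task's canonical
    direction make that task more likely (Boltzmann-style likelihood)."""
    scores = {
        task: prob * np.exp(beta * float(np.dot(joystick, CONVENTION[task])))
        for task, prob in belief.items()
    }
    total = sum(scores.values())
    return {task: s / total for task, s in scores.items()}

def blend(human_action, robot_action, belief):
    """Shared autonomy arbitration: weight the robot's assistance by its
    confidence in the most likely task."""
    confidence = max(belief.values())
    return (1.0 - confidence) * human_action + confidence * robot_action

# Example: the user presses the joystick left, so the belief shifts toward
# "plate" and the (illustrative) assistance action dominates the blend.
belief = {"plate": 0.5, "cup": 0.5}
belief = update_belief(belief, np.array([-1.0, 0.0]))
command = blend(np.array([-1.0, 0.0]), np.array([-0.8, 0.3]), belief)
print(belief, command)
```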

