Multimodal Probabilistic Model-Based Planning for Human-Robot Interaction

10/25/2017
by Edward Schmerling, et al.

This paper presents a method for constructing human-robot interaction policies in settings where multimodality, i.e., the possibility of multiple highly distinct futures, plays a critical role in decision making. We are motivated in this work by the example of traffic weaving, e.g., at highway on-ramps/off-ramps, where entering and exiting cars must swap lanes in a short distance---a challenging negotiation even for experienced drivers due to the inherent multimodal uncertainty of who will pass whom. Our approach is to learn multimodal probability distributions over future human actions from a dataset of human-human exemplars and perform real-time robot policy construction in the resulting environment model through massively parallel sampling of human responses to candidate robot action sequences. Direct learning of these distributions is made possible by recent advances in the theory of conditional variational autoencoders (CVAEs), whereby we learn action distributions simultaneously conditioned on the present interaction history, as well as candidate future robot actions in order to take into account response dynamics. We demonstrate the efficacy of this approach with a human-in-the-loop simulation of a traffic weaving scenario.
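The planning loop the abstract describes (learn a multimodal generative model of human responses, then evaluate candidate robot action sequences by sampling many human futures in parallel and scoring each candidate) can be sketched in a few lines. The sketch below is illustrative only: a two-component mixture ("human yields" vs. "human accelerates") stands in for the learned CVAE decoder, and the double-integrator dynamics, mode probability, and cost terms are toy assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_human_futures(history, robot_actions, n_samples):
    """Stand-in for the learned CVAE decoder: draws multimodal human
    position futures conditioned on the interaction history and a candidate
    robot action (acceleration) sequence. A two-mode mixture mocks the
    "who passes whom" multimodality described in the abstract."""
    T = len(robot_actions)
    # Assumed: the yield probability shifts with how assertive the robot plan is.
    p_yield = 1.0 / (1.0 + np.exp(robot_actions.mean()))
    yields = rng.random(n_samples) < p_yield
    base = np.where(yields, -1.0, 1.5)[:, None]           # per-mode mean acceleration
    accel = base + 0.3 * rng.standard_normal((n_samples, T))
    # Double-integrate acceleration into positions (toy dynamics).
    return history[-1] + np.cumsum(np.cumsum(accel, axis=1), axis=1)

def evaluate_candidates(history, candidates, n_samples=1024):
    """Score each candidate robot action sequence by its expected cost over
    sampled human responses; lower is better."""
    scores = []
    for u in candidates:
        human_pos = sample_human_futures(history, u, n_samples)
        robot_pos = history[-1] + np.cumsum(np.cumsum(u))  # same toy dynamics
        gap = np.abs(human_pos - robot_pos[None, :])
        collision_cost = np.mean(np.exp(-gap))             # penalize small gaps
        effort_cost = 0.01 * np.sum(u ** 2)
        scores.append(collision_cost + effort_cost)
    return np.array(scores)

history = np.zeros(5)                                      # past relative positions (toy)
candidates = [np.full(10, a) for a in (-1.0, 0.0, 1.0)]    # brake / hold / accelerate
scores = evaluate_candidates(history, candidates)
best = int(np.argmin(scores))
```

In the paper's setting the inner sampler is the trained CVAE (conditioned on both the interaction history and the candidate robot actions, so that response dynamics are captured), and the per-candidate sampling is what parallelizes for real-time policy construction.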

Related research

03/06/2018 · Generative Modeling of Multimodal Multi-Human Behavior
This work presents a methodology for modeling and predicting human behav...

07/25/2022 · Continuous ErrP detections during multimodal human-robot interaction
Human-in-the-loop approaches are of great importance for robot applicati...

12/14/2022 · Learning and Predicting Multimodal Vehicle Action Distributions in a Unified Probabilistic Model Without Labels
We present a unified probabilistic model that learns a representative se...

08/10/2020 · Multimodal Deep Generative Models for Trajectory Prediction: A Conditional Variational Autoencoder Approach
Human behavior prediction models enable robots to anticipate how humans ...

12/17/2022 · iCub! Do you recognize what I am doing?: multimodal human action recognition on multisensory-enabled iCub robot
This study uses multisensory data (i.e., color and depth) to recognize h...

10/19/2012 · Marginalizing Out Future Passengers in Group Elevator Control
Group elevator scheduling is an NP-hard sequential decision-making probl...

07/15/2016 · Intrinsically Motivated Multimodal Structure Learning
We present a long-term intrinsically motivated structure learning method...
