MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction

10/22/2022
by Vignesh Prasad, et al.

Modeling interaction dynamics to generate robot trajectories that enable a robot to adapt and react to a human's actions and intentions is critical for efficient and effective collaborative Human-Robot Interactions (HRI). Learning from Demonstration (LfD) methods from Human-Human Interactions (HHI) have shown promising results, especially when coupled with representation learning techniques. However, such methods for learning HRI either do not scale well to high dimensional data or cannot accurately adapt to changing via-poses of the interacting partner. We propose Multimodal Interactive Latent Dynamics (MILD), a method that couples deep representation learning and probabilistic machine learning to address the problem of two-party physical HRIs. We learn the interaction dynamics from demonstrations, using Hidden Semi-Markov Models (HSMMs) to model the joint distribution of the interacting agents in the latent space of a Variational Autoencoder (VAE). Our experimental evaluations for learning HRI from HHI demonstrations show that MILD effectively captures the multimodality in the latent representations of HRI tasks, allowing us to decode the varying dynamics occurring in such tasks. Compared to related work, MILD generates more accurate trajectories for the controlled agent (robot) when conditioned on the observed agent's (human) trajectory. Notably, MILD can learn directly from camera-based pose estimations to generate trajectories, which we then map to a humanoid robot without the need for any additional training.
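To make the core idea concrete, the sketch below illustrates the conditioning step described in the abstract: model the joint distribution of both agents' latent trajectories, then infer the robot's latents from the observed human latents and decode them. This is not the authors' implementation: it replaces the HSMM with scikit-learn's GaussianMixture (dropping the duration and transition modeling an HSMM adds), assumes the VAE has already encoded both agents' poses into per-frame latent vectors, and the function names (`fit_joint_gmm`, `condition_on_human`) are hypothetical.

```python
# Simplified stand-in for MILD's latent-space conditioning, assuming:
# - a GMM replaces the HSMM (no temporal/duration model), and
# - z_human / z_robot are per-frame VAE latents of shape (T, d).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(z_human, z_robot, n_components=5, seed=0):
    """Fit a GMM over the concatenated latents of both agents."""
    z_joint = np.concatenate([z_human, z_robot], axis=-1)  # (T, 2d)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(z_joint)
    return gmm

def condition_on_human(gmm, z_h, d):
    """Gaussian Mixture Regression: E[z_r | z_h] under the joint GMM,
    using the standard conditional-Gaussian formula per component:
    mu_r|h = mu_r + S_rh S_hh^{-1} (z_h - mu_h)."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    K = len(weights)
    log_resp = np.empty(K)
    cond_means = np.empty((K, d))
    for k in range(K):
        mu_h, mu_r = means[k, :d], means[k, d:]
        S_hh = covs[k, :d, :d]
        S_rh = covs[k, d:, :d]
        diff = z_h - mu_h
        sol = np.linalg.solve(S_hh, diff)          # S_hh^{-1} (z_h - mu_h)
        _, logdet = np.linalg.slogdet(S_hh)
        # Component log-likelihood of z_h (shared constants dropped)
        log_resp[k] = np.log(weights[k]) - 0.5 * (diff @ sol + logdet)
        cond_means[k] = mu_r + S_rh @ sol          # conditional robot mean
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                             # component responsibilities
    return resp @ cond_means                       # responsibility-weighted mean

# Usage on synthetic latents: fit on demonstrations, condition frame by frame.
rng = np.random.default_rng(0)
d = 4
z_human_train = rng.normal(size=(500, d))
z_robot_train = 0.5 * z_human_train + 0.1 * rng.normal(size=(500, d))
gmm = fit_joint_gmm(z_human_train, z_robot_train)
z_r_pred = condition_on_human(gmm, z_human_train[0], d)
print(z_r_pred.shape)  # (4,) -- would be passed to the VAE decoder
```

In the paper's setting, the predicted robot latent would then be decoded by the VAE into a joint-space pose; the HSMM additionally constrains which component is active at each time step via its state-duration model, which the memoryless GMM above omits.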
