Imitating by generating: deep generative models for imitation of interactive tasks

10/14/2019
by Judith Butepage et al.

Coordinating actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood, mostly through imitation learning and active engagement with a skilled partner. These skills require the ability to predict and adapt to one's partner during an interaction. In this work we explore these ideas in a human-robot interaction setting in which a robot must learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. To test these ideas, we collect human-human and human-robot interaction data for four interactive tasks: "hand-shake", "hand-wave", "parachute fist-bump", and "rocket fist-bump". We demonstrate experimentally the importance of predictive and adaptive components, as well as low-level abstractions, for successfully learning to imitate human behavior in interactive social tasks.
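The three components described above form a pipeline: embed the observed human motion, predict the partner's next state, then generate a matching robot trajectory. The following is a minimal sketch of that data flow, not the paper's implementation; the dimensions, the `LinearMap` stand-in for learned networks, and the function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for this sketch.
HUMAN_DIM, ROBOT_DIM, LATENT_DIM = 15, 7, 4

class LinearMap:
    """Stand-in for a trained network: y = W @ x + b."""
    def __init__(self, d_in, d_out):
        self.W = rng.standard_normal((d_out, d_in)) * 0.1
        self.b = np.zeros(d_out)

    def __call__(self, x):
        return self.W @ x + self.b

# (1) motion embedding: map human poses into a shared latent space
human_encoder = LinearMap(HUMAN_DIM, LATENT_DIM)
# (2) motion prediction: anticipate the partner's next latent state
predictor = LinearMap(LATENT_DIM, LATENT_DIM)
# (3) trajectory generation: decode a robot joint configuration
robot_decoder = LinearMap(LATENT_DIM, ROBOT_DIM)

def imitation_step(human_pose):
    """One control step: embed the observed human pose, predict
    the next latent state, and decode matching robot joints."""
    z = human_encoder(human_pose)         # (1) embed
    z_next = predictor(z)                 # (2) predict partner motion
    robot_joints = robot_decoder(z_next)  # (3) generate robot command
    return robot_joints

pose = rng.standard_normal(HUMAN_DIM)
joints = imitation_step(pose)
print(joints.shape)  # (7,)
```

In the paper's framing the embedding and generation networks would be trained from observational and kinesthetic demonstrations, while the predictor supplies the anticipatory behavior the experiments show to be important.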


