Learning Social Affordance for Human-Robot Interaction

04/13/2016
by Tianmin Shu, et al.

In this paper, we present an approach for robot learning of social affordances from human activity videos. We consider the problem in the context of human-robot interaction: our approach learns structural representations of human-human (and human-object-human) interactions, describing how the body parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human body motion, allowing it to naturally replicate such interactions. We introduce a representation of social affordance and propose a generative model for its weakly supervised learning from human demonstration videos. Our approach discovers critical steps (i.e., latent sub-events) in an interaction and the typical motion associated with them, learning which body parts should be involved and how. The experimental results demonstrate that our Markov chain Monte Carlo (MCMC) based learning algorithm automatically discovers semantically meaningful interactive affordances from RGB-D videos, which allows us to generate appropriate full-body motion for an agent.
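To give a flavor of the kind of MCMC inference the abstract describes, the toy sketch below runs a Metropolis-Hastings sampler over latent sub-event boundaries of a 1-D motion signal. This is an illustration only, not the paper's model: the constant-motion segment score, the per-boundary penalty, and all function names are assumptions made for the example.

```python
import math
import random

def segment_score(seq, start, end):
    """Toy stand-in for a sub-event motion model: score a segment by
    how well a constant value fits it (negative sum of squared errors)."""
    mean = sum(seq[start:end]) / (end - start)
    return -sum((x - mean) ** 2 for x in seq[start:end])

def log_posterior(seq, boundaries):
    """Segmentation score: per-segment fit plus a penalty per boundary,
    so the sampler prefers few, well-fitting sub-events."""
    pts = [0] + sorted(boundaries) + [len(seq)]
    fit = sum(segment_score(seq, a, b) for a, b in zip(pts, pts[1:]))
    return fit - 2.0 * len(boundaries)

def mcmc_segment(seq, iters=2000, seed=0):
    """Metropolis-Hastings over sub-event boundaries: propose adding,
    removing, or shifting one boundary; keep the best state visited."""
    rng = random.Random(seed)
    state = set()
    score = log_posterior(seq, state)
    best, best_score = sorted(state), score
    for _ in range(iters):
        proposal = set(state)
        move = rng.random()
        if move < 0.4 or not proposal:            # add a boundary
            proposal.add(rng.randrange(1, len(seq)))
        elif move < 0.7:                          # remove a boundary
            proposal.discard(rng.choice(sorted(proposal)))
        else:                                     # shift a boundary by one step
            b = rng.choice(sorted(proposal))
            proposal.discard(b)
            proposal.add(max(1, min(len(seq) - 1, b + rng.choice([-1, 1]))))
        new_score = log_posterior(seq, proposal)
        if new_score >= score or rng.random() < math.exp(new_score - score):
            state, score = proposal, new_score
            if score > best_score:
                best, best_score = sorted(state), score
    return best
```

On a signal with two clearly distinct motion regimes, such as `[0.0] * 10 + [5.0] * 10`, the sampler recovers the single boundary between them at index 10, analogous to discovering one sub-event transition in a demonstration.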


Related research

- 03/01/2017: Learning Social Affordance Grammar from Videos: Transferring Human Interactions to Human-Robot Interactions
  In this paper, we present a general framework for learning social afford...
- 05/26/2020: Learning Whole-Body Human-Robot Haptic Interaction in Social Contexts
  This paper presents a learning-from-demonstration (LfD) framework for te...
- 12/09/2016: Panoptic Studio: A Massively Multiview System for Social Interaction Capture
  We present an approach to capture the 3D motion of a group of people eng...
- 10/22/2022: MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction
  Modeling interaction dynamics to generate robot trajectories that enable...
- 02/12/2015: Discovering Human Interactions in Videos with Limited Data Labeling
  We present a novel approach for discovering human interactions in videos...
- 10/04/2012: Learning Human Activities and Object Affordances from RGB-D Videos
  Understanding human activities and object affordances are two very impor...
- 01/18/2021: Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos
  We present an approach for physical imitation from human videos for robo...
