Learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks

05/18/2017
by Matthias Plappert, et al.

Linking human whole-body motion and natural language is of great interest both for generating semantic representations of observed human behaviors and for generating robot behaviors from natural language input. While there is a large body of research in this area, most existing approaches require a symbolic representation of motions (e.g. in the form of motion primitives), which has to be defined a priori or requires complex segmentation algorithms. In contrast, recent advances in neural networks, and especially in deep learning, have demonstrated that sub-symbolic representations learned end-to-end usually outperform more traditional approaches in applications such as machine translation. In this paper, we propose a generative model that learns a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks (RNNs) and sequence-to-sequence learning. Our approach requires no segmentation or manual feature engineering and learns a distributed representation that is shared across all motions and descriptions. We evaluate our approach on 2,846 human whole-body motions and 6,187 natural language descriptions thereof from the KIT Motion-Language Dataset. Our results clearly demonstrate the effectiveness of the proposed model: we show that it generates a wide variety of realistic motions from nothing more than a single-sentence description. Conversely, our model is also capable of generating correct and detailed natural language descriptions from human motions.
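The sequence-to-sequence idea behind the model can be sketched in a few lines: an encoder RNN compresses a variable-length motion sequence into a fixed-size context vector, and a decoder RNN unrolls that context into a sequence of word predictions. The following is a minimal, illustrative numpy sketch, not the paper's actual architecture: the plain Elman-style RNN cells, all dimensions (63-D frames, 32-D hidden state, 100-word vocabulary), and the random, untrained weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # One Elman RNN step: new hidden state from input x and previous state h.
    return np.tanh(x @ Wx + h @ Wh + b)

# Illustrative sizes: 63-D joint-angle frames, 32-D hidden state, 100-word vocab.
D_MOTION, D_H, V = 63, 32, 100

# Encoder parameters (random and untrained; this is only a structural sketch).
Wx_e = rng.normal(0, 0.1, (D_MOTION, D_H))
Wh_e = rng.normal(0, 0.1, (D_H, D_H))
b_e = np.zeros(D_H)

# Decoder parameters: consumes the previous word's embedding, emits vocab logits.
E = rng.normal(0, 0.1, (V, D_H))        # word embedding table
Wx_d = rng.normal(0, 0.1, (D_H, D_H))
Wh_d = rng.normal(0, 0.1, (D_H, D_H))
b_d = np.zeros(D_H)
W_out = rng.normal(0, 0.1, (D_H, V))    # hidden state -> vocabulary logits

def encode(motion):
    # motion: (T, D_MOTION) sequence of frames -> fixed-size context vector.
    h = np.zeros(D_H)
    for frame in motion:
        h = rnn_step(frame, h, Wx_e, Wh_e, b_e)
    return h

def decode(context, max_len=10, bos=0):
    # Greedy unrolling: the encoder context initialises the decoder state,
    # and each step feeds the previously predicted word back in.
    h, word, words = context, bos, []
    for _ in range(max_len):
        h = rnn_step(E[word], h, Wx_d, Wh_d, b_d)
        word = int(np.argmax(h @ W_out))
        words.append(word)
    return words

motion = rng.normal(size=(120, D_MOTION))   # 120 frames of synthetic motion data
context = encode(motion)
words = decode(context)
print(len(words), context.shape)
```

The same two-network structure runs in the opposite direction for description-to-motion generation: a language encoder produces the context vector and a motion decoder unrolls it into frames. In practice the paper uses trained deep RNNs rather than the random single-layer cells above.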

Related research:
- HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes (10/18/2022)
- The KIT Motion-Language Dataset (07/13/2016)
- Text2Scene: Generating Abstract Scenes from Textual Descriptions (09/04/2018)
- Towards Coordinated Robot Motions: End-to-End Learning of Motion Policies on Transform Trees (12/24/2020)
- Caption Generation of Robot Behaviors based on Unsupervised Learning of Action Segments (03/23/2020)
- Dynamical System Segmentation for Information Measures in Motion (12/09/2020)
- Translating Videos to Natural Language Using Deep Recurrent Neural Networks (12/15/2014)
