Learning to gesticulate by observation using a deep generative approach

09/04/2019
by Unai Zabala, et al.

The system presented in this paper aims to produce natural talking-gesture behavior for a humanoid robot by feeding a Generative Adversarial Network (GAN) with human talking gestures recorded with a Kinect sensor. A direct kinematic approach is used to translate the captured human poses into robot joint positions. The accompanying videos show that the robot is able to use a wide variety of gestures, offering a natural, non-repetitive level of expression.
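The abstract describes two components: a GAN trained on Kinect-recorded talking gestures and a direct kinematic mapping from human poses to robot joint positions. The snippet below is a minimal, illustrative sketch of the first component only, not the authors' architecture. The sequence length, joint count, latent size, the fully connected generator/discriminator, and the stand-in training batch are all assumptions made for illustration; real data would come from Kinect recordings already converted to robot joint angles.

```python
# Minimal sketch of a GAN that generates short joint-angle sequences.
# Assumption: a gesture is a fixed-length sequence of SEQ_LEN frames,
# each with N_JOINTS angles normalized to [-1, 1].

import torch
import torch.nn as nn

SEQ_LEN, N_JOINTS, LATENT_DIM = 30, 12, 64  # hypothetical sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, SEQ_LEN * N_JOINTS), nn.Tanh(),  # angles in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, SEQ_LEN, N_JOINTS)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN * N_JOINTS, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))

def train_step(G, D, opt_G, opt_D, real_gestures, bce):
    """One adversarial update on a batch of real gesture sequences."""
    b = real_gestures.size(0)
    z = torch.randn(b, LATENT_DIM)
    fake = G(z)

    # Discriminator: push real toward 1, generated toward 0.
    opt_D.zero_grad()
    loss_D = bce(D(real_gestures), torch.ones(b, 1)) + \
             bce(D(fake.detach()), torch.zeros(b, 1))
    loss_D.backward()
    opt_D.step()

    # Generator: try to make the discriminator label fakes as real.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(b, 1))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    real_gestures = torch.rand(16, SEQ_LEN, N_JOINTS) * 2 - 1  # stand-in data
    print(train_step(G, D, opt_G, opt_D, real_gestures, bce))
```

The second component, mapping Kinect skeleton poses to robot joint positions, is not shown here; the paper reports using a direct kinematic approach for that step.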

Related research
10/22/2020

Quantitative analysis of robot gesticulation behavior

Social robot capabilities, such as talking gestures, are best produced u...
05/02/2023

AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis

The generation of realistic and contextually relevant co-speech gestures...
03/08/2023

Communicating human intent to a robotic companion by multi-type gesture sentences

Human-Robot collaboration in home and industrial workspaces is on the ri...
02/20/2022

In the Arms of a Robot: Designing Autonomous Hugging Robots with Intra-Hug Gestures

Hugs are complex affective interactions that often include gestures like...
07/31/2021

Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning

We present a generative adversarial network to synthesize 3D pose sequen...
03/04/2020

Learning, Generating and Adapting Wave Gestures for Expressive Human-Robot Interaction

This study proposes a novel imitation learning approach for the stochast...
07/09/2019

Influence of Pointing on Learning to Count: A Neuro-Robotics Model

In this paper a neuro-robotics model capable of counting using gestures ...
