Deep Gesture Generation for Social Robots Using Type-Specific Libraries

10/13/2022
by Hitoshi Teshima, et al.

Body language such as conversational gesture is a powerful way to ease communication. Conversational gestures not only make speech more lively but also carry semantic meaning that helps stress important information in a discussion. In the field of robotics, giving conversational agents (humanoid robots or virtual avatars) the ability to use gestures properly is critical, yet it remains a task of extraordinary difficulty: given only text as input, there are many possible and ambiguous ways to generate an appropriate gesture. Unlike previous works, we propose a new method that explicitly takes gesture types into account to reduce these ambiguities and generate human-like conversational gestures. Key to our proposed system is a new gesture database built on the TED dataset that allows us to map a word to one of three gesture types: "Imagistic" gestures, which express the content of the speech; "Beat" gestures, which emphasize words; and "No gesture." Our system first maps the words in the input text to their corresponding gesture type, then generates type-specific gestures, and finally combines the generated gestures into one smooth final gesture. In comparative experiments, user studies with both a virtual avatar and a humanoid robot confirmed the effectiveness of the proposed method.
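
The pipeline described above can be pictured as three stages: word-level gesture-type classification, type-specific generation, and blending. The following is a minimal Python sketch of that structure only; the lexicon, the placeholder generators, the pose dimensionality, and all function names are illustrative assumptions, not the authors' actual models or database.

```python
# Hypothetical sketch of the type-specific gesture pipeline: classify each
# word into a gesture type, generate a pose sequence per type, then blend
# the segments into one smooth motion. Placeholder logic only.
from enum import Enum
import numpy as np

class GestureType(Enum):
    IMAGISTIC = "imagistic"   # expresses the content of the speech
    BEAT = "beat"             # emphasizes words
    NONE = "no_gesture"       # rest pose / no gesture

# Toy word-to-type lookup standing in for the TED-based gesture database.
LEXICON = {"huge": GestureType.IMAGISTIC, "really": GestureType.BEAT}

POSE_DIM = 10          # assumed joint-angle dimensionality
FRAMES_PER_WORD = 15   # assumed segment length per word

def classify_words(words):
    """Map each word to a gesture type (a database lookup in the real system)."""
    return [LEXICON.get(w.lower(), GestureType.NONE) for w in words]

def generate_segment(gesture_type, rng):
    """Placeholder type-specific generator; the paper uses learned generators."""
    if gesture_type is GestureType.NONE:
        return np.zeros((FRAMES_PER_WORD, POSE_DIM))
    scale = 1.0 if gesture_type is GestureType.IMAGISTIC else 0.3
    return scale * rng.standard_normal((FRAMES_PER_WORD, POSE_DIM))

def blend(segments, overlap=5):
    """Crossfade consecutive segments so the final motion is smooth."""
    motion = segments[0]
    for seg in segments[1:]:
        w = np.linspace(0.0, 1.0, overlap)[:, None]
        mixed = (1 - w) * motion[-overlap:] + w * seg[:overlap]
        motion = np.concatenate([motion[:-overlap], mixed, seg[overlap:]])
    return motion

def text_to_gesture(text, seed=0):
    rng = np.random.default_rng(seed)
    types = classify_words(text.split())
    segments = [generate_segment(t, rng) for t in types]
    return blend(segments)

if __name__ == "__main__":
    motion = text_to_gesture("It was a really huge success")
    print(motion.shape)  # (frames, POSE_DIM) pose sequence for the agent
```

A simple linear crossfade is used here for the final combination step; any smoothing scheme that avoids discontinuities between per-word segments would serve the same illustrative purpose.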

Related research

05/21/2019
Design of conversational humanoid robot based on hardware independent gesture generation
With an increasing need for elderly and disability care, there is an inc...

05/25/2023
MPE4G: Multimodal Pretrained Encoder for Co-Speech Gesture Generation
When virtual agents interact with humans, gestures are crucial to delive...

07/13/2023
Augmented Co-Speech Gesture Generation: Including Form and Meaning Features to Guide Learning-Based Gesture Synthesis
Due to their significance in human communication, the automatic generati...

10/13/2020
Labeling the Phrase Set of the Conversation Agent, Rinna
Mapping spoken text to gestures is an important research area for robots...

11/18/2022
3D Human Motion Generation from the Text via Gesture Action Classification and the Autoregressive Model
In this paper, a deep learning-based model for 3D human motion generatio...

07/09/2019
Influence of Pointing on Learning to Count: A Neuro-Robotics Model
In this paper a neuro-robotics model capable of counting using gestures ...

06/02/2023
EdGCon: Auto-assigner of Iconicity Ratings Grounded by Lexical Properties to Aid in Generation of Technical Gestures
Gestures that share similarities in their forms and are related in their...
