Toward Automated Generation of Affective Gestures from Text: A Theory-Driven Approach

03/04/2021
by Micol Spitale, et al.

Communication in both human-human and human-robot interaction (HRI) contexts consists of verbal (speech-based) and non-verbal (facial expressions, eye gaze, gesture, body pose, etc.) components. The verbal component carries semantic and affective information; accordingly, HRI work on the gesture component has so far focused on rule-based (mapping words to gestures) and data-driven (deep-learning) approaches that generate speech-paired gestures from either semantics or the affective state. Consequently, most gesture systems are confined to producing either semantically-linked or affect-based gestures. This paper introduces a theory-driven approach to human-robot communication that generates speech-paired robot gestures using both semantic and affective information. Our model takes text and its sentiment analysis as input, and generates robot gestures in terms of their shape, intensity, and speed.
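To make the input/output contract concrete, the following is a minimal sketch of the text-plus-sentiment-to-gesture idea the abstract describes. All names and mappings here (the `Gesture` fields, `SHAPE_LEXICON`, `generate_gesture`) are hypothetical illustrations, not the authors' actual model, which derives its mappings from gesture theory.

```python
# Hypothetical sketch: combine a semantic channel (word -> gesture shape)
# with an affective channel (sentiment polarity -> intensity and speed).
from dataclasses import dataclass


@dataclass
class Gesture:
    shape: str        # e.g., "open_arms", "point", "beat"
    intensity: float  # 0.0 (subtle) .. 1.0 (emphatic)
    speed: float      # relative playback-speed multiplier


# Toy semantic lexicon, standing in for a rule-based word-to-gesture mapping.
SHAPE_LEXICON = {
    "welcome": "open_arms",
    "there": "point",
    "great": "beat",
}


def generate_gesture(text: str, sentiment: float) -> Gesture:
    """Fuse semantic and affective cues into one gesture specification.

    `sentiment` is assumed to be a polarity score in [-1, 1] produced by
    an upstream sentiment analyzer.
    """
    # Semantic channel: use the first lexicon hit, else fall back to a beat.
    shape = next(
        (SHAPE_LEXICON[w] for w in text.lower().split() if w in SHAPE_LEXICON),
        "beat",
    )
    # Affective channel: stronger sentiment -> more intense, faster motion.
    magnitude = abs(sentiment)
    intensity = 0.3 + 0.7 * magnitude
    speed = 0.8 + 0.4 * magnitude
    return Gesture(shape=shape, intensity=intensity, speed=speed)


if __name__ == "__main__":
    print(generate_gesture("Welcome to the lab!", sentiment=0.9))
    # Gesture(shape='open_arms', intensity=0.93, speed=1.16)
```

The two-channel split mirrors the division the abstract draws: semantics selects *which* gesture (shape), while affect modulates *how* it is performed (intensity and speed).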


research
01/25/2020

Gesticulator: A framework for semantically-aware speech-driven gesture generation

During speech, people spontaneously gesticulate, which plays a key role ...
research
10/30/2018

Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots

Co-speech gestures enhance interaction experiences between humans as wel...
research
12/09/2018

Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction

A robot needs contextual awareness, effective speech production and comp...
research
08/04/2017

Speech-driven Animation with Meaningful Behaviors

Conversational agents (CAs) play an important role in human computer int...
research
03/04/2020

Learning, Generating and Adapting Wave Gestures for Expressive Human-Robot Interaction

This study proposes a novel imitation learning approach for the stochast...
research
01/31/2022

Beyond synchronization: Body gestures and gaze direction in duo performance

In this chapter, we focus on two main categories of visual interaction: ...
research
10/22/2020

Quantitative analysis of robot gesticulation behavior

Social robot capabilities, such as talking gestures, are best produced u...
