Labeling the Phrase Set of the Conversation Agent, Rinna

10/13/2020
by Naoki Wake, et al.

Mapping spoken text to gestures is an important research area for robots with conversation capability. However, it is impossible to define a gesture for every spoken text a priori, especially when responses are generated automatically by a conversation agent. Knowledge of human gesture characteristics can instead be used to map texts into a semantic space in which texts with similar meanings cluster together; a gesture is then defined for each semantic cluster (i.e., concept). Here, we discuss the practical issues of obtaining concepts for the conversation agent Rinna, which has a personalized vocabulary that includes short terms. We compared concepts obtained automatically with a natural language processing approach against those obtained manually with a sociological approach, and we identified three limitations of the former: at the semantic level with emoji and symbols; at the semantic level with slang, new words, and buzzwords; and at the pragmatic level. We attribute these problems to Rinna's personalized vocabulary. To address them, we propose combining the manual and automatic approaches when mapping texts to a semantic space. A follow-up experiment showed that a robot gesture selected on the basis of concepts left a better impression than a randomly selected gesture, which suggests the feasibility of applying a semantic space to text-to-gesture mapping. The present work contributes insights into developing a methodology for generating gestures for a conversation agent with a personalized vocabulary.
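The concept-based pipeline described above (embed each phrase into a semantic space, cluster the embeddings into concepts, and attach one gesture per concept) can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the TF-IDF embedding, the cluster count, the example phrases, and the gesture labels are all assumptions made for the example.

```python
# Minimal sketch of concept-based text-to-gesture mapping.
# Assumptions (not from the paper): TF-IDF as the semantic embedding,
# k-means with 3 clusters as the concept extractor, and made-up gestures.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

phrases = [
    "good morning, how are you today",
    "hello there, nice to meet you",
    "that is so funny, you made me laugh",
    "what a hilarious joke",
    "i am not sure, let me think about it",
    "hmm, that is a hard question",
]

# 1) Map phrases into a semantic space.
vectorizer = TfidfVectorizer()
embeddings = vectorizer.fit_transform(phrases)

# 2) Cluster the embeddings; each cluster is treated as a "concept".
concepts = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)

# 3) Assign one gesture per concept (hypothetical gesture names).
gesture_for_concept = {0: "wave", 1: "laugh", 2: "tilt_head"}

def select_gesture(text: str) -> str:
    """Pick a gesture by mapping a new utterance to its nearest concept."""
    concept_id = concepts.predict(vectorizer.transform([text]))[0]
    return gesture_for_concept[concept_id]

print(select_gesture("good morning everyone"))
```

As the abstract notes, purely automatic clustering of this kind breaks down for emoji, slang, new words, and buzzwords, which motivates combining it with a manual mapping; in this sketch, that would amount to consulting a hand-curated phrase-to-concept dictionary before falling back to `concepts.predict`.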

