Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction

06/08/2017
by Jekaterina Novikova, et al.

Recognition of social signals, such as human facial expressions or speech prosody, is a popular research topic in human-robot interaction studies. There is also a long line of research in the spoken dialogue community investigating user satisfaction in relation to dialogue characteristics. However, very little research relates a combination of multimodal social signals and language features detected during spoken face-to-face human-robot interaction to the resulting user perception of a robot. In this paper we show how different emotional facial expressions of human users, in combination with prosodic characteristics of human speech and features of human-robot dialogue, correlate with users' impressions of the robot after a conversation. We find that happiness in the user's recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as the number of human turns or the number of sentences per robot utterance) correlate with perceiving the robot as intelligent. In addition, we show that facial expression, emotional features, and prosody are better predictors of human ratings related to perceived robot likeability and anthropomorphism, whereas linguistic and non-linguistic dialogue features more often predict perceived robot intelligence and interpretability. As such, these characteristics may in the future be used as an online reward signal for in-situ Reinforcement Learning-based adaptive human-robot dialogue systems.
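The correlation analysis the abstract describes can be sketched as follows. This is an illustrative example only, not the authors' code: the feature names and all data values below are hypothetical, standing in for a per-user facial-expression score and a post-dialogue robot rating.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-user features: mean "happiness" score from a facial
# expression recogniser, and that user's post-dialogue likeability rating.
happiness = [0.2, 0.5, 0.4, 0.8, 0.7, 0.9]
likeability = [2.0, 3.5, 3.0, 4.5, 4.0, 5.0]

print(f"r = {pearson_r(happiness, likeability):.2f}")
```

In the paper's setting, such a coefficient would be computed for each feature-rating pair (e.g. happiness vs. likeability, number of human turns vs. perceived intelligence) to identify which multimodal signals track which user impressions.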

