A trained humanoid robot can perform human-like crossmodal social attention conflict resolution

11/02/2021
by Di Fu, et al.

During the COVID-19 pandemic, robots have been seen as potential resources for tasks such as helping people work remotely, sustaining social distancing, and improving mental or physical health. To enhance human-robot interaction, it is essential for robots to become more socialised by processing multiple social cues in complex real-world environments. Our study adopted a neurorobotic paradigm of gaze-triggered audio-visual crossmodal integration to make an iCub robot express human-like social attention responses. First, a behavioural experiment was conducted with 37 human participants. To improve ecological validity, a round-table meeting scenario with three masked animated avatars was designed, with the middle avatar able to perform gaze shifts and the other two able to generate sound. The gaze direction and the sound location were either congruent or incongruent. Masks covered all facial visual cues other than the avatars' eyes. We observed that the avatar's gaze could trigger crossmodal social attention, with better human performance in the audio-visual congruent condition than in the incongruent condition. Our computational model, GASP, was then trained to implement social cue detection, audio-visual saliency prediction, and selective attention. After training, the iCub robot was exposed to laboratory conditions similar to those of the human participants and replicated human-like attention responses with respect to congruency and incongruency, although overall human performance remained superior. This interdisciplinary work therefore provides new insights into the mechanisms of crossmodal social attention and how they can be modelled in robots operating in complex environments.
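The congruency manipulation at the heart of the paradigm can be sketched as a simple trial-labelling rule: a trial is congruent when the central avatar's gaze shift and the sound source point to the same side, and incongruent otherwise. The helper below is purely illustrative (the study's actual pipeline is the GASP model, not this function):

```python
def label_trial(gaze_direction: str, sound_location: str) -> str:
    """Label an audio-visual trial by congruency.

    gaze_direction: side the central avatar's gaze shifts to ('left' or 'right')
    sound_location: side from which one of the flanking avatars emits sound
    """
    return "congruent" if gaze_direction == sound_location else "incongruent"


# Example trial list: (gaze direction, sound location) pairs.
trials = [("left", "left"), ("left", "right"), ("right", "right")]
labels = [label_trial(gaze, sound) for gaze, sound in trials]
# labels == ["congruent", "incongruent", "congruent"]
```

Under this labelling, both the human participants and the trained iCub robot responded faster and more accurately on congruent trials than on incongruent ones.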


