Continuous ErrP detections during multimodal human-robot interaction

07/25/2022
by Su Kyoung Kim, et al.

Human-in-the-loop approaches are of great importance for robot applications. In the presented study, we implemented a multimodal human-robot interaction (HRI) scenario in which a simulated robot communicates with its human partner through speech and gestures. The robot announces its intention verbally and selects the appropriate action using pointing gestures. The human partner, in turn, evaluates whether the robot's verbal announcement (intention) matches the action (pointing gesture) chosen by the robot. When the robot's verbal announcement does not match its action choice, we expect error-related potentials (ErrPs) in the human electroencephalogram (EEG). These intrinsic human evaluations of robot actions, evident in the EEG, were recorded in real time, continuously segmented online, and classified asynchronously. For feature selection, we propose an approach in which combinations of forward and backward sliding windows are used to train a classifier. We achieved an average classification performance of 91% across 9 subjects. As expected, we also observed relatively high variability between subjects. In the future, the proposed feature selection approach will be extended to allow customization of feature selection: the best combinations of forward and backward sliding windows will be selected automatically to account for inter-subject variability in classification performance. In addition, we plan to use the intrinsic human error evaluation, evident in the ErrP in the error case, in interactive reinforcement learning to improve multimodal human-robot interaction.
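As a rough illustration of the forward/backward sliding-window idea described in the abstract, the sketch below extracts mean-amplitude features from forward and backward sliding windows over segmented EEG epochs and trains a classifier on one candidate window combination. This is not the authors' implementation: the window length, step size, channel count, synthetic data, and the shrinkage-LDA classifier are illustrative assumptions; a real pipeline would also include filtering, artifact handling, and per-subject search over window combinations.

```python
# Minimal sketch (assumptions noted above), not the published pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def window_features(epochs, win_len, step, direction="forward"):
    # Mean amplitude per channel within sliding windows of each epoch.
    # epochs: (n_epochs, n_channels, n_samples); "forward" slides from
    # epoch onset, "backward" mirrors the window positions from the end.
    n_epochs, n_channels, n_samples = epochs.shape
    starts = np.arange(0, n_samples - win_len + 1, step)
    if direction == "backward":
        starts = (n_samples - win_len) - starts
    feats = [epochs[:, :, s:s + win_len].mean(axis=2) for s in starts]
    return np.concatenate(feats, axis=1)  # (n_epochs, n_channels * n_windows)

# Synthetic stand-in for continuously segmented EEG epochs:
# 200 epochs, 64 channels, 1 s at 100 Hz (illustrative numbers only).
rng = np.random.default_rng(0)
X_epochs = rng.standard_normal((200, 64, 100))
y = rng.integers(0, 2, size=200)  # 1 = mismatch trial (expected ErrP), 0 = match

# One candidate combination of forward and backward windows; searching over
# such combinations per subject would address inter-subject variability.
X = np.hstack([
    window_features(X_epochs, win_len=20, step=10, direction="forward"),
    window_features(X_epochs, win_len=20, step=10, direction="backward"),
])
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, y, cv=5).mean())
```

In an online setting, the same feature extraction would be applied to each continuously segmented window of the EEG stream before asynchronous classification.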

