Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction

07/04/2019
by Susanne Trick, et al.

Assistive robots can potentially improve the quality of life and personal independence of elderly people by supporting everyday activities. To guarantee safe and intuitive interaction between human and robot, human intentions need to be recognized automatically. Since humans communicate their intentions multimodally, using multiple modalities for intention recognition not only increases robustness against the failure of individual modalities but, more importantly, can reduce the uncertainty about the intention to be predicted. This is desirable because, particularly in direct interaction between robots and potentially vulnerable humans, both minimal uncertainty about the situation and knowledge of how large that uncertainty actually is are necessary. Thus, in contrast to existing methods, this work introduces a new approach to multimodal intention recognition that focuses on uncertainty reduction through classifier fusion. For each of the four considered modalities — speech, gestures, gaze directions, and scene objects — an individual intention classifier is trained, each of which outputs a probability distribution over all possible intentions. By combining these output distributions with the Bayesian fusion method Independent Opinion Pool, the uncertainty about the intention to be recognized can be decreased. The approach is evaluated in a collaborative human-robot interaction task with a 7-DoF robot arm. The results show that fused classifiers combining multiple modalities outperform the respective individual base classifiers in terms of accuracy, robustness, and reduced uncertainty.
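The fusion step described above can be sketched in a few lines. In the standard formulation of Independent Opinion Pool, the classifiers' output distributions are multiplied elementwise, divided by the prior raised to the power of (number of classifiers − 1), and renormalized. The sketch below is a minimal illustration of that general rule, not the paper's exact implementation; the classifier outputs and the uniform prior are assumptions for the example.

```python
import numpy as np

def independent_opinion_pool(distributions, prior=None):
    """Fuse classifier posteriors via Independent Opinion Pool.

    distributions: iterable of probability vectors, one per classifier,
                   each a distribution over the same set of intentions.
    prior:         prior over intentions; uniform if not given.
    Returns the normalized fused distribution.
    """
    P = np.asarray(distributions, dtype=float)  # shape: (n_classifiers, n_intentions)
    n, k = P.shape
    if prior is None:
        prior = np.full(k, 1.0 / k)  # uniform prior (cancels on normalization)
    # Product of posteriors, corrected for counting the prior n times
    fused = np.prod(P, axis=0) / np.asarray(prior, dtype=float) ** (n - 1)
    return fused / fused.sum()

# Hypothetical example: speech and gaze classifiers over three intentions.
speech = [0.6, 0.3, 0.1]
gaze = [0.5, 0.4, 0.1]
fused = independent_opinion_pool([speech, gaze])
```

Because agreeing classifiers reinforce each other multiplicatively, the fused distribution is more peaked than either input — here the probability of the first intention rises above both individual estimates, which is exactly the uncertainty reduction the abstract refers to.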


