Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

10/12/2017
by Thomas Guntz, et al.

In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention and emotion, the multimodal approach can reach up to 93% accuracy when determining a player's chess expertise, while the unimodal approach reaches 86%. This study also validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
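The abstract does not specify how the modalities are combined, but the unimodal-versus-multimodal comparison it reports is commonly run as feature-level (early) fusion: per-trial feature vectors from each sensor stream are concatenated and fed to a single classifier. The sketch below illustrates that setup; the feature dimensions, the random-forest classifier, and the synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the paper's code): compare unimodal classifiers against
# early fusion of the three modalities named in the abstract -- body posture,
# visual attention (gaze), and facial emotion -- for an expert/novice label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical per-trial feature vectors extracted from each sensor stream.
posture = rng.normal(size=(n_trials, 10))   # e.g. joint-angle statistics
gaze = rng.normal(size=(n_trials, 8))       # e.g. fixation/saccade summaries
emotion = rng.normal(size=(n_trials, 7))    # e.g. mean facial action units
labels = rng.integers(0, 2, size=n_trials)  # expert vs. novice (synthetic)

def cv_accuracy(features):
    """5-fold cross-validated accuracy for one feature set."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, features, labels, cv=5).mean()

# Unimodal baselines vs. concatenation (early) fusion of all modalities.
for name, feats in [("posture", posture),
                    ("gaze", gaze),
                    ("emotion", emotion),
                    ("fused", np.hstack([posture, gaze, emotion]))]:
    print(f"{name:8s} accuracy: {cv_accuracy(feats):.2f}")
```

On real recordings, the fused feature set can outperform any single modality because the streams carry complementary evidence (e.g. gaze indicates where attention is, while posture and emotion indicate engagement and stress), which is the effect the reported 93% versus 86% comparison reflects.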

Related research

The Role of Emotion in Problem Solving: First Results from Observing Chess (10/17/2018)
In this paper we present results from recent experiments that suggest th...

Knowledge-aware Bayesian Co-attention for Multimodal Emotion Recognition (02/20/2023)
Multimodal emotion recognition is a challenging research area that aims ...

Key-Sparse Transformer with Cascaded Cross-Attention Block for Multimodal Speech Emotion Recognition (06/22/2021)
Speech emotion recognition is a challenging and important research topic...

Multimodal Latent Emotion Recognition from Micro-expression and Physiological Signals (08/23/2023)
This paper discusses the benefits of incorporating multimodal data for i...

InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis (12/20/2022)
Humans are sophisticated at reading interlocutors' emotions from multimo...

Multimodal Emotion Recognition Using Multimodal Deep Learning (02/26/2016)
To enhance the performance of affective models and reduce the cost of ac...

Cognitive Principles in Robust Multimodal Interpretation (09/28/2011)
Multimodal conversational interfaces provide a natural means for users t...
