Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals

06/05/2023
by   Puneet Kumar, et al.

This paper demonstrates the importance and feasibility of fusing multimodal information for emotion recognition. It introduces a multimodal framework for emotion understanding that fuses visual facial features with rPPG signals extracted from input videos. An interpretability technique based on permutation feature importance analysis is implemented to compute the contribution of the rPPG and visual modalities toward classifying a given input video into a particular emotion class. Experiments on the IEMOCAP dataset demonstrate that emotion classification performance improves when the complementary information from the two modalities is combined.
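The abstract's permutation feature importance idea can be sketched at the modality level: shuffle one modality's features across samples (breaking their link to the labels) and measure the resulting drop in classification accuracy. The sketch below uses synthetic data and a toy stand-in classifier; the paper's actual fusion network, features, and dataset are not reproduced, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: validation features for two modalities and a toy
# classifier. In the toy data, labels depend mostly on the visual block.
n, d_visual, d_rppg = 200, 8, 4
X_visual = rng.normal(size=(n, d_visual))
X_rppg = rng.normal(size=(n, d_rppg))
y = (X_visual[:, 0] + 0.3 * X_rppg[:, 0] > 0).astype(int)

def predict(xv, xr):
    # Stand-in for a trained multimodal emotion classifier.
    return (xv[:, 0] + 0.3 * xr[:, 0] > 0).astype(int)

def accuracy(xv, xr):
    return float(np.mean(predict(xv, xr) == y))

baseline = accuracy(X_visual, X_rppg)

def modality_importance(shuffle_visual, repeats=20):
    # Permute one modality's rows and average the accuracy drop.
    drops = []
    for _ in range(repeats):
        perm = rng.permutation(n)
        xv = X_visual[perm] if shuffle_visual else X_visual
        xr = X_rppg if shuffle_visual else X_rppg[perm]
        drops.append(baseline - accuracy(xv, xr))
    return float(np.mean(drops))

imp_visual = modality_importance(shuffle_visual=True)
imp_rppg = modality_importance(shuffle_visual=False)
```

In this toy setup the visual modality's importance exceeds the rPPG modality's, since the labels were constructed to depend more heavily on the visual block; on real data the relative contributions are an empirical question.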
