Interpretable Multimodal Emotion Recognition using Hybrid Fusion of Speech and Image Data

08/25/2022
by Puneet Kumar, et al.

This paper proposes a multimodal emotion recognition system based on hybrid fusion that classifies the emotions depicted by speech utterances and corresponding images into discrete classes. A new interpretability technique has been developed to identify the important speech and image features leading to the prediction of particular emotion classes. The proposed system's architecture has been determined through intensive ablation studies. It fuses the speech and image features and then combines the speech, image, and intermediate fusion outputs. The proposed interpretability technique incorporates a divide-and-conquer approach to compute Shapley values denoting each speech and image feature's importance. We have also constructed a large-scale dataset (IIT-R SIER dataset) consisting of speech utterances, corresponding images, and class labels, i.e., 'anger,' 'happy,' 'hate,' and 'sad.' The proposed system has achieved 83.29% accuracy for emotion recognition. The results of the proposed system advocate the importance of utilizing complementary information from multiple modalities for emotion recognition.
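As a rough illustration of the hybrid fusion scheme described above (per-modality branches, an intermediate fusion of speech and image features, and a final combination of speech, image, and intermediate fusion outputs), the following PyTorch sketch shows one plausible shape of such a model. All dimensions, layer sizes, and class names here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a hybrid-fusion emotion classifier, assuming
# pre-extracted speech and image feature vectors. Layer sizes and
# feature dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class HybridFusionClassifier(nn.Module):
    def __init__(self, speech_dim=128, image_dim=512, hidden=256, n_classes=4):
        super().__init__()
        self.speech_branch = nn.Sequential(nn.Linear(speech_dim, hidden), nn.ReLU())
        self.image_branch = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        # Intermediate fusion: jointly encode the two modality embeddings.
        self.intermediate = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        # Hybrid (late) fusion: combine speech, image, and intermediate outputs.
        self.classifier = nn.Linear(3 * hidden, n_classes)

    def forward(self, speech_feats, image_feats):
        s = self.speech_branch(speech_feats)
        i = self.image_branch(image_feats)
        f = self.intermediate(torch.cat([s, i], dim=-1))
        return self.classifier(torch.cat([s, i, f], dim=-1))

model = HybridFusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 512))  # batch of 8 samples
```

The point of combining the raw modality embeddings with the fused representation at the final layer is that the classifier can fall back on whichever modality is more informative for a given sample, rather than relying solely on the fused code.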
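The interpretability technique rests on Shapley values, which attribute a model's prediction to individual input features via their average marginal contributions over feature coalitions. The paper computes them with a divide-and-conquer scheme; the sketch below instead uses a standard Monte Carlo permutation approximation purely to illustrate the quantity being estimated. The `predict` callable (returning a scalar score, e.g., the probability of one emotion class) and the `baseline` vector are assumptions.

```python
# Hedged sketch: Monte Carlo estimate of Shapley values for feature
# importance. Not the paper's divide-and-conquer algorithm; it only
# illustrates what a Shapley value measures.
import numpy as np

def shapley_values(predict, x, baseline, n_samples=200, rng=None):
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)     # random feature ordering
        z = baseline.copy()           # start from the "absent" baseline
        prev = predict(z)
        for j in perm:
            z[j] = x[j]               # add feature j to the coalition
            curr = predict(z)
            phi[j] += curr - prev     # marginal contribution of feature j
            prev = curr
    return phi / n_samples            # average over sampled orderings
```

A feature with a large positive value pushed the score toward the predicted emotion class; summing all values recovers the gap between the prediction on the full input and on the baseline, which is what makes the attribution self-consistent.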

