Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities

10/27/2022
by Haolin Zuo, et al.

Multimodal emotion recognition leverages complementary information across modalities to improve performance. In practice, however, we cannot guarantee that data from all modalities are always available. In studies that predict the missing data from the available modalities, the inherent difference between heterogeneous modalities, namely the modality gap, presents a challenge. To address this, we propose a missing modality imagination network that exploits invariant features (IF-MMIN), which includes two novel mechanisms: 1) an invariant feature learning strategy based on the central moment discrepancy (CMD) distance under the full-modality scenario; 2) an invariant-feature-based imagination module (IF-IM) that alleviates the modality gap during missing-modality prediction, thus improving the robustness of the multimodal joint representation. Comprehensive experiments on the benchmark dataset IEMOCAP demonstrate that the proposed model outperforms all baselines and invariably improves overall emotion recognition performance under uncertain missing-modality conditions. We release the code at: https://github.com/ZhuoYulang/IF-MMIN.
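To make the CMD-based invariant feature learning concrete, below is a minimal sketch of how a central moment discrepancy distance between two batches of modality features (e.g., audio and text embeddings) might be computed in PyTorch. The function name `cmd`, the choice of five moments, and the omission of the (b-a) interval-normalization factors from the original CMD formulation are illustrative assumptions, not the released IF-MMIN implementation.

```python
import torch

def cmd(x: torch.Tensor, y: torch.Tensor, n_moments: int = 5) -> torch.Tensor:
    """Central moment discrepancy between two feature batches.

    x, y: tensors of shape (batch, feature_dim), e.g. per-utterance
    features from two different modalities.
    Returns a scalar distance; smaller values mean the two feature
    distributions are more similar.
    """
    # first-order term: distance between the empirical means
    mx = x.mean(dim=0)
    my = y.mean(dim=0)
    dist = torch.norm(mx - my, p=2)

    # higher-order terms: distances between central moments 2..K
    cx = x - mx
    cy = y - my
    for k in range(2, n_moments + 1):
        moment_x = (cx ** k).mean(dim=0)
        moment_y = (cy ** k).mean(dim=0)
        dist = dist + torch.norm(moment_x - moment_y, p=2)
    return dist
```

Under the full-modality training scenario described in the abstract, such a distance would be minimized pairwise across the modality-specific features so that the learned representation becomes modality-invariant before the imagination module is trained to predict missing modalities.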
