Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition

09/21/2020
by   Wenliang Dai, et al.
Despite recent achievements in the multimodal emotion recognition task, two problems remain under-investigated: 1) the relationships between different emotion categories are not utilized, which leads to sub-optimal performance; and 2) current models fail to cope well with low-resource emotions, especially unseen ones. In this paper, we propose a modality-transferable model with emotion embeddings to tackle these issues. We use pre-trained word embeddings to represent emotion categories for textual data. Then, two mapping functions are learned to transfer these embeddings into the visual and acoustic spaces. For each modality, the model calculates the representation distance between the input sequence and the target emotions and makes predictions based on the distances. By doing so, our model can directly adapt to unseen emotions in any modality, since we have their pre-trained embeddings and the modality mapping functions. Experiments show that our model achieves state-of-the-art performance on most of the emotion categories. In addition, it also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
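The distance-based prediction scheme described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): emotion labels are represented by toy word embeddings, a learned linear mapping projects them into a non-textual modality's space, and the predicted emotion is the one whose mapped embedding is most similar to the input sequence representation. All names, dimensions, and values here are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_emotion(seq_repr, emotion_embeddings, mapping):
    """Score each emotion by the similarity between the input sequence
    representation and that emotion's embedding mapped into this
    modality's space; predict the closest one."""
    scores = {
        emotion: cosine_similarity(seq_repr, mapping @ emb)
        for emotion, emb in emotion_embeddings.items()
    }
    return max(scores, key=scores.get), scores

# Toy 2-D "word embeddings" for emotion labels (hypothetical values).
emotion_embeddings = {
    "happy": np.array([1.0, 0.0]),
    "sad":   np.array([0.0, 1.0]),
}

# Hypothetical learned mapping from the textual embedding space into,
# say, the acoustic feature space (identity matrix here for simplicity;
# in the paper this mapping is learned).
mapping = np.eye(2)

# An acoustic sequence representation lying near the mapped "happy" embedding.
seq_repr = np.array([0.9, 0.1])

pred, scores = predict_emotion(seq_repr, emotion_embeddings, mapping)
print(pred)  # the emotion whose mapped embedding is closest
```

Because prediction only requires an emotion's embedding and the modality mapping, a previously unseen emotion can be added to `emotion_embeddings` at inference time without retraining, which is what enables the zero-shot behavior described in the abstract.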
