Multimodal Emotion Recognition Using Deep Canonical Correlation Analysis

08/13/2019
by Wei Liu, et al.

Multimodal signals are more powerful than unimodal data for emotion recognition, since they represent emotions more comprehensively. In this paper, we introduce deep canonical correlation analysis (DCCA) to multimodal emotion recognition. The basic idea behind DCCA is to transform each modality separately and coordinate the different modalities into a shared hyperspace under specified canonical correlation analysis constraints. We evaluate the performance of DCCA on five multimodal datasets: SEED, SEED-IV, SEED-V, DEAP, and DREAMER. Our experimental results demonstrate that DCCA achieves state-of-the-art recognition accuracies on all five datasets, including 94.58% on the SEED dataset, 88.51% on the four-category classification task of the DEAP dataset, 83.08% on the SEED-V dataset, and 88.99% on the DREAMER dataset. We also compare the noise robustness of DCCA with that of existing methods by adding various amounts of noise to the SEED-V dataset; the results indicate that DCCA is more robust. By visualizing feature distributions with t-SNE and computing the mutual information between modalities before and after applying DCCA, we find that the DCCA-transformed features from different modalities are more homogeneous and more discriminative across emotions.
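At the core of the approach described above is the CCA objective that DCCA optimizes: each modality is mapped through its own network, and the networks are trained to maximize the total canonical correlation between their outputs. Below is a minimal NumPy sketch of that correlation measure only, assuming two already-transformed views; the per-modality deep networks, the paper's training procedure, and the toy data here (`cca_correlation`, the stand-in "EEG" features) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cca_correlation(H1, H2, reg=1e-4):
    """Total canonical correlation between two views.

    H1, H2: (n_samples, d) outputs of the per-modality transforms.
    Returns the sum of singular values of T = S11^{-1/2} S12 S22^{-1/2};
    a DCCA-style loss would minimize the negative of this quantity.
    """
    n = H1.shape[0]
    H1c = H1 - H1.mean(axis=0)               # center each view
    H2c = H2 - H2.mean(axis=0)
    S12 = H1c.T @ H2c / (n - 1)              # cross-covariance
    S11 = H1c.T @ H1c / (n - 1) + reg * np.eye(H1.shape[1])
    S22 = H2c.T @ H2c / (n - 1) + reg * np.eye(H2.shape[1])

    def inv_sqrt(S):
        # inverse matrix square root via eigendecomposition (S is SPD)
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return np.linalg.svd(T, compute_uv=False).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                            # stand-in "EEG" features
Y = X @ rng.normal(size=(8, 8)) + 0.1 * rng.normal(size=(256, 8))  # correlated view
Z = rng.normal(size=(256, 8))                            # unrelated view
print(cca_correlation(X, Y) > cca_correlation(X, Z))     # correlated views score higher
```

A pair of views related by a linear map scores a total correlation near the feature dimension (here 8), while two independent views score much lower; a DCCA implementation would backpropagate through this quantity to train the modality networks.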


