An Efficient Multimodal Framework for Large-Scale Emotion Recognition by Fusing Music and Electrodermal Activity Signals

08/22/2020
by   Guanghao Yin, et al.

Considerable attention has been paid to physiological signal-based emotion recognition in the field of affective computing. Owing to its reliability and user-friendly acquisition, Electrodermal Activity (EDA) has great advantages in practical applications. However, EDA-based emotion recognition with hundreds of subjects still lacks an effective solution. In this paper, we attempt to fuse subject-individual EDA features with features of the externally evoking music, and propose an end-to-end multimodal framework: the 1-dimensional residual temporal and channel attention network (RTCAN-1D). For EDA features, the convex optimization-based EDA (CvxEDA) method is applied to decompose EDA signals into phasic and tonic components, mining both dynamic and steady features. A channel-temporal attention mechanism is introduced to EDA-based emotion recognition for the first time to improve the temporal- and channel-wise representations. For music features, we process the music signal with the open-source toolkit openSMILE to obtain external feature vectors. The individual emotion features from EDA signals and the external emotion benchmarks from music are fused in the classifying layers. We have conducted systematic comparisons on three multimodal datasets (PMEmo, DEAP, AMIGOS) for two-class valence/arousal emotion recognition. Our proposed RTCAN-1D outperforms existing state-of-the-art models, which validates that our work provides a reliable and efficient solution for large-scale emotion recognition. Our code has been released at https://github.com/guanghaoyin/RTCAN-1D.
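The abstract describes a late-fusion design: per-subject EDA embeddings and openSMILE-derived music feature vectors are concatenated before the classifying layers. The following is a minimal NumPy sketch of that fusion step only; the feature dimensions, random placeholder weights, and variable names are illustrative assumptions, not values from the paper or its released code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample features: an EDA embedding (as would come
# from the residual attention branch) and an openSMILE-style music
# feature vector. Dimensions are placeholders.
eda_feat = rng.standard_normal(128)    # individual EDA features
music_feat = rng.standard_normal(64)   # external music features

# Late fusion: concatenate both modalities before classification.
fused = np.concatenate([eda_feat, music_feat])  # shape (192,)

# Toy linear classifying layer for 2-class valence/arousal
# (weights are random placeholders, not trained parameters).
W = rng.standard_normal((2, fused.size))
b = np.zeros(2)
logits = W @ fused + b
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over 2 classes
```

In practice the classifying layers would be trained end-to-end with the rest of RTCAN-1D; the point here is only that each modality keeps its own feature extractor and the modalities meet at the classifier input.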
