AttX: Attentive Cross-Connections for Fusion of Wearable Signals in Emotion Recognition

06/09/2022
by Anubhav Bhatti, et al.

We propose cross-modal attentive connections, a new dynamic and effective technique for multimodal representation learning from wearable data. Our solution can be integrated into any stage of the pipeline, i.e., after any convolutional layer or block, to create intermediate connections between the individual streams responsible for processing each modality. Additionally, our method benefits from two properties. First, it can share information uni-directionally (from one modality to the other) or bi-directionally. Second, it can be integrated into multiple stages at the same time, allowing network gradients to be exchanged at several touch points. We perform extensive experiments on three public multimodal wearable datasets, WESAD, SWELL-KW, and CASE, and demonstrate that our method can effectively regulate and share information between different modalities to learn better representations. Our experiments further demonstrate that, once integrated into simple CNN-based multimodal solutions (2, 3, or 4 modalities), our method achieves performance superior or competitive to the state of the art and outperforms a variety of uni-modal and classical multimodal baselines.
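For illustration, the following is a minimal PyTorch-style sketch of what an attentive cross-connection between two CNN feature streams could look like. The module name, the squeeze-style gating mechanism, the global average pooling, and the residual formulation are assumptions made for this sketch and are not taken from the paper; the actual AttX formulation may differ.

```python
import torch
import torch.nn as nn


class CrossModalAttentiveConnection(nn.Module):
    """Hypothetical sketch: exchange information between two 1D-CNN streams
    (e.g., ECG and EDA feature maps) via channel-wise attention gates."""

    def __init__(self, channels_a: int, channels_b: int, bidirectional: bool = True):
        super().__init__()
        self.bidirectional = bidirectional
        # Gate for stream B, computed from a summary of stream A.
        self.attn_a_to_b = nn.Sequential(nn.Linear(channels_a, channels_b), nn.Sigmoid())
        if bidirectional:
            # Gate for stream A, computed from a summary of stream B.
            self.attn_b_to_a = nn.Sequential(nn.Linear(channels_b, channels_a), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a: (batch, channels_a, time), feat_b: (batch, channels_b, time)
        ctx_a = feat_a.mean(dim=-1)  # global average pooling over time
        ctx_b = feat_b.mean(dim=-1)

        # Stream A modulates stream B's channels (uni-directional sharing).
        gate_b = self.attn_a_to_b(ctx_a).unsqueeze(-1)   # (batch, channels_b, 1)
        out_b = feat_b + feat_b * gate_b                  # residual keeps a direct gradient path

        if self.bidirectional:
            # Stream B modulates stream A's channels (bi-directional sharing).
            gate_a = self.attn_b_to_a(ctx_b).unsqueeze(-1)
            out_a = feat_a + feat_a * gate_a
        else:
            out_a = feat_a
        return out_a, out_b


# Example: connect two intermediate CNN feature maps of different widths.
if __name__ == "__main__":
    ecg_feats = torch.randn(8, 32, 128)   # (batch, channels, time)
    eda_feats = torch.randn(8, 16, 128)
    attx = CrossModalAttentiveConnection(channels_a=32, channels_b=16, bidirectional=True)
    ecg_out, eda_out = attx(ecg_feats, eda_feats)
    print(ecg_out.shape, eda_out.shape)   # torch.Size([8, 32, 128]) torch.Size([8, 16, 128])
```

Because the module preserves each stream's shape, several such connections could in principle be inserted after different convolutional blocks, matching the multi-stage integration described in the abstract.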


Related research

08/04/2021 · Attentive Cross-modal Connections for Deep Multimodal Wearable-based Emotion Recognition
08/12/2018 · Multimodal Language Analysis with Recurrent Multistage Fusion
07/28/2023 · CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition
10/13/2020 · Jointly Optimizing Sensing Pipelines for Multimodal Mixed Reality Interaction
03/29/2023 · Transformer-based Self-supervised Multimodal Representation Learning for Wearable Emotion Recognition
12/16/2022 · EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis
11/19/2019 · Modal-aware Features for Multimodal Hashing
