Feature-level and Model-level Audiovisual Fusion for Emotion Recognition in the Wild

06/06/2019
by Jie Cai, et al.

Emotion recognition plays an important role in human-computer interaction (HCI) and has been studied extensively for decades. Although tremendous improvements have been achieved for posed expressions, recognizing human emotions in "close-to-real-world" environments remains a challenge. In this paper, we propose two strategies for fusing information extracted from different modalities, i.e., audio and visual. Specifically, we use LBP-TOP, an ensemble of CNNs, and a bi-directional LSTM (BLSTM) to extract features from the visual channel, and the OpenSmile toolkit to extract features from the audio channel. Two fusion methods, i.e., feature-level fusion and model-level fusion, are developed to combine the information from the two channels. Experimental results on the EmotiW2018 AFEW dataset show that the proposed fusion methods significantly outperform the baseline methods and achieve better or at least comparable performance relative to state-of-the-art methods, with model-level fusion performing better when one of the channels fails entirely.
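
The contrast between the two strategies can be illustrated with a minimal sketch on pre-extracted features. The snippet below is not the authors' implementation: the feature dimensions, classifier layers, and fusion weight are illustrative assumptions. Feature-level fusion concatenates the audio and visual feature vectors before a single classifier; model-level fusion trains one classifier per modality and combines their class posteriors.

# Minimal sketch (assumed dimensions and architectures, not the paper's).
import torch
import torch.nn as nn

NUM_CLASSES = 7      # AFEW emotion categories
VISUAL_DIM = 512     # assumed size of the pooled visual feature
AUDIO_DIM = 1582     # assumed size of the OpenSmile feature vector

class FeatureLevelFusion(nn.Module):
    """Concatenate audio and visual features, then classify jointly."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(VISUAL_DIM + AUDIO_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_CLASSES),
        )

    def forward(self, visual_feat, audio_feat):
        fused = torch.cat([visual_feat, audio_feat], dim=-1)
        return self.classifier(fused)

class ModelLevelFusion(nn.Module):
    """One classifier per modality; combine their class posteriors."""
    def __init__(self, visual_weight=0.6):
        super().__init__()
        self.visual_head = nn.Linear(VISUAL_DIM, NUM_CLASSES)
        self.audio_head = nn.Linear(AUDIO_DIM, NUM_CLASSES)
        self.visual_weight = visual_weight

    def forward(self, visual_feat, audio_feat):
        p_visual = torch.softmax(self.visual_head(visual_feat), dim=-1)
        p_audio = torch.softmax(self.audio_head(audio_feat), dim=-1)
        # Weighted sum of posteriors: if one channel degrades or fails,
        # the other modality's prediction still drives the final decision.
        return self.visual_weight * p_visual + (1 - self.visual_weight) * p_audio

if __name__ == "__main__":
    visual = torch.randn(4, VISUAL_DIM)  # batch of 4 clips
    audio = torch.randn(4, AUDIO_DIM)
    print(FeatureLevelFusion()(visual, audio).shape)  # torch.Size([4, 7])
    print(ModelLevelFusion()(visual, audio).shape)    # torch.Size([4, 7])

The design trade-off mirrors the abstract: the joint classifier can exploit cross-modal correlations, while the posterior combination degrades gracefully when one channel carries no usable signal.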

