EmoNets: Multimodal deep learning approaches for emotion recognition in video

03/05/2015
by Samira Ebrahimi Kahou, et al.

The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces; a deep belief net, focusing on the representation of the audio stream; a K-Means based "bag-of-mouths" model, which extracts visual features around the mouth region; and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for combining the cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67%.
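As a rough illustration of the kind of late fusion the abstract describes (combining per-modality cues into one common classifier), the sketch below averages hypothetical per-modality class probabilities over the seven EmotiW emotions with fixed weights. The modality names, the weights, and the helper fuse_predictions are illustrative assumptions, not the authors' published fusion method.

```python
import numpy as np

# Illustrative sketch only: weighted averaging of per-modality class probabilities.
# Modality names and weights are placeholders, not the paper's actual configuration.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def fuse_predictions(modality_probs, weights):
    """modality_probs: dict of modality name -> (n_clips, 7) probability array.
    weights: dict of modality name -> scalar weight (e.g. tuned on validation data)."""
    total = np.zeros(next(iter(modality_probs.values())).shape)
    for name, probs in modality_probs.items():
        total += weights[name] * probs
    total /= sum(weights.values())   # renormalise the weighted sum
    return total.argmax(axis=1)      # predicted emotion index for each clip

# Usage with random stand-in predictions for two clips.
rng = np.random.default_rng(0)
probs = {m: rng.dirichlet(np.ones(7), size=2)
         for m in ["faces_cnn", "audio_dbn", "bag_of_mouths"]}
weights = {"faces_cnn": 0.5, "audio_dbn": 0.3, "bag_of_mouths": 0.2}
print([EMOTIONS[i] for i in fuse_predictions(probs, weights)])
```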

