Continuous Multimodal Emotion Recognition Approach for AVEC 2017

09/18/2017
by Narotam Singh et al.

This paper reports an analysis of audio and visual features for predicting continuous emotion dimensions in the seventh Audio/Visual Emotion Challenge (AVEC 2017), carried out as part of a B.Tech. second-year internship project. For visual features we used HOG (Histogram of Oriented Gradients) features, Fisher encodings of SIFT (Scale-Invariant Feature Transform) features based on a Gaussian mixture model (GMM), and the activations of pretrained Convolutional Neural Network layers, all extracted for each video clip. For audio features we used the Bag-of-Audio-Words (BoAW) representation of the low-level descriptors (LLDs), generated with the openXBOW toolkit provided by the organisers of the challenge. We then trained a fully connected neural network regression model on the dataset for each of these modalities. Finally, we applied multimodal fusion to the outputs of these models and report the Concordance Correlation Coefficient (CCC) on both the Development and Test sets.
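As a minimal sketch of how the reported evaluation and a decision-level fusion step could look, the snippet below computes the Concordance Correlation Coefficient (the official AVEC 2017 metric) and combines per-modality predictions with a weighted average. The weight values, array names, and the choice of weighted averaging as the fusion scheme are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient between two 1-D sequences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_true, mean_pred = y_true.mean(), y_pred.mean()
    var_true, var_pred = y_true.var(), y_pred.var()
    covariance = np.mean((y_true - mean_true) * (y_pred - mean_pred))
    return 2 * covariance / (var_true + var_pred + (mean_true - mean_pred) ** 2)

def late_fusion(predictions, weights):
    """Weighted average of per-modality predictions (decision-level fusion).

    predictions: list of 1-D arrays, one per modality (e.g. HOG, BoAW, CNN).
    weights:     non-negative weights, e.g. tuned on the Development set.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    stacked = np.stack([np.asarray(p, dtype=float) for p in predictions])
    return np.tensordot(weights, stacked, axes=1)

# Hypothetical arousal predictions from three modality-specific models,
# fused and scored against gold annotations with CCC.
gold = np.array([0.10, 0.25, 0.30, 0.20, 0.05])
preds = [
    np.array([0.12, 0.20, 0.35, 0.18, 0.10]),  # e.g. visual (HOG) model
    np.array([0.08, 0.30, 0.28, 0.22, 0.02]),  # e.g. audio (BoAW) model
    np.array([0.15, 0.22, 0.33, 0.19, 0.07]),  # e.g. CNN-feature model
]
fused = late_fusion(preds, weights=[0.4, 0.4, 0.2])
print("CCC per modality:", [round(ccc(gold, p), 3) for p in preds])
print("CCC after fusion:", round(ccc(gold, fused), 3))
```

In practice the fusion weights would be chosen on the Development set before scoring the Test predictions.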
