Emotion Understanding in Videos Through Body, Context, and Visual-Semantic Embedding Loss

We present our winning submission to the First International Workshop on Bodily Expressed Emotion Understanding (BEEU) challenge. Motivated by recent literature on the effect of context/environment on emotion, as well as by visual representations with semantic meaning obtained through word embeddings, we extend the Temporal Segment Networks framework to incorporate scene context and a visual-semantic embedding loss. Our method is verified on the validation set of the Body Language Dataset (BoLD) and achieves an Emotion Recognition Score of 0.26235 on the test set, surpassing the previous best result of 0.2530.
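As a rough illustration of the visual-semantic embedding idea mentioned above, the sketch below (not the authors' code; all names such as `VisualSemanticLoss` and the GloVe-sized 300-d embedding space are assumptions for illustration) projects pooled video features into a word-embedding space and pulls them toward the word vectors of the annotated emotion categories via a cosine-distance objective.

```python
# Minimal sketch of a visual-semantic embedding loss (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualSemanticLoss(nn.Module):
    def __init__(self, feat_dim: int, embed_dim: int = 300):
        super().__init__()
        # Linear projection from the backbone feature space (e.g., a TSN output)
        # into a word-embedding space (e.g., 300-d GloVe vectors) -- an assumption.
        self.project = nn.Linear(feat_dim, embed_dim)

    def forward(self, video_feats: torch.Tensor,
                label_embeddings: torch.Tensor,
                label_weights: torch.Tensor) -> torch.Tensor:
        # video_feats:      (batch, feat_dim) pooled features from the video model
        # label_embeddings: (num_classes, embed_dim) word vectors of emotion names
        # label_weights:    (batch, num_classes) soft or binary emotion annotations
        pred = F.normalize(self.project(video_feats), dim=-1)
        target = F.normalize(label_embeddings, dim=-1)
        # Cosine similarity between each clip and every emotion word vector.
        sim = pred @ target.t()  # (batch, num_classes)
        # Encourage high similarity to the annotated emotions, weighted by the labels.
        weights = label_weights / label_weights.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return (weights * (1.0 - sim)).sum(dim=1).mean()
```

In this sketch the loss can simply be added to the usual classification/regression objectives during training; how the actual submission combines its loss terms is described in the full paper.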
