Transformer for Emotion Recognition

05/03/2018
by   Jean-Benoit Delbrouck, et al.

This paper describes the UMONS solution for the OMG-Emotion Challenge. We explore a context-dependent architecture in which the arousal and valence of an utterance are predicted from its surrounding context (i.e. the preceding and following utterances in the video). We report improvements from taking context into account for both unimodal and multimodal predictions.
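The core idea of the context-dependent setup can be illustrated with a minimal sketch: for each utterance, gather the feature vectors of its preceding and following utterances so that a downstream model predicts arousal and valence from the whole window. This is an illustrative assumption about the preprocessing, not the paper's exact pipeline; the window size `k` and zero-padding at video boundaries are hypothetical choices.

```python
def build_context_windows(features, k=1):
    """For each utterance in a video, collect the k preceding and
    k following utterance feature vectors (zero-padded at the
    boundaries), so a model can use surrounding context when
    predicting arousal/valence.

    features: list of per-utterance feature vectors (lists of floats)
    returns:  list of windows, each of length 2*k + 1
    """
    n = len(features)
    dim = len(features[0])
    pad = [0.0] * dim  # placeholder for missing context at video edges
    windows = []
    for i in range(n):
        window = []
        for j in range(i - k, i + k + 1):
            window.append(features[j] if 0 <= j < n else pad)
        windows.append(window)
    return windows

# Example: 3 utterances with 2-dim features, one utterance of context per side.
utts = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
wins = build_context_windows(utts, k=1)
```

Each window can then be fed to a sequence model (e.g. a transformer encoder) whose output for the center position is regressed onto the arousal/valence targets.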


