Abstract: UMONS submission for the OMG-Emotion Challenge

05/03/2018
by Delbrouck Jean-Benoit, et al.

This paper describes the UMONS solution for the OMG-Emotion Challenge. We explore a context-dependent architecture in which the arousal and valence of an utterance are predicted from its surrounding context (i.e., the preceding and following utterances in the video). We report an improvement when taking context into account, for both unimodal and multimodal predictions.
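To illustrate the context-dependent idea, here is a minimal sketch of building context windows over per-utterance features, where each utterance is represented together with its preceding and following utterances before prediction. The feature dimension, the zero-padding at video boundaries, and the linear-plus-tanh regressor are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_context_windows(utterance_feats):
    """Concatenate each utterance with its preceding and following
    utterances, zero-padded at the video boundaries.

    utterance_feats: (T, D) array, one row per utterance in a video.
    Returns: (T, 3*D) array of [previous, current, next] features.
    """
    T, D = utterance_feats.shape
    pad = np.zeros((1, D))
    padded = np.vstack([pad, utterance_feats, pad])
    prev_, cur, next_ = padded[:-2], padded[1:-1], padded[2:]
    return np.hstack([prev_, cur, next_])

# Hypothetical regressor mapping a context window to (arousal, valence);
# tanh keeps both outputs in [-1, 1], the usual range for these labels.
D = 8
W = rng.normal(size=(3 * D, 2)) * 0.1

feats = rng.normal(size=(5, D))      # 5 utterances from one video
ctx = build_context_windows(feats)   # shape (5, 3*D)
preds = np.tanh(ctx @ W)             # shape (5, 2): (arousal, valence)
```

In the paper's setting, the per-utterance features would come from the unimodal or multimodal encoders, and the predictor would be the proposed architecture rather than a single linear layer.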


