FaceRNET: a Facial Expression Intensity Estimation Network

03/01/2023
by Dimitrios Kollias, et al.

This paper presents our approach to Facial Expression Intensity Estimation from videos. It comprises two components: i) a representation extractor network that extracts various emotion descriptors (valence-arousal, action units and basic expressions) from each video frame; ii) an RNN that captures temporal information in the data, followed by a mask layer that enables handling of varying input video lengths through dynamic routing. This approach has been tested on the Hume-Reaction dataset, yielding excellent results.
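To illustrate the second component, the sketch below shows one common way a mask layer lets an RNN process zero-padded videos of different lengths in a single batch. This is a minimal NumPy illustration under assumptions of ours, not the paper's implementation: the weights are random placeholders, the per-frame descriptors are assumed precomputed by the extractor network, and the paper's dynamic routing is simplified here to masked temporal pooling.

```python
import numpy as np

def masked_rnn_intensity(descriptors, lengths):
    """Run a vanilla RNN over per-frame emotion descriptors, then apply a
    mask layer that pools hidden states over valid frames only, so videos
    of different lengths can share one zero-padded batch.

    descriptors: (batch, max_len, dim) padded per-frame features
    lengths:     (batch,) true number of frames per video
    """
    batch, max_len, dim = descriptors.shape
    hidden = 8
    rng = np.random.default_rng(0)          # fixed placeholder weights
    Wx = 0.1 * rng.standard_normal((dim, hidden))
    Wh = 0.1 * rng.standard_normal((hidden, hidden))
    w_out = 0.1 * rng.standard_normal(hidden)

    h = np.zeros((batch, hidden))
    states = []
    for t in range(max_len):                # causal recurrence over frames
        h = np.tanh(descriptors[:, t] @ Wx + h @ Wh)
        states.append(h)
    states = np.stack(states, axis=1)       # (batch, max_len, hidden)

    # Mask layer: 1 for real frames, 0 for padding; padded steps are
    # excluded from the temporal average.
    mask = (np.arange(max_len)[None, :] < lengths[:, None]).astype(float)
    pooled = (states * mask[:, :, None]).sum(axis=1) / lengths[:, None]
    return pooled @ w_out                   # one intensity score per video
```

Because the recurrence is causal and the mask excludes padded steps from the pooling, the content of the padding cannot influence a video's intensity score.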


Related research

- FEAFA+: An Extended Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation (11/04/2021)
  Nearly all existing Facial Action Coding System-based datasets that incl...

- Human Reaction Intensity Estimation with Ensemble of Multi-task Networks (03/16/2023)
  Facial expression in-the-wild is essential for various interactive compu...

- Inferring Dynamic Representations of Facial Actions from a Still Image (04/04/2019)
  Facial actions are spatio-temporal signals by nature, and therefore thei...

- Deep Spatiotemporal Representation of the Face for Automatic Pain Intensity Estimation (06/18/2018)
  Automatic pain intensity assessment has a high value in disease diagnosi...

- Objective Classes for Micro-Facial Expression Recognition (08/24/2017)
  Micro-expressions are brief spontaneous facial expressions that appear o...

- Temporal Convolution Networks with Positional Encoding for Evoked Expression Estimation (06/16/2021)
  This paper presents an approach for Evoked Expressions from Videos (EEV)...

- Affective Processes: stochastic modelling of temporal context for emotion and facial expression recognition (03/24/2021)
  Temporal context is key to the recognition of expressions of emotion. Ex...
