A Multi-Task Learning & Generation Framework: Valence-Arousal, Action Units & Primary Expressions

11/11/2018
by Dimitrios Kollias, et al.

Over the past few years, many research efforts have been devoted to the field of affect analysis. Various approaches have been proposed for: i) discrete emotion recognition in terms of the primary facial expressions; ii) emotion analysis in terms of facial Action Units (AUs), assuming a fixed expression intensity; iii) dimensional emotion analysis in terms of valence and arousal (VA). These approaches can only be effective if they are developed using large, appropriately annotated databases showing the behavior of people in-the-wild, i.e., in uncontrolled environments. Aff-Wild was the first large-scale, in-the-wild database (comprising around 1,200,000 frames of 300 videos) annotated in terms of VA. The vast majority of existing emotion databases are annotated only in terms of primary expressions, or valence-arousal, or action units. In this paper, we first annotate a part (around 234,000 frames) of the Aff-Wild database in terms of 8 AUs and another part (around 288,000 frames) in terms of the 7 basic emotion categories, so that parts of this database are annotated in terms of VA as well as AUs or primary expressions. We then set up and tackle multi-task learning for emotion recognition, as well as for facial image generation. Multi-task learning is performed using: i) a deep neural network with shared hidden layers, which learns emotional attributes by exploiting their inter-dependencies; ii) the discriminator of a generative adversarial network (GAN). Image generation, in turn, is implemented through the generator of the GAN. For these two tasks, we carefully design loss functions that fit the examined set-up. Experiments illustrate the good performance of the proposed approach when applied to the newly annotated parts of the Aff-Wild database.
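The multi-task recognition network described in the abstract can be pictured as a shared trunk feeding three task-specific heads, trained with a weighted sum of per-task losses. The PyTorch sketch below is a minimal illustration under stated assumptions: the layer sizes, the use of precomputed face features as input, the choice of MSE / binary cross-entropy / cross-entropy as the per-task losses, and the loss weights are all hypothetical, not the paper's exact design (the paper constructs its own loss functions, and in the GAN variant the same heads would sit on top of the discriminator).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskEmotionNet(nn.Module):
    """Shared hidden layers with three heads: VA, 8 AUs, 7 expressions."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        # Shared layers let the three tasks exploit their inter-dependencies.
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.va_head = nn.Linear(hidden_dim, 2)    # valence & arousal, regression
        self.au_head = nn.Linear(hidden_dim, 8)    # 8 AUs, multi-label logits
        self.expr_head = nn.Linear(hidden_dim, 7)  # 7 basic expressions, logits

    def forward(self, feats: torch.Tensor):
        h = self.shared(feats)
        va = torch.tanh(self.va_head(h))  # squash VA predictions into [-1, 1]
        return va, self.au_head(h), self.expr_head(h)


def multitask_loss(va_pred, au_logits, expr_logits,
                   va_true, au_true, expr_true,
                   w_va=1.0, w_au=1.0, w_expr=1.0):
    # Hypothetical weighted combination of per-task losses: regression loss
    # for VA, multi-label BCE for AUs, cross-entropy for expressions.
    loss_va = F.mse_loss(va_pred, va_true)
    loss_au = F.binary_cross_entropy_with_logits(au_logits, au_true)
    loss_expr = F.cross_entropy(expr_logits, expr_true)
    return w_va * loss_va + w_au * loss_au + w_expr * loss_expr


# Toy usage with random features standing in for CNN face embeddings.
net = MultiTaskEmotionNet()
feats = torch.randn(4, 512)
va, au, expr = net(feats)
loss = multitask_loss(va, au, expr,
                      va_true=torch.rand(4, 2) * 2 - 1,        # VA in [-1, 1]
                      au_true=torch.randint(0, 2, (4, 8)).float(),
                      expr_true=torch.randint(0, 7, (4,)))
loss.backward()
```

Since only parts of Aff-Wild carry AU or expression annotations, a practical version of this loss would mask each per-task term to the frames for which the corresponding label exists.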
