FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation

04/02/2019
by Yanfu Yan, et al.

Facial expression analysis based on machine learning requires a large amount of well-annotated data that reflects the different changes in facial motion. Publicly available datasets help accelerate research in this area by providing benchmark resources, but to the best of our knowledge all of them are limited to rough annotations of action units, recording only their absence, presence, or a five-level intensity according to the Facial Action Coding System (FACS). To meet the need for videos labeled in greater detail, we present a well-annotated dataset named FEAFA for Facial Expression Analysis and 3D Facial Animation. One hundred and twenty-two participants, including children, young adults and elderly people, were recorded in real-world conditions. In addition, 99,356 frames were manually labeled using an Expression Quantitative Tool we developed to quantify 9 symmetrical FACS action units, 10 asymmetrical (unilateral) FACS action units, 2 symmetrical FACS action descriptors and 2 asymmetrical FACS action descriptors; each action unit or action descriptor is annotated with a floating-point value between 0 and 1. To provide a baseline for future research, we present a benchmark for the regression of action unit values based on Convolutional Neural Networks. We also demonstrate the potential of the FEAFA dataset for 3D facial animation. Almost all state-of-the-art facial animation algorithms rely on 3D face reconstruction; we instead propose a novel method that drives virtual characters based solely on action unit value regression from the 2D video frames of source actors.
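The baseline described above regresses a continuous value in [0, 1] for each action unit or action descriptor directly from a face image. The minimal sketch below illustrates that setup only; the backbone, layer sizes, input resolution, and L2 loss are assumptions for illustration and do not reproduce the authors' benchmark network. The output dimension of 23 follows the counts listed in the abstract (9 + 10 + 2 + 2 action units and descriptors).

```python
# Minimal sketch (assumed architecture, not the paper's baseline): a small CNN
# that regresses per-frame action unit values constrained to [0, 1].
import torch
import torch.nn as nn

NUM_AUS = 23  # 9 symmetrical AUs + 10 asymmetrical AUs + 2 + 2 action descriptors


class AURegressor(nn.Module):
    def __init__(self, num_aus: int = NUM_AUS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, num_aus),
            nn.Sigmoid(),  # keep every predicted AU value in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


# Example training step with an L2 regression loss on a hypothetical
# pre-cropped face frame and its manually labeled AU values.
model = AURegressor()
frame = torch.rand(1, 3, 128, 128)
target = torch.rand(1, NUM_AUS)
loss = nn.functional.mse_loss(model(frame), target)
loss.backward()
```

The predicted vector can then be consumed by a downstream animation stage, e.g. treated as per-frame blendshape-style weights for a virtual character, in line with the regression-driven animation idea in the abstract.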

