AFFDEX 2.0: A Real-Time Facial Expression Analysis Toolkit

02/24/2022
by Mina Bishay, et al.

In this paper we introduce AFFDEX 2.0, a toolkit for analyzing facial expressions in the wild. It is intended for users aiming to a) estimate 3D head pose, b) detect facial Action Units (AUs), c) recognize basic emotions and two new emotional states (sentimentality and confusion), and d) detect high-level expressive metrics such as blink and attention. The AFFDEX 2.0 models are based mainly on deep learning and are trained on a large-scale naturalistic dataset comprising thousands of participants from different demographic groups. AFFDEX 2.0 is an enhanced version of our previous toolkit [1]: it tracks faces efficiently under more challenging conditions, detects facial expressions more accurately, and recognizes the new emotional states of sentimentality and confusion. AFFDEX 2.0 can process multiple faces in real time and runs on both Windows and Linux.
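To make the shape of the toolkit's per-frame output concrete, the sketch below wraps the four output categories named in the abstract (head pose, AUs, emotions, and expressive metrics such as blink and attention) in a small data structure and a frame-processing loop. The FaceAnalyzer class and its process_frame method are hypothetical placeholders standing in for the actual AFFDEX 2.0 API, which is not described here; only the output categories and the multi-face, real-time usage pattern are taken from the paper.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FaceResult:
    head_pose: Dict[str, float]           # e.g. pitch/yaw/roll estimates for the 3D head pose
    action_units: Dict[str, float]        # AU name -> intensity/probability
    emotions: Dict[str, float]            # basic emotions plus sentimentality and confusion
    expressive_metrics: Dict[str, float]  # high-level metrics such as blink and attention

class FaceAnalyzer:
    """Hypothetical stand-in for the toolkit's per-frame analysis call;
    a real integration would invoke the AFFDEX 2.0 SDK here instead."""
    def process_frame(self, frame) -> List[FaceResult]:
        # Placeholder: the real toolkit tracks multiple faces per frame
        # and returns one result per tracked face.
        return []

def analyze_video(path: str) -> List[List[FaceResult]]:
    """Decode a video with OpenCV and collect per-face results frame by frame."""
    import cv2  # OpenCV is used here only for frame decoding
    analyzer = FaceAnalyzer()
    cap = cv2.VideoCapture(path)
    per_frame_results: List[List[FaceResult]] = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        per_frame_results.append(analyzer.process_frame(frame))
    cap.release()
    return per_frame_results

The nested return type reflects the toolkit's multi-face support: each frame yields a list of per-face results rather than a single one.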

