Multi-modal Affect Analysis using standardized data within subjects in the Wild

07/07/2021
by Sachihiro Youoku, et al.

Human affective recognition is an important factor in human-computer interaction. However, methods developed on in-the-wild data are not yet accurate enough for practical use. In this paper, we introduce an affective recognition method, focusing on facial expression (EXP) recognition and valence-arousal estimation, that was submitted to the Affective Behavior Analysis in-the-Wild (ABAW) 2021 Competition. We hypothesized that facial expressions in a video are judged not only from features common to all people, but also from relative changes in an individual's time series. Therefore, after learning the common features for each frame, we built a facial expression estimation model and a valence-arousal model on time-series data that combines the common features with features standardized within each video. These features were learned from multi-modal data, including image features, action units (AU), head pose, and gaze. On the validation set, our model achieved a facial expression score of 0.546. These results show that the proposed framework can effectively improve estimation accuracy and robustness.
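The central idea described in the abstract is within-subject (per-video) standardization: frame-level features are z-scored using statistics of that video alone, then concatenated with the original common features before time-series modeling. Below is a minimal sketch of that step, not the authors' code; the function names, feature dimensions, and the choice of a simple z-score are illustrative assumptions.

```python
# Sketch of "standardized data within subjects": frame features from one
# video are z-scored with that video's own mean/std, then concatenated
# with the original (common) features for a downstream time-series model.
# Names and dimensions here are hypothetical, not from the paper.
import numpy as np

def standardize_within_video(features: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Z-score frame-level features (shape T x D) using per-video statistics."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / (std + eps)

def build_sequence_input(per_frame_features: np.ndarray) -> np.ndarray:
    """Concatenate common features with their within-video standardized
    counterparts, yielding a (T x 2D) sequence for a temporal model."""
    standardized = standardize_within_video(per_frame_features)
    return np.concatenate([per_frame_features, standardized], axis=1)

if __name__ == "__main__":
    # Example: 300 frames of 64-dim multi-modal features
    # (e.g. image features, AU, head pose, gaze stacked together).
    frames = np.random.randn(300, 64).astype(np.float32)
    seq = build_sequence_input(frames)
    print(seq.shape)  # (300, 128)
```

The per-video statistics capture how a given subject's expression deviates from their own baseline over the clip, which is the "relative change in the time series of individuals" that the abstract argues complements features common to all people.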
