TransFER: Learning Relation-aware Facial Expression Representations with Transformers

08/25/2021
by Fanglei Xue, et al.

Facial expression recognition (FER) has received increasing interest in computer vision. We propose the TransFER model, which can learn rich relation-aware local representations. It mainly consists of three components: Multi-Attention Dropping (MAD), ViT-FER, and Multi-head Self-Attention Dropping (MSAD). First, local patches play an important role in distinguishing facial expressions, but few existing works can locate discriminative and diverse local patches. This causes serious problems when some patches are invisible due to pose variations or viewpoint changes. To address this issue, MAD is proposed to randomly drop an attention map, pushing the model to explore diverse local patches adaptively. Second, to build rich relations between different local patches, Vision Transformers (ViT) are applied to FER, called ViT-FER. Since the global scope is used to reinforce each local patch, a better representation is obtained, boosting FER performance. Third, multi-head self-attention allows ViT to jointly attend to features from different information subspaces at different positions. Without explicit guidance, however, multiple self-attention heads may extract similar relations. To address this, MSAD is proposed to randomly drop one self-attention module, forcing the model to learn rich relations among diverse local patches. The proposed TransFER model outperforms state-of-the-art methods on several FER benchmarks, demonstrating its effectiveness.
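The shared idea behind MAD and MSAD is to randomly zero out one of several parallel attention branches during training so the model cannot over-rely on any single one. A minimal sketch of this dropping step is shown below; the function name, the `(M, H, W)` map layout, and the NumPy interface are illustrative assumptions, not the authors' implementation (which operates inside a network's forward pass):

```python
import numpy as np

def drop_one_branch(attn_maps, training=True, rng=None):
    """Hypothetical sketch of Multi-Attention Dropping (MAD).

    attn_maps: array of shape (M, H, W) holding M parallel attention
    maps. During training, one randomly chosen map is zeroed out,
    pushing the model to spread information across diverse local
    patches. At inference, maps pass through unchanged. The same
    drop-one-of-M idea applies to MSAD, where whole self-attention
    modules (rather than spatial maps) are the branches.
    """
    if not training:
        return attn_maps
    rng = rng or np.random.default_rng()
    dropped = attn_maps.copy()
    idx = rng.integers(len(attn_maps))  # pick one branch uniformly
    dropped[idx] = 0.0                  # suppress that branch entirely
    return dropped
```

For example, given four all-ones attention maps, the result of a training-mode call contains exactly three unchanged maps and one zeroed map, while an inference-mode call returns the input untouched.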


