Using Self-Supervised Co-Training to Improve Facial Representation

05/13/2021
by   Mahdi Pourmirzaei, et al.

In this paper, the impact of ImageNet pre-training on Facial Expression Recognition (FER) was first tested under different augmentation levels. The results showed that training from scratch reached better performance than ImageNet fine-tuning at stronger augmentation levels. A framework for standard Supervised Learning (SL), called Hybrid Learning (HL), was then proposed, which used Self-Supervised co-training with SL in a Multi-Task Learning (MTL) manner. Leveraging Self-Supervised Learning (SSL) extracted additional information from the input data, such as spatial information from faces, which helped the main SL task. It was investigated how this method could be applied to FER problems with self-supervised pretext tasks such as Jigsaw puzzling and in-painting. These two methods helped the supervised head (SH) to lower the error rate under different augmentations and in low-data regimes with the same training settings. State-of-the-art results were reached on AffectNet via two completely different HL methods, without utilizing additional datasets. Moreover, the effect of HL was shown on two other facial-related problems, head pose estimation and gender recognition, where it reduced the error rate by up to 9%, and the HL methods prevented the model from overfitting.
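
The HL framework described above amounts to a shared encoder with two heads trained jointly: a supervised head for the main task and a self-supervised head for a pretext task such as Jigsaw puzzling. The following PyTorch sketch is only a minimal illustration of that idea, not the authors' implementation; the backbone choice, head sizes, loss weight, and the way the jigsaw inputs and permutation labels are produced are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torchvision.models as models  # assumes torchvision is available


class HybridLearningModel(nn.Module):
    """Shared backbone with a supervised head (expression classes) and a
    self-supervised head (Jigsaw permutation classification)."""

    def __init__(self, num_classes=8, num_permutations=24):
        super().__init__()
        backbone = models.resnet18(weights=None)   # backbone choice is an assumption
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep only the shared feature extractor
        self.backbone = backbone
        self.supervised_head = nn.Linear(feat_dim, num_classes)       # SH: FER labels
        self.ssl_head = nn.Linear(feat_dim, num_permutations)         # SSL: jigsaw permutation id

    def forward(self, x_sup, x_ssl):
        # the same backbone encodes both the original face and its shuffled-patch version
        return self.supervised_head(self.backbone(x_sup)), self.ssl_head(self.backbone(x_ssl))


# joint multi-task loss: supervised term plus a weighted self-supervised term
model = HybridLearningModel()
criterion = nn.CrossEntropyLoss()
ssl_weight = 0.5                                   # trade-off weight, assumed for the example

x = torch.randn(4, 3, 224, 224)                    # face crops (dummy data)
x_jigsaw = torch.randn(4, 3, 224, 224)             # placeholder for patch-shuffled faces
y_expr = torch.randint(0, 8, (4,))                 # expression labels
y_perm = torch.randint(0, 24, (4,))                # permutation indices used to shuffle patches

logits_sup, logits_ssl = model(x, x_jigsaw)
loss = criterion(logits_sup, y_expr) + ssl_weight * criterion(logits_ssl, y_perm)
loss.backward()
```

In this sketch, the self-supervised term acts as an auxiliary regularizer on the shared backbone; the same structure would apply to an in-painting pretext task by swapping the classification head for a reconstruction head and loss.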


Related research

08/10/2021  How Self-Supervised Learning Can be Used for Fine-Grained Head Pose Estimation?
07/05/2023  MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition
11/04/2022  Biased Self-supervised Learning for ASR
07/16/2021  Self-Supervised Learning Framework for Remote Heart Rate Estimation Using Spatiotemporal Augmentation
08/06/2021  Spatiotemporal Contrastive Learning of Facial Expressions in Videos
08/25/2017  Multi-task Self-Supervised Visual Learning
10/11/2021  SignBERT: Pre-Training of Hand-Model-Aware Representation for Sign Language Recognition
