Adversarial Training for Face Recognition Systems using Contrastive Adversarial Learning and Triplet Loss Fine-tuning

10/09/2021
by Nazmul Karim, et al.

Though much work has been done on improving the adversarial robustness of facial recognition systems, surprisingly little of it has focused on self-supervised approaches. In this work, we present an approach that combines Adversarial Pre-Training with Triplet Loss Adversarial Fine-Tuning. We compare our methods with the pre-trained ResNet50 model that forms the backbone of FaceNet, fine-tuned on the CelebA dataset. Comparing the adversarial robustness achieved without adversarial training, with triplet loss adversarial training, and with our contrastive pre-training combined with triplet loss adversarial fine-tuning, we find that our method achieves comparable results while requiring far fewer epochs during fine-tuning. This is promising: increasing the training time for fine-tuning should yield even better results. In addition, a modified semi-supervised experiment demonstrated that contrastive adversarial training improves further when small amounts of labels are introduced.
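To make the fine-tuning idea concrete, the following is a minimal sketch of triplet loss adversarial training: an adversarial example is crafted against the anchor by a one-step FGSM attack on the triplet loss, and that loss would then be minimized over the perturbed triplet. The linear `embed` function, dimensions, and `fgsm_anchor` helper here are all hypothetical stand-ins (the paper's actual encoder is a ResNet50/FaceNet backbone, and its attack and hyperparameters are not given in the abstract).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a fixed linear map stands in for the
# ResNet50/FaceNet embedding network described in the abstract.
D_IN, D_EMB = 16, 8
W = rng.normal(size=(D_EMB, D_IN)) / np.sqrt(D_IN)

def embed(x):
    """Toy embedding f(x) = Wx (placeholder for the real encoder)."""
    return W @ x

def triplet_loss(a, p, n, margin=0.5):
    """Standard triplet loss on embedded anchor/positive/negative."""
    fa, fp, fn = embed(a), embed(p), embed(n)
    return max(0.0, np.sum((fa - fp) ** 2) - np.sum((fa - fn) ** 2) + margin)

def fgsm_anchor(a, p, n, eps=0.1):
    """One-step FGSM attack on the anchor under the triplet loss.

    For this linear embedding, the gradient of the (active) hinge term
    with respect to the anchor is 2 * W^T ((Wa - Wp) - (Wa - Wn));
    the sign of that gradient gives the perturbation direction."""
    fa, fp, fn = embed(a), embed(p), embed(n)
    grad = 2.0 * W.T @ ((fa - fp) - (fa - fn))
    return a + eps * np.sign(grad)

# Sample a toy triplet and craft an adversarial anchor.
anchor, positive, negative = (rng.normal(size=D_IN) for _ in range(3))
adv_anchor = fgsm_anchor(anchor, positive, negative, eps=0.1)

clean = triplet_loss(anchor, positive, negative)
adv = triplet_loss(adv_anchor, positive, negative)
print(f"clean loss: {clean:.3f}  adversarial loss: {adv:.3f}")
```

In a full training loop, the model parameters would be updated to minimize `triplet_loss(adv_anchor, positive, negative)`, so the embedding learns to keep same-identity faces close even under worst-case perturbations; the contrastive pre-training stage the abstract describes would precede this fine-tuning.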

Related research:

- Recovering Petaflops in Contrastive Semi-Supervised Learning of Visual Representations (06/18/2020): We investigate a strategy for improving the computational efficiency of ...
- Robust Pre-Training by Adversarial Contrastive Learning (10/26/2020): Recent work has shown that, when integrated with adversarial training, s...
- RUSH: Robust Contrastive Learning via Randomized Smoothing (07/11/2022): Recently, adversarial training has been incorporated in self-supervised ...
- CLAWSAT: Towards Both Robust and Accurate Code Models (11/21/2022): We integrate contrastive learning (CL) with adversarial learning to co-o...
- A Framework using Contrastive Learning for Classification with Noisy Labels (04/19/2021): We propose a framework using contrastive learning as a pre-training task...
- In Defense of the Triplet Loss for Visual Recognition (01/24/2019): We employ triplet loss as a space embedding regularizer to boost classif...
- Adversarial Momentum-Contrastive Pre-Training (12/24/2020): Deep neural networks are vulnerable to semantic invariant corruptions an...
