Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks

by Gaurav Goswami, et al.

Deep neural network (DNN) architecture based models have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have begun designing methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the vulnerability of deep face recognition architectures to attacks inspired by commonly observed real-world distortions that are well handled by shallow learning methods, as well as to learning-based adversaries; (ii) detecting the singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks, including OpenFace and VGG-Face, and two publicly available databases (MEDS and PaSC) demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of such distortions. The proposed method is also compared with existing detection algorithms, and the results show that it detects the attacks with very high accuracy by suitably designing a classifier using the responses of the network's hidden layers. Finally, we present several effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
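The detection idea in (ii), characterizing abnormal filter responses in hidden layers, can be illustrated with a minimal sketch. This is not the paper's implementation: the one-layer "network", the distortion model, and the mean-deviation score with a percentile threshold are all simplifying assumptions standing in for a real face network, real distortions, and a trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one hidden layer of a face network:
# a fixed random projection followed by ReLU (assumption, not the paper's model).
W = rng.standard_normal((128, 64))

def hidden_response(x):
    """Hidden-layer filter responses for a batch of inputs."""
    return np.maximum(x @ W, 0.0)

# Simulated clean inputs vs. inputs with a crude additive distortion.
clean = rng.standard_normal((200, 128))
distorted = clean + 3.0 * rng.standard_normal((200, 128))

# Characterize "normal" filter behavior: mean activation per unit on clean data.
mu = hidden_response(clean).mean(axis=0)

def abnormality_score(x):
    # Average deviation of a sample's hidden responses from the clean profile;
    # the paper trains a classifier on such responses, this threshold is a proxy.
    return np.abs(hidden_response(x) - mu).mean(axis=1)

# Flag anything whose score exceeds the 95th percentile of clean scores.
threshold = np.quantile(abnormality_score(clean), 0.95)
detection_rate = float((abnormality_score(distorted) > threshold).mean())
print(f"detected {detection_rate:.0%} of distorted inputs")
```

The heavily distorted inputs drive the hidden units far from their clean activation profile, so most are flagged while roughly 5% of clean inputs trip the threshold by construction.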
