Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks

02/22/2018
by Gaurav Goswami, et al.

Deep neural network (DNN) architectures have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have started to design methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the vulnerability of deep face recognition architectures to attacks inspired by commonly observed distortions in the real world, which are well handled by shallow learning methods, as well as to learning-based adversaries; (ii) detecting these singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks, including OpenFace and VGG-Face, and two publicly available databases (MEDS and PaSC) demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of such distortions. The proposed method is also compared with existing detection algorithms, and the results show that it detects the attacks with very high accuracy by suitably designing a classifier over the responses of the hidden layers of the network. Finally, we present several effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
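To make the detection idea concrete, below is a minimal, hypothetical sketch of how a classifier over hidden-layer responses could be built. It is not the authors' exact pipeline: torchvision's VGG16 stands in for VGG-Face/OpenFace, additive Gaussian noise stands in for the real-world distortions studied in the paper, and the choice of intermediate layers is arbitrary; only the general strategy, training a detector on per-filter response statistics from hidden layers, follows the abstract.

    # Hypothetical sketch: flag attacked inputs with a linear SVM trained on
    # hidden-layer filter-response statistics. Assumes PyTorch, torchvision and
    # scikit-learn; VGG16 stands in for VGG-Face, Gaussian noise for the
    # real-world distortions studied in the paper.
    import torch
    import torchvision.models as models
    from sklearn.svm import LinearSVC

    model = models.vgg16(weights=None).eval()  # in practice, load face-recognition weights

    def hidden_response(x, layer_ids=(5, 10, 17)):
        # Mean absolute response of each filter at a few conv layers (arbitrary choice).
        feats, out = [], x
        for i, layer in enumerate(model.features):
            out = layer(out)
            if i in layer_ids:
                feats.append(out.abs().mean(dim=(2, 3)))  # shape: (batch, num_filters)
        return torch.cat(feats, dim=1)

    # Toy training data: clean face crops vs. the same crops with a distortion applied.
    clean = torch.rand(32, 3, 224, 224)  # placeholder for aligned face images
    attacked = (clean + 0.2 * torch.randn_like(clean)).clamp(0, 1)

    with torch.no_grad():
        X = torch.cat([hidden_response(clean), hidden_response(attacked)]).numpy()
    y = [0] * len(clean) + [1] * len(attacked)

    detector = LinearSVC().fit(X, y)  # abnormal responses suggest an attack
    print("training accuracy:", detector.score(X, y))

At test time, the same statistics would be computed for an incoming probe image, and images flagged as attacked could be rejected or routed to a correction step before matching.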

Related research:
- 07/08/2020 · Delving into the Adversarial Robustness on Face Recognition
- 04/13/2020 · Towards Transferable Adversarial Attack against Deep Face Recognition
- 02/10/2022 · Towards Assessing and Characterizing the Semantic Robustness of Face Recognition
- 12/05/2019 · Detection of Face Recognition Adversarial Attacks
- 04/16/2023 · A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems
- 01/27/2019 · Open Source Face Recognition Performance Evaluation Package
- 12/31/2017 · Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition