Improving Transferability of Adversarial Patches on Face Recognition with Generative Models

by Zihao Xiao, et al.

Face recognition has been greatly improved by deep convolutional neural networks (CNNs), and these models are now used for identity authentication in security-sensitive applications. However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns about the real-world deployment of these models. In this paper, we evaluate the robustness of face recognition models against transfer-based adversarial patches, where the attacker has only limited access to the target models. First, we extend existing transfer-based attack techniques to generate transferable adversarial patches. However, we observe that transferability is sensitive to initialization and degrades when the perturbation magnitude is large, indicating overfitting to the substitute models. Second, we propose to regularize the adversarial patches on a low-dimensional data manifold represented by generative models pre-trained on legitimate human face images. By optimizing on this manifold, the adversarial perturbations take on face-like features, and we show that the gap between the responses of the substitute and target models decreases dramatically, yielding better transferability. Extensive digital-world experiments demonstrate the superiority of the proposed method in the black-box setting, and we further apply the method in the physical world.
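The core idea, optimizing the patch in a generator's latent space rather than in raw pixel space, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `generator` and `substitute` networks below are tiny random stand-ins for a pre-trained generative face model and a surrogate face recognition CNN, and the patch location and sizes are arbitrary assumptions.

```python
import torch

# Toy stand-ins (assumptions, not the paper's actual networks): a "generator"
# mapping a low-dimensional latent code to an 8x8 patch, and a "substitute"
# model producing a 64-d face embedding from a 32x32 image.
torch.manual_seed(0)
generator = torch.nn.Sequential(torch.nn.Linear(16, 3 * 8 * 8), torch.nn.Tanh())
substitute = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))

def apply_patch(image, patch, top=4, left=4):
    """Paste an 8x8 patch onto a 32x32 face image at a fixed location."""
    patched = image.clone()
    patched[:, :, top:top + 8, left:left + 8] = patch
    return patched

def attack(image, target_emb, steps=50, lr=0.1):
    """Optimize the latent code z, not raw pixels, so the patch stays on the
    generator's face-like manifold (the paper's regularization idea)."""
    z = torch.zeros(1, 16, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        patch = generator(z).view(1, 3, 8, 8)
        emb = substitute(apply_patch(image, patch))
        # Impersonation: pull the patched image's embedding toward the target's.
        loss = 1 - torch.nn.functional.cosine_similarity(emb, target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).view(1, 3, 8, 8).detach()

image = torch.rand(1, 3, 32, 32)
target_emb = substitute(torch.rand(1, 3, 32, 32)).detach()
patch = attack(image, target_emb)
sim = torch.nn.functional.cosine_similarity(
    substitute(apply_patch(image, patch)), target_emb).item()
```

Because gradients flow through the generator into `z`, every candidate patch is constrained to the generator's output manifold, which is what the paper argues reduces overfitting to the substitute model.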




Efficient Decision-based Black-box Adversarial Attacks on Face Recognition

Delving into the Adversarial Robustness on Face Recognition

Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition

Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization

JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System

On adversarial patches: real-world attack on ArcFace-100 face recognition system

Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints
