Semantic Adversarial Attacks on Face Recognition through Significant Attributes

01/28/2023
by Yasmeen M. Khedr, et al.

Face recognition is known to be vulnerable to adversarial face images. Existing works craft adversarial face images by indiscriminately changing a single attribute, without being aware of the intrinsic attributes of each image. To this end, we propose a new semantic adversarial attack, SAA-StarGAN, that tampers with the significant facial attributes of each image. We predict the most significant attributes using either cosine similarity or a probability score. The probability-score method trains a face verification model on an attribute prediction task to obtain a class probability score for each attribute. This prediction step makes crafting adversarial face images easier and more efficient, and also improves adversarial transferability. We then alter one or more of the most significant facial attributes to mount impersonation and dodging attacks in both white-box and black-box settings. Experimental results show that our method generates diverse and realistic adversarial face images while leaving human perception of the face unaffected. SAA-StarGAN achieves an 80.5% attack success rate, outperforming existing methods by 35.5%. In the black-box setting, SAA-StarGAN achieves high attack success rates against various models. The experiments confirm that predicting the most important attributes significantly affects the success of adversarial attacks in both white-box and black-box settings and can enhance the transferability of the crafted adversarial examples.
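The attribute-ranking step described above can be pictured with a short sketch. This is not the authors' code: the tiny embedding network, the attribute head, the attribute names, and the randomly perturbed "edited" images are all hypothetical placeholders; in the paper the edits would come from a StarGAN-style generator and the scores from a trained face verification / attribute prediction model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def rank_by_cosine(embed_model, image, edited_images):
    """Rank attributes by cosine similarity: the lower the similarity between
    the original and the attribute-edited embedding, the more that attribute
    perturbs the identity representation, i.e. the more significant it is."""
    with torch.no_grad():
        anchor = F.normalize(embed_model(image), dim=1)
        scores = {}
        for attr, edited in edited_images.items():
            emb = F.normalize(embed_model(edited), dim=1)
            scores[attr] = F.cosine_similarity(anchor, emb).item()
    # Ascending similarity, so the most significant attribute comes first.
    return sorted(scores, key=scores.get)


def rank_by_probability(attr_classifier, image, attribute_names):
    """Rank attributes by the class probability score an attribute classifier
    assigns to each attribute of the input face."""
    with torch.no_grad():
        probs = torch.sigmoid(attr_classifier(image)).squeeze(0)
    order = torch.argsort(probs, descending=True)
    return [attribute_names[i] for i in order.tolist()]


if __name__ == "__main__":
    # Hypothetical stand-ins: a tiny embedding net, a multi-label attribute
    # head, and randomly perturbed images in place of real StarGAN edits.
    embed_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    attr_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))
    names = ["smiling", "eyeglasses", "bangs", "pale_skin", "mouth_open"]
    face = torch.rand(1, 3, 64, 64)
    edits = {n: face + 0.05 * torch.randn_like(face) for n in names}
    print("cosine ranking:     ", rank_by_cosine(embed_model, face, edits))
    print("probability ranking:", rank_by_probability(attr_head, face, names))
```

Either ranking yields an ordered list of attributes, and the attack then edits only the top-ranked one(s) rather than an arbitrary single attribute.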


Related research

06/25/2022 · RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer
Face recognition has achieved considerable progress in recent years than...

04/26/2022 · Restricted Black-box Adversarial Attack Against DeepFake Face Swapping
DeepFake face swapping presents a significant threat to online security ...

10/13/2022 · Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition
Deep learning models have shown their vulnerability when dealing with ad...

12/09/2019 · Amora: Black-box Adversarial Morphing Attack
Nowadays, digital facial content manipulation has become ubiquitous and ...

01/11/2022 · Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition
The majority of adversarial attack techniques perform well against deep ...

07/19/2021 · Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition
The modern open internet contains billions of public images of human fac...

03/16/2023 · Image Classifiers Leak Sensitive Attributes About Their Classes
Neural network-based image classifiers are powerful tools for computer v...
