Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition

10/13/2022
by   Shuai Jia, et al.

Deep learning models have shown their vulnerability to adversarial attacks. Most existing attacks operate on low-level instances, such as pixels and super-pixels, and rarely exploit semantic clues. For face recognition, existing attack methods typically generate l_p-norm perturbations on pixels, which results in low attack transferability and high vulnerability to denoising defense models. In this work, instead of perturbing low-level pixels, we propose to generate attacks by perturbing high-level semantics to improve attack transferability. Specifically, we design a unified, flexible framework, Adversarial Attributes (Adv-Attribute), to generate inconspicuous and transferable attacks on face recognition: it crafts adversarial noise and adds it to different attributes, guided by the difference between the face recognition features of the source and the target. Moreover, an importance-aware attribute selection and a multi-objective optimization strategy are introduced to further balance stealthiness and attacking strength. Extensive experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates while maintaining better visual quality than recent attack methods.
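The core idea described above — optimizing an attribute-space perturbation so that the recognition features of the perturbed face move toward the target's features, while a stealthiness term keeps the edit small — can be sketched as a toy two-term objective. Everything below is illustrative: the linear "feature extractor", the attribute vectors, and the weight `lam` are hypothetical stand-ins, not the paper's actual models or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a face-recognition feature extractor:
# a fixed random linear map from a 4-d "attribute space" to an 8-d feature space.
W = rng.normal(size=(8, 4))

def features(attr):
    return W @ attr

# Hypothetical source and target attribute vectors.
src = rng.normal(size=4)
tgt = rng.normal(size=4)
f_tgt = features(tgt)

lam = 0.1   # stealthiness weight (assumed value)
lr = 0.01   # step size small enough for this quadratic objective
delta = np.zeros(4)  # attribute-space perturbation being optimized

def loss(d):
    diff = features(src + d) - f_tgt        # attack term: match target features
    return diff @ diff + lam * (d @ d)      # plus a penalty on the edit size

for _ in range(500):
    # analytic gradient of the quadratic objective above
    grad = 2 * W.T @ (features(src + delta) - f_tgt) + 2 * lam * delta
    delta -= lr * grad

print(loss(np.zeros(4)), loss(delta))
```

In the paper's setting the feature extractor is a deep face-recognition network and the perturbation acts through a generative attribute editor, so the gradient comes from backpropagation rather than a closed form; the trade-off structure of the objective is the same.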


Related research:

- 01/28/2023 · Semantic Adversarial Attacks on Face Recognition through Significant Attributes
  Face recognition is known to be vulnerable to adversarial face images. E...

- 04/13/2020 · Towards Transferable Adversarial Attack against Deep Face Recognition
  Face recognition has achieved great success in the last five years due t...

- 09/04/2023 · Improving Visual Quality and Transferability of Adversarial Attacks on Face Recognition Simultaneously with Adversarial Restoration
  Adversarial face examples possess two critical properties: Visual Qualit...

- 05/14/2023 · Diffusion Models for Imperceptible and Transferable Adversarial Attack
  Many existing adversarial attacks generate L_p-norm perturbations on ima...

- 07/04/2023 · LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack
  Deepfakes, malicious visual contents created by generative models, pose ...

- 06/19/2022 · JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System
  It has been observed that the unauthorized use of face recognition syste...

- 02/10/2022 · Towards Assessing and Characterizing the Semantic Robustness of Face Recognition
  Deep Neural Networks (DNNs) lack robustness against imperceptible pertur...
