Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization

09/18/2023
by   Huihui Gong, et al.

Deep neural networks (DNNs) have achieved state-of-the-art performance on face recognition (FR) tasks over the last decade. In real-world scenarios, deploying DNNs requires taking various face accessories into consideration, such as glasses, hats, and masks. In the COVID-19 pandemic era, wearing a face mask is one of the most effective defenses against the novel coronavirus. However, DNNs are known to be vulnerable to adversarial examples containing small but carefully crafted perturbations. A facial mask carrying adversarial perturbations may therefore pose a serious threat to widely used deep learning-based FR models. In this paper, we consider a challenging adversarial setting: targeted attacks against FR models. We propose a new stealthy physical masked FR attack via adversarial style optimization. Specifically, we train an adversarial style mask generator that hides adversarial perturbations inside style masks. Moreover, to mitigate the sub-optimal solutions induced by a single fixed style, we propose to discover the optimal style for a given target through style optimization in a continuous relaxation manner. We simultaneously optimize the generator and the style selection to produce strong and stealthy adversarial style masks. We evaluate the effectiveness and transferability of the proposed method via extensive white-box and black-box digital experiments. Furthermore, we also conduct physical attack experiments against local FR models and online platforms.
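The continuous-relaxation idea mentioned above can be illustrated with a minimal, self-contained sketch: instead of committing to one discrete style, keep a softmax distribution over style logits and do gradient descent on the expected loss, then discretize at the end. The per-style losses and learning rate below are hypothetical placeholders, not values from the paper, and the real method would backpropagate through the mask generator rather than use a fixed loss table.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-style attack losses (lower = stronger attack with that style).
style_losses = np.array([0.9, 0.4, 0.7, 1.2])

alpha = np.zeros(4)  # relaxed style-selection logits, one per candidate style
lr = 1.0
for _ in range(200):
    w = softmax(alpha)                      # soft style-selection weights
    expected = w @ style_losses             # expected loss under the relaxation
    # Analytic gradient of the expected loss w.r.t. the logits:
    # d(expected)/d(alpha_j) = w_j * (loss_j - expected)
    grad = w * (style_losses - expected)
    alpha -= lr * grad

best = int(np.argmax(softmax(alpha)))  # discretize: pick the dominant style
print(best)
```

After optimization the distribution concentrates on the style with the lowest loss, which is the discrete choice the relaxation recovers without ever enumerating styles one at a time.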


Related research

- RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer (06/25/2022)
- Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models (11/21/2021)
- Robust Attacks on Deep Learning Face Recognition in the Physical World (11/27/2020)
- Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems (05/09/2019)
- Improving Transferability of Adversarial Patches on Face Recognition with Generative Models (06/29/2021)
- MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes (03/26/2021)
- I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models (06/13/2023)
