Meaningful Adversarial Stickers for Face Recognition in Physical World

04/14/2021
by Ying Guo, et al.

Face recognition (FR) systems have been widely applied in safety-critical fields with the introduction of deep learning. However, the existence of adversarial examples brings potential security risks to FR systems. To identify their vulnerability and help improve their robustness, in this paper we propose Meaningful Adversarial Stickers, a physically feasible and easily implemented attack method that uses meaningful real stickers existing in our life: the attacker manipulates the pasting parameters of the stickers on the face, instead of designing perturbation patterns and then printing them as most existing works do. We conduct attacks in the black-box setting with limited information, which is more challenging and practical. To effectively solve for the pasting position, rotation angle, and other parameters of the stickers, we design the Region based Heuristic Differential Algorithm, which utilizes an inbreeding strategy based on regional aggregation of effective solutions and an adaptive adjustment strategy for the evaluation criteria. Extensive experiments are conducted on two public datasets, LFW and CelebA, against three representative FR models, FaceNet, SphereFace, and CosFace, achieving attack success rates of 81.78% with only hundreds of queries. The results in the physical world confirm the effectiveness of our method under complex physical conditions. When the face posture of testers is changed continuously, the method can still perform successful attacks up to 98.46% of the time.
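
To make the black-box search concrete, below is a minimal sketch of how sticker pasting parameters (position and rotation angle) can be optimized with a plain differential-evolution loop under score-only query access. This is not the paper's Region based Heuristic Differential Algorithm (which adds regional aggregation of effective solutions and adaptive evaluation criteria); the helpers `paste_sticker` and `query_fr_similarity` are hypothetical stand-ins for the rendering step and the FR model's query interface.

```python
# Hedged sketch: standard differential evolution over sticker pasting parameters,
# assuming only black-box similarity queries to the target FR model.
import numpy as np

def sticker_attack(face_img, sticker, bounds, pop_size=20, iters=50, F=0.5, CR=0.9):
    # bounds: [(x_min, x_max), (y_min, y_max), (angle_min, angle_max)]
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + np.random.rand(pop_size, dim) * (hi - lo)

    def fitness(p):
        # Render the sticker at the candidate position/angle (hypothetical helper),
        # then query the model; lower similarity to the true identity is better.
        adv = paste_sticker(face_img, sticker, x=p[0], y=p[1], angle=p[2])
        return query_fr_similarity(adv)

    scores = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[np.random.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # differential mutation
            mask = np.random.rand(dim) < CR                # binomial crossover
            trial = np.where(mask, mutant, pop[i])
            s = fitness(trial)
            if s < scores[i]:                              # greedy selection
                pop[i], scores[i] = trial, s
    best = pop[np.argmin(scores)]
    return best, scores.min()
```

Each fitness call costs one query, so the total query budget is roughly pop_size * (iters + 1); the paper's heuristic refinements aim to keep this in the hundreds.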


