A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems

04/16/2023
by Jiahao Xie, et al.

Physical attacks are a serious threat to real-world computer vision systems. However, many existing defense methods are effective only against small-perturbation attacks and cannot detect physical attacks reliably. In this paper, we propose a random-patch based defense strategy to robustly detect physical attacks against Face Recognition Systems (FRS). Unlike mainstream defense methods, which build complex deep neural networks (DNNs) to achieve a high recognition rate on attacks, we apply a patch-based defense strategy to a standard DNN in order to obtain robust detection models. Extensive experimental results on the employed datasets show the superiority of the proposed defense method in detecting white-box attacks as well as adaptive attacks that target both the FRS and the defense method. Moreover, thanks to its simplicity and robustness, our method can be easily applied to real-world face recognition systems and combined with other defense methods to boost detection performance.
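The abstract does not spell out the mechanics of the random-patch strategy, but a common form of such a defense is to occlude random patches of the input, query the recognizer on each randomized view, and flag inputs whose predictions are unstable across views. The sketch below illustrates that idea only; the function names, patch size, view count, and agreement threshold are illustrative assumptions, not the paper's actual design or hyper-parameters.

```python
import numpy as np

def random_patch_views(image, num_views=8, patch_frac=0.3, rng=None):
    """Generate views of `image`, each with one random square patch zeroed out.
    Patch size and view count here are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    views = []
    for _ in range(num_views):
        y = rng.integers(0, h - ph + 1)
        x = rng.integers(0, w - pw + 1)
        v = image.copy()
        v[y:y + ph, x:x + pw] = 0  # occlude one random patch
        views.append(v)
    return views

def detect_attack(image, predict, num_views=8, agree_thresh=0.75):
    """Flag an input as a likely (physical) attack when the recognizer's
    predictions over randomly occluded views are unstable.
    `predict` maps an image to a class label."""
    labels = [predict(v) for v in random_patch_views(image, num_views)]
    top = max(set(labels), key=labels.count)
    agreement = labels.count(top) / len(labels)
    return agreement < agree_thresh  # True -> predictions unstable, flag it
```

The intuition matching the abstract: a localized adversarial patch (e.g., printed glasses or a sticker) loses its effect whenever the random occlusion covers it, so predictions flip across views, whereas a benign face is recognized consistently.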


research
04/15/2021

Robust Backdoor Attacks against Deep Neural Networks in Real Physical World

Deep neural networks (DNN) have been widely deployed in various practica...
research
06/15/2023

DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in the Physical World

Adversarial attacks in the physical world, particularly patch attacks, p...
research
04/05/2021

Unified Detection of Digital and Physical Face Attacks

State-of-the-art defense mechanisms against face attacks achieve near pe...
research
06/28/2021

Data Poisoning Won't Save You From Facial Recognition

Data poisoning has been proposed as a compelling defense against facial ...
research
08/13/2022

Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer

Backdoor attacks have been shown to be a serious security threat against...
research
02/22/2018

Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks

Deep neural network (DNN) architecture based models have high expressive...
research
04/22/2023

Detecting Adversarial Faces Using Only Real Face Self-Perturbations

Adversarial attacks aim to disturb the functionality of a target system ...
