Fast Geometrically-Perturbed Adversarial Faces

09/24/2018
by Ali Dabouei, et al.

The state-of-the-art performance of deep learning algorithms has led to a considerable increase in the use of machine learning in security-sensitive and critical applications. However, it has recently been shown that a small, carefully crafted perturbation of the input can completely fool a deep model. In this study, we explore the extent to which face recognition systems are vulnerable to geometrically-perturbed adversarial faces. We propose a fast landmark manipulation method for generating adversarial faces that is approximately 200 times faster than previous geometric attacks and achieves a 99.86% success rate against state-of-the-art face recognition models. To further force the generated samples to look natural, we introduce a second attack constrained by the semantic structure of the face; it runs at half the speed of the first attack with a success rate of 99.96%. Both attacks are highly robust against state-of-the-art defense methods, with success rates of 53.59% or higher.
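The abstract only sketches the method, so below is a minimal, hypothetical PyTorch sketch of the core idea behind a fast landmark manipulation attack: parameterize a dense image warp by per-landmark displacements, take one gradient step on those displacements to increase the recognition loss, and re-warp the image. Here `model`, `image` (a C,H,W float tensor), `landmarks` (k x 2 points in normalized [-1, 1] coordinates), and `label` are assumed inputs, and the RBF-spread flow field plus the single sign-gradient step are stand-ins for the paper's actual spatial transformation and optimization, not the authors' implementation.

```python
# Hypothetical sketch of a landmark-displacement (geometric) attack.
# Not the authors' code: the warp model and step rule are assumptions.
import torch
import torch.nn.functional as F

def rbf_flow(landmarks, deltas, size, sigma=0.1):
    """Spread per-landmark displacements into a dense flow field
    by weighting each displacement with a Gaussian (RBF) kernel."""
    h, w = size
    ys = torch.linspace(-1, 1, h, device=landmarks.device)
    xs = torch.linspace(-1, 1, w, device=landmarks.device)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # h,w,2 (y,x)
    grid = grid.flip(-1)                                      # -> (x,y) like landmarks
    diff = grid[None] - landmarks[:, None, None, :]           # k,h,w,2
    weights = torch.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))  # k,h,w
    return (weights[..., None] * deltas[:, None, None, :]).sum(0)  # h,w,2

def landmark_attack(model, image, landmarks, label, step=0.02):
    """One gradient step on landmark displacements (fast one-shot variant)."""
    deltas = torch.zeros_like(landmarks, requires_grad=True)  # k,2 displacements
    base = F.affine_grid(torch.eye(2, 3, device=image.device).unsqueeze(0),
                         image[None].shape, align_corners=False)[0]  # identity grid
    flow = rbf_flow(landmarks, deltas, image.shape[-2:])
    # Sampling at (base - flow) moves content near each landmark by +flow.
    warped = F.grid_sample(image[None], (base - flow)[None], align_corners=False)
    loss = F.cross_entropy(model(warped), label[None])
    loss.backward()
    with torch.no_grad():
        adv_deltas = step * deltas.grad.sign()  # ascend the loss to fool the model
    flow = rbf_flow(landmarks, adv_deltas, image.shape[-2:])
    return F.grid_sample(image[None], (base - flow)[None], align_corners=False)[0]
```

Because the perturbation lives in the low-dimensional space of landmark displacements rather than in raw pixels, the resulting adversarial face stays a plausible geometric deformation; constraining the displacements further (e.g., moving semantic landmark groups coherently) corresponds to the paper's second, more natural-looking attack.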


Related research:

01/07/2020 · Robust Facial Landmark Detection via Aggregation on Geometrically Manipulated Faces
In this work, we present a practical approach to the problem of facial l...

09/14/2021 · Dodging Attack Using Carefully Crafted Natural Makeup
Deep learning face recognition models are used by state-of-the-art surve...

02/24/2021 · Robust SleepNets
State-of-the-art convolutional neural networks excel in machine learning...

07/25/2023 · Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation
Although face recognition starts to play an important role in our daily ...

10/21/2021 · Convex Hull Escape Perturbation at Embedding Space and Spherical Bins Coloring for 3D Face De-identification
This paper proposes a Convex Hull Escape Perturbation (CHEP) method at E...

04/22/2023 · Detecting Adversarial Faces Using Only Real Face Self-Perturbations
Adversarial attacks aim to disturb the functionality of a target system ...

06/08/2021 · Simulated Adversarial Testing of Face Recognition Models
Most machine learning models are validated and tested on fixed datasets....
