Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization

05/31/2018
by Avishek Joey Bose, et al.

Adversarial attacks add small, often imperceptible, perturbations to inputs with the goal of causing a machine learning model to misclassify them. While many adversarial attack strategies have been proposed for image classification models, object detection pipelines have proven much harder to break. In this paper, we propose a novel strategy for crafting adversarial examples by solving a constrained optimization problem with an adversarial generator network. Our approach is fast and scalable, requiring only a forward pass through our trained generator network to craft an adversarial sample. Unlike many attack strategies, we show that the same trained generator is capable of attacking new images without being explicitly optimized on them. We evaluate our attack on a trained Faster R-CNN face detector on the cropped 300-W face dataset, where we reduce the number of detected faces to 0.5% of all originally detected faces. In a separate experiment, also on 300-W, we demonstrate the robustness of our attack to a JPEG compression based defense: a typical JPEG quality level of 75 reduces the effectiveness of our attack only modestly, from 0.5% of faces detected to 5.0%.
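The core idea in the abstract, a trained generator whose single forward pass emits a norm-bounded perturbation, can be sketched as follows. This is a minimal illustrative toy, not the paper's architecture: the "generator" here is a hypothetical single linear layer in NumPy, and the L-infinity budget `epsilon` is an assumed constraint mechanism (a tanh squashing scaled by the budget), standing in for the paper's constrained optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyGenerator:
    """Hypothetical stand-in for the paper's adversarial generator network:
    a single linear map from a flattened image to a perturbation."""

    def __init__(self, dim, epsilon=8 / 255):
        self.W = rng.normal(scale=0.01, size=(dim, dim))
        self.epsilon = epsilon  # L_inf budget on the perturbation

    def forward(self, x_flat):
        # tanh keeps the raw output in (-1, 1); scaling by epsilon enforces
        # the norm constraint by construction, so crafting an adversarial
        # sample needs only this one forward pass (no per-image optimization).
        return self.epsilon * np.tanh(self.W @ x_flat)

def craft_adversarial(gen, image):
    """Add the generator's perturbation and clip back to valid pixel range."""
    delta = gen.forward(image.ravel()).reshape(image.shape)
    return np.clip(image + delta, 0.0, 1.0)

image = rng.random((8, 8))          # stand-in for a face crop in [0, 1]
gen = TinyGenerator(image.size)
adv = craft_adversarial(gen, image)

# The perturbation stays within the epsilon budget by construction.
print(np.max(np.abs(adv - image)) <= gen.epsilon + 1e-9)
```

In the paper, the generator would instead be trained so that this perturbation suppresses the detector's face proposals; here the weights are random, and only the one-pass, norm-constrained crafting mechanism is shown.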

