Exploiting vulnerabilities of deep neural networks for privacy protection

07/19/2020
by Ricardo Sanchez-Matilla, et al.

Adversarial perturbations can be added to images to protect their content from unwanted inferences. These perturbations may, however, be ineffective against classifiers that were not seen during the generation of the perturbation, or against defenses based on re-quantization, median filtering or JPEG compression. To address these limitations, we present an adversarial attack that is specifically designed to protect visual content against unseen classifiers and known defenses. We craft perturbations using an iterative process that is based on the Fast Gradient Sign Method and that randomly selects a classifier and a defense at each iteration. This randomization prevents overfitting to a specific classifier or defense. We validate the proposed attack in both targeted and untargeted settings on the private classes of the Places365-Standard dataset. Using ResNet18, ResNet50, AlexNet and DenseNet161 as classifiers, the proposed attack outperforms eleven state-of-the-art attacks. The implementation is available at https://github.com/smartcameras/RP-FGSM/.
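The core idea — an iterative sign-gradient attack that draws a random classifier and a random defense at every step — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the tiny linear softmax classifiers, the toy defenses, and all parameter values (`eps`, `alpha`, `steps`) are hypothetical stand-ins, and the gradient is taken at the defended input (a simple approximation for the non-differentiable defense).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: two tiny linear softmax classifiers (W, b)
# acting on flattened "images" x in [0, 1]^d.
d, n_classes = 16, 3
classifiers = [(rng.standard_normal((n_classes, d)), rng.standard_normal(n_classes))
               for _ in range(2)]

def logits(x, clf):
    W, b = clf
    return W @ x + b

def grad_loss(x, clf, label):
    # Gradient of the cross-entropy loss w.r.t. the input, for a
    # linear softmax model: dL/dx = W^T (softmax(z) - onehot).
    W, b = clf
    z = logits(x, clf)
    p = np.exp(z - z.max())
    p /= p.sum()
    onehot = np.zeros(n_classes)
    onehot[label] = 1.0
    return W.T @ (p - onehot)

# Toy input-transformation defenses, applied before classification.
defenses = [
    lambda x: x,                      # no defense
    lambda x: np.round(x * 15) / 15,  # re-quantization
]

def randomized_iterative_fgsm(x, label, eps=0.1, alpha=0.01, steps=40):
    """Iterative FGSM that randomly picks one classifier and one defense
    at each step, so the perturbation does not overfit to either."""
    x_adv = x.copy()
    for _ in range(steps):
        clf = classifiers[rng.integers(len(classifiers))]
        defend = defenses[rng.integers(len(defenses))]
        # Gradient evaluated at the defended input (approximation,
        # since the quantization defense is not differentiable).
        g = grad_loss(defend(x_adv), clf, label)
        x_adv = x_adv + alpha * np.sign(g)        # untargeted: ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

x = rng.random(d)
label = int(np.argmax(logits(x, classifiers[0])))
x_adv = randomized_iterative_fgsm(x, label)
```

A targeted variant would instead descend the loss toward a chosen target class (subtract `alpha * np.sign(g)` with the gradient taken for the target label); the per-step randomization over classifiers and defenses is unchanged.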


