UPSET and ANGRI: Breaking High Performance Image Classifiers

07/04/2017
by Sayantan Sarkar, et al.

In this paper, targeted fooling of high performance image classifiers is achieved by developing two novel attack methods. The first, UPSET, generates universal perturbations for target classes; the second, ANGRI, generates image-specific perturbations. Extensive experiments are conducted on the MNIST and CIFAR10 datasets to provide insights into the proposed algorithms and show their effectiveness.
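The universal (UPSET-style) attack described above amounts to adding a scaled, class-specific residual to an input image and clipping back to the valid pixel range. The following is a minimal sketch of that idea, assuming inputs normalized to [-1, 1]; the function name, the scaling factor s, and the random placeholder residual (standing in for the output of a trained per-class generator) are illustrative assumptions, not the paper's implementation.

import numpy as np

def apply_universal_perturbation(x, r_t, s=1.0):
    """Add a scaled, class-specific residual r_t to image x and clip the
    result back to the assumed valid pixel range [-1, 1]."""
    return np.clip(x + s * r_t, -1.0, 1.0)

# Hypothetical usage: a random residual stands in for the output of a trained
# per-target-class generator; x is a CIFAR10-sized image in [-1, 1].
x = np.random.uniform(-1.0, 1.0, size=(32, 32, 3))
r_t = np.random.uniform(-0.1, 0.1, size=x.shape)  # placeholder residual for target class t
x_adv = apply_universal_perturbation(x, r_t, s=1.0)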


Related research:

06/09/2020 - GAP++: Learning to generate target-conditioned adversarial examples
  Adversarial examples are perturbed inputs which can cause a serious thre...

10/03/2019 - Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
  Deep neural network image classifiers are reported to be susceptible to ...

10/07/2020 - CD-UAP: Class Discriminative Universal Adversarial Perturbation
  A single universal adversarial perturbation (UAP) can be added to all na...

03/10/2022 - Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity
  Current adversarial attack research reveals the vulnerability of learnin...

10/27/2019 - EdgeFool: An Adversarial Image Enhancement Filter
  Adversarial examples are intentionally perturbed images that mislead cla...

12/27/2017 - Adversarial Patch
  We present a method to create universal, robust, targeted adversarial im...

10/25/2019 - MediaEval 2019: Concealed FGSM Perturbations for Privacy Preservation
  This work tackles the Pixel Privacy task put forth by MediaEval 2019. Ou...
