ADef: an Iterative Algorithm to Construct Adversarial Deformations

04/20/2018
by Rima Alaifari, et al.

While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, created by adding a small perturbation to a correctly classified image. In this paper, we propose the ADef algorithm, which constructs a different kind of adversarial attack: it iteratively applies small deformations to the image, each found through a gradient descent step. We demonstrate our results on MNIST with a convolutional neural network and on ImageNet with Inception-v3 and ResNet-101.
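
The core idea of the abstract — perturbing an image by a small deformation vector field rather than by additive noise, and updating that field with gradient steps until the classifier's decision changes — can be illustrated with a toy sketch. The snippet below is not the authors' ADef implementation: the random linear "classifier", the bilinear `deform` warp, the step size, and the first-order chain-rule update for the vector field are all assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def deform(img, tau):
    """Sample img at the deformed positions u + tau(u) with bilinear interpolation."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    y = np.clip(ys + tau[0], 0, h - 1)   # deformed row coordinates
    x = np.clip(xs + tau[1], 0, w - 1)   # deformed column coordinates
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

# Toy stand-in for a network: two linear logits on an 8x8 grayscale image.
h = w = 8
W = rng.normal(size=(2, h * w))
img = rng.random((h, w))

def logits(x):
    return W @ x.ravel()

label = int(np.argmax(logits(img)))   # originally predicted class
target = 1 - label                    # class we try to deform towards

tau = np.zeros((2, h, w))             # vector field: one (dy, dx) per pixel
step = 0.05
for _ in range(200):
    x_tau = deform(img, tau)
    if int(np.argmax(logits(x_tau))) == target:   # stop once the label flips
        break
    # Derivative of (z_target - z_label) with respect to each pixel value.
    g = (W[target] - W[label]).reshape(h, w)
    # Chain rule (first-order): moving a sampling point changes the pixel
    # value by the local spatial gradient of the deformed image.
    gy, gx = np.gradient(x_tau)
    tau[0] += step * g * gy
    tau[1] += step * g * gx

z0, zf = logits(img), logits(deform(img, tau))
print("logit gap (target - label), before vs after:",
      z0[target] - z0[label], zf[target] - zf[label])
```

Each iteration nudges the vector field `tau` in the direction that most increases the target logit, so the pixels are never changed directly — only where they are sampled from, which is what distinguishes a deformation from an additive perturbation.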

