Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions

10/03/2019
by He Zhao, et al.

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images to mislead a classifier. Various adversarial attack methods have recently been proposed, most of which focus on adding small perturbations to input images. Despite the success of existing approaches, generating realistic adversarial images with small perturbations remains a challenging problem. In this paper, we address this problem with a novel adversarial method that generates adversarial examples by imposing not only perturbations but also spatial distortions on input images, including scaling, rotation, shear, and translation. Because humans are less sensitive to small spatial distortions, the proposed approach can produce visually more realistic attacks with smaller perturbations, deceiving classifiers without affecting human predictions. We learn our method with amortized techniques implemented by neural networks and generate adversarial examples efficiently in a single forward pass. Extensive experiments on attacking both non-robustified classifiers and robust classifiers equipped with defences show that our method achieves state-of-the-art performance compared with advanced attack baselines.
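Conceptually, each adversarial example combines a small parametric spatial transform with a bounded additive perturbation. The PyTorch sketch below illustrates only that combination; it is not the paper's amortized generator, and the function name, affine parameterization, and all numeric values are illustrative assumptions.

```python
# Minimal, hedged sketch: apply a small affine spatial distortion
# (scale, rotation, shear, translation) to an image and then add a
# bounded perturbation. All parameter names and defaults are assumptions.
import math
import torch
import torch.nn.functional as F

def spatial_then_perturb(image, delta, angle=2.0, scale=1.02,
                         shear=0.01, tx=0.01, ty=0.01, eps=4 / 255):
    """image: (N, C, H, W) tensor in [0, 1]; delta: perturbation of the same shape."""
    n = image.size(0)
    a = math.radians(angle)
    # 2x3 affine matrix: rotation and isotropic scaling with a small
    # shear term and a translation in normalized coordinates.
    theta = torch.tensor(
        [[scale * math.cos(a), -scale * math.sin(a) + shear, tx],
         [scale * math.sin(a) + shear, scale * math.cos(a), ty]],
        dtype=image.dtype, device=image.device)
    theta = theta.unsqueeze(0).expand(n, -1, -1)

    # Differentiable warp, so both the affine parameters and delta could be
    # optimized (or predicted by a network) against a target classifier.
    grid = F.affine_grid(theta, image.size(), align_corners=False)
    warped = F.grid_sample(image, grid, align_corners=False)

    # Add an L_inf-bounded perturbation on top of the spatial distortion.
    adv = warped + delta.clamp(-eps, eps)
    return adv.clamp(0.0, 1.0)
```

In the paper's setting, the transform parameters and the perturbation would be produced by trained networks in a single forward pass rather than fixed by hand as in this sketch.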


Related research:

- Towards Understanding Pixel Vulnerability under Adversarial Attacks for Images (10/13/2020)
- Adversarial Scratches: Deployable Attacks to CNN Classifiers (04/20/2022)
- UPSET and ANGRI: Breaking High Performance Image Classifiers (07/04/2017)
- Image classifiers can not be made robust to small perturbations (12/07/2021)
- Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity (03/10/2022)
- AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients (11/15/2019)
- Erase and Restore: Simple, Accurate and Resilient Detection of L_2 Adversarial Examples (01/01/2020)
