Scratch that! An Evolution-based Adversarial Attack against Neural Networks

12/05/2019
by Malhar Jere, et al.

Recent research has shown that Deep Neural Networks (DNNs) for image classification are vulnerable to adversarial attacks. However, most works on adversarial samples use sub-perceptual noise that, while invisible or only slightly visible to humans, often covers the entire image. Additionally, most of these attacks require knowledge of the neural network architecture and its parameters, and the ability to calculate the gradients of the parameters with respect to the inputs. In this work, we show that it is possible to attack neural networks in a highly restricted threat setting, where attackers have no knowledge of the neural network (i.e., in a black-box setting) and can only modify highly localized adversarial noise in the form of randomly chosen straight lines, or scratches. Our Adversarial Scratches attack method covers only 1-2% of the image pixels and is generated through a Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES), a purely black-box method that does not require knowledge of the neural network architecture or its gradients. Against ImageNet models, Adversarial Scratches requires 3 times fewer queries than GenAttack (without any optimizations) and 73 times fewer queries than ZOO, both prior state-of-the-art black-box attacks. We successfully deceive state-of-the-art Inception-v3, ResNet-50 and VGG-19 models trained on ImageNet with deceiving rates of 75.8%, higher than several state-of-the-art black-box attacks, while modifying less than 2% of the image pixels. Additionally, we provide a new threat scenario for neural networks, demonstrate a new attack surface that can be used to perform adversarial attacks, and discuss its potential implications.
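The abstract describes the attack at a high level: a scratch is parameterized as a straight line with a color, drawn onto the image, and its parameters are tuned by a Covariance Matrix Adaptation Evolutionary Strategy using only the classifier's output probabilities. The sketch below illustrates that query loop under stated assumptions; it is not the authors' implementation. The function query_model is a hypothetical stand-in for the black-box victim model, and the exact scratch parameterization, bounds, population size, and fitness function used in the paper may differ. The CMA-ES optimizer comes from the open-source cma package.

```python
# Illustrative sketch of a scratch-based black-box attack (not the paper's code).
# Assumptions: images are float32 arrays of shape HxWx3 in [0, 1];
# query_model(img) returns a probability vector; the true label is known.
import numpy as np
import cma  # pip install cma


def query_model(image):
    """Hypothetical black-box classifier: returns class probabilities."""
    raise NotImplementedError("plug in the victim model here")


def draw_scratch(image, params):
    """Draw one straight-line 'scratch' given normalized parameters.

    params = [r0, c0, r1, c1, R, G, B], all in [0, 1].
    """
    img = image.copy()
    h, w, _ = img.shape
    r0, r1 = params[0] * (h - 1), params[2] * (h - 1)
    c0, c1 = params[1] * (w - 1), params[3] * (w - 1)
    color = np.clip(params[4:7], 0.0, 1.0)
    n = int(max(abs(r1 - r0), abs(c1 - c0))) + 1      # samples along the line
    rows = np.linspace(r0, r1, n).round().astype(int)
    cols = np.linspace(c0, c1, n).round().astype(int)
    img[rows, cols] = color                           # overwrite pixels on the line
    return img


def attack(image, true_label, max_queries=10_000):
    """Untargeted attack: minimize the true-class probability with CMA-ES."""
    es = cma.CMAEvolutionStrategy(7 * [0.5], 0.3,
                                  {"bounds": [0, 1], "popsize": 20})
    queries = 0
    while not es.stop() and queries < max_queries:
        candidates = es.ask()
        fitnesses = []
        for params in candidates:
            adv = draw_scratch(image, np.asarray(params))
            probs = query_model(adv)
            queries += 1
            if np.argmax(probs) != true_label:        # misclassified: success
                return adv, queries
            fitnesses.append(probs[true_label])       # lower is better
        es.tell(candidates, fitnesses)
    return None, queries
```

In this sketch a single scratch touches on the order of max(H, W) pixels, which is roughly the 1-2% pixel budget the abstract mentions for typical ImageNet-sized inputs; the fitness is simply the probability assigned to the true class, so the evolutionary strategy needs only model queries, never gradients.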
