
Scratch that! An Evolution-based Adversarial Attack against Neural Networks

12/05/2019
by Malhar Jere, et al.
University of California, San Diego
SRI International

Recent research has shown that Deep Neural Networks (DNNs) for image classification are vulnerable to adversarial attacks. However, most works on adversarial samples utilize sub-perceptual noise that, while invisible or only slightly visible to humans, often covers the entire image. Additionally, most of these attacks require knowledge of the neural network architecture and its parameters, and the ability to calculate the gradients of the parameters with respect to the inputs. In this work, we show that it is possible to attack neural networks in a highly restricted threat setting, where attackers have no knowledge of the neural network (i.e., in a black-box setting) and can only modify highly localized adversarial noise in the form of randomly chosen straight lines or scratches. Our Adversarial Scratches attack covers only 1-2% of the image pixels and is driven by the Covariance Matrix Adaptation Evolutionary Strategy, a purely black-box method that does not require knowledge of the neural network architecture or its gradients. Against ImageNet models, Adversarial Scratches requires 3 times fewer queries than GenAttack (without any optimizations) and 73 times fewer queries than ZOO, both prior state-of-the-art black-box attacks. We successfully deceive state-of-the-art Inception-v3, ResNet-50 and VGG-19 models trained on ImageNet, achieving deceiving rates of up to 75.8% with fewer queries than several state-of-the-art black-box attacks, while modifying less than 2% of the image pixels. Additionally, we provide a new threat scenario for neural networks, demonstrate a new attack surface that can be used to perform adversarial attacks, and discuss its potential implications.
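The abstract describes a query-only attack that perturbs a thin line of pixels and tunes it with an evolutionary strategy. As a rough illustration of that threat model (not the authors' implementation), the sketch below renders a single straight-line scratch and optimizes its endpoints and color with a simple (1+1) evolution strategy rather than CMA-ES; the `query_model` oracle, the 224x224 input size, and the seven-parameter scratch encoding are assumptions made here for illustration.

```python
import numpy as np

H, W = 224, 224  # assumed input resolution (e.g., ImageNet classifiers)


def query_model(image):
    """Black-box oracle returning class probabilities for `image`.
    Hypothetical stand-in for querying the victim network."""
    raise NotImplementedError


def draw_scratch(image, params):
    """Render one straight-line scratch defined by params = (x0, y0, x1, y1, r, g, b),
    all in [0, 1], onto a copy of `image` (H x W x 3 floats in [0, 1])."""
    x0, y0, x1, y1, r, g, b = params
    p0 = np.array([x0 * (W - 1), y0 * (H - 1)])
    p1 = np.array([x1 * (W - 1), y1 * (H - 1)])
    out = image.copy()
    n = int(np.linalg.norm(p1 - p0)) + 1
    for t in np.linspace(0.0, 1.0, n):           # sample points along the line
        x, y = (p0 + t * (p1 - p0)).astype(int)
        out[y, x] = [r, g, b]                    # overwrite pixels on the scratch
    return out


def attack(image, true_label, iters=500, sigma=0.1, seed=0):
    """(1+1) evolution strategy: mutate the scratch parameters and keep the
    candidate that most lowers the true-class probability (untargeted attack)."""
    rng = np.random.default_rng(seed)
    best = rng.random(7)                         # random initial scratch
    best_fit = query_model(draw_scratch(image, best))[true_label]
    for _ in range(iters):
        cand = np.clip(best + sigma * rng.standard_normal(7), 0.0, 1.0)
        fit = query_model(draw_scratch(image, cand))[true_label]
        if fit < best_fit:                       # accept only improvements
            best, best_fit = cand, fit
    return draw_scratch(image, best), best_fit
```

Each iteration costs exactly one model query, which is why the query budget, rather than gradient access, is the relevant constraint in this black-box setting.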


Related research

04/30/2021 · Black-box adversarial attacks using Evolution Strategies
12/07/2019 · Principal Component Properties of Adversarial Samples
02/04/2022 · Pixle: a fast and effective black-box attack based on rearranging pixels
02/20/2019 · Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers
10/21/2019 · Recovering Localized Adversarial Attacks
10/06/2020 · A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
01/08/2018 · LaVAN: Localized and Visible Adversarial Noise