Attacking Convolutional Neural Network using Differential Evolution

04/19/2018
by Jiawei Su, et al.

The output of Convolutional Neural Networks (CNNs) has been shown to be discontinuous, which can make CNN image classifiers vulnerable to small, well-tuned artificial perturbations. That is, images modified by adding such perturbations (i.e., adversarial perturbations), which make little difference to human eyes, can completely alter the CNN classification results. In this paper, we propose a practical attack that uses differential evolution (DE) to generate effective adversarial perturbations. We comprehensively evaluate the effectiveness of different types of DE for conducting the attack on different network structures. The proposed method is a black-box attack that only requires the oracle feedback (output probabilities) of the target CNN systems. The results show that, under strict constraints that simultaneously control the number of pixels changed and the overall perturbation strength, the attack achieves 72.29% and 61.28% confidence on average on three common types of CNNs. The attack only requires modifying 5 pixels, with pixel-value distortions of 20.44, 14.76 and 22.98 respectively. Thus, the results show that current DNNs are also vulnerable to such simple black-box attacks even under very limited attack conditions.
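The abstract does not include code, but the core mechanism, evolving a handful of pixel changes with DE against a model that exposes only class probabilities, can be sketched compactly. The following Python sketch is illustrative rather than the authors' implementation: the (x, y, r, g, b) encoding of each modified pixel, the simplified DE/rand/1 scheme without crossover, the population and iteration sizes, and the `predict` interface returning a probability vector are all assumptions made for this example.

```python
import numpy as np

def perturb(image, candidate):
    """Apply a candidate encoded as blocks of (x, y, r, g, b) per pixel."""
    adv = image.copy()
    for gene in candidate.reshape(-1, 5):
        x = int(gene[0]) % adv.shape[0]
        y = int(gene[1]) % adv.shape[1]
        adv[x, y] = np.clip(gene[2:], 0, 255)
    return adv

def de_attack(image, true_label, predict, n_pixels=5,
              pop_size=400, max_iter=100, F=0.5):
    """Black-box few-pixel attack: minimize the true-class probability
    returned by `predict` using a simplified DE/rand/1 scheme."""
    h, w, _ = image.shape
    # Each gene block is (x, y, r, g, b); `scale` bounds each component.
    scale = np.tile([h - 1, w - 1, 255, 255, 255], n_pixels).astype(float)
    pop = np.random.rand(pop_size, n_pixels * 5) * scale
    fitness = lambda cand: predict(perturb(image, cand))[true_label]
    scores = np.array([fitness(cand) for cand in pop])
    for _ in range(max_iter):
        for i in range(pop_size):
            # DE/rand/1 mutation: combine three distinct random parents.
            a, b, c = pop[np.random.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + F * (b - c), 0, scale)
            score = fitness(trial)
            if score < scores[i]:  # greedy one-to-one survivor selection
                pop[i], scores[i] = trial, score
        best = pop[np.argmin(scores)]
        if np.argmax(predict(perturb(image, best))) != true_label:
            break  # misclassification achieved; stop early
    return perturb(image, pop[np.argmin(scores)])
```

Minimizing the true-class probability as above gives a non-targeted attack; swapping the fitness for the probability of a chosen class (to be maximized) would make it targeted. Note that only the probability vector from `predict` is used, which is what makes the scheme black-box.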
