Clipping free attacks against artificial neural networks

03/26/2018
by Boussad Addad, et al.

In recent years, a remarkable breakthrough has been made in AI thanks to deep artificial neural networks, which have achieved great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection, and so on. However, they are highly vulnerable to easily crafted adversarial examples. Many investigations have pointed out this fact, and different approaches have been proposed to generate attacks while adding only a limited perturbation to the original data. The most robust method known so far is the so-called C&W attack [1]. Nonetheless, a countermeasure known as feature squeezing, coupled with an ensemble defense, showed that most of these attacks can be destroyed [6]. In this paper, we present a new method we call Centered Initial Attack (CIA), whose advantage is twofold: first, it ensures by construction that the maximum perturbation is smaller than a threshold fixed beforehand, without the clipping process that degrades the quality of attacks. Second, it is robust against recently introduced defenses such as feature squeezing, JPEG encoding, and even against a voting ensemble of defenses. While its application is not limited to images, we illustrate it using five of the current best classifiers on the ImageNet dataset, two of which are adversarially retrained on purpose to be robust against attacks. With a fixed maximum perturbation of only 1.5, targeted attacks fool the voting ensemble defense, and nearly 100% of attacks succeed when the maximum perturbation is only 6. As a defense against CIA attacks, the last section of the paper gives some guidelines to limit their impact.
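The abstract's central idea, bounding the perturbation by construction rather than clipping it after the fact, can be illustrated with a short sketch. This is an illustrative assumption, not the authors' implementation: the helper names, the tanh reparameterization, and the centering step are all placeholders chosen for clarity.

```python
import numpy as np

# Illustrative sketch only (not the paper's code): keep the perturbation
# below a fixed budget eps by construction, so no clipping is ever needed
# on the adversarial output.

def centered_start(x, eps, lo=0.0, hi=255.0):
    # Assumed "centering" step: shift pixels that sit too close to the
    # domain boundaries so that x_c +/- eps always stays inside [lo, hi].
    return np.clip(x, lo + eps, hi - eps)

def bounded_adversarial(x, w, eps):
    # tanh squashes the free variable w into (-1, 1); scaling by eps
    # guarantees |perturbation| < eps for any value of w.
    x_c = centered_start(x, eps)
    return x_c + eps * np.tanh(w)

# An attacker would optimize w (e.g. by gradient descent on the target
# classifier's loss); whatever value w takes, the result is a valid image
# whose distance from the centered starting point stays within eps.
```

The point of such a reparameterization is that the optimizer can search freely over w without ever producing an out-of-budget or out-of-range image, which is what the abstract means by avoiding the clipping step that degrades attack quality.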

Related research

06/30/2020 - Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection
Malware remains a big threat to cyber security, calling for machine lear...

06/07/2019 - Efficient Project Gradient Descent for Ensemble Adversarial Attack
Recent advances show that deep neural networks are not robust to deliber...

09/10/2019 - Effectiveness of Adversarial Examples and Defenses for Malware Classification
Artificial neural networks have been successfully used for many differen...

01/28/2019 - Defense Methods Against Adversarial Examples for Recurrent Neural Networks
Adversarial examples are known to mislead deep learning models to incorr...

12/05/2022 - Multiple Perturbation Attack: Attack Pixelwise Under Different ℓ_p-norms For Better Adversarial Performance
Adversarial machine learning has been both a major concern and a hot top...

10/06/2020 - Downscaling Attack and Defense: Turning What You See Back Into What You Get
The resizing of images, which is typically a required part of preprocess...

02/01/2019 - Adaptive Gradient Refinement for Adversarial Perturbation Generation
Deep Neural Networks have achieved remarkable success in computer vision...
