Stochastic sparse adversarial attacks

11/24/2020
by   Hatem Hajri, et al.

Adversarial attacks on neural network classifiers (NNC) and the use of random noise in these methods have stimulated a large number of works in recent years. However, despite all the previous investigations, existing approaches that rely on random noise to fool NNC have fallen far short of the performance of state-of-the-art adversarial methods. In this paper, we fill this gap by introducing stochastic sparse adversarial attacks (SSAA), standing as simple, fast and purely noise-based targeted and untargeted attacks on NNC. SSAA offer new examples of sparse (or L_0) attacks, for which only a few methods have been proposed previously. These attacks are devised by exploiting a small-time expansion idea widely used for Markov processes. Experiments on small and large datasets (CIFAR-10 and ImageNet) illustrate several advantages of SSAA in comparison with state-of-the-art methods. For instance, in the untargeted case, our method, called voting folded Gaussian attack (VFGA), scales efficiently to ImageNet and achieves a significantly lower L_0 score than SparseFool (up to 1/14 lower) while being faster. In the targeted setting, VFGA achieves appealing results on ImageNet and is significantly faster than the Carlini-Wagner L_0 attack.
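The abstract does not spell out the algorithm, but the core idea of a purely noise-based sparse (L_0) attack can be illustrated with a toy sketch: repeatedly perturb randomly chosen coordinates of the input with folded (absolute-value) Gaussian noise until the classifier's prediction flips, so that only a few coordinates are ever modified. This is a hypothetical illustration, not the authors' VFGA method; the `classify` function, the per-coordinate update, and all parameter names are assumptions made for the example.

```python
import numpy as np


def folded_gaussian_sparse_attack(classify, x, sigma=0.5, max_steps=100, seed=0):
    """Toy sparse noise-based attack (illustrative, not the paper's VFGA).

    Adds folded Gaussian noise |N(0, sigma)| to one randomly chosen
    coordinate per step, stopping as soon as the prediction changes.
    Returns the perturbed input and the number of steps used
    (None if the attack failed within max_steps).
    """
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    original_label = classify(x)
    for step in range(1, max_steps + 1):
        i = int(rng.integers(x.size))              # random coordinate
        noise = abs(rng.normal(0.0, sigma))        # folded Gaussian sample
        x_adv.flat[i] = np.clip(x_adv.flat[i] + noise, 0.0, 1.0)
        if classify(x_adv) != original_label:
            return x_adv, step                     # success: few coords touched
    return x_adv, None                             # attack failed


# Usage with a trivial stand-in classifier (threshold on the pixel sum):
classify = lambda z: int(z.sum() > 1.0)
x = np.zeros(10)
x_adv, steps = folded_gaussian_sparse_attack(classify, x)
```

Because random coordinates may repeat, `steps` upper-bounds the L_0 distance `np.count_nonzero(x_adv - x)`; a real attack would additionally rank coordinates (e.g. by a saliency score) rather than sampling them uniformly.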


