Sparse and Imperceivable Adversarial Attacks

09/11/2019
by   Francesco Croce, et al.

Neural networks have been proven to be vulnerable to a variety of adversarial attacks. From a safety perspective, highly sparse adversarial attacks are particularly dangerous. On the other hand, the pixelwise perturbations of sparse attacks are typically large and thus can potentially be detected. We propose a new black-box technique to craft adversarial examples that aims at minimizing the l_0-distance to the original image. Extensive experiments show that our attack is better than or competitive with the state of the art. Moreover, we can integrate additional bounds on the componentwise perturbation. Allowing pixels to change only in regions of high variation and avoiding changes along axis-aligned edges makes our adversarial examples almost non-perceivable. Moreover, we adapt the Projected Gradient Descent attack to the l_0-norm, integrating componentwise constraints. This allows us to perform adversarial training to enhance the robustness of classifiers against sparse and imperceivable adversarial manipulations.
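The PGD variant described in the abstract alternates a gradient step with a projection that keeps at most k perturbed pixels while respecting componentwise bounds. Below is a minimal NumPy sketch of such an l_0-plus-box projection; the function name, the (H, W, C) tensor layout, and the choice of scoring pixels by their squared perturbation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_l0_box(delta, k, lo, hi):
    """Hypothetical helper: project a perturbation onto the set of
    perturbations that (i) change at most k pixels and (ii) satisfy
    componentwise bounds lo <= delta <= hi.

    delta: (H, W, C) perturbation array; lo, hi: broadcastable bounds.
    """
    # Enforce the componentwise (and image-domain) bounds first.
    delta = np.clip(delta, lo, hi)
    # Score each pixel by its total perturbation energy across channels.
    scores = np.sum(delta ** 2, axis=-1)            # shape (H, W)
    flat = scores.reshape(-1)
    if k < flat.size:
        # Threshold at the k-th largest score; zero out all other pixels.
        thresh = np.partition(flat, -k)[-k]
        mask = (scores >= thresh).astype(delta.dtype)[..., None]
        delta = delta * mask
    return delta
```

Per the abstract, the componentwise bounds are chosen so that changes are allowed only in regions of high image variation and not along axis-aligned edges; in this sketch that logic would be encoded in how lo and hi are computed for each pixel, which is left out here.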
