Sampled Nonlocal Gradients for Stronger Adversarial Attacks

11/05/2020
by Leo Schwinn, et al.

The vulnerability of deep neural networks to small and even imperceptible perturbations has become a central topic in deep learning research. The evaluation of new defense mechanisms for these so-called adversarial attacks has proven to be challenging. Although several sophisticated defense mechanisms were introduced, most of them were later shown to be ineffective. However, a reliable evaluation of model robustness is mandatory for deployment in safety-critical real-world scenarios. We propose a simple yet effective modification to the gradient calculation of state-of-the-art first-order adversarial attacks, which increases their success rate and thus leads to more accurate robustness estimates. Normally, the gradient update of an attack is directly calculated for the given data point. In general, this approach is sensitive to noise and small local optima of the loss function. Inspired by gradient sampling techniques from non-convex optimization, we propose to calculate the gradient direction of the adversarial attack as the weighted average over multiple points in the local vicinity. We empirically show that by incorporating this additional gradient information, we are able to give a more accurate estimation of the global descent direction on noisy and non-convex loss surfaces. Additionally, we show that the proposed method achieves higher success rates than a variety of state-of-the-art attacks on the benchmark datasets MNIST, Fashion-MNIST, and CIFAR10.
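
As a concrete illustration of the idea, the following PyTorch sketch shows a PGD-style L_inf attack whose step direction is the average of gradients taken at several points sampled in the local vicinity of the current iterate. The sampling radius, the number of samples, and the uniform weighting used here are illustrative assumptions, not the exact configuration from the paper.

```python
# Minimal sketch, assuming a PyTorch classifier `model` and inputs `x` in [0, 1].
# The step direction is the average of gradients sampled around the iterate
# (illustrative stand-in for the paper's weighted nonlocal gradient).
import torch
import torch.nn.functional as F


def sampled_nonlocal_pgd(model, x, y, eps=8 / 255, alpha=2 / 255,
                         steps=10, n_samples=8, sample_radius=4 / 255):
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad_avg = torch.zeros_like(x_adv)
        for _ in range(n_samples):
            # Sample a point uniformly in the local vicinity of the iterate.
            noise = torch.empty_like(x_adv).uniform_(-sample_radius, sample_radius)
            x_s = (x_adv + noise).clamp(0, 1).requires_grad_(True)
            loss = F.cross_entropy(model(x_s), y)
            grad_avg += torch.autograd.grad(loss, x_s)[0]
        # Uniform weights here; the paper allows a weighted average.
        grad_avg /= n_samples

        # Standard signed-gradient ascent step, projected onto the eps-ball.
        x_adv = x_adv + alpha * grad_avg.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```

Averaging over sampled neighbors smooths out gradient noise and small local optima, so the signed update tends to follow the broader descent direction of the loss surface rather than an artifact of a single point.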

Related research

02/02/2022 · An Eye for an Eye: Defending against Gradient-based Attacks with Gradients
Deep learning models have been shown to be vulnerable to adversarial att...

07/13/2022 · On the Robustness of Bayesian Neural Networks to Adversarial Attacks
Vulnerability to adversarial attacks is one of the principal hurdles to ...

02/11/2020 · Robustness of Bayesian Neural Networks to Gradient-Based Attacks
Vulnerability to adversarial attacks is one of the principal hurdles to ...

06/03/2021 · PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack
State-of-the-art deep neural networks are sensitive to small input pertu...

04/21/2020 · Probabilistic Safety for Bayesian Neural Networks
We study probabilistic safety for Bayesian Neural Networks (BNNs) under ...

12/07/2018 · Deep-RBF Networks Revisited: Robust Classification with Rejection
One of the main drawbacks of deep neural networks, like many other class...

08/01/2022 · Attacking Adversarial Defences by Smoothing the Loss Landscape
This paper investigates a family of methods for defending against advers...
