Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks

03/27/2019
by   Francesco Croce, et al.

Modern neural networks are highly non-robust against adversarial manipulation. A significant amount of work has been invested in techniques to compute lower bounds on robustness through formal guarantees and to build provably robust models. However, it remains difficult to apply these methods to larger networks or to obtain robustness against larger perturbations. Thus, attack strategies are needed to provide tight upper bounds on the actual robustness. We significantly improve the randomized gradient-free attack for ReLU networks [9], in particular by scaling it up to large networks. We show that our attack achieves similar or significantly smaller robust accuracy than state-of-the-art attacks like PGD or the attack of Carlini and Wagner, thus revealing an overestimation of robustness by these state-of-the-art methods. Our attack is not based on a gradient descent scheme and is in this sense gradient-free, which makes it less sensitive to the choice of hyperparameters, as no careful selection of the stepsize is required.
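For context on the baseline the abstract compares against: PGD (projected gradient descent) iteratively takes sign-based gradient steps and projects back into an L∞ ball around the input, and its success depends on the stepsize. The sketch below is a minimal, hypothetical illustration of L∞ PGD on a toy linear classifier (the function name, the toy model, and all parameter values are illustrative assumptions, not from the paper):

```python
# Hypothetical sketch of an L-infinity PGD attack on a toy linear
# binary classifier score(x) = w . x + b with label y in {-1, +1}.
# All names and parameter values here are illustrative assumptions.

def pgd_linf(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """Sign-based gradient ascent on the loss -y*(w.x + b),
    projected back into the eps-ball (L-infinity) around x."""
    adv = list(x)
    for _ in range(steps):
        for i in range(len(adv)):
            # Gradient of the loss -y*(w . adv + b) w.r.t. adv[i] is -y*w[i];
            # for the L-infinity step only its sign matters.
            grad_i = -y * w[i]
            step = 1 if grad_i > 0 else (-1 if grad_i < 0 else 0)
            adv[i] += alpha * step
            # Projection: clamp each coordinate to [x[i]-eps, x[i]+eps].
            adv[i] = min(max(adv[i], x[i] - eps), x[i] + eps)
    return adv

# Toy usage: x sits on the decision boundary (score 0); the attack pushes
# it to the maximal negative score reachable inside the eps-ball.
adv = pgd_linf([1.0, -1.0], y=1, w=[1.0, 1.0], b=0.0)
```

The fixed stepsize `alpha` is exactly the hyperparameter the abstract points to: too large and PGD oscillates at the ball's boundary, too small and it fails to reach the loss maximum within the step budget, which is what a gradient-free scheme sidesteps.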

Related research:

- GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization (07/06/2021)
- Provable robustness against all adversarial l_p-perturbations for p ≥ 1 (05/27/2019)
- Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks (03/10/2023)
- Robustness Certificates Against Adversarial Examples for ReLU Networks (02/01/2019)
- A randomized gradient-free attack on ReLU networks (11/28/2018)
- Robustness Verification for Classifier Ensembles (05/12/2020)
- Robustness of Complex Networks Considering Load and Cascading Failure under Edge-removal Attack (03/28/2023)
