A randomized gradient-free attack on ReLU networks

11/28/2018
by Francesco Croce, et al.

It has recently been shown that neural networks, as well as other classifiers, are vulnerable to so-called adversarial attacks: in object recognition, for example, an almost imperceptible change of the image changes the decision of the classifier. Relatively fast heuristics have been proposed to produce these adversarial inputs, but the problem of finding the optimal adversarial input, that is, the one with the minimal change to the input, is NP-hard. While methods based on mixed-integer optimization have been developed that find the optimal adversarial input, they do not scale to large networks. Currently, the attack scheme proposed by Carlini and Wagner is considered to produce the best adversarial inputs. In this paper we propose a new attack scheme for the class of ReLU networks based on direct optimization over the resulting linear regions. In our experimental validation we improve on the Carlini-Wagner attack in all but one of 18 experiments, with a relative improvement of up to 9%. As our approach exploits the geometrical structure of ReLU networks, it is less susceptible to defences targeting their functional properties.
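The geometric idea the abstract refers to is that a ReLU network is piecewise affine: on each linear region it behaves exactly like an affine classifier f(x) = Wx + b, so within a region the minimal ℓ2 perturbation to the decision boundary has a closed form. The sketch below (not the paper's algorithm, just an illustration of this geometry; the function name and signature are made up for this example) computes that distance for an affine classifier:

```python
import numpy as np

def min_l2_to_boundary(W, b, x, c):
    """Minimal l2 distance from x to the decision boundary of the
    affine classifier f(x) = W @ x + b, where c = argmax f(x).

    Within one linear region of a ReLU network the network coincides
    with such an affine map, so this is the (region-local) distance
    to the nearest class-change hyperplane.
    """
    dists = []
    for j in range(W.shape[0]):
        if j == c:
            continue
        # Hyperplane separating class c from class j: (W[c]-W[j]) x + (b[c]-b[j]) = 0
        w = W[c] - W[j]
        offset = w @ x + (b[c] - b[j])
        dists.append(abs(offset) / np.linalg.norm(w))
    return min(dists)

# Tiny 2D example: two classes with identity weights, x = (2, 0) is class 0.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 0.0])
print(min_l2_to_boundary(W, b, x, c=0))  # distance to the line x0 = x1
```

The catch, and the reason the full problem is NP-hard, is that the true minimal perturbation may lie in a different linear region, of which there can be exponentially many; the paper's contribution is a randomized scheme for searching over these regions.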


