Thundernna: a white box adversarial attack

11/24/2021
by Linfeng Ye, et al.

Existing work shows that neural networks trained by naive gradient-based optimization are prone to adversarial attacks: adding a small malicious perturbation to an ordinary input is enough to make the network misclassify it. At the same time, attacking a neural network is key to improving its robustness, since training on adversarial examples can make networks resist some kinds of adversarial attacks. Adversarial attacks can also reveal characteristics of the neural network itself, a complex high-dimensional non-linear function, as discussed in previous work. In this project, we develop a first-order method to attack neural networks. Compared with other first-order attacks, our method has a much higher success rate. Furthermore, it is much faster than second-order attacks and multi-step first-order attacks.
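The abstract does not detail the proposed attack, but a representative single-step first-order attack is FGSM (Goodfellow et al.), which perturbs the input by the sign of the loss gradient. The sketch below applies it to a toy logistic-regression classifier; the model, weights, and values are purely illustrative, not the paper's method.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step first-order (FGSM-style) attack on a logistic classifier
    p = sigmoid(w.x + b): perturb x by eps * sign(grad_x loss)."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    # For binary cross-entropy, the gradient of the loss w.r.t. x is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)  # ascend the loss within an L-inf ball

# Hypothetical toy model and input (illustrative values only).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.1, -0.3])
y = 1.0                               # true label
x_adv = fgsm_attack(x, y, w, b, eps=0.1)
```

A single gradient evaluation per input is what makes such one-step first-order attacks much cheaper than second-order or iterative multi-step attacks, at the cost of a lower success rate that the paper claims to improve on.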


