Adversarial examples attack based on random warm restart mechanism and improved Nesterov momentum

05/10/2021
by Tiangang Li, et al.

Deep learning algorithms have achieved great success in the field of computer vision, but studies have shown that deep learning models are vulnerable to adversarial example attacks and can be induced to make false decisions. This challenges the further development of deep learning and urges researchers to pay closer attention to the relationship between adversarial example attacks and deep learning security. This work focuses on adversarial examples and optimizes their generation from the view of adversarial robustness, taking the perturbation added to an adversarial example as the optimization parameter. From the view of gradient optimization, we propose the RWR-NM-PGD attack algorithm, based on a random warm restart mechanism and improved Nesterov momentum. The algorithm introduces improved Nesterov momentum, using its ability to accelerate convergence and improve the gradient update direction to speed up the generation of adversarial examples. In addition, a random warm restart mechanism is used for optimization, and the projected gradient descent algorithm limits the range of the generated perturbation within each warm restart, which yields a stronger attack. Experiments on two public datasets show that the proposed algorithm improves the success rate of attacks on deep learning models without extra time cost. Compared with the benchmark attack method, the proposed algorithm achieves a better attack success rate on both normally trained models and defense models. Our method's average attack success rate is 46.3077% higher than that of PGD. Attack results on 13 defense models show that the proposed attack algorithm is superior to the benchmark algorithm in attack universality and transferability.
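The abstract does not give the exact update rules, so the following is only a minimal sketch of the general idea: L_inf PGD whose inner loop uses a Nesterov-style look-ahead with normalized (MI-FGSM-style) momentum, re-initialized from a random point in the eps-ball at each warm restart, keeping the strongest perturbation found. It assumes PyTorch; the function and parameter names (rwr_nm_pgd, mu, restarts) are illustrative, not the authors' reference implementation.

```python
# Hypothetical sketch of an RWR-NM-PGD-style attack, not the paper's code.
import torch
import torch.nn.functional as F

def rwr_nm_pgd(model, x, y, eps=8/255, alpha=2/255, steps=10,
               restarts=5, mu=1.0):
    """L_inf PGD with Nesterov-style momentum; keeps the best perturbation
    found across several random warm restarts."""
    model.eval()
    best_adv = x.clone()
    best_loss = torch.full((x.size(0),), -float("inf"), device=x.device)

    for _ in range(restarts):
        # Random warm restart: re-initialize the perturbation in the eps-ball.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        g = torch.zeros_like(x)  # accumulated momentum

        for _ in range(steps):
            # Nesterov look-ahead: take the gradient at the momentum-
            # anticipated point instead of the current iterate.
            x_nes = (x + delta + alpha * mu * g).clamp(0, 1)
            x_nes = x_nes.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_nes), y)
            grad = torch.autograd.grad(loss, x_nes)[0]

            # Normalized-gradient momentum accumulation (MI-FGSM style).
            g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3),
                                                 keepdim=True) + 1e-12)

            # Ascent step, then project back onto the eps-ball (PGD).
            delta = (delta + alpha * g.sign()).clamp(-eps, eps)
            delta = ((x + delta).clamp(0, 1) - x).detach()

        # Keep the strongest adversarial example per input across restarts.
        with torch.no_grad():
            loss_per_ex = F.cross_entropy(model(x + delta), y,
                                          reduction="none")
        improved = loss_per_ex > best_loss
        best_loss = torch.where(improved, loss_per_ex, best_loss)
        best_adv[improved] = (x + delta)[improved]

    return best_adv
```

Tracking the best per-example loss across restarts is one plausible way to realize the paper's claim that warm restarts improve the attack without extra time cost relative to running the same total number of PGD steps.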


