Generalizing Adversarial Examples by AdaBelief Optimizer

01/25/2021
by Yixiang Wang, et al.

Recent research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples: legitimate inputs augmented with imperceptible, well-designed perturbations can easily fool DNNs at test time. However, most existing adversarial attacks struggle to fool adversarially trained models. To address this issue, we propose an AdaBelief iterative Fast Gradient Sign Method (AB-FGSM) to generalize adversarial examples. By integrating the AdaBelief optimization algorithm into I-FGSM, we expect the generalization of adversarial examples to improve, relying on the strong generalization ability of the AdaBelief optimizer. To validate the effectiveness and transferability of the adversarial examples generated by AB-FGSM, we conduct white-box and black-box attacks on various single and ensemble models. Compared with state-of-the-art attack methods, our proposed method can generate adversarial examples effectively in the white-box setting, and the transfer rate is 7…
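The abstract describes AB-FGSM only at a high level: run I-FGSM, but shape each step with AdaBelief's moment estimates instead of the raw gradient sign. Below is a minimal PyTorch sketch of that idea, based solely on the abstract and the published AdaBelief update rule, not the authors' reference implementation; the function name ab_fgsm, the hyperparameter names and defaults (eps, steps, beta1, beta2, delta), and the sign-step-plus-projection structure are assumptions borrowed from standard I-FGSM.

```python
import torch

def ab_fgsm(model, loss_fn, x, y, eps=8/255, steps=10,
            beta1=0.9, beta2=0.999, delta=1e-8):
    """Sketch of an AdaBelief-style iterative FGSM (AB-FGSM).

    The first moment m tracks the gradient; the second moment s
    tracks the gradient's deviation from m (the "belief" term that
    distinguishes AdaBelief from Adam). The bias-corrected ratio
    m_hat / sqrt(s_hat) replaces the raw gradient in the I-FGSM
    sign step. Hyperparameters here are illustrative assumptions.
    """
    alpha = eps / steps                  # per-iteration step size
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)              # first moment
    s = torch.zeros_like(x)              # second ("belief") moment
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        g, = torch.autograd.grad(loss, x_adv)
        # AdaBelief moment updates with bias correction
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2 + delta
        m_hat = m / (1 - beta1 ** t)
        s_hat = s / (1 - beta2 ** t)
        step = m_hat / (s_hat.sqrt() + delta)
        # Sign step, then project back into the eps-ball around x
        x_adv = x_adv.detach() + alpha * step.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

In this reading, the generalization benefit comes from the belief term: when successive gradients agree with the running mean, s stays small and the step direction is trusted; when they disagree, the step is damped, which plausibly yields smoother, more transferable perturbations than raw sign updates.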


