Adversarial Attack Based on Prediction-Correction

06/02/2023
by Chen Wan, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples obtained by adding small perturbations to original examples. In existing attacks, the added perturbations are mainly determined by the gradient of the loss function with respect to the inputs. This paper is the first to study the close relationship between gradient-based attacks and numerical methods for solving ordinary differential equations (ODEs). Inspired by the numerical solution of ODEs, a new prediction-correction (PC) based adversarial attack is proposed. In the proposed PC-based attack, an existing attack is first used to produce a predicted example, and then the predicted example and the current example are combined to determine the added perturbations. The method is highly extensible and can easily be applied to any available gradient-based attack. Extensive experiments demonstrate that, compared with state-of-the-art gradient-based adversarial attacks, the proposed PC-based attacks achieve higher attack success rates and exhibit better transferability.
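As a rough illustration of the idea, the sketch below wraps I-FGSM in a prediction-correction loop: a plain I-FGSM step serves as the predictor, and the correction step combines the gradients computed at the current example and at the predicted example. The equal-weight combination, the function names, and the hyperparameters are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of a prediction-correction (PC) style iterative attack.
# Assumptions (not taken from the paper): I-FGSM is the inner predictor,
# and the correction uses an equal-weight average of the gradients at the
# current and the predicted example.
import torch
import torch.nn as nn

def pc_ifgsm(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Prediction-correction variant of I-FGSM (illustrative sketch)."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()

    def grad_at(x_in):
        # Gradient of the loss with respect to the input.
        x_in = x_in.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_in), y)
        return torch.autograd.grad(loss, x_in)[0]

    for _ in range(steps):
        # Prediction: one plain I-FGSM step from the current example.
        g_curr = grad_at(x_adv)
        x_pred = x_adv + alpha * g_curr.sign()
        x_pred = torch.clamp(torch.min(torch.max(x_pred, x - eps), x + eps), 0, 1)

        # Correction: combine gradients at the current and predicted examples
        # (equal weights here; an assumption, not the paper's rule).
        g_pred = grad_at(x_pred)
        g = 0.5 * g_curr + 0.5 * g_pred

        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)

    return x_adv.detach()

Here x is a batch of images in [0, 1] and y the corresponding labels; any other gradient-based attack could stand in for the I-FGSM predictor.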

research 08/17/2019
Nesterov Accelerated Gradient and Scale Invariance for Improving Transferability of Adversarial Examples
Recent evidence suggests that deep neural networks (DNNs) are vulnerable...

research 05/19/2022
Focused Adversarial Attacks
Recent advances in machine learning show that neural models are vulnerab...

research 03/11/2022
Block-Sparse Adversarial Attack to Fool Transformer-Based Text Classifiers
Recently, it has been shown that, in spite of the significant performanc...

research 10/28/2022
Improving Transferability of Adversarial Examples on Face Recognition with Beneficial Perturbation Feature Augmentation
Face recognition (FR) models can be easily fooled by adversarial example...

research 11/15/2019
AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients
Deep neural networks have been shown to be vulnerable to adversarial exa...

research 12/28/2021
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
Minimal adversarial perturbations added to inputs have been shown to be ...

research 11/08/2021
Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks
In this paper, the bias classifier is introduced, that is, the bias part...
