Boosting Adversarial Attacks with Momentum

10/17/2017
by   Yinpeng Dong, et al.

Deep neural networks are vulnerable to adversarial examples, a weakness that raises security concerns about these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate for evaluating the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate because of the coupling between attack ability and transferability. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating a momentum term into the iterative attack process, our methods stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates of black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. We won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
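The momentum iterative update described above can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes a caller-supplied gradient function `grad_fn` standing in for the attacked model's loss gradient, and uses an L-infinity budget with the L1 gradient normalization described in the paper. The toy quadratic "loss" in the demo is an assumption for the sake of a runnable example.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.1, num_iter=10, mu=1.0):
    """Sketch of the momentum iterative attack (MI-FGSM style).

    x        : clean input (numpy array)
    grad_fn  : returns dL/dx of the attacked model's loss at x
               (assumption: caller supplies this for any differentiable model)
    eps      : L_inf perturbation budget
    mu       : momentum decay factor
    """
    alpha = eps / num_iter              # per-step size so the total stays within eps
    g = np.zeros_like(x)                # accumulated (momentum) gradient
    x_adv = x.copy()
    for _ in range(num_iter):
        grad = grad_fn(x_adv)
        # normalize the raw gradient by its L1 norm before accumulating,
        # so steps of very different magnitudes contribute comparably
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        # ascend the loss along the sign of the accumulated direction
        x_adv = x_adv + alpha * np.sign(g)
        # project back into the eps-ball around the clean input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy demo (hypothetical loss): L(x) = ||x - target||^2, gradient 2*(x - target).
# The attack maximizes L, pushing x_adv away from `target` within the eps-ball.
target = np.array([1.0, -1.0, 0.5])
x0 = np.zeros(3)
adv = mi_fgsm(x0, lambda x: 2 * (x - target), eps=0.3)
```

With `mu=1.0` the update accumulates all past gradients, which is what stabilizes the direction across iterations; setting `mu=0.0` recovers the plain iterative FGSM baseline.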

Related research:

- CAAD 2018: Iterative Ensemble Adversarial Attack (11/07/2018)
- Defense-guided Transferable Adversarial Attacks (10/22/2020)
- Attack-SAM: Towards Evaluating Adversarial Robustness of Segment Anything Model (05/01/2023)
- Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization (11/21/2022)
- Orthogonal Deep Models As Defense Against Black-Box Attacks (06/26/2020)
- Curls & Whey: Boosting Black-Box Adversarial Attacks (04/02/2019)
- Improved Adversarial Robustness via Logit Regularization Methods (06/10/2019)
