Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack

06/15/2022
by Ruize Gao, et al.

AutoAttack (AA) has been the most reliable method for evaluating adversarial robustness when considerable computational resources are available. However, its high computational cost (e.g., about 100 times that of the projected gradient descent (PGD) attack) makes AA infeasible for practitioners with limited computational resources and also hinders its application in adversarial training (AT). In this paper, we propose a novel method, the minimum-margin (MM) attack, to evaluate adversarial robustness both quickly and reliably. Compared with AA, our method achieves comparable performance while costing only about 3% of the computational time in extensive experiments. The reliability of our method comes from evaluating the quality of adversarial examples via the margin between two targets, which precisely identifies the most adversarial example. Its computational efficiency comes from an effective Sequential TArget Ranking Selection (STARS) method, which ensures that the cost of the MM attack is independent of the number of classes. The MM attack opens a new way of evaluating adversarial robustness and provides a feasible and reliable way to generate high-quality adversarial examples in AT.
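The abstract names the two ingredients (a margin criterion and a ranked target selection) but not the procedure. Below is a minimal PyTorch sketch, under stated assumptions, of how a minimum-margin criterion combined with a top-k target selection might look; the helper `pgd_targeted`, the choice of `k`, and the logit-based ranking are illustrative assumptions, not the authors' implementation.

```python
import torch

def logit_margins(model, x, y):
    """Margin between the true-class logit and each other class's logit
    (assumed ranking signal; small margin => promising attack target)."""
    with torch.no_grad():
        logits = model(x)                                  # (batch, num_classes)
    true_logit = logits.gather(1, y.unsqueeze(1))          # (batch, 1)
    margins = true_logit - logits
    margins.scatter_(1, y.unsqueeze(1), float("inf"))      # exclude the true class itself
    return margins

def minimum_margin_attack(model, x, y, pgd_targeted, k=3):
    """Attack only the k smallest-margin classes, then keep, per example,
    the perturbation that achieves the minimum margin."""
    targets = logit_margins(model, x, y).topk(k, dim=1, largest=False).indices
    best_x = x.clone()
    best_margin = torch.full((x.size(0),), float("inf"), device=x.device)
    for j in range(k):
        x_adv = pgd_targeted(model, x, targets[:, j])      # assumed targeted attack
        with torch.no_grad():
            logits = model(x_adv)
        margin = (logits.gather(1, y.unsqueeze(1))
                  - logits.gather(1, targets[:, j:j + 1])).squeeze(1)
        improved = margin < best_margin
        best_margin = torch.where(improved, margin, best_margin)
        best_x[improved] = x_adv[improved]
    return best_x
```

In this sketch only k targets are attacked regardless of the total number of classes, so the cost stays flat as the class count grows, which matches the class-count independence the abstract attributes to STARS.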


