Revisiting DeepFool: generalization and improvement

Deep neural networks are known to be vulnerable to adversarial examples: inputs modified slightly to fool the network into making incorrect predictions. This has motivated a large body of research on evaluating the robustness of these networks to such perturbations. One particularly important robustness metric is robustness to minimal l2 adversarial perturbations. However, existing methods for evaluating this metric are either computationally expensive or inaccurate. In this paper, we introduce a new family of adversarial attacks that strikes a balance between effectiveness and computational efficiency. Our proposed attacks generalize the well-known DeepFool (DF) attack while remaining simple to understand and implement. We demonstrate that our attacks outperform existing methods in both effectiveness and computational efficiency. They are also suitable for evaluating the robustness of large models and can be used for adversarial training (AT), achieving state-of-the-art robustness to minimal l2 adversarial perturbations.
