Bridging the Performance Gap between FGSM and PGD Adversarial Training

11/07/2020
by Tianjin Huang, et al.

Deep learning achieves state-of-the-art performance on many tasks but remains vulnerable to adversarial examples. Among existing defense techniques, adversarial training with the projected gradient descent attack (adv.PGD) is considered one of the most effective ways to achieve moderate adversarial robustness. However, adv.PGD requires a large amount of training time because the projected gradient descent (PGD) attack takes multiple iterations to generate perturbations. On the other hand, adversarial training with the fast gradient sign method (adv.FGSM) takes much less training time, since FGSM generates perturbations in a single step, but it fails to increase adversarial robustness. In this work, we extend adv.FGSM so that it attains the adversarial robustness of adv.PGD. We demonstrate that the large curvature of the loss surface along the FGSM-perturbed direction accounts for the large gap in adversarial robustness between adv.FGSM and adv.PGD, and we therefore propose combining adv.FGSM with a curvature regularization (adv.FGSMR) to bridge the performance gap between adv.FGSM and adv.PGD. Experiments show that adv.FGSMR trains more efficiently than adv.PGD. Moreover, it achieves comparable adversarial robustness on the MNIST dataset under white-box attack, and on the CIFAR-10 dataset it outperforms adv.PGD under white-box attack while effectively defending against transferable adversarial attacks.
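To make the idea concrete, the following is a minimal, hypothetical PyTorch-style sketch of FGSM adversarial training combined with a curvature penalty along the FGSM direction. The finite-difference penalty, the hyperparameter values (eps, h, lam), and the helper names (fgsm_perturb, curvature_penalty, train_step) are illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch of adv.FGSM plus a curvature penalty; not the authors' exact method.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """One-step FGSM perturbation: move by eps along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Assumes inputs are scaled to [0, 1].
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def curvature_penalty(model, x, y, h=1.5e-2):
    """Finite-difference proxy for curvature along the FGSM direction:
    || grad L(x + h*d) - grad L(x) ||^2, where d = sign(grad L(x))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g0 = torch.autograd.grad(loss, x, create_graph=True)[0]
    d = g0.sign().detach()                       # FGSM (sign) direction
    loss_h = F.cross_entropy(model(x + h * d), y)
    g_h = torch.autograd.grad(loss_h, x, create_graph=True)[0]
    diff = (g_h - g0).flatten(start_dim=1)
    return diff.pow(2).sum(dim=1).mean()         # mean squared change of the gradient

def train_step(model, optimizer, x, y, eps=8/255, lam=4.0):
    """One adv.FGSM-style training step with an added curvature term (illustrative values)."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, eps)       # one-step adversarial example
    loss = F.cross_entropy(model(x_adv), y)      # adversarial training loss
    loss = loss + lam * curvature_penalty(model, x, y)
    optimizer.zero_grad()
    loss.backward()                              # double backprop through the penalty
    optimizer.step()
    return loss.item()

The penalty is differentiable with respect to the model parameters because the input gradients are built with create_graph=True, so minimizing it flattens the loss surface along the FGSM direction while the outer loss keeps the one-step adversarial training objective.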

Related research

10/08/2018
Efficient Two-Step Adversarial Defense for Deep Neural Networks
In recent years, deep neural networks have demonstrated outstanding perf...

11/23/2018
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
Research on adversarial examples in computer vision tasks has shown that...

12/23/2021
Adaptive Modeling Against Adversarial Attacks
Adversarial training, the process of training a deep learning model with...

11/26/2018
EnResNet: ResNet Ensemble via the Feynman-Kac Formalism
We propose a simple yet powerful ResNet ensemble algorithm which consist...

10/17/2020
A Stochastic Neural Network for Attack-Agnostic Adversarial Robustness
Stochastic Neural Networks (SNNs) that inject noise into their hidden la...

05/31/2021
Robustifying ℓ_∞ Adversarial Training to the Union of Perturbation Models
Classical adversarial training (AT) frameworks are designed to achieve h...

08/13/2019
Improving Generalization in Coreference Resolution via Adversarial Training
In order for coreference resolution systems to be useful in practice, th...
