Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing

03/24/2023
by Lin Li, et al.

Deep neural networks can easily be fooled into making incorrect predictions by corrupting the input with adversarial perturbations: human-imperceptible artificial noise. So far, adversarial training has been the most successful defense against such adversarial attacks. This work focuses on improving adversarial training to boost adversarial robustness. We first analyze, from an instance-wise perspective, how adversarial vulnerability evolves during adversarial training. We find that during training an overall reduction of adversarial loss is achieved by sacrificing a considerable proportion of training samples, which become more vulnerable to adversarial attack, resulting in an uneven distribution of adversarial vulnerability across the data. Such "uneven vulnerability" is prevalent across several popular robust training methods and, more importantly, relates to overfitting in adversarial training. Motivated by this observation, we propose a new adversarial training method, Instance-adaptive Smoothness Enhanced Adversarial Training (ISEAT), which jointly smooths both the input and weight loss landscapes in an adaptive, instance-specific way, enhancing robustness more for those samples with higher adversarial vulnerability. Extensive experiments demonstrate the superiority of our method over existing defense methods. Notably, when combined with the latest data augmentation and semi-supervised learning techniques, our method achieves state-of-the-art robustness against ℓ_∞-norm constrained attacks on CIFAR10: 59.32 without extra data and 61.55 with extra data. Code is available at https://github.com/TreeLLi/Instance-adaptive-Smoothness-Enhanced-AT.
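The core idea, measuring each sample's adversarial vulnerability and weighting training toward the most vulnerable instances, can be sketched in code. The following is a minimal, hypothetical illustration on a linear logistic classifier with an FGSM attack and a softmax-style per-instance weighting; the names `fgsm`, `instance_weights`, and `train_step` are invented for this sketch, and the paper's actual ISEAT method (which additionally smooths the weight loss landscape) is not reproduced here.

```python
import numpy as np

# Hypothetical sketch: instance-adaptive adversarial training for a
# linear binary classifier. Labels y are in {-1, +1}.

def logistic_loss(w, X, y):
    # per-sample logistic loss log(1 + exp(-y * w.x))
    return np.log1p(np.exp(-y * (X @ w)))

def fgsm(w, X, y, eps):
    # FGSM: one signed-gradient step on the input, ell_inf-bounded by eps.
    # For this linear model, sign(dL/dx) = sign(-y * w) for every sample.
    grad_sign = np.sign(-y[:, None] * w[None, :])
    return X + eps * grad_sign

def instance_weights(clean_loss, adv_loss, beta=1.0):
    # Vulnerability gap per sample; softmax-style weights emphasise the
    # most vulnerable instances (hypothetical weighting scheme).
    gap = adv_loss - clean_loss
    e = np.exp(beta * (gap - gap.max()))
    return e / e.sum() * len(gap)  # normalised so the mean weight is 1

def train_step(w, X, y, eps=0.1, lr=0.5):
    X_adv = fgsm(w, X, y, eps)
    wts = instance_weights(logistic_loss(w, X, y),
                           logistic_loss(w, X_adv, y))
    # Weighted gradient step on the adversarial loss:
    # dL/dw = -y * x_adv * sigmoid(-y * w.x_adv)
    s = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))
    grad = -(wts * s * y)[:, None] * X_adv
    return w - lr * grad.mean(axis=0)

# Toy linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = np.sign(X @ rng.normal(size=5))
w = np.zeros(5)
for _ in range(100):
    w = train_step(w, X, y)
```

The weighting step is the instance-adaptive part: samples whose loss rises most under attack (the "sacrificed", most vulnerable ones) receive larger weights, so smoothing effort is concentrated where vulnerability is highest.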

Related research:

- Universal Adversarial Training with Class-Wise Perturbations (04/07/2021)
- Toward Adversarial Robustness via Semi-supervised Robust Training (03/16/2020)
- Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training (03/02/2021)
- Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks (08/26/2021)
- Improving robustness of jet tagging algorithms with adversarial training (03/25/2022)
- Adversarial Defense Via Local Flatness Regularization (10/27/2019)
- Adversarial Vulnerability of Neural Networks Increases With Input Dimension (02/05/2018)
