
Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

by Jiancong Xiao, et al.
The Chinese University of Hong Kong, Shenzhen

Adversarial Training (AT) has been demonstrated to be one of the most effective defenses against adversarial examples. While most existing works focus on AT with a single type of perturbation (e.g., ℓ_∞ attacks), DNNs face threats from multiple types of adversarial examples. Therefore, adversarial training for multiple perturbations (ATMP) has been proposed to generalize adversarial robustness across different perturbation types (ℓ_1, ℓ_2, and ℓ_∞ norm-bounded perturbations). However, the resulting model exhibits a trade-off between different attacks. Meanwhile, there is no theoretical analysis of ATMP, limiting its further development. In this paper, we first provide a smoothness analysis of ATMP and show that the ℓ_1, ℓ_2, and ℓ_∞ adversaries contribute differently to the smoothness of the ATMP loss function. Based on this, we develop stability-based excess risk bounds and propose adaptive smoothness-weighted adversarial training for multiple perturbations. Theoretically, our algorithm yields better bounds. Empirically, our method achieves state-of-the-art performance on CIFAR10 and CIFAR100 against mixtures of multiple perturbation attacks.
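To make the ATMP objective concrete, the following is a minimal NumPy sketch under simplifying assumptions: a binary logistic model, single-step (FGSM-style) attacks for each norm, and a weighted combination of the per-attack adversarial losses. The function names and the default uniform weighting are illustrative only; the paper's adaptive smoothness-derived weights and multi-step attacks are not reproduced here.

```python
import numpy as np

def loss_and_grad_x(w, x, y):
    # Binary logistic loss and its gradient with respect to the INPUT x
    # (the attacker perturbs x, not the weights w).
    z = y * np.dot(w, x)
    loss = np.log1p(np.exp(-z))
    grad_x = -y * w / (1.0 + np.exp(z))
    return loss, grad_x

def perturb(x, g, eps, norm):
    # One-step ascent direction of size eps inside the given norm ball.
    if norm == "linf":
        return x + eps * np.sign(g)            # sign step (FGSM)
    if norm == "l2":
        return x + eps * g / (np.linalg.norm(g) + 1e-12)
    if norm == "l1":
        d = np.zeros_like(g)                   # move only the single
        i = np.argmax(np.abs(g))               # most-sensitive coordinate
        d[i] = np.sign(g[i])
        return x + eps * d
    raise ValueError(norm)

def atmp_loss(w, x, y, eps=0.1, weights=None):
    # ATMP objective on one example: a weighted sum of adversarial losses
    # over the three perturbation types. weights=None gives the uniform
    # "average over attacks" variant; the paper instead derives adaptive
    # weights from per-attack smoothness constants.
    norms = ["l1", "l2", "linf"]
    _, g = loss_and_grad_x(w, x, y)
    adv_losses = np.array(
        [loss_and_grad_x(w, perturb(x, g, eps, n), y)[0] for n in norms]
    )
    if weights is None:
        weights = np.ones(len(norms)) / len(norms)
    return float(np.dot(weights, adv_losses)), adv_losses
```

For this linear model the one-step losses order as ℓ_1 ≤ ℓ_2 ≤ ℓ_∞ (since max_i|w_i| ≤ ‖w‖_2 ≤ ‖w‖_1), which illustrates why each adversary contributes differently to the loss landscape and motivates weighting them unequally.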


