
Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

10/02/2022
by   Jiancong Xiao, et al.
The Chinese University of Hong Kong, Shenzhen

Adversarial Training (AT) has been demonstrated to be one of the most effective defenses against adversarial examples. While most existing work focuses on AT with a single type of perturbation (e.g., ℓ_∞ attacks), DNNs face threats from multiple types of adversarial examples. Adversarial training for multiple perturbations (ATMP) was therefore proposed to generalize adversarial robustness across perturbation types (ℓ_1-, ℓ_2-, and ℓ_∞-norm-bounded perturbations). However, the resulting models exhibit a trade-off between different attacks, and there has been no theoretical analysis of ATMP, limiting its further development. In this paper, we first provide a smoothness analysis of ATMP and show that ℓ_1, ℓ_2, and ℓ_∞ adversaries contribute differently to the smoothness of the ATMP loss function. Based on this, we develop stability-based excess risk bounds and propose adaptive smoothness-weighted adversarial training for multiple perturbations. Theoretically, our algorithm yields better bounds. Empirically, our experiments on CIFAR10 and CIFAR100 achieve state-of-the-art performance against mixtures of multiple perturbation attacks.
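To make the setup concrete, the following is a minimal NumPy sketch of the ATMP-style inner maximization: one PGD-type attack per norm ball (ℓ_∞, ℓ_2, ℓ_1), combined into a weighted adversarial loss. This is not the authors' implementation; the `atmp_loss` weights stand in for the paper's smoothness-derived weights, which would be computed from the smoothness analysis rather than fixed by hand, and the toy quadratic loss in the demo is purely illustrative.

```python
import numpy as np

# Projection helpers for the three perturbation types considered in ATMP.
def project_linf(delta, eps):
    """Project onto the l_inf ball of radius eps."""
    return np.clip(delta, -eps, eps)

def project_l2(delta, eps):
    """Project onto the l_2 ball of radius eps."""
    n = np.linalg.norm(delta)
    return delta if n <= eps else delta * (eps / n)

def project_l1(delta, eps):
    """Euclidean projection onto the l_1 ball of radius eps
    (sort-based scheme in the style of Duchi et al., 2008)."""
    if np.abs(delta).sum() <= eps:
        return delta
    u = np.sort(np.abs(delta))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, u.size + 1) > css - eps)[0][-1]
    theta = (css[rho] - eps) / (rho + 1.0)
    return np.sign(delta) * np.maximum(np.abs(delta) - theta, 0.0)

def pgd_perturbation(x, grad_fn, project, eps, step=0.1, iters=20):
    """Generic inner maximization: projected gradient ascent on the loss."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        delta = project(delta + step * grad_fn(x + delta), eps)
    return delta

def atmp_loss(x, loss_fn, grad_fn, attacks, weights):
    """Weighted combination of per-perturbation-type adversarial losses.
    The weights are placeholders for the smoothness-derived weights."""
    losses = [loss_fn(x + pgd_perturbation(x, grad_fn, proj, eps))
              for proj, eps in attacks]
    return float(np.dot(weights, losses))

# Demo on a toy quadratic loss (hypothetical, for illustration only).
target = np.array([1.0, -2.0, 0.5])
loss_fn = lambda z: 0.5 * np.sum((z - target) ** 2)
grad_fn = lambda z: z - target           # gradient of the loss w.r.t. the input
x = target + np.array([0.3, -0.1, 0.2])  # a clean point near the optimum
attacks = [(project_linf, 0.1), (project_l2, 0.1), (project_l1, 0.1)]
adv = atmp_loss(x, loss_fn, grad_fn, attacks, weights=[0.5, 0.3, 0.2])
```

Each attack searches its own norm ball, so the combined loss upper-bounds the clean loss; in the paper, the per-type weights adapt to how each adversary affects the smoothness of the training objective.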


Related research

04/30/2019
Adversarial Training and Robustness for Multiple Perturbations
Defenses against adversarial examples, such as adversarial training, are...

02/09/2022
Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations
Model robustness against adversarial examples of single perturbation typ...

10/18/2022
Scaling Adversarial Training to Large Perturbation Bounds
The vulnerability of Deep Neural Networks to Adversarial Attacks has fue...

02/20/2023
Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts
Adversarial training is widely used to make classifiers robust to a spec...

11/09/2019
Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples
Adversarial examples are a pervasive phenomenon of machine learning mode...

03/15/2021
Adversarial Training is Not Ready for Robot Learning
Adversarial training is an effective method to train deep learning model...

10/03/2020
Does Network Width Really Help Adversarial Robustness?
Adversarial training is currently the most powerful defense against adve...