Provable Unrestricted Adversarial Training without Compromise with Generalizability

01/22/2023
by Lilin Zhang, et al.

Adversarial training (AT) is widely considered the most promising strategy for defending against adversarial attacks and has drawn increasing interest from researchers. However, existing AT methods still face two challenges. First, they cannot handle unrestricted adversarial examples (UAEs), which are built from scratch, as opposed to restricted adversarial examples (RAEs), which are created by adding perturbations bounded by an l_p norm to observed examples. Second, existing AT methods often achieve adversarial robustness at the expense of standard generalizability (i.e., accuracy on natural examples) because they trade one off against the other. To overcome these challenges, we propose a unique viewpoint that understands UAEs as imperceptibly perturbed unobserved examples. We also find that the tradeoff results from the separation between the distributions of adversarial examples and natural examples. Based on these ideas, we propose a novel AT approach called Provable Unrestricted Adversarial Training (PUAT), which provides a target classifier with comprehensive adversarial robustness against both UAEs and RAEs while simultaneously improving its standard generalizability. In particular, PUAT utilizes partially labeled data to achieve effective UAE generation by accurately capturing the natural data distribution through a novel augmented triple-GAN. At the same time, PUAT extends traditional AT by introducing the supervised loss of the target classifier into the adversarial loss and, with the collaboration of the augmented triple-GAN, achieves alignment between the UAE distribution, the natural data distribution, and the distribution learned by the classifier. Finally, solid theoretical analysis and extensive experiments on widely used benchmarks demonstrate the superiority of PUAT.
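To make the training scheme described above more concrete, here is a minimal, hypothetical sketch of what a PUAT-style training step might look like. It is not the authors' implementation: the PyTorch framing, the module names (a conditional generator G, classifier C, discriminator D), the optimizers, and the hyperparameters (num_classes, z_dim, lam) are all illustrative assumptions, the loss weighting is a simplification of the paper's adversarial loss, and the use of unlabeled data by the augmented triple-GAN is omitted for brevity.

```python
# Hypothetical sketch of a PUAT-style step: the generator builds unrestricted
# adversarial examples from scratch, the discriminator aligns generated and
# natural (x, y) pairs, and the classifier's supervised loss is folded into
# the adversarial objective. All names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def puat_style_step(G, C, D, opt_G, opt_C, opt_D, x_lab, y_lab,
                    num_classes=10, z_dim=128, lam=1.0):
    device = x_lab.device
    bsz = x_lab.size(0)

    # Discriminator: distinguish real labeled pairs from generated pairs.
    z = torch.randn(bsz, z_dim, device=device)
    y_gen = torch.randint(0, num_classes, (bsz,), device=device)
    x_gen = G(z, y_gen).detach()
    d_real = D(x_lab, y_lab)
    d_fake = D(x_gen, y_gen)
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator: produce examples that D judges natural but that raise C's
    # loss, i.e., unrestricted adversarial examples built from scratch.
    z = torch.randn(bsz, z_dim, device=device)
    y_gen = torch.randint(0, num_classes, (bsz,), device=device)
    x_gen = G(z, y_gen)
    d_out = D(x_gen, y_gen)
    loss_G = (F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
              - lam * F.cross_entropy(C(x_gen), y_gen))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Classifier: standard supervised loss plus a robustness term on the
    # generated adversarial examples, pulling the UAE distribution, the
    # natural distribution, and the classifier's distribution together.
    loss_C = (F.cross_entropy(C(x_lab), y_lab)
              + lam * F.cross_entropy(C(x_gen.detach()), y_gen))
    opt_C.zero_grad()
    loss_C.backward()
    opt_C.step()

    return loss_D.item(), loss_G.item(), loss_C.item()
```

In an actual training loop this step would run per mini-batch, and the augmented triple-GAN would additionally consume unlabeled data to sharpen its estimate of the natural data distribution, which this sketch leaves out.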

