A Unified Wasserstein Distributional Robustness Framework for Adversarial Training

02/27/2022
by   Tuan-Anh Bui, et al.

It is well known that deep neural networks (DNNs) are susceptible to adversarial attacks, exposing a severe fragility of deep learning systems. As a result, adversarial training (AT), which incorporates adversarial examples during training, is a natural and effective approach to strengthening the robustness of a DNN-based classifier. However, most AT-based methods, notably PGD-AT and TRADES, seek a pointwise adversary that generates the worst-case adversarial example by perturbing each data sample independently, as a way to "probe" the vulnerability of the classifier. Arguably, there are unexplored benefits in considering such adversarial effects over an entire distribution. To this end, this paper presents a unified framework that connects Wasserstein distributional robustness with current state-of-the-art AT methods. We introduce a new Wasserstein cost function and a new series of risk functions, with which we show that standard AT methods are special cases of their counterparts in our framework. This connection leads to an intuitive relaxation and generalization of existing AT methods and facilitates the development of a new family of distributional-robustness AT-based algorithms. Extensive experiments show that our distributional-robustness AT algorithms further robustify their standard AT counterparts in various settings.
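To make the "pointwise adversary" contrast concrete, the following is a minimal sketch of the inner maximization used by PGD-style AT: each sample is perturbed independently by projected gradient ascent on the loss within an L-infinity ball. The toy logistic-regression model, data, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    """Binary cross-entropy loss of a linear classifier on one sample."""
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def loss_grad_x(w, x, y):
    """Gradient of the logistic loss with respect to the input x."""
    p = sigmoid(w @ x)
    return (p - y) * w

def pgd_attack(w, x, y, eps=0.3, step=0.1, iters=10):
    """Pointwise adversary: perturb a single sample independently,
    staying inside an L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(iters):
        g = loss_grad_x(w, x_adv, y)
        x_adv = x_adv + step * np.sign(g)          # gradient ascent on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back onto the ball
    return x_adv

# Illustrative weights and sample (assumed values).
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1.0
x_adv = pgd_attack(w, x, y)
# The adversarial loss is at least as large as the clean loss.
print(logistic_loss(w, x, y), logistic_loss(w, x_adv, y))
```

A distributional adversary, by contrast, would perturb the whole data distribution subject to a Wasserstein budget rather than solving this per-sample problem independently; the paper's framework recovers the pointwise scheme above as a special case.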

Related research

- 06/16/2023 — Wasserstein distributional robustness of neural networks: Deep neural networks are known to be vulnerable to adversarial attacks (...
- 02/14/2020 — Adversarial Distributional Training for Robust Deep Learning: Adversarial training (AT) is among the most effective techniques to impr...
- 12/04/2019 — Learning with Multiplicative Perturbations: Adversarial Training (AT) and Virtual Adversarial Training (VAT) are the...
- 06/07/2023 — Optimal Transport Model Distributional Robustness: Distributional robustness is a promising framework for training deep lea...
- 09/15/2022 — Explicit Tradeoffs between Adversarial and Natural Distributional Robustness: Several existing works study either adversarial or natural distributiona...
- 03/25/2022 — Improving robustness of jet tagging algorithms with adversarial training: Deep learning is a standard tool in the field of high-energy physics, fa...
- 02/11/2020 — Generalised Lipschitz Regularisation Equals Distributional Robustness: The problem of adversarial examples has highlighted the need for a theor...
