OTJR: Optimal Transport Meets Optimal Jacobian Regularization for Adversarial Robustness

03/21/2023
by   Binh M. Le, et al.

Deep neural networks are widely recognized as being vulnerable to adversarial perturbations. To overcome this challenge, developing a robust classifier is crucial. So far, two well-known defenses have been adopted to improve the learning of robust classifiers, namely adversarial training (AT) and Jacobian regularization. However, each approach behaves differently against adversarial perturbations. First, our work carefully analyzes and characterizes these two schools of approaches, both theoretically and empirically, to demonstrate how each impacts the robust learning of a classifier. Next, we propose our novel Optimal Transport with Jacobian regularization method, dubbed OTJR, which jointly incorporates input-output Jacobian regularization into AT by leveraging optimal transport theory. In particular, we employ the Sliced Wasserstein (SW) distance, which can efficiently push the adversarial samples' representations closer to those of clean samples, regardless of the number of classes in the dataset. The SW distance also provides the adversarial samples' movement directions, which are far more informative and powerful for the Jacobian regularization. Our extensive experiments demonstrate the effectiveness of the proposed method. Furthermore, we show that OTJR consistently enhances the model's robustness on the CIFAR-100 dataset under various adversarial attack settings, achieving up to 28.49
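The Sliced Wasserstein distance mentioned above is typically estimated by projecting both sets of representations onto random one-dimensional directions, where optimal transport has a closed form (sorting). Below is a minimal NumPy sketch of that Monte Carlo estimate; the function name, parameters, and defaults are illustrative, not the authors' implementation:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=50, seed=0):
    """Monte Carlo estimate of the squared Sliced Wasserstein-2 distance
    between two equal-size point clouds X, Y of shape (n, d).

    Illustrative sketch: projects both clouds onto random unit directions,
    then uses the closed-form 1-D optimal transport (sort and match)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Sample random directions and normalize them onto the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project each cloud onto every direction: shape (n, n_projections).
    X_proj = X @ theta.T
    Y_proj = Y @ theta.T
    # In 1-D, the optimal coupling matches sorted samples to sorted samples.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    return np.mean((X_sorted - Y_sorted) ** 2)
```

In an AT-style setting, X would hold clean-sample representations and Y the corresponding adversarial ones; minimizing this quantity pulls the adversarial representations toward the clean distribution, and the per-direction matchings indicate the movement directions the abstract refers to.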

Related Research

08/22/2021 · Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations
Upon the discovery of adversarial attacks, robust models have become obl...

06/25/2023 · Enhancing Adversarial Training via Reweighting Optimization Trajectory
Despite the fact that adversarial training has become the de facto metho...

06/05/2021 · k-Mixup Regularization for Deep Learning via Optimal Transport
Mixup is a popular regularization technique for training deep neural net...

11/19/2018 · Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding
Recent studies have demonstrated the vulnerability of deep convolutional...

09/07/2023 · Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences
We introduce the ARMOR_D methods as novel approaches to enhancing the ad...

04/08/2019 · Pushing the right boundaries matters! Wasserstein Adversarial Training for Label Noise
Noisy labels often occur in vision datasets, especially when they are is...

06/07/2023 · Optimal Transport Model Distributional Robustness
Distributional robustness is a promising framework for training deep lea...
