Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization

02/03/2022
by   Xiaojun Xu, et al.

Machine learning (ML) robustness and domain generalization are fundamentally correlated: they essentially concern data distribution shifts under adversarial and natural settings, respectively. On one hand, recent studies show that more robust (adversarially trained) models are more generalizable. On the other hand, there is a lack of theoretical understanding of their fundamental connection. In this paper, we explore the relationship between regularization and domain transferability, considering factors such as norm regularization and data augmentation (DA). We propose a general theoretical framework proving that factors involving model function class regularization are sufficient conditions for relative domain transferability. Our analysis implies that "robustness" is neither necessary nor sufficient for transferability; rather, the robustness induced by adversarial training is a by-product of such function class regularization. We then discuss popular DA protocols and show under what conditions they can be viewed as function class regularization and therefore improve generalization. We conduct extensive experiments to verify our theoretical findings and present several counterexamples, across different datasets, in which robustness and generalization are negatively correlated.
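The claim that data augmentation can act as function class regularization has a well-known concrete instance (not specific to this paper, but useful as intuition): for least squares, training on inputs perturbed by Gaussian noise is equivalent in expectation to ridge (Tikhonov) regularization. The sketch below, with made-up data, checks this numerically: the solution on many noise-augmented copies of the data approaches the closed-form ridge solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

sigma = 0.5  # std of the Gaussian input noise used for augmentation

# Expected augmented loss: E ||(X+E)w - y||^2 = ||Xw - y||^2 + n*sigma^2*||w||^2,
# so its minimizer is the ridge solution with penalty n*sigma^2.
w_ridge = np.linalg.solve(X.T @ X + n * sigma**2 * np.eye(d), X.T @ y)

# Empirical version: ordinary least squares on k noise-augmented copies.
k = 2000
Xa = np.vstack([X + sigma * rng.normal(size=X.shape) for _ in range(k)])
ya = np.tile(y, k)
w_aug = np.linalg.lstsq(Xa, ya, rcond=None)[0]

# The two solutions should be close for large k.
print("max |w_aug - w_ridge| =", np.max(np.abs(w_aug - w_ridge)))
```

Here the augmentation never references robustness at all, yet it shrinks the effective function class exactly like an explicit norm penalty, which is the flavor of sufficient condition the paper formalizes.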


Related research

04/01/2021 — TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness
Adversarial Transferability is an intriguing property of adversarial exa...

07/15/2023 — Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training
Adversarial examples (AEs) for DNNs have been shown to be transferable: ...

06/25/2020 — Does Adversarial Transferability Indicate Knowledge Transferability?
Despite the immense success that deep neural networks (DNNs) have achiev...

04/16/2019 — Reducing Adversarial Example Transferability Using Gradient Regularization
Deep learning algorithms have increasingly been shown to lack robustness...

04/11/2017 — The Space of Transferable Adversarial Examples
Adversarial examples are maliciously perturbed inputs designed to mislea...

10/06/2020 — Constraining Logits by Bounded Function for Adversarial Robustness
We propose a method for improving adversarial robustness by addition of ...

10/10/2022 — The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective
Data augmentation (DA) is a powerful workhorse for bolstering performanc...
