Does Network Width Really Help Adversarial Robustness?

10/03/2020
by   Boxi Wu, et al.

Adversarial training is currently the most powerful defense against adversarial examples. Previous empirical results suggest that adversarial training requires wider networks for better performance. Yet, it remains elusive how neural network width affects model robustness. In this paper, we carefully examine the relationship between network width and model robustness. We present an intriguing phenomenon: increased network width may not help robustness. Specifically, we show that model robustness is closely related to both natural accuracy and perturbation stability, a new metric proposed in our paper to characterize the model's stability under adversarial perturbations. While better natural accuracy can be achieved on wider neural networks, the perturbation stability actually becomes worse, potentially leading to worse overall model robustness. To understand the origin of this phenomenon, we further relate the perturbation stability to the network's local Lipschitzness. By leveraging recent results on neural tangent kernels, we show that larger network width naturally leads to worse perturbation stability. This suggests that to fully unleash the power of wide model architectures, practitioners should adopt a larger regularization parameter when training wider networks. Experiments on benchmark datasets confirm that this strategy can indeed alleviate the perturbation stability issue and improve state-of-the-art robust models.
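The two quantities in the abstract lend themselves to a short sketch. Below is a minimal, hypothetical PyTorch illustration (not the authors' code) of (i) a perturbation-stability metric, i.e. the fraction of inputs whose predicted label is unchanged under an adversarial perturbation, and (ii) a TRADES-style training loss whose regularization weight is scaled up with the network's widening factor, in the spirit of the paper's suggestion to use larger regularization for wider networks. The function names, the linear width-scaling rule, and the KL stability term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def perturbation_stability(model, x, x_adv):
    """Fraction of inputs whose predicted label is unchanged under perturbation."""
    with torch.no_grad():
        clean_pred = model(x).argmax(dim=1)
        adv_pred = model(x_adv).argmax(dim=1)
    return (clean_pred == adv_pred).float().mean().item()


def width_scaled_beta(base_beta, width, base_width):
    """Heuristic: grow the regularization weight with the widening factor."""
    return base_beta * (width / base_width)


def trades_style_loss(model, x, x_adv, y, beta):
    """Natural cross-entropy plus a KL stability term weighted by beta."""
    natural_loss = F.cross_entropy(model(x), y)
    stability_loss = F.kl_div(
        F.log_softmax(model(x_adv), dim=1),
        F.softmax(model(x), dim=1),
        reduction="batchmean",
    )
    return natural_loss + beta * stability_loss
```

In practice, `x_adv` would come from an attack such as PGD, and both the base regularization weight and the scaling rule would need to be tuned for the specific architecture and dataset.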


