Large Norms of CNN Layers Do Not Hurt Adversarial Robustness

09/17/2020
by Youwei Liang, et al.

Since the Lipschitz properties of convolutional neural networks (CNNs) are widely considered to be related to adversarial robustness, we theoretically characterize the ℓ_1 norm and ℓ_∞ norm of 2D multi-channel convolutional layers and provide efficient methods to compute them exactly. Based on our theorem, we propose a novel regularization method, termed norm decay, which can effectively reduce the norms of CNN layers. Experiments show that norm-regularization methods, including norm decay, weight decay, and singular value clipping, can improve the generalization of CNNs. Surprisingly, however, we find that they can slightly hurt adversarial robustness. Furthermore, we compute the norms of the layers of CNNs trained with three different adversarial training frameworks and find that adversarially robust CNNs have norms comparable to, or even larger than, those of their non-robust counterparts. Moreover, we prove that, under a mild assumption, adversarially robust classifiers can be realized by neural networks, and that an adversarially robust neural network can have an arbitrarily large Lipschitz constant. For these reasons, enforcing small norms on CNN layers may be neither effective nor necessary for achieving adversarial robustness. Our code is available at https://github.com/youweiliang/norm_robustness.
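To make the norm computation concrete, here is a minimal sketch. It assumes the characterization mirrors the classical matrix-norm identities, with the conv layer viewed as a linear operator: the ℓ_∞ operator norm is the maximum absolute row sum, attained over output channels, and the ℓ_1 norm is the maximum absolute column sum, attained over input channels. The function names, the penalty form, and the coefficient are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: exact l_1 / l_inf operator norms of a multi-channel conv
# layer, assuming the matrix-norm identities carry over (max absolute
# row/column sums), plus a hypothetical norm-decay-style penalty.
import torch
import torch.nn as nn

def conv_linf_norm(weight: torch.Tensor) -> torch.Tensor:
    # weight: (out_channels, in_channels, kH, kW). Each output pixel of a
    # given output channel is a weighted sum using that channel's full
    # kernel, so the max absolute row sum is the largest per-output-channel
    # l_1 mass of the kernel weights.
    return weight.abs().sum(dim=(1, 2, 3)).max()

def conv_l1_norm(weight: torch.Tensor) -> torch.Tensor:
    # Max absolute column sum: each input pixel feeds every output channel
    # through its input channel's kernels, so sum over output channels.
    return weight.abs().sum(dim=(0, 2, 3)).max()

def norm_penalty(model: nn.Module, coeff: float = 1e-4) -> torch.Tensor:
    # Hypothetical norm-decay-style regularizer: penalize the l_inf norms
    # of all Conv2d layers (coefficient is illustrative, not from the paper).
    penalty = sum(conv_linf_norm(m.weight)
                  for m in model.modules() if isinstance(m, nn.Conv2d))
    return coeff * penalty

if __name__ == "__main__":
    w = torch.randn(64, 3, 3, 3)     # a typical first conv layer
    print(conv_linf_norm(w).item())  # l_inf operator norm
    print(conv_l1_norm(w).item())    # l_1 operator norm
```

A penalty like `norm_penalty(model)` would be added to the training loss in the spirit of weight decay; the authors' actual norm decay may differ in form and in which norm it targets.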

Related research:

04/14/2021
Orthogonalizing Convolutional Layers with the Cayley Transform
Recent work has highlighted several advantages of enforcing orthogonalit...

11/24/2022
Towards Practical Control of Singular Values of Convolutional Layers
In general, convolutional neural networks (CNNs) are easy to train, but ...

11/22/2019
Bounding Singular Values of Convolution Layers
In deep neural networks, the spectral norm of the Jacobian of a layer bo...

05/21/2018
Adversarial Noise Layer: Regularize Neural Network By Adding Noise
In this paper, we introduce a novel regularization method called Adversa...

05/25/2023
Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration
Since the control of the Lipschitz constant has a great impact on the tr...

03/16/2022
Provable Adversarial Robustness for Fractional Lp Threat Models
In recent years, researchers have extensively studied adversarial robust...

09/19/2019
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
We propose Absum, which is a regularization method for improving adversa...
