Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks

09/19/2019
by   Sekitoshi Kanai, et al.

We propose Absum, a regularization method for improving the adversarial robustness of convolutional neural networks (CNNs). Although CNNs can accurately recognize images, recent studies have shown that the convolution operations in CNNs commonly have a structural sensitivity to specific noise composed of Fourier basis functions. Exploiting this sensitivity, these studies proposed a simple black-box adversarial attack: the single Fourier attack. To reduce structural sensitivity, we can regularize the convolution filter weights, since the sensitivity of a linear transform can be assessed by the norm of its weights. However, standard regularization methods can prevent minimization of the loss function because they impose a tight constraint to obtain high robustness. To solve this problem, Absum imposes a loose constraint: it penalizes the absolute value of the sum of the parameters in each convolution layer. Absum improves robustness against the single Fourier attack while remaining as simple and efficient as standard regularization methods (e.g., weight decay and L1 regularization). Our experiments demonstrate that Absum improves robustness against the single Fourier attack more than standard regularization methods do. Furthermore, because it decreases the common sensitivity, Absum makes CNNs more robust than standard regularization methods against transferred attacks and against high-frequency noise. We also show that Absum can improve robustness against gradient-based attacks (projected gradient descent) when combined with adversarial training.
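The abstract describes the penalty only at a high level. Below is a minimal PyTorch sketch of what such a regularizer could look like, assuming the penalty is a coefficient times the sum, over every convolution kernel, of the absolute value of that kernel's weight sum; the helper name absum_penalty and the strength lambda_reg are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


def absum_penalty(model: nn.Module) -> torch.Tensor:
    """Hypothetical helper: accumulate |sum of weights| over every conv kernel."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # weight shape: (out_channels, in_channels, kH, kW).
            # Sum each kernel's entries, then penalize the absolute values.
            kernel_sums = m.weight.sum(dim=(2, 3))  # -> (out_channels, in_channels)
            penalty = penalty + kernel_sums.abs().sum()
    return penalty


# Usage sketch: add the penalty to the task loss during training.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)
criterion = nn.CrossEntropyLoss()
lambda_reg = 1e-4  # illustrative regularization strength

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = criterion(model(x), y) + lambda_reg * absum_penalty(model)
loss.backward()
```

Note that, unlike weight decay, this penalty is zero whenever a kernel's weights cancel out, which is what makes the constraint loose: individual weights can stay large as long as their sum stays small.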


Related research

01/31/2023: Fourier Sensitivity and Regularization of Computer Vision Models
Recent work has empirically shown that deep neural networks latch on to ...

05/30/2019: Bandlimiting Neural Networks Against Adversarial Attacks
In this paper, we study the adversarial attack and defence problem in de...

09/17/2020: Large Norms of CNN Layers Do Not Hurt Adversarial Robustness
Since the Lipschitz properties of convolutional neural network (CNN) are...

05/30/2019: Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
Deep Neural Network (DNN) trained by the gradient descent method is know...

11/06/2017: HyperNetworks with statistical filtering for defending adversarial examples
Deep learning algorithms have been known to be vulnerable to adversarial...

12/07/2021: Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal
Vision transformers (ViTs) have demonstrated impressive performance and ...

08/29/2020: Improving Resistance to Adversarial Deformations by Regularizing Gradients
Improving the resistance of deep neural networks against adversarial att...
