Bandlimiting Neural Networks Against Adversarial Attacks

05/30/2019
by Yuping Lin, et al.

In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis. We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks. We demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components. Based on this analysis, we propose a simple post-averaging technique to smooth out these high frequency components and improve the robustness of neural networks against adversarial attacks. Experimental results on the ImageNet dataset show that our proposed method is universally effective against many existing adversarial attack methods in the literature, including FGSM, PGD, DeepFool and C&W attacks. Our post-averaging method is simple since it does not require any re-training, and it can successfully defend against over 95% of the adversarial samples generated by these methods without introducing any significant performance degradation (less than 1%) on clean images.
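The post-averaging idea described above can be sketched as follows: instead of classifying a single input, average the network's outputs over a few points sampled in a small neighbourhood of the input, which suppresses the high-frequency behaviour an attacker exploits. This is a minimal illustrative sketch, not the paper's implementation; the function names, sampling scheme, and defaults (`num_samples`, `radius`, the toy linear model) are assumptions for demonstration.

```python
import numpy as np

def post_average(model, x, num_samples=15, radius=0.02, rng=None):
    """Smooth a model's output by averaging its predictions over points
    sampled uniformly inside a small ball around the input x.

    `model` maps a batch of inputs of shape (n, d) to outputs (n, k).
    All names and defaults here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    # Draw random directions and scale them to lie uniformly in a ball
    # of the given radius around x.
    noise = rng.normal(size=(num_samples, d))
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)
    scales = radius * rng.uniform(size=(num_samples, 1)) ** (1.0 / d)
    neighbours = x.reshape(1, -1) + noise * scales
    # The "post-averaging" step: mean of the outputs over the neighbourhood.
    return model(neighbours).mean(axis=0)

# Toy linear "model" standing in for a trained network (illustrative only).
def toy_model(batch):
    w = np.array([[1.0, -1.0], [0.5, 2.0]])
    return batch @ w

x = np.array([0.3, -0.7])
smoothed = post_average(toy_model, x, num_samples=200, radius=0.01, rng=0)
# For a linear model the averaged output stays close to toy_model(x),
# while for a real network it damps sharp high-frequency responses.
print(smoothed)
```

Because the averaging happens purely at inference time, it can wrap any pre-trained classifier, which is what makes the defence re-training-free.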


Related research

- 08/19/2019: Adversarial Defense by Suppressing High-frequency Components. Recent works show that deep neural networks trained on image classificat...
- 05/29/2023: Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition. Using Fourier analysis, we explore the robustness and vulnerability of g...
- 09/19/2019: Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks. We propose Absum, which is a regularization method for improving adversa...
- 02/01/2022: On Regularizing Coordinate-MLPs. We show that typical implicit regularization assumptions for deep neural...
- 02/17/2023: High-frequency Matters: An Overwriting Attack and defense for Image-processing Neural Network Watermarking. In recent years, there has been significant advancement in the field of ...
- 04/07/2021: Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. Backdoor attacks have been considered a severe security threat to deep l...
- 05/05/2022: Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems. Adversarial attack perturbs an image with an imperceptible noise, leadin...
