Adversarial Defense by Suppressing High-frequency Components

08/19/2019
by Zhendong Zhang, et al.

Recent work shows that deep neural networks trained on image classification datasets are biased towards textures. Such models are easily fooled by adding small high-frequency perturbations to clean images. In this paper, we learn robust image classification models by removing high-frequency components from the input. Specifically, we develop a differentiable high-frequency suppression module based on the discrete Fourier transform (DFT). Combined with adversarial training, this approach won 5th place in the IJCAI-2019 Alibaba Adversarial AI Challenge. Our code is available online.
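
The abstract only names the module, so below is a minimal sketch of what a differentiable, DFT-based high-frequency suppression layer can look like. This is not the authors' released code: PyTorch, the cutoff hyperparameter `radius`, and the circular low-pass mask are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class HighFreqSuppression(nn.Module):
    """Differentiable low-pass filter: keeps DFT coefficients whose
    normalized frequency magnitude is at most `radius` (0.5 = Nyquist)."""

    def __init__(self, radius: float = 0.25):
        super().__init__()
        self.radius = radius  # assumed cutoff hyperparameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) batch of images in the spatial domain.
        h, w = x.shape[-2:]
        # 2-D DFT; fftshift moves the zero-frequency term to the center.
        freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        # Centered circular low-pass mask in normalized frequency units.
        fy = torch.linspace(-0.5, 0.5, h, device=x.device).view(-1, 1)
        fx = torch.linspace(-0.5, 0.5, w, device=x.device).view(1, -1)
        mask = ((fy ** 2 + fx ** 2).sqrt() <= self.radius).to(x.dtype)
        # Zero out high frequencies, transform back, keep the real part.
        out = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1)))
        return out.real


if __name__ == "__main__":
    suppress = HighFreqSuppression(radius=0.25)
    images = torch.rand(2, 3, 32, 32, requires_grad=True)
    filtered = suppress(images)
    filtered.sum().backward()  # gradients flow through the DFT, so the
    print(filtered.shape)      # module can sit in front of any classifier
```

Because every operation (DFT, masking, inverse DFT) is differentiable, such a module can be placed in front of a classifier and trained end to end, including under adversarial training.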


Related research

05/30/2019 · Bandlimiting Neural Networks Against Adversarial Attacks
In this paper, we study the adversarial attack and defence problem in de...

07/19/2023 · Towards Building More Robust Models with Frequency Bias
The vulnerability of deep neural networks to adversarial samples has bee...

05/06/2022 · Investigating and Explaining the Frequency Bias in Image Classification
CNNs exhibit many behaviors different from humans, one of which is the c...

01/12/2023 · Phase-shifted Adversarial Training
Adversarial training has been considered an imperative component for saf...

12/03/2020 · Image inpainting using frequency domain priors
In this paper, we present a novel image inpainting technique using frequ...

02/28/2019 · On the Effectiveness of Low Frequency Perturbations
Carefully crafted, often imperceptible, adversarial perturbations have b...

04/05/2021 · Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification
Deep Learning models are highly susceptible to adversarial manipulations...
