Adversarial Defense by Suppressing High-frequency Components

08/19/2019
by Zhendong Zhang, et al.

Recent works show that deep neural networks trained on image classification datasets are biased towards textures. Such models are easily fooled by applying small high-frequency perturbations to clean images. In this paper, we learn robust image classification models by removing high-frequency components. Specifically, we develop a differentiable high-frequency suppression module based on the discrete Fourier transform (DFT). Combined with adversarial training, our method won 5th place in the IJCAI-2019 Alibaba Adversarial AI Challenge. Our code is available online.
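As a rough sketch of the idea (not the paper's exact module), the snippet below implements DFT-based high-frequency suppression for a batch of images in PyTorch. The `keep_ratio` parameter and the hard square low-pass mask are illustrative assumptions; the paper's module may use a different mask shape or attenuation scheme.

```python
import torch


def suppress_high_frequencies(images: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Zero out DFT coefficients outside a centered low-frequency band.

    images: (N, C, H, W) float tensor.
    keep_ratio: fraction of each spatial dimension to keep (illustrative choice).
    """
    n, c, h, w = images.shape

    # 2D DFT over the spatial dimensions; fftshift moves the DC term to the center.
    freq = torch.fft.fftshift(torch.fft.fft2(images, dim=(-2, -1)), dim=(-2, -1))

    # Centered low-pass mask: 1 inside the kept band, 0 outside.
    half_h, half_w = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask = torch.zeros(h, w, device=images.device, dtype=freq.dtype)
    ch, cw = h // 2, w // 2
    mask[ch - half_h : ch + half_h, cw - half_w : cw + half_w] = 1.0

    # Suppress high frequencies, undo the shift, invert the transform,
    # and keep the real part of the reconstructed images.
    filtered = freq * mask
    out = torch.fft.ifft2(torch.fft.ifftshift(filtered, dim=(-2, -1)), dim=(-2, -1))
    return out.real
```

Because every step is a tensor operation with autograd support, a filter like this can be placed in front of the classifier and trained end to end, which is what makes combining it with adversarial training straightforward.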
