Defense Against Adversarial Attacks with Saak Transform

08/06/2018
by Sibo Song, et al.

Deep neural networks (DNNs) are known to be vulnerable to adversarial perturbations, which poses a serious threat to DNN-based decision systems. In this paper, we propose to apply the lossy Saak transform to adversarially perturbed images as a preprocessing tool to defend against adversarial attacks. The Saak transform is a recently proposed state-of-the-art method for computing spatial-spectral representations of input images. Empirically, we observe that outputs of the Saak transform are very discriminative in differentiating adversarial examples from clean ones. Therefore, we propose a Saak-transform-based preprocessing method with three steps: 1) transforming an input image to a joint spatial-spectral representation via the forward Saak transform, 2) filtering its high-frequency components, and 3) reconstructing the image via the inverse Saak transform. The processed image is found to be robust against adversarial perturbations. We conduct extensive experiments to investigate various settings of the Saak transform and filtering functions. Without harming the decision performance on clean images, our method outperforms state-of-the-art adversarial defense methods by a substantial margin on both the CIFAR-10 and ImageNet datasets. Importantly, our results suggest that adversarial perturbations can be defended against effectively and efficiently using state-of-the-art frequency analysis.
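The three-step defense above can be illustrated with a minimal single-stage sketch. The Saak transform is built on a subspace approximation (KLT/PCA) over image patches; the hedged example below uses a plain one-stage patch KLT as a stand-in for the multi-stage Saak transform, and a hard truncation of the low-energy spectral components as the filtering function. The function name `lossy_saak_filter` and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lossy_saak_filter(img, patch=2, keep=2):
    """Sketch of the forward-transform / filter / inverse-transform defense.

    NOTE: a single-stage patch KLT stands in for the multi-stage Saak
    transform here; `keep` counts the highest-energy spectral components
    retained per patch (the rest are zeroed, mimicking the lossy filter).
    """
    h, w = img.shape
    # Step 1 (forward): split into non-overlapping patches and flatten them.
    blocks = img.reshape(h // patch, patch, w // patch, patch)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # KLT basis: eigenvectors of the patch covariance, sorted by energy.
    cov = centered.T @ centered / len(centered)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]
    coeffs = centered @ vecs
    # Step 2 (filter): zero the low-energy components, where small
    # adversarial perturbations tend to concentrate.
    coeffs[:, keep:] = 0.0
    # Step 3 (inverse): the basis is orthonormal, so its transpose inverts it.
    rec = coeffs @ vecs.T + mean
    rec = rec.reshape(h // patch, w // patch, patch, patch)
    return rec.transpose(0, 2, 1, 3).reshape(h, w)
```

With `keep` equal to the full patch dimension (`patch * patch`) the transform is lossless and reconstructs the input exactly; smaller values trade clean-image fidelity for robustness, which is the trade-off the experiments in the paper explore.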


11/30/2018

ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples

Deep neural networks (DNNs) have been demonstrated to be vulnerable to a...
02/18/2020

TensorShield: Tensor-based Defense Against Adversarial Attacks on Images

Recent studies have demonstrated that machine learning approaches like d...
10/15/2021

Adversarial Purification through Representation Disentanglement

Deep learning models are vulnerable to adversarial examples and make inc...
11/21/2020

A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations

Deep Neural Networks (DNNs) are vulnerable to adversarial attacks: caref...
08/12/2020

Defending Adversarial Examples via DNN Bottleneck Reinforcement

This paper presents a DNN bottleneck reinforcement scheme to alleviate t...
02/07/2019

Robustness Of Saak Transform Against Adversarial Attacks

Image classification is vulnerable to adversarial attacks. This work inv...
04/05/2021

Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification

Deep Learning models are highly susceptible to adversarial manipulations...
