
On the Limitations of Denoising Strategies as Adversarial Defenses

12/17/2020
by Zhonghan Niu, et al.

As adversarial attacks against machine learning models have raised increasing concern, many denoising-based defense approaches have been proposed. In this paper, we summarize and analyze defense strategies that take the form of a symmetric transformation via data denoising and reconstruction (denoted F + inverse F, the F-IF Framework). In particular, we categorize these denoising strategies along three lines: denoising in the spatial domain, in the frequency domain, and in the latent space. Typically, the defense is applied to the entire adversarial example, so both the image and the perturbation are modified, making it difficult to tell how the defense acts on the perturbation itself. To evaluate the robustness of these denoising strategies directly, we apply them to the adversarial noise alone (assuming we have obtained all of it), which spares us from sacrificing benign accuracy. Surprisingly, our experimental results show that even when most of the perturbation in each dimension is eliminated, it remains difficult to obtain satisfactory robustness. Based on these findings and analyses, we propose an adaptive compression strategy for different frequency bands in the feature domain to improve robustness. Our experimental results show that adaptive compression enables the model to better suppress adversarial perturbations and improves robustness over existing denoising strategies.
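To make the F + inverse-F idea concrete, below is a minimal NumPy sketch of its frequency-domain instance: the image is mapped to the frequency domain (F), a central low-frequency band is kept while high frequencies, where adversarial noise often concentrates, are discarded, and the result is mapped back (inverse F). The function name, cutoff, and toy signals are illustrative assumptions, not the paper's actual defense.

```python
import numpy as np

def lowpass_defense(image, keep_ratio=0.25):
    """Hypothetical F + inverse-F defense: transform to the frequency
    domain, zero out everything outside a central low-frequency band,
    and transform back. Not the paper's model, just the general pattern."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # F: to frequency domain
    kh, kw = int(h * keep_ratio), int(w * keep_ratio)
    mask = np.zeros((h, w))
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))  # inverse F

# Toy example: a smooth (low-frequency) image plus a purely
# high-frequency "perturbation" that the band-pass mask removes.
n = np.arange(32)
low = np.cos(2 * np.pi * 2 * n / 32)
img = np.outer(low, low)
perturbation = 0.1 * np.outer(np.cos(2 * np.pi * 15 * n / 32), np.ones(32))
denoised = lowpass_defense(img + perturbation)
print(np.allclose(denoised, img, atol=1e-6))  # → True: perturbation removed
```

The toy case is deliberately idealized: the perturbation lives entirely outside the kept band, so it is removed exactly. The paper's point is that real adversarial perturbations are not so cleanly separable, which is why a fixed symmetric transform falls short and motivates adaptive per-band compression.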

Related research:

- Mitigating Gradient-based Adversarial Attacks via Denoising and Compression (04/03/2021)
- Approximate Manifold Defense Against Multiple Adversarial Perturbations (04/05/2020)
- Towards Adversarial Purification using Denoising AutoEncoders (08/29/2022)
- Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization (12/03/2020)
- Context-Aware Image Denoising with Auto-Threshold Canny Edge Detection to Suppress Adversarial Perturbation (01/14/2021)
- GraCIAS: Grassmannian of Corrupted Images for Adversarial Security (05/06/2020)
- Defending Against Adversarial Iris Examples Using Wavelet Decomposition (08/08/2019)