SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain

03/04/2021
by Paula Harder, et al.

Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations of the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis of input images and feature maps in the Fourier domain can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods. Our first method employs the magnitude spectrum of the input images to detect an adversarial attack; this simple and robust classifier successfully detects adversarial perturbations from three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of the Fourier coefficients of feature maps at different layers of the network. With this extension, we improve adversarial detection rates over state-of-the-art detectors on five different attack methods.
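The feature extraction behind the two detectors can be sketched as follows. This is a minimal illustration, not the paper's implementation: the log scaling of the magnitude spectrum, the centre shift, and the choice of downstream classifier are assumptions for readability.

```python
import numpy as np

def fourier_magnitude_features(image):
    """First detector (sketch): per-channel 2D FFT magnitude spectrum of the
    input image, flattened into one feature vector.

    image: (H, W, C) float array. Log scaling and fftshift are assumptions.
    """
    feats = []
    for ch in np.moveaxis(image, -1, 0):             # iterate over channels
        spectrum = np.fft.fftshift(np.fft.fft2(ch))  # centre zero frequency
        feats.append(np.log1p(np.abs(spectrum)).ravel())
    return np.concatenate(feats)

def fourier_phase_features(feature_map):
    """Second detector (sketch): phase of the Fourier coefficients of a
    single CNN feature map, e.g. one channel of an intermediate activation.
    """
    return np.angle(np.fft.fft2(feature_map)).ravel()
```

In the paper's setting, such features (magnitude spectra of inputs, plus phase spectra of feature maps at several layers for the second method) would be fed to a simple binary classifier trained to separate benign from adversarial samples.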


Related research

LPF-Defense: 3D Adversarial Defense based on Frequency Analysis (02/23/2022)
Although 3D point cloud classification has recently been widely deployed...

Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps (08/24/2022)
The existence of adversarial attacks on convolutional neural networks (C...

Benford's law: what does it say on adversarial images? (02/09/2021)
Convolutional neural networks (CNNs) are fragile to small perturbations ...

Interpretable BoW Networks for Adversarial Example Detection (01/08/2019)
The standard approach to providing interpretability to deep convolutiona...

Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs (10/06/2021)
Convolutional Neural Networks (CNNs) have become the de facto gold stand...

Feature Losses for Adversarial Robustness (12/10/2019)
Deep learning has made tremendous advances in computer vision tasks such...

HyperNetworks with statistical filtering for defending adversarial examples (11/06/2017)
Deep learning algorithms have been known to be vulnerable to adversarial...
