From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation

12/02/2020
by Nikhil Kapoor et al.

Despite recent advancements, deep neural networks are not robust against adversarial perturbations. Many of the proposed adversarial defense approaches use computationally expensive training mechanisms that do not scale to complex real-world tasks such as semantic segmentation, and offer only marginal improvements. In addition, fundamental questions on the nature of adversarial perturbations and their relation to the network architecture are largely understudied. In this work, we study the adversarial problem from a frequency-domain perspective. More specifically, we analyze discrete Fourier transform (DFT) spectra of several adversarial images and report two major findings: First, there exists a strong connection between the model architecture and the nature of adversarial perturbations, which can be observed and addressed in the frequency domain. Second, the observed frequency patterns are largely image- and attack-type independent, which is important for the practical impact of any defense making use of such patterns. Motivated by these findings, we additionally propose an adversarial defense method based on the well-known Wiener filter, which captures and suppresses adversarial frequencies in a data-driven manner. Our proposed method not only generalizes across unseen attacks but also beats five existing state-of-the-art methods across two models in a variety of attack settings.
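The abstract points at two concrete operations: inspecting DFT spectra of adversarial perturbations, and suppressing the identified frequencies with a Wiener filter estimated from data. Below is a minimal NumPy sketch of that general pipeline, assuming grayscale images given as 2-D arrays and paired clean/attacked examples for estimating the filter. The function names and the synthetic-noise demo are illustrative stand-ins, not the authors' code or a real adversarial attack.

```python
import numpy as np

def dft_log_magnitude(image):
    """Centered log-magnitude DFT spectrum, as used to visualize
    frequency patterns of (adversarial) perturbations."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

def estimate_wiener_filter(clean_images, attacked_images, eps=1e-8):
    """Estimate a frequency-domain Wiener filter W = S / (S + N) from
    paired clean/attacked images, where S is the mean clean power
    spectrum and N the mean power spectrum of the perturbations."""
    S = np.zeros(clean_images[0].shape, dtype=np.float64)
    N = np.zeros_like(S)
    for clean, attacked in zip(clean_images, attacked_images):
        S += np.abs(np.fft.fft2(clean)) ** 2
        N += np.abs(np.fft.fft2(attacked - clean)) ** 2
    S /= len(clean_images)
    N /= len(clean_images)
    return S / (S + N + eps)

def wiener_denoise(image, W):
    """Suppress the estimated adversarial frequencies by pointwise
    multiplication with the precomputed filter in the DFT domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

if __name__ == "__main__":
    # Toy demo: additive random noise stands in for a real attack.
    rng = np.random.default_rng(0)
    clean = [rng.random((64, 64)) for _ in range(8)]
    attacked = [c + 0.03 * rng.standard_normal((64, 64)) for c in clean]
    W = estimate_wiener_filter(clean, attacked)
    restored = wiener_denoise(attacked[0], W)
    print("residual L2 before:", np.linalg.norm(attacked[0] - clean[0]))
    print("residual L2 after: ", np.linalg.norm(restored - clean[0]))
```

The data-driven ingredient is N, the average power spectrum of the perturbations. If, as the paper's findings suggest, these frequency patterns are largely image- and attack-type independent, a filter estimated on one attack can plausibly transfer to unseen ones, which is what the generalization claim rests on.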



Related research

07/25/2022 · SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
Deep neural network-based image classifications are vulnerable to advers...

03/03/2017 · Adversarial Examples for Semantic Image Segmentation
Machine learning methods in general and Deep Neural Networks in particul...

05/18/2023 · Towards an Accurate and Secure Detector against Adversarial Perturbations
The vulnerability of deep neural networks to adversarial perturbations h...

10/08/2019 · SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations
Deep neural networks are susceptible to adversarial manipulations in the...

04/01/2019 · Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses
This paper focuses on learning transferable adversarial examples specifi...

06/19/2020 · Using Learning Dynamics to Explore the Role of Implicit Regularization in Adversarial Examples
Recent work (Ilyas et al., 2019) suggests that adversarial examples are f...

04/24/2020 · Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty
Over the past decade, deep learning (DL) has been successfully applied t...
