Aliasing is a Driver of Adversarial Attacks

Aliasing is a central concept in signal processing: careful handling of resolution changes is essential to preserving quality in the transmission and processing of audio, images, and video. Despite this, until recently aliasing received little attention in deep learning, and common architectures sub-sample without accounting for its effects. In this work, we investigate the hypothesis that the existence of adversarial perturbations is due in part to aliasing in neural networks. Our ultimate goal is to increase robustness against adversarial attacks using only explainable, non-trained, structural changes derived from first principles of aliasing. Our contributions are as follows. First, we establish a sufficient condition for the absence of aliasing under general image transformations. Next, we study sources of aliasing in common neural network layers and derive simple modifications, from first principles, that eliminate or reduce it. Lastly, our experimental results show a solid link between anti-aliasing and adversarial robustness: simply reducing aliasing already yields more robust classifiers, and combining anti-aliasing with robust training outperforms robust training alone on L_2 attacks, with no or only minimal loss in performance on L_∞ attacks.
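
As background for the sufficient condition mentioned above, the classical sampling criterion states that a discrete signal can be sub-sampled by an integer factor s without aliasing only if its spectrum is empty above the folded Nyquist frequency:

    x̂(ω) = 0  for |ω| > π/s,

so that the sub-sampled signal y[n] = x[sn] incurs no spectral overlap. (The paper derives a condition for general image transformations; integer sub-sampling is the familiar special case.)

The structural modification this suggests for network layers is the standard blur-before-subsample recipe: low-pass filter a feature map before striding, as in Zhang's BlurPool. The sketch below is a minimal PyTorch illustration of that generic recipe, not necessarily the paper's exact construction; the module name BlurredDownsample and the 3x3 binomial kernel are illustrative choices.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BlurredDownsample(nn.Module):
        """Low-pass filter a feature map, then subsample by striding,
        so the stride respects the Nyquist limit more closely."""
        def __init__(self, channels: int, stride: int = 2):
            super().__init__()
            self.stride = stride
            self.channels = channels
            # Separable binomial filter [1, 2, 1] / 4; its outer product
            # gives a normalized 3x3, approximately Gaussian, kernel.
            k = torch.tensor([1.0, 2.0, 1.0])
            k2d = torch.outer(k, k)
            self.register_buffer(
                "kernel", (k2d / k2d.sum()).expand(channels, 1, 3, 3).clone()
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Depthwise convolution (groups=channels) blurs each channel
            # independently; the stride then performs the subsampling.
            return F.conv2d(x, self.kernel, stride=self.stride,
                            padding=1, groups=self.channels)

A drop-in use is to replace a stride-2 pooling layer with a dense (stride-1) version followed by this module, e.g. nn.Sequential(nn.MaxPool2d(2, stride=1), BlurredDownsample(channels)), so the nonlinearity is evaluated densely and only the final subsampling step is anti-aliased.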
