Wavelets Beat Monkeys at Adversarial Robustness

04/19/2023
by Jingtong Su, et al.

Research on improving the robustness of neural networks to adversarial noise - imperceptible, malicious perturbations of the data - has received significant attention. The currently uncontested state-of-the-art defense for obtaining robust deep neural networks is Adversarial Training (AT), but it consumes significantly more resources than standard training and trades off accuracy for robustness. An inspiring recent work [Dapello et al.] brings neurobiological tools to the question: how can we develop neural nets that robustly generalize like human vision? [Dapello et al.] design a network architecture with a fixed, stochastic first layer (the VOneBlock) that mimics the primate primary visual cortex (V1), followed by a back-end adapted from current CNN vision models. It appears to achieve non-trivial adversarial robustness on standard vision benchmarks when tested against small perturbations. Here we revisit this biologically inspired work and ask whether a principled, parameter-free representation inspired by physics can achieve the same goal. We discover that the wavelet scattering transform can replace the complex V1 cortex, and that simple uniform Gaussian noise can take the role of neural stochasticity, to achieve adversarial robustness. In extensive experiments on the CIFAR-10 benchmark with adaptive adversarial attacks, we show that: 1) the robustness of VOneBlock architectures is relatively weak (though non-zero) when the adversarial attack radius is set to commonly used benchmark values; and 2) replacing the front-end VOneBlock with an off-the-shelf, parameter-free ScatterNet followed by simple uniform Gaussian noise achieves substantially stronger adversarial robustness without adversarial training. Our work shows how physically inspired structures yield new insights into robustness that were previously thought possible only by meticulously mimicking the human cortex.
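The proposed recipe is simple to sketch in code. Below is a minimal, hypothetical PyTorch illustration (not the authors' released code): a fixed, parameter-free wavelet scattering front-end, implemented here with the kymatio package's Scattering2D, with additive Gaussian noise standing in for neural stochasticity, feeding a small CNN back-end. The back-end architecture and the noise scale are illustrative assumptions, not the exact configuration evaluated in the paper.

```python
# Hypothetical sketch: ScatterNet front-end + Gaussian noise + CNN back-end.
# Assumes the kymatio package; the back-end is illustrative, not the
# paper's exact architecture.
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

class ScatterNoiseNet(nn.Module):
    def __init__(self, noise_std=1.0, num_classes=10):
        super().__init__()
        # Parameter-free scattering transform for 32x32 CIFAR-10 images.
        # With J=2 scales and the default L=8 orientations, each input
        # channel yields 81 scattering channels at 8x8 spatial resolution.
        self.scattering = Scattering2D(J=2, shape=(32, 32))
        self.noise_std = noise_std
        self.backend = nn.Sequential(
            nn.Conv2d(3 * 81, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):          # x: (B, 3, 32, 32)
        s = self.scattering(x)     # (B, 3, 81, 8, 8)
        s = s.flatten(1, 2)        # (B, 243, 8, 8)
        # Additive Gaussian noise plays the role of neural stochasticity;
        # it is drawn afresh on every forward pass, at train and test time.
        s = s + self.noise_std * torch.randn_like(s)
        return self.backend(s)
```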

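Because such a defense is stochastic, a fair evaluation requires an adaptive attack that accounts for the noise. A standard approach, sketched below under the assumption of an l-infinity PGD attacker with Expectation-over-Transformation (EOT), averages gradients over several noisy forward passes so the attacker optimizes against the expected loss. The step sizes and sample counts here are illustrative defaults, not the paper's exact attack configuration.

```python
# Hypothetical sketch: l_inf PGD with EOT gradient averaging, for
# attacking stochastic models such as ScatterNoiseNet above.
# epsilon = 8/255 is a commonly used CIFAR-10 attack radius.
import torch
import torch.nn.functional as F

def eot_pgd(model, x, y, epsilon=8/255, alpha=2/255, steps=20, eot_samples=10):
    # Random start inside the epsilon-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.zeros_like(x)
        # Each forward pass draws fresh noise inside the model; averaging
        # the gradients approximates the gradient of the expected loss.
        for _ in range(eot_samples):
            loss = F.cross_entropy(model(x_adv), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back onto the epsilon-ball and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```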