Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks

10/26/2021
by Yonggan Fu et al.

Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks, i.e., an imperceptible perturbation to the input can mislead DNNs trained on clean images into making erroneous predictions. Adversarial training, which augments the training set with adversarial samples generated on the fly, is currently the most effective defense. Interestingly, we discover for the first time that randomly initialized networks, without any model training, contain subnetworks with inborn robustness that match or surpass the robust accuracy of adversarially trained networks of comparable model size, indicating that adversarial training on model weights is not indispensable for adversarial robustness. We name such subnetworks Robust Scratch Tickets (RSTs), which are also by nature efficient. Distinct from the popular lottery ticket hypothesis, neither the original dense networks nor the identified RSTs need to be trained. To validate and understand this fascinating finding, we conduct extensive experiments to study the existence and properties of RSTs under different models, datasets, sparsity patterns, and attacks, drawing insights into the relationship between DNNs' robustness and their initialization/overparameterization. Furthermore, we identify poor adversarial transferability between RSTs of different sparsity ratios drawn from the same randomly initialized dense network, and propose a Random RST Switch (R2S) technique, which randomly switches between different RSTs, as a novel defense method built on top of RSTs. We believe our findings about RSTs open up a new perspective on studying model robustness and extend the lottery ticket hypothesis.
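The abstract does not spell out how an RST is drawn, but the general recipe of finding a subnetwork inside a frozen randomly initialized network can be sketched as supermask-style selection: each weight gets a score, the top-scoring fraction is kept, and the weights themselves are never trained. The sketch below also illustrates the R2S idea of switching between tickets of different sparsity ratios at inference time. This is a minimal illustration under assumed conventions, not the paper's actual algorithm: in a real search the scores would be optimized (e.g., against adversarial examples), whereas here they are random, and all names (`draw_ticket`, `r2s_forward`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_ticket(weights, scores, sparsity):
    # Keep the top (1 - sparsity) fraction of weights ranked by |score|;
    # the surviving weights stay at their random initialization.
    k = int(round((1.0 - sparsity) * weights.size))
    thresh = np.partition(np.abs(scores).ravel(), -k)[-k]
    return weights * (np.abs(scores) >= thresh)

# Frozen random weights of a single layer, plus one score per weight.
# Illustrative only: the actual RST search would learn these scores.
W = rng.standard_normal((16, 8))
S = rng.standard_normal((16, 8))

# Several tickets of different sparsity ratios drawn from the same
# randomly initialized dense layer.
tickets = [draw_ticket(W, S, s) for s in (0.5, 0.7, 0.9)]

def r2s_forward(x, tickets, rng):
    # Random RST Switch (R2S) idea: pick one ticket uniformly at random
    # per query, exploiting the poor adversarial transferability
    # between RSTs of different sparsity ratios.
    w = tickets[rng.integers(len(tickets))]
    return x @ w.T
```

Because the dense weights are shared and only a binary mask differs per ticket, switching costs almost nothing in storage, which is one reason a defense of this shape is attractive.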


Related research

- Finding Dynamics Preserving Adversarial Winning Tickets (02/14/2022)
- Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness (09/04/2021)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks (10/07/2021)
- Robust Binary Models by Pruning Randomly-initialized Networks (02/03/2022)
- An Integrated Approach to Produce Robust Models with High Efficiency (08/31/2020)
- Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask (05/03/2019)
- SparseVLR: A Novel Framework for Verified Locally Robust Sparse Neural Networks Search (11/17/2022)
