On the Effect of Pruning on Adversarial Robustness

08/10/2021
by Artur Jordao, et al.

Pruning is a well-known mechanism for reducing the computational cost of deep convolutional networks. Beyond this, studies have shown that pruning acts as a form of regularization, reducing overfitting and improving generalization. We demonstrate that this family of strategies provides benefits beyond computational performance and generalization. Our analyses reveal that pruning structures (filters and/or layers) from convolutional networks improves not only generalization but also robustness to adversarial images (natural images perturbed to mislead the model). These gains are possible because pruning reduces network capacity and provides regularization, both of which have proven to be effective tools against adversarial images. In contrast to promising defense mechanisms that require training with adversarial images and careful regularization, we show that pruning obtains competitive results while training only on natural images (i.e., standard, low-cost training). We confirm these findings across several adversarial attacks and architectures, suggesting the potential of pruning as a novel defense mechanism against adversarial images.
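To make the mechanism concrete, below is a minimal Python/PyTorch sketch of the idea: take a network trained only on natural images, prune whole filters, and measure accuracy under an adversarial attack. The L1-norm pruning criterion, the SmallCNN architecture, and the FGSM attack are illustrative assumptions of this sketch, not the paper's exact method or evaluation protocol.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

class SmallCNN(nn.Module):
    # Illustrative architecture (an assumption of this sketch), sized for
    # 3x32x32 inputs such as CIFAR-10.
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32 x 16 x 16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 64 x 8 x 8
        return self.fc(x.flatten(1))

def prune_filters(model, amount=0.5):
    # Structured pruning: zero out entire filters (output channels) with
    # the smallest L1 norm -- one common criterion for filter pruning.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=amount, n=1, dim=0)
    return model

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    # Fast Gradient Sign Method: a one-step perturbation along the sign of
    # the loss gradient with respect to the input pixels.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, epsilon=8 / 255):
    # Accuracy on adversarially perturbed versions of the loader's images.
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        adv = fgsm_attack(model, images, labels, epsilon)
        with torch.no_grad():
            correct += (model(adv).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return correct / total

Comparing robust_accuracy on the same held-out loader before and after prune_filters (with a brief fine-tuning pass on natural images in between, as pruning pipelines typically include) mirrors the kind of comparison the abstract describes.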

research · 03/29/2019 · Second Rethinking of Network Pruning in the Adversarial Setting
It is well known that deep neural networks (DNNs) are vulnerable to adve...

research · 10/25/2022 · Pruning's Effect on Generalization Through the Lens of Training and Regularization
Practitioners frequently observe that pruning improves model generalizat...

research · 09/11/2020 · Achieving Adversarial Robustness via Sparsity
Network pruning has been known to produce compact models without much ac...

research · 01/25/2023 · When Layers Play the Lottery, all Tickets Win at Initialization
Pruning is a standard technique for reducing the computational cost of d...

research · 01/09/2020 · Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
This paper studies structured sparse training of CNNs with a gradual pru...
research · 06/01/2020 · Pruning via Iterative Ranking of Sensitivity Statistics
With the introduction of SNIP [arXiv:1810.02340v2], it has been demonstr...
