The Search for Sparse, Robust Neural Networks

12/05/2019
by Justin Cosentino, et al.

Recent work on deep neural network pruning has shown that there exist sparse subnetworks that match or exceed the accuracy, training time, and loss of their dense counterparts while using fewer parameters. Orthogonal to the pruning literature, deep neural networks are known to be susceptible to adversarial examples, which may pose risks in security- or safety-critical applications. Intuition suggests an inherent trade-off between sparsity and robustness, such that the two characteristics cannot coexist. We perform an extensive empirical evaluation and analysis testing the Lottery Ticket Hypothesis under adversarial training, and show that this approach finds sparse, robust neural networks. Code for reproducing experiments is available here: https://github.com/justincosentino/robust-sparse-networks.
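The core lottery-ticket procedure the abstract refers to can be sketched as follows: after training, prune the lowest-magnitude weights in each layer, then rewind the surviving weights to their original initialization before retraining (here, retraining would be adversarial). This is a minimal, self-contained numpy sketch of one pruning round; the function names and the per-layer pruning fraction are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def magnitude_prune_mask(trained_weights, prune_frac):
    """Binary mask keeping the largest-magnitude fraction of weights.

    prune_frac is the fraction of weights to remove (e.g. 0.5 removes
    the 50% of weights with the smallest absolute value).
    """
    flat = np.abs(trained_weights).ravel()
    k = int(prune_frac * flat.size)
    if k == 0:
        return np.ones(trained_weights.shape, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(trained_weights) > threshold

def lottery_ticket_rewind(init_weights, trained_weights, prune_frac):
    """One lottery-ticket iteration: mask by trained magnitude,
    then rewind the surviving weights to their initial values."""
    mask = magnitude_prune_mask(trained_weights, prune_frac)
    return init_weights * mask, mask

# Example: prune half the weights of a small layer
init = np.ones((2, 2))
trained = np.array([[0.1, -2.0], [0.5, 3.0]])
winning_ticket, mask = lottery_ticket_rewind(init, trained, prune_frac=0.5)
```

In the paper's setting, the retraining step inside each round would use adversarial examples (adversarial training) rather than clean data, so the surviving subnetwork is selected for robust accuracy.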


Related research

- On Pruning Adversarially Robust Neural Networks (02/24/2020)
  In safety-critical but computationally resource-constrained applications...

- SparseVLR: A Novel Framework for Verified Locally Robust Sparse Neural Networks Search (11/17/2022)
  The compute-intensive nature of neural networks (NNs) limits their deplo...

- Towards Practical Lottery Ticket Hypothesis for Adversarial Training (03/06/2020)
  Recent research has proposed the lottery ticket hypothesis, suggesting t...

- Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets (03/02/2020)
  Deep neural nets (DNNs) compression is crucial for adaptation to mobile ...

- Dual Lottery Ticket Hypothesis (03/08/2022)
  Fully exploiting the learning capacity of neural networks requires overp...

- Can pruning improve certified robustness of neural networks? (06/15/2022)
  With the rapid development of deep learning, the sizes of neural network...

- EPIC TTS Models: Empirical Pruning Investigations Characterizing Text-To-Speech Models (09/22/2022)
  Neural models are known to be over-parameterized, and recent work has sh...
