Can pruning improve certified robustness of neural networks?

06/15/2022
by   Zhangheng Li, et al.

With the rapid development of deep learning, neural networks have grown so large that training and inference often overwhelm hardware resources. Since neural networks are often over-parameterized, one effective way to reduce this computational overhead is neural network pruning, which removes redundant parameters from trained networks. It has recently been observed that pruning can not only reduce computational overhead but also improve the empirical robustness of deep neural networks (NNs), potentially owing to removing spurious correlations while preserving predictive accuracy. This paper demonstrates for the first time that pruning can generally improve certified robustness for ReLU-based NNs under the complete verification setting. Using the popular Branch-and-Bound (BaB) framework, we find that pruning can tighten the estimated bounds of certified robustness verification by alleviating the linear relaxation and sub-domain split problems. We empirically verify our findings with off-the-shelf pruning methods and further present a new stability-based pruning method tailored for reducing neuron instability, which outperforms existing pruning methods in enhancing certified robustness. Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under adversarial training on the CIFAR10 dataset. We additionally observe the existence of certified lottery tickets that can match both the standard and certified robust accuracies of the original dense models across different datasets. Our findings offer a new angle on the intriguing interaction between sparsity and robustness, i.e., interpreting the interaction of sparsity and certified robustness via neuron stability. Code is available at: https://github.com/VITA-Group/CertifiedPruning.
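The core intuition above is that removing weights shrinks the pre-activation intervals a verifier must propagate, turning "unstable" ReLU neurons (those whose sign cannot be fixed over the input region, and which therefore require linear relaxation or branching) into stable ones. The following is a minimal NumPy sketch of that effect, not the paper's method: it uses a toy one-layer network, plain interval bound propagation instead of the BaB/β-CROWN verifier, and simple magnitude pruning instead of the proposed stability-based pruning. All names and the toy setup are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: pre-activations z = W x + b, followed by ReLU.
W = rng.normal(size=(16, 8))
b = rng.normal(size=16)

# Input region: an L-infinity ball, x in [x_lo, x_hi].
x_lo, x_hi = -0.1 * np.ones(8), 0.1 * np.ones(8)

def preact_bounds(W, b, x_lo, x_hi):
    """Interval bound propagation through the linear layer."""
    center = (x_lo + x_hi) / 2.0
    radius = (x_hi - x_lo) / 2.0
    z_c = W @ center + b          # midpoint of the output interval
    z_r = np.abs(W) @ radius      # half-width of the output interval
    return z_c - z_r, z_c + z_r

def unstable_count(W, b):
    """A ReLU neuron is unstable if its pre-activation can be both signs."""
    lo, hi = preact_bounds(W, b, x_lo, x_hi)
    return int(np.sum((lo < 0) & (hi > 0)))

# Magnitude pruning: zero out the smallest 50% of weights.
thresh = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= thresh, W, 0.0)

# Pruning shrinks |W|, hence the interval radius, so fewer neurons
# straddle zero and fewer relaxations / branch splits are needed.
print("unstable before pruning:", unstable_count(W, b))
print("unstable after  pruning:", unstable_count(W_pruned, b))
```

Because the interval half-width is `|W| @ radius`, any pruned weight can only shrink it, so the unstable-neuron count never increases under this toy model; the paper's stability-based pruning targets this quantity directly rather than weight magnitude.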


Related research

02/14/2022 · Finding Dynamics Preserving Adversarial Winning Tickets
Modern deep neural networks (DNNs) are vulnerable to adversarial attacks...

06/15/2022 · Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
Certifiable robustness is a highly desirable property for adopting deep ...

12/05/2019 · The Search for Sparse, Robust Neural Networks
Recent work on deep neural network pruning has shown there exist sparse ...

09/11/2020 · Achieving Adversarial Robustness via Sparsity
Network pruning has been known to produce compact models without much ac...

06/14/2019 · Towards Compact and Robust Deep Neural Networks
Deep neural networks have achieved impressive performance in many applic...

03/11/2021 · Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification
Recent works in neural network verification show that cheap incomplete v...

09/09/2018 · Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
We explore the concept of co-design in the context of neural network ver...
