Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness

06/15/2022
by Tianlong Chen, et al.

Certifiable robustness is a highly desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios, but it often demands tedious computation to establish. The main hurdle lies in the massive amount of non-linearity in large DNNs. To trade off DNN expressiveness (which calls for more non-linearity) against robustness certification scalability (which prefers more linearity), we propose a novel solution that strategically manipulates neurons by "grafting" appropriate levels of linearity. The core of our proposal is to first linearize insignificant ReLU neurons, eliminating the non-linear components that are both redundant for DNN performance and harmful to its certification. We then optimize the slopes and intercepts of the replacement linear activations to restore model performance while maintaining certifiability. Typical neuron pruning can hence be viewed as a special case of grafting a linear function with fixed zero slope and intercept, which may overly restrict network flexibility and sacrifice performance. Extensive experiments on multiple datasets and network backbones show that our linearity grafting can (1) effectively tighten certified bounds; (2) achieve competitive certifiable robustness without certified robust training (i.e., over 30% improvement); and (3) scale up complete verification to large adversarially trained models with 17M parameters. Code is available at https://github.com/VITA-Group/Linearity-Grafting.
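To make the grafting idea concrete, below is a minimal PyTorch sketch, assuming a fully-connected layer and a precomputed significance mask; GraftedReLU, graft_mask, and the chosen indices are illustrative names and values, not the authors' released implementation. Each masked neuron's ReLU is replaced by a per-neuron linear function a*x + b whose slope and intercept are then trained; setting both to zero would recover ordinary neuron pruning, the special case mentioned in the abstract.

```python
import torch
import torch.nn as nn


class GraftedReLU(nn.Module):
    """Sketch of linearity grafting for a fully-connected layer.

    Neurons flagged by graft_mask have their ReLU replaced with a
    learnable linear activation a * x + b; the rest keep the ReLU.
    Hypothetical helper, not the paper's official API.
    """

    def __init__(self, num_features: int, graft_mask: torch.Tensor):
        super().__init__()
        # 1.0 where the ReLU is grafted (made linear), 0.0 where it is kept.
        self.register_buffer("mask", graft_mask.float())
        # Per-neuron slope and intercept of the grafted linear activation,
        # optimized afterwards to restore accuracy while staying certifiable.
        self.slope = nn.Parameter(torch.ones(num_features))
        self.intercept = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        relu_out = torch.relu(x)
        linear_out = self.slope * x + self.intercept
        # Linear (certification-friendly) path for grafted neurons,
        # ordinary ReLU for the remaining significant neurons.
        return self.mask * linear_out + (1.0 - self.mask) * relu_out


if __name__ == "__main__":
    # Example: graft three (hypothetically insignificant) neurons of an 8-unit layer.
    mask = torch.zeros(8)
    mask[[1, 4, 6]] = 1.0              # indices chosen by some significance score
    act = GraftedReLU(8, mask)
    out = act(torch.randn(4, 8))       # (batch, features) broadcasts with the mask
    print(out.shape)
```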


Related research

Can pruning improve certified robustness of neural networks? (06/15/2022)
With the rapid development of deep learning, the sizes of neural network...

Reconstructive Neuron Pruning for Backdoor Defense (05/24/2023)
Deep neural networks (DNNs) have been found to be vulnerable to backdoor...

Towards Evaluating and Training Verifiably Robust Neural Networks (04/01/2021)
Recent works have shown that interval bound propagation (IBP) can be use...

Exploiting Non-Linear Redundancy for Neural Model Compression (05/28/2020)
Deploying deep learning models, comprising of non-linear combination of ...

FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization (09/13/2023)
Federated Learning (FL) has been successfully adopted for distributed tr...

Mixture-of-Rookies: Saving DNN Computations by Predicting ReLU Outputs (02/10/2022)
Deep Neural Networks (DNNs) are widely used in many applications domains...

Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve (10/20/2022)
We find a surprising connection between multitask learning and robustnes...
