SparseVLR: A Novel Framework for Verified Locally Robust Sparse Neural Networks Search

11/17/2022
by Sawinder Kaur, et al.

The compute-intensive nature of neural networks (NNs) limits their deployment in resource-constrained environments such as cell phones, drones, and autonomous robots. Hence, developing robust sparse models fit for safety-critical applications has been an issue of longstanding interest. Though adversarial training has been combined with model sparsification to attain this goal, conventional adversarial training approaches provide no formal guarantee that a model is robust against every rogue sample in a restricted space around a benign sample. Recently proposed verified local robustness techniques provide such a guarantee. This is the first paper that combines the ideas from verified local robustness and dynamic sparse training to develop `SparseVLR', a novel framework to search for verified locally robust sparse networks. The obtained sparse models exhibit accuracy and robustness comparable to their dense counterparts at sparsity as high as 99%. Unlike conventional sparsification techniques, SparseVLR does not require a pre-trained dense model, reducing training time by 50%. We demonstrate SparseVLR's efficacy and generalizability by evaluating various benchmark and application-specific datasets across several models.
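The abstract names its two ingredients, verified local robustness and dynamic sparse training, without spelling out SparseVLR's actual algorithm. The sketch below is only an illustration of those standard building blocks, not the paper's method: an interval-bound-propagation (IBP) style verified loss that bounds a model's outputs over an eps-ball around each input, and a SET-style prune-and-regrow mask update. All function names, shapes, and hyperparameters here are assumptions.

```python
# Illustrative sketch only; names and hyperparameters are assumptions,
# not SparseVLR's published procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F


def ibp_bounds(layers, x, eps):
    """Propagate the L-inf ball [x - eps, x + eps] through Linear/ReLU layers."""
    lo, hi = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = F.linear(mid, layer.weight, layer.bias)
            rad = F.linear(rad, layer.weight.abs())
            lo, hi = mid - rad, mid + rad
        else:  # assume ReLU
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi


def verified_loss(layers, x, y, eps):
    """Cross-entropy on the worst-case logits inside the eps-ball (IBP-style)."""
    lo, hi = ibp_bounds(layers, x, eps)
    true_class = F.one_hot(y, hi.shape[-1]).bool()
    worst = torch.where(true_class, lo, hi)  # lower bound for true class, upper for others
    return F.cross_entropy(worst, y)


def prune_and_regrow(weight, mask, drop_frac=0.1):
    """SET-style dynamic sparsity: drop the smallest surviving weights, regrow at random."""
    with torch.no_grad():
        n_drop = int(drop_frac * mask.sum().item())
        if n_drop == 0:
            return mask
        thresh = weight[mask.bool()].abs().kthvalue(n_drop).values
        mask[(weight.abs() <= thresh) & mask.bool()] = 0  # drop smallest surviving weights
        dead = (mask == 0).nonzero(as_tuple=False)
        grow = dead[torch.randperm(dead.shape[0])[:n_drop]]
        mask[grow[:, 0], grow[:, 1]] = 1                  # regrow at random locations
        weight.mul_(mask)                                 # keep pruned weights at zero
    return mask


# Hypothetical usage on a toy 2-layer network (not the paper's architecture):
layers = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
mask = (torch.rand_like(layers[0].weight) < 0.05).float()  # ~95% sparse first layer
layers[0].weight.data.mul_(mask)
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
loss = verified_loss(layers, x, y, eps=0.1)
loss.backward()
mask = prune_and_regrow(layers[0].weight, mask)
```

The sketch trains the sparse network from scratch under the verified loss and updates the sparsity mask on the fly, which mirrors the abstract's claim that no pre-trained dense model is needed; how SparseVLR actually couples the two steps is described in the full paper.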
