Adaptive Sharpness-Aware Pruning for Robust Sparse Networks

06/25/2023
by Anna Bair et al.

Robustness and compactness are two essential attributes of deep learning models that are deployed in the real world. The two aims can seem to conflict: robustness calls for generalization across domains, while compression exploits specificity to one domain. This tension is why the design goal of achieving robust compact models, despite being highly important, remains a challenging open problem. We introduce Adaptive Sharpness-Aware Pruning, or AdaSAP, a method that yields robust sparse networks. The central tenet of our approach is to optimize the loss landscape so that the model is primed for pruning via adaptive weight perturbation, and is also consistently regularized toward flatter regions for improved robustness; this unifies both goals through the lens of network sharpness. AdaSAP achieves strong performance in a comprehensive set of experiments. For classification on ImageNet and object detection on Pascal VOC, AdaSAP improves the robust accuracy of pruned models by up to +6% on out-of-distribution datasets, over a wide range of compression ratios, saliency criteria, and network architectures, outperforming recent pruning art by large margins.
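To make the described training recipe concrete, below is a minimal PyTorch sketch of a SAM-style two-pass update with per-weight adaptive perturbation radii, in the spirit of the abstract. The function name adasap_style_step, the hyperparameters rho and alpha, and the specific radius schedule (relatively larger perturbations for low-magnitude, prune-candidate weights) are illustrative assumptions, not the authors' implementation.

import torch

def adasap_style_step(model, loss_fn, inputs, targets, base_opt,
                      rho=0.05, alpha=1.0):
    """One sharpness-aware update: (1) perturb weights toward higher loss,
    with larger radii for low-magnitude weights; (2) take the gradient at
    the perturbed point; (3) restore the weights and step the base
    optimizer. Assumes base_opt was built over model.parameters()."""
    # First pass: gradient of the loss at the current weights.
    base_opt.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Global gradient norm, as in standard SAM.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)

    # Ascent step: perturb each weight along its gradient. The per-weight
    # scale below is an assumed schedule that gives relatively larger radii
    # to low-magnitude (prune-candidate) weights.
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            w = p.abs()
            scale = 1.0 + alpha * (1.0 - w / (w.max() + 1e-12))
            e = scale * rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append((p, e))

    # Second pass: gradient at the perturbed (sharpness-probing) point.
    base_opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Restore the original weights, then update with the SAM gradient.
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)
    base_opt.step()
    return loss.detach()

In use, a step like this would replace the plain optimizer step on each batch; after training toward a flat region, one would then apply a standard pruning criterion and fine-tune the resulting sparse network.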

Related research:

- Self-Adaptive Network Pruning (10/20/2019)
- Toward Compact Deep Neural Networks via Energy-Aware Pruning (03/19/2021)
- Compression-aware Training of Neural Networks using Frank-Wolfe (05/24/2022)
- Weight Pruning via Adaptive Sparsity Loss (06/04/2020)
- CrAM: A Compression-Aware Minimizer (07/28/2022)
- Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models (05/25/2022)
- Average of Pruning: Improving Performance and Stability of Out-of-Distribution Detection (03/02/2023)
