To prune, or not to prune: exploring the efficacy of pruning for model compression

10/05/2017
by Michael Zhu et al.

Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are severely over-parameterized at the outset, and that a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off between model size and accuracy. We investigate these two distinct paths for model compression in the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models and datasets, requires minimal tuning, and can be seamlessly incorporated within the training process. We compare the accuracy of large but pruned models (large-sparse) and their smaller but dense (small-dense) counterparts with an identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTMs, and seq2seq LSTM models), we find that large-sparse models consistently outperform small-dense models and achieve up to a 10x reduction in the number of non-zero parameters with minimal loss in accuracy.
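The abstract only names the gradual pruning technique; as a rough illustration of the idea, the sketch below ramps a sparsity target from an initial to a final value over a fixed number of pruning steps and masks the smallest-magnitude weights at each step. This is a minimal NumPy sketch, not the authors' implementation: the cubic ramp shape follows the schedule described in the full paper, while the function names, hyperparameter values, and the training-free loop are illustrative assumptions.

```python
import numpy as np

def sparsity_at_step(t, s_i=0.0, s_f=0.9, t_0=0, n=100, delta_t=1):
    """Cubic sparsity ramp from s_i to s_f over n pruning steps.

    Hypothetical helper for illustration; the default values are
    placeholders, not settings taken from the paper.
    """
    if t < t_0:
        return s_i
    if t >= t_0 + n * delta_t:
        return s_f
    progress = (t - t_0) / (n * delta_t)
    # Prune aggressively early, then taper off as the target is approached.
    return s_f + (s_i - s_f) * (1.0 - progress) ** 3

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so roughly `sparsity` of them are zero."""
    k = int(round(sparsity * weights.size))
    if k == 0:
        return weights, np.ones_like(weights, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Toy example: apply the schedule to a random weight matrix
# (in practice the mask would be applied between training steps).
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
for step in range(0, 101, 25):
    s = sparsity_at_step(step)
    w_pruned, mask = magnitude_prune(w, s)
    print(f"step {step:3d}  target sparsity {s:.2f}  actual {1 - mask.mean():.2f}")
```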


