
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks

by Xin Yu, et al.

Neural networks tend to achieve better accuracy when trained at larger sizes, even if the resulting models are overparameterized. Nevertheless, carefully removing the excess parameters before, during, or after training can produce models with similar or even improved accuracy. In many cases, this can curiously be achieved by heuristics as simple as removing a fixed percentage of the weights with the smallest absolute value, even though magnitude is not a perfect proxy for weight relevance. On the premise that substantially better pruning requires accounting for the combined effect of removing multiple weights, we revisit one of the classic approaches to impact-based pruning: the Optimal Brain Surgeon (OBS). We propose a tractable heuristic for solving the combinatorial extension of OBS, in which we select weights for simultaneous removal, together with a systematic update of the remaining weights. Our selection method outperforms other methods under high sparsity, and the weight update is advantageous even when combined with those other methods.




Related papers:

- Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
- Cyclical Pruning for Sparse Neural Networks
- Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning
- The Role of Regularization in Shaping Weight and Node Pruning Dependency and Dynamics
- SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
- Selective Brain Damage: Measuring the Disparate Impact of Model Pruning
- Synthesis and Pruning as a Dynamic Compression Strategy for Efficient Deep Neural Networks