To prune or not to prune: A chaos-causality approach to principled pruning of dense neural networks

08/19/2023
by Rajan Sahu et al.

Reducing the size of a neural network (pruning) by removing weights without degrading its performance is an important problem for resource-constrained devices. Pruning has typically been accomplished by ranking or penalizing weights according to a criterion such as magnitude and removing the low-ranked weights before retraining the remainder. Pruning strategies may also remove entire neurons to achieve the desired reduction in network size. We formulate pruning as an optimization problem whose objective is to minimize misclassifications by selecting which weights to remove. To do so, we introduce the notion of chaos in learning, quantified by Lyapunov exponents computed from the weight updates, and exploit causality to identify the weights responsible for misclassification. The resulting pruned network maintains the original performance and retains feature explainability.
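The sketch below illustrates how these two ingredients might be combined in practice: it trains a toy two-layer network, estimates a Lyapunov-exponent-style score from each weight's update trajectory, and uses a simple lagged-correlation proxy for the causal link between a weight's updates and the misclassification count. The network, data, the correlation-based causality proxy, and the pruning thresholds are illustrative assumptions, not the authors' implementation.

# A minimal, self-contained sketch (NumPy only). All hyperparameters, the toy
# data, the Lyapunov-style estimator, and the lagged-correlation "causality"
# proxy are illustrative assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# One-hidden-layer MLP trained with plain gradient descent.
W1 = rng.normal(scale=0.5, size=(10, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))
    return h, p.ravel()

lr, epochs = 0.1, 100
traj, miscls = [], []        # per-epoch W1 snapshots and misclassification counts

for _ in range(epochs):
    h, p = forward(X, W1, W2)
    err = (p - y)[:, None] / len(X)               # dL/dlogits for cross-entropy
    gW2 = h.T @ err
    gW1 = X.T @ ((err @ W2.T) * (1.0 - h ** 2))   # backprop through tanh
    W1 -= lr * gW1
    W2 -= lr * gW2
    traj.append(W1.copy())
    miscls.append(np.sum((p > 0.5) != y))

traj = np.stack(traj)                             # (epochs, 10, 16)
updates = np.abs(np.diff(traj, axis=0)) + 1e-12   # per-weight update magnitudes

# Lyapunov-exponent-style score: mean log growth rate of successive update
# magnitudes (a stand-in heuristic for an exponent computed on weight updates).
lyap = np.mean(np.log(updates[1:] / updates[:-1]), axis=0)

# "Causality" proxy: correlation between each weight's update magnitude and the
# misclassification count over training (a stand-in for a proper causality test).
m = np.asarray(miscls[1:], dtype=float)
m = (m - m.mean()) / (m.std() + 1e-12)
u = updates.reshape(len(updates), -1)
u = (u - u.mean(0)) / (u.std(0) + 1e-12)
causal = (u.T @ m / len(m)).reshape(W1.shape)

# Prune weights that are both chaotic and positively associated with
# misclassification (the 70th-percentile threshold is arbitrary).
mask = ~((lyap > np.quantile(lyap, 0.7)) & (causal > 0))
W1_pruned = W1 * mask

_, p = forward(X, W1_pruned, W2)
print(f"kept {mask.mean():.0%} of first-layer weights, "
      f"post-pruning accuracy {np.mean((p > 0.5) == y):.3f}")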


Related research

02/02/2022 - Cyclical Pruning for Sparse Neural Networks
Current methods for pruning neural network weights iteratively apply mag...

03/09/2022 - The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Neural networks tend to achieve better accuracy with training if they ar...

05/15/2019 - EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis
Reducing the test time resource requirements of a neural network while p...

12/07/2020 - The Role of Regularization in Shaping Weight and Node Pruning Dependency and Dynamics
The pressing need to reduce the capacity of deep neural networks has sti...

02/17/2022 - When, where, and how to add new neurons to ANNs
Neurogenesis in ANNs is an understudied and difficult problem, even comp...

09/27/2022 - Neural Network Panning: Screening the Optimal Sparse Network Before Training
Pruning on neural networks before training not only compresses the origi...

01/31/2022 - Signing the Supermask: Keep, Hide, Invert
The exponential growth in numbers of parameters of neural networks over ...
