The Role of Regularization in Shaping Weight and Node Pruning Dependency and Dynamics

12/07/2020
by Yael Ben-Guigui, et al.

The pressing need to reduce the capacity of deep neural networks has stimulated the development of network dilution methods and their analysis. While the ability of L_1 and L_0 regularization to encourage sparsity is often mentioned, L_2 regularization is seldom discussed in this context. We present a novel framework for weight pruning by sampling from a probability function that favors the zeroing of smaller weights. In addition, we examine the contribution of L_1 and L_2 regularization to the dynamics of node pruning while optimizing for weight pruning. We then demonstrate the effectiveness of the proposed stochastic framework when used together with a weight decay regularizer on popular classification models, removing 50% of the nodes in an MLP for MNIST classification and 60% for CIFAR10 classification, and on medical image models, removing 60% of the channels in a U-Net for instance segmentation and 50% of the channels in a model for COVID-19 detection. For these node-pruned networks, we also present competitive weight pruning results that are only slightly less accurate than the original, dense networks.
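To make the idea of pruning by "sampling from a probability function that favors the zeroing of smaller weights" concrete, the sketch below shows one way such a step could look in PyTorch. The specific probability function (a softmax over negative weight magnitudes), the temperature parameter, and the layer sizes are illustrative assumptions, not the paper's exact formulation; the L2 regularizer enters through the optimizer's weight_decay term, matching the weight-decay setting described in the abstract.

    # Illustrative sketch (not the paper's exact method): stochastic
    # magnitude-biased weight pruning combined with L2 weight decay.
    import torch

    def stochastic_prune(weight: torch.Tensor,
                         prune_fraction: float,
                         temperature: float = 1.0) -> torch.Tensor:
        """Return a 0/1 mask that zeroes roughly `prune_fraction` of the
        weights, sampled so that smaller-magnitude weights are more likely
        to be pruned. The softmax-over-negative-magnitudes probability is
        an assumption made for illustration only."""
        flat = weight.abs().flatten()
        n_prune = int(prune_fraction * flat.numel())
        # Smaller |w|  ->  larger probability of being selected for pruning.
        probs = torch.softmax(-flat / temperature, dim=0)
        prune_idx = torch.multinomial(probs, n_prune, replacement=False)
        mask = torch.ones_like(flat)
        mask[prune_idx] = 0.0
        return mask.view_as(weight)

    if __name__ == "__main__":
        torch.manual_seed(0)
        layer = torch.nn.Linear(784, 256)
        # L2 regularization is applied via the optimizer's weight decay,
        # as in the abstract's "weight decay regularizer" setting.
        optimizer = torch.optim.SGD(layer.parameters(), lr=0.1,
                                    weight_decay=1e-4)
        mask = stochastic_prune(layer.weight.data, prune_fraction=0.5)
        layer.weight.data *= mask  # zero the sampled weights
        print(f"sparsity: {1 - mask.mean().item():.2f}")

In practice the mask would be reapplied (or kept fixed) across training steps so that pruned weights stay at zero while the surviving weights continue to be regularized and updated.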


