On the Compression of Neural Networks Using ℓ_0-Norm Regularization and Weight Pruning

Despite the growing availability of high-capacity computational platforms, implementation complexity remains a major concern for the real-world deployment of neural networks. This concern stems not only from the high cost of state-of-the-art network architectures, but also from the recent push toward edge intelligence and the use of neural networks in embedded applications. In this context, network compression techniques have been gaining interest due to their ability to reduce deployment costs while keeping inference accuracy at satisfactory levels. The present paper is dedicated to the development of a novel compression scheme for neural networks. To this end, a new ℓ_0-norm-based regularization approach is first developed, which is capable of inducing strong sparsity in the network during training. Then, by targeting the smaller weights of the trained network with pruning techniques, smaller yet highly effective networks can be obtained. The proposed compression scheme also involves the use of ℓ_2-norm regularization to avoid overfitting, as well as fine-tuning to improve the performance of the pruned network. Experimental results are presented to demonstrate the effectiveness of the proposed scheme and to provide comparisons with competing approaches.
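For illustration only, the sketch below outlines the three-stage scheme summarized above (sparsity-inducing training, magnitude pruning, fine-tuning). It assumes PyTorch and a smooth exponential surrogate for the non-differentiable ℓ_0 norm; the network architecture, pruning threshold, and regularization weights are hypothetical placeholders, and the paper's actual regularizer and training setup are not reproduced here.

```python
# Minimal sketch of the compression pipeline described in the abstract, assuming
# PyTorch and a smooth surrogate for the (non-differentiable) l0 "norm".
# Architecture, threshold, and regularization weights below are hypothetical.
import torch
import torch.nn as nn

def l0_surrogate(model, beta=10.0):
    """Approximate count of non-zero weights: sum of 1 - exp(-beta * |w|)."""
    return sum((1.0 - torch.exp(-beta * p.abs())).sum() for p in model.parameters())

def l2_penalty(model):
    """Standard l2 (weight-decay) regularizer used to curb overfitting."""
    return sum(p.pow(2).sum() for p in model.parameters())

def magnitude_prune(model, threshold=1e-3):
    """Zero out weights whose magnitude fell below the threshold after training."""
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            mask = (p.abs() > threshold).float()
            p.mul_(mask)
            masks[name] = mask  # keep masks so pruned weights stay frozen later
    return masks

# --- Stage 1: train with l0-surrogate + l2 regularization (illustrative loop) ---
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam_l0, lam_l2 = 1e-5, 1e-4  # hypothetical regularization weights

def train_step(x, y):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss = loss + lam_l0 * l0_surrogate(model) + lam_l2 * l2_penalty(model)
    loss.backward()
    opt.step()
    return loss.item()

# --- Stage 2: prune small weights, then fine-tune with the masks re-applied ---
masks = magnitude_prune(model, threshold=1e-3)

def finetune_step(x, y):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    with torch.no_grad():  # keep pruned weights at zero during fine-tuning
        for name, p in model.named_parameters():
            p.mul_(masks[name])
    return loss.item()
```

In this sketch, the surrogate term 1 - exp(-β|w|) approaches the true ℓ_0 count for large β while remaining differentiable almost everywhere, and re-applying the pruning masks after each fine-tuning step keeps the removed weights at zero.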
