Regularization-based Pruning of Irrelevant Weights in Deep Neural Architectures

04/11/2022
by Giovanni Bonetta, et al.

Deep neural networks with millions of parameters are nowadays the norm in deep learning applications. This is a potential issue because of the large amount of computational resources needed for training and the possible loss of generalization performance in overparametrized networks. In this paper we propose a method for learning sparse neural topologies via a regularization technique that identifies non-relevant weights and selectively shrinks their norm, while performing a classic update for relevant ones. The technique, which improves on classical weight decay, is based on a regularization term that can be added to any loss function regardless of its form, resulting in a unified general framework exploitable in many different contexts. The actual elimination of the parameters identified as irrelevant is handled by an iterative pruning algorithm. We tested the proposed technique on several image classification and natural language generation tasks, obtaining results on par with or better than competitors in terms of sparsity and metrics, while achieving strong model compression.
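The abstract only sketches the method, so the Python (PyTorch) snippet below illustrates the general idea of a selective weight-decay update combined with iterative magnitude pruning. The function names (selective_decay_step, magnitude_prune), the relevance criterion (smallest-magnitude weights treated as irrelevant), and all hyperparameters are illustrative assumptions, not the authors' actual algorithm.

import torch

def selective_decay_step(params, lr=0.1, weight_decay=1e-4, irrelevant_fraction=0.3):
    """One SGD step: every weight gets the plain gradient update, and only the
    weights currently flagged as irrelevant also receive a decay (norm-shrinkage) term."""
    for p in params:
        if p.grad is None:
            continue
        # Assumed relevance criterion: the bottom fraction of weights by magnitude is irrelevant.
        k = int(p.numel() * irrelevant_fraction)
        if k > 0:
            threshold = p.detach().abs().flatten().kthvalue(k).values
            irrelevant = p.detach().abs() <= threshold
        else:
            irrelevant = torch.zeros_like(p, dtype=torch.bool)
        # Classic gradient update for all weights, plus shrinkage only on irrelevant ones.
        update = p.grad + weight_decay * p.detach() * irrelevant
        p.data.add_(update, alpha=-lr)

def magnitude_prune(params, sparsity=0.5):
    """One pruning pass: zero out the smallest-magnitude weights of each tensor."""
    for p in params:
        k = int(p.numel() * sparsity)
        if k == 0:
            continue
        threshold = p.detach().abs().flatten().kthvalue(k).values
        p.data[p.detach().abs() <= threshold] = 0.0

# Illustrative usage (hypothetical model and schedule): after each backward pass call
# selective_decay_step(model.parameters()); after each training phase call
# magnitude_prune(model.parameters(), sparsity=...) with a gradually increasing target,
# mimicking an iterative pruning schedule.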

Related research

04/12/2020 - A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods
To address the large model size and intensive computation requirement of...

06/11/2019 - Simultaneously Learning Architectures and Features of Deep Neural Networks
This paper presents a novel method which simultaneously learns the numbe...

04/25/2018 - Structured Deep Neural Network Pruning by Varying Regularization Parameters
Convolutional Neural Networks (CNNs) are restricted by their massive co...

06/28/2022 - Deep Neural Networks pruning via the Structured Perspective Regularization
In Machine Learning, Artificial Neural Networks (ANNs) are a very powerf...

09/18/2021 - Structured Pattern Pruning Using Regularization
Iterative Magnitude Pruning (IMP) is a network pruning method that repea...

08/27/2018 - Sparsity in Deep Neural Networks - An Empirical Investigation with TensorQuant
Deep learning is finding its way into the embedded world with applicatio...

09/10/2021 - On the Compression of Neural Networks Using ℓ_0-Norm Regularization and Weight Pruning
Despite the growing availability of high-capacity computational platform...
