Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks

03/05/2020
by   Sourjya Roy, et al.

Modern deep networks have millions to billions of parameters, which leads to high memory and energy requirements during training as well as during inference on resource-constrained edge devices. Consequently, pruning techniques have been proposed that remove less significant weights in deep networks, thereby reducing their memory and computational requirements. Pruning is usually performed after training the original network and is followed by further retraining to compensate for the accuracy loss incurred during pruning. The prune-and-retrain procedure is repeated iteratively until an optimum tradeoff between accuracy and efficiency is reached. However, such iterative retraining adds to the overall training complexity of the network. In this work, we propose a dynamic pruning-while-training procedure, wherein we prune filters of the convolutional layers of a deep network during training itself, thereby precluding the need for separate retraining. We evaluate our dynamic pruning-while-training approach with three different pre-existing pruning strategies, viz. mean activation-based pruning, random pruning, and L1 normalization-based pruning. Our results for VGG-16 trained on CIFAR10 show that L1 normalization provides the best performance among the techniques explored in this work, with less than 1% loss in accuracy relative to the original network after pruning a large fraction of the filters. We further evaluated the L1 normalization-based pruning mechanism on CIFAR100. Results indicate that pruning while training yields a compressed network with almost no accuracy loss after pruning 50% of the filters, and only a small loss for high pruning rates (>80%). The approach also reduces the number of computations and memory accesses during training for CIFAR10, CIFAR100, and ImageNet compared to training followed by retraining for 10 epochs.
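To illustrate the general idea of L1 normalization-based filter pruning applied during training, the sketch below ranks the output filters of each convolutional layer by the L1 norm of their weights and zeroes out the lowest-ranked fraction at fixed intervals within the training loop. This is a minimal illustration in PyTorch, not the authors' exact procedure: the function names, the pruning interval (prune_every), and the pruning fraction (prune_fraction) are illustrative assumptions, and zeroing weights is a simple stand-in for structurally removing filters.

import torch
import torch.nn as nn

def prune_conv_filters_by_l1(model: nn.Module, prune_fraction: float = 0.5) -> None:
    """Zero out the filters with the smallest L1 norm in every Conv2d layer."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                # L1 norm of each output filter: sum over in_channels x kH x kW.
                l1 = module.weight.abs().sum(dim=(1, 2, 3))
                num_to_prune = int(prune_fraction * l1.numel())
                if num_to_prune == 0:
                    continue
                # Indices of the least important (smallest-L1) filters.
                prune_idx = torch.argsort(l1)[:num_to_prune]
                # Zero the selected filters; a real pipeline would remove them
                # structurally (or mask them) to realize compute savings.
                module.weight[prune_idx] = 0.0
                if module.bias is not None:
                    module.bias[prune_idx] = 0.0

def train_with_pruning(model, loader, optimizer, criterion,
                       epochs=100, prune_every=10, prune_fraction=0.5):
    """Hypothetical pruning-while-training loop: prune periodically, no separate retraining pass."""
    for epoch in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
        if (epoch + 1) % prune_every == 0:
            prune_conv_filters_by_l1(model, prune_fraction)

The other two strategies evaluated in the paper would differ only in the ranking step: random pruning selects prune_idx uniformly at random, and mean activation-based pruning ranks filters by their average activation over a batch of inputs instead of by weight L1 norm.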

