Continuous Pruning of Deep Convolutional Networks Using Selective Weight Decay

11/20/2020
by Hugo Tessier et al.

During the last decade, deep convolutional networks have become the reference for many machine learning tasks, especially in computer vision. However, their large computational needs make them hard to deploy on resource-constrained hardware. Pruning has emerged as a standard way to compress such large networks. Yet, the severe perturbation caused by most pruning approaches is thought to hinder their efficacy. Drawing inspiration from Lagrangian Smoothing, we introduce a new technique, Selective Weight Decay (SWD), which achieves continuous pruning throughout training. Our theoretically grounded approach is versatile and can be applied to any problem, network, or pruning structure. We show that SWD compares favorably to other approaches in terms of performance/parameters ratio on the CIFAR-10 and ImageNet ILSVRC2012 datasets. On CIFAR-10 with unstructured pruning, at a target rate of 0.1% of parameters remaining, SWD attains an accuracy of 81.32%. On CIFAR-10 with structured pruning, at a target rate of 2.5%, the reference technique drops to 10% (chance-level) accuracy, while SWD maintains an accuracy of 93.22%. On ImageNet ILSVRC2012, with pruning at the same target rate of 2.5%, SWD again compares favorably, relative to the 77.07% top-1 accuracy of the non-pruned network.
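To make the idea concrete, here is a minimal sketch of what a selective-weight-decay penalty could look like in a PyTorch-style training loop. It applies an extra L2 penalty only to the weights that magnitude pruning would remove at the current target rate, so those weights are pushed smoothly toward zero during training rather than being cut abruptly. The function name swd_penalty, the global magnitude threshold, and the coefficient schedule are illustrative assumptions, not the authors' exact implementation (the paper also covers structured pruning criteria).

import torch

def swd_penalty(model, target_rate, a_coeff):
    # Sketch of selective weight decay (assumed form, not the paper's code):
    # penalize only the weights that magnitude pruning would currently remove.
    #   target_rate: fraction of parameters to KEEP (e.g. 0.025 for 2.5%)
    #   a_coeff: strength of the selective penalty, typically ramped up
    #            during training so targeted weights decay smoothly to zero

    # Gather all weight magnitudes to find a global pruning threshold
    # (biases and BatchNorm parameters are skipped via p.dim() > 1).
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    k = int((1.0 - target_rate) * all_weights.numel())  # number to prune
    threshold = torch.kthvalue(all_weights, max(k, 1)).values  # guard k >= 1

    # L2 penalty restricted to the weights below the threshold; the mask is
    # detached, so gradients flow only through the penalized weights.
    penalty = torch.zeros((), device=all_weights.device)
    for p in model.parameters():
        if p.dim() > 1:
            mask = (p.detach().abs() <= threshold).float()
            penalty = penalty + (mask * p).pow(2).sum()
    return a_coeff * penalty

In use, one would add swd_penalty(model, target_rate, a) to the task loss at each step and increase a over the course of training. By the end, the penalized weights sit near zero, so removing them causes little perturbation, which is the continuous-pruning effect the abstract describes.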


Related research

07/09/2021
Structured Model Pruning of Convolutional Networks on Tensor Processing Units
The deployment of convolutional neural networks is often hindered by hig...

11/29/2020
Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks
In order to deploy deep convolutional neural networks (CNNs) on resource...

08/28/2023
A Generalization of Continuous Relaxation in Structured Pruning
Deep learning harnesses massive parallel floating-point processing to tr...

01/21/2022
Adaptive Activation-based Structured Pruning
Pruning is a promising approach to compress complex deep learning models...

12/18/2020
A Surrogate Lagrangian Relaxation-based Model Compression for Deep Neural Networks
Network pruning is a widely used technique to reduce computation cost an...

04/08/2023
Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning
Network pruning is a widely used technique to reduce computation cost an...

03/28/2022
Pruning In Time (PIT): A Lightweight Network Architecture Optimizer for Temporal Convolutional Networks
Temporal Convolutional Networks (TCNs) are promising Deep Learning model...
