Deep Neural Networks pruning via the Structured Perspective Regularization

06/28/2022
by Matteo Cacciola, et al.

In Machine Learning, Artificial Neural Networks (ANNs) are a very powerful tool, broadly used in many applications. Often, the selected (deep) architectures include many layers, and therefore a large number of parameters, which makes training, storage and inference expensive. This has motivated a stream of research on compressing the original networks into smaller ones without excessively sacrificing performance. Among the many proposed compression approaches, one of the most popular is pruning, whereby entire elements of the ANN (links, nodes, channels, …) and the corresponding weights are deleted. Since the nature of the problem is inherently combinatorial (which elements to prune and which to keep), we propose a new pruning method based on Operational Research tools. We start from a natural Mixed-Integer Programming model for the problem, and we use the Perspective Reformulation technique to strengthen its continuous relaxation. Projecting the indicator variables out of this reformulation yields a new regularization term, which we call the Structured Perspective Regularization, that leads to structured pruning of the initial architecture. We test our method on ResNet architectures applied to the CIFAR-10, CIFAR-100 and ImageNet datasets, obtaining competitive performance w.r.t. the state of the art for structured pruning.
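The projection step in the abstract can be made concrete with a small sketch. For a group of weights w (e.g., one channel) with an indicator z in [0, 1] (z = 0 meaning the group is pruned), a standard perspective relaxation of a fixed keep-cost plus a quadratic weight penalty is alpha*z + beta*||w||^2 / z; minimizing over z in closed form eliminates the indicator and leaves a piecewise regularizer of the group norm alone. The code below is a minimal, hypothetical Python/PyTorch sketch of that projected penalty, not the paper's exact SPR term; the function name perspective_penalty and the hyperparameters alpha and beta are illustrative assumptions.

import torch

def perspective_penalty(w_group: torch.Tensor,
                        alpha: float = 1e-4,
                        beta: float = 1e-4) -> torch.Tensor:
    # Hypothetical sketch: minimizing alpha*z + beta*||w||^2 / z over z in (0, 1]
    # gives z* = min(1, ||w|| * sqrt(beta / alpha)); substituting z* back yields
    # the piecewise penalty below, with the indicator variable projected away.
    norm = w_group.norm(p=2)             # group norm, e.g. all weights of one channel
    threshold = (alpha / beta) ** 0.5    # below this norm, z* < 1 and the penalty is linear in ||w||
    if norm <= threshold:
        return 2.0 * (alpha * beta) ** 0.5 * norm
    return alpha + beta * norm ** 2

Summed over all groups and added to the training loss, a penalty of this shape is nondifferentiable at zero in the group norm, which is what can drive entire groups exactly to zero and hence produce structured (e.g., channel-level) sparsity.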


Related research:

05/28/2019 - OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks
Channel pruning can significantly accelerate and compress deep neural ne...

05/04/2018 - Enhancing the Regularization Effect of Weight Pruning in Artificial Neural Networks
Artificial neural networks (ANNs) may not be worth their computational/m...

07/14/2023 - Structured Pruning of Neural Networks for Constraints Learning
In recent years, the integration of Machine Learning (ML) models with Op...

10/23/2022 - Pushing the Efficiency Limit Using Structured Sparse Convolutions
Weight pruning is among the most popular approaches for compressing deep...

04/11/2022 - Regularization-based Pruning of Irrelevant Weights in Deep Neural Architectures
Deep neural networks exploiting millions of parameters are nowadays the ...

07/12/2021 - Structured Directional Pruning via Perturbation Orthogonal Projection
Structured pruning is an effective compression technique to reduce the c...

12/16/2020 - Neural Pruning via Growing Regularization
Regularization has long been utilized to learn sparsity in deep neural n...
