Rank-adaptive spectral pruning of convolutional layers during training

05/30/2023
by Emanuele Zangrando et al.

The computing cost and memory demand of deep learning pipelines have grown rapidly in recent years, and a variety of pruning techniques have therefore been developed to reduce the number of model parameters. Most of these techniques focus on reducing inference costs by pruning the network after a full training pass. A smaller number of methods address the reduction of training costs, mostly by compressing the network through low-rank layer factorizations. Although efficient for linear layers, these methods fail to handle convolutional filters effectively. In this work, we propose a low-parametric training method that factorizes the convolutions into tensor Tucker format and adaptively prunes the Tucker ranks of the convolutional kernels during training. Leveraging fundamental results from the geometric integration theory of differential equations on tensor manifolds, we obtain a robust training algorithm that provably approximates the performance of the full baseline and guarantees loss descent. Experiments against the full model and alternative low-rank baselines show that the proposed method drastically reduces training costs while achieving accuracy comparable to or better than the full baseline, and that it consistently outperforms competing low-rank approaches.
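
To make the factorization concrete, below is a minimal PyTorch sketch of a Tucker-2 factorized convolution together with a naive singular-value-based estimate of its effective ranks. This is not the authors' algorithm: the paper prunes the Tucker ranks adaptively during training via a geometric integrator on the tensor manifold, whereas the sketch only illustrates the factorized layer structure and a simple threshold criterion. The class name TuckerConv2d, the helper effective_ranks, the chosen ranks, and the tolerance are illustrative assumptions.

# Minimal sketch (not the paper's algorithm): a Tucker-2 factorized 2D convolution,
# where the dense kernel W of shape (C_out, C_in, k, k) is replaced by
#   1x1 conv (C_in -> r_in)  ->  k x k conv (r_in -> r_out)  ->  1x1 conv (r_out -> C_out).
# The effective_ranks helper is a naive stand-in for the paper's rank-adaptive step.
import torch
import torch.nn as nn

class TuckerConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, r_in, r_out, padding=0):
        super().__init__()
        self.proj_in = nn.Conv2d(in_ch, r_in, kernel_size=1, bias=False)    # input factor
        self.core = nn.Conv2d(r_in, r_out, kernel_size,
                              padding=padding, bias=False)                  # Tucker core
        self.proj_out = nn.Conv2d(r_out, out_ch, kernel_size=1, bias=True)  # output factor

    def forward(self, x):
        return self.proj_out(self.core(self.proj_in(x)))

    @torch.no_grad()
    def effective_ranks(self, tol=1e-2):
        # Count singular values of the core's mode-0 and mode-1 unfoldings
        # that exceed tol relative to the largest one.
        W = self.core.weight                      # shape (r_out, r_in, k, k)
        ranks = []
        for mode in (0, 1):
            unfolding = W.transpose(0, mode).flatten(start_dim=1)
            s = torch.linalg.svdvals(unfolding)   # descending order
            ranks.append(int((s > tol * s[0]).sum()))
        return ranks                              # [output-mode rank, input-mode rank]

if __name__ == "__main__":
    layer = TuckerConv2d(in_ch=64, out_ch=128, kernel_size=3,
                         r_in=16, r_out=32, padding=1)
    x = torch.randn(8, 64, 32, 32)
    print(layer(x).shape)            # torch.Size([8, 128, 32, 32])
    print(layer.effective_ranks())   # e.g. [32, 16] at random initialization

In this configuration the factorized layer has roughly 10K parameters, compared with about 74K for a dense 64-to-128 channel 3x3 convolution, which is where the reduction in training cost comes from; the paper's method additionally adjusts the Tucker ranks on the fly during training instead of fixing them in advance.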


Related research

06/02/2023: Robust low-rank training via approximate orthonormal constraints
With the growth of model and data sizes, a broad effort has been made to...

05/26/2022: Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations
Neural networks have achieved tremendous success in a large variety of a...

03/05/2020: Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks
Modern deep networks have millions to billions of parameters, which lead...

07/11/2023: Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Despite the dominance and effectiveness of scaling, resulting in large n...

05/15/2014: Speeding up Convolutional Neural Networks with Low Rank Expansions
The focus of this paper is speeding up the evaluation of convolutional n...

08/13/2021: FedPara: Low-rank Hadamard Product Parameterization for Efficient Federated Learning
To overcome the burdens on frequent model uploads and downloads during f...

05/24/2022: Compression-aware Training of Neural Networks using Frank-Wolfe
Many existing Neural Network pruning approaches either rely on retrainin...
