Compression-aware Training of Deep Networks

11/07/2017
by Jose M. Alvarez, et al.

In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deep neural networks. Unfortunately, the huge number of units in these networks makes them expensive both computationally and memory-wise. To overcome this, several compression strategies have been proposed that exploit the fact that deep networks are over-parametrized. These methods, however, typically start from a network trained in a standard manner, without any consideration of the subsequent compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn models that are much more compact, yet at least as effective, as those obtained with state-of-the-art compression techniques.
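
To make the idea concrete, below is a minimal training-loop sketch of such a compression-aware objective. It is not the authors' exact formulation: it assumes the nuclear norm (sum of singular values) of each layer's weight matrix as a convex surrogate for rank, with convolutional kernels reshaped to 2-D, and the helper name `lowrank_penalty`, the toy model, and the weight `lam` are illustrative choices.

```python
import torch
import torch.nn as nn

def lowrank_penalty(model: nn.Module) -> torch.Tensor:
    """Nuclear-norm surrogate for rank, summed over all weight matrices.

    Convolutional kernels of shape (out, in, kh, kw) are flattened to
    (out, in * kh * kw) before computing singular values.
    """
    penalty = None
    for name, param in model.named_parameters():
        if param.dim() < 2 or not name.endswith("weight"):
            continue
        mat = param.flatten(start_dim=1)            # 2-D view of the layer
        nuc = torch.linalg.svdvals(mat).sum()       # nuclear norm = sum of singular values
        penalty = nuc if penalty is None else penalty + nuc
    return penalty if penalty is not None else torch.tensor(0.0)

# Illustrative model and optimizer, not the architecture used in the paper.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(16 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
lam = 1e-4                                          # regularization strength (assumed value)

def train_step(images, labels):
    optimizer.zero_grad()
    # Task loss plus the low-rank regularizer; gradients flow through the SVD.
    loss = criterion(model(images), labels) + lam * lowrank_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training with such a penalty, each weight matrix tends to have rapidly decaying singular values, so it can be replaced by a truncated SVD factorization with little loss in accuracy, which is what realizes the actual compression.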


Related research

03/22/2023
Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training
Deep neural networks have achieved great success in many data processing...

01/20/2023
HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks
Low-rank compression is an important model compression strategy for obta...

05/30/2022
STN: Scalable Tensorizing Networks via Structure-Aware Training and Adaptive Compression
Deep neural networks (DNNs) have delivered a remarkable performance in m...

02/20/2018
Do Deep Learning Models Have Too Many Parameters? An Information Theory Viewpoint
Deep learning models often have more parameters than observations, and s...

05/26/2023
Manifold Regularization for Memory-Efficient Training of Deep Neural Networks
One of the prevailing trends in the machine- and deep-learning community...

12/07/2017
AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training
Highly distributed training of Deep Neural Networks (DNNs) on future com...

09/04/2017
Domain-adaptive deep network compression
Deep Neural Networks trained on large datasets can be easily transferred...
