Modulating Regularization Frequency for Efficient Compression-Aware Model Training

05/05/2021
by Dongsoo Lee, et al.

While model compression is increasingly important because of large neural network size, compression-aware training is challenging: it requires sophisticated model modifications and longer training time. In this paper, we introduce regularization frequency (i.e., how often compression is performed during training) as a new regularization technique for a practical and efficient compression-aware training method. For various regularization techniques, such as weight decay and dropout, optimizing the regularization strength is crucial to improving generalization in Deep Neural Networks (DNNs). While model compression also demands the right amount of regularization, the regularization strength incurred by model compression has so far been controlled only by the compression ratio. Through various experiments, we show that regularization frequency critically affects the regularization strength of model compression. By combining regularization frequency and compression ratio, the amount of weight updates caused by model compression per mini-batch can be optimized to achieve the best model accuracy. Modulating regularization frequency amounts to performing compression only occasionally, whereas conventional compression-aware training typically compresses at every mini-batch.
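The following is a minimal sketch of the idea in PyTorch, assuming magnitude pruning as the compression method; the names `magnitude_prune`, `compression_ratio`, and `regularization_frequency` are illustrative choices, not the paper's actual implementation. Compression is applied once every `regularization_frequency` mini-batches, so the frequency and the compression ratio together set how strongly compression perturbs the weights per mini-batch.

```python
import torch
import torch.nn as nn


def magnitude_prune(model: nn.Module, compression_ratio: float) -> None:
    """Zero out the smallest-magnitude weights in each prunable layer
    so that roughly `compression_ratio` of the weights are removed.
    (Illustrative compression method, not the paper's exact scheme.)"""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                w = module.weight
                k = int(compression_ratio * w.numel())
                if k == 0:
                    continue
                # The k-th smallest absolute value serves as the pruning threshold.
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).to(w.dtype))


def train_with_regularization_frequency(model, loader, optimizer, loss_fn,
                                        compression_ratio=0.5,
                                        regularization_frequency=100):
    """Compression-aware training that compresses the model only once
    every `regularization_frequency` mini-batches, instead of at every
    mini-batch as in conventional compression-aware training."""
    model.train()
    for step, (inputs, targets) in enumerate(loader, start=1):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        # Occasional compression: lowering the frequency weakens the
        # regularization strength per mini-batch; raising it (up to
        # every step) strengthens it.
        if step % regularization_frequency == 0:
            magnitude_prune(model, compression_ratio)
```

Note that in this sketch the pruned weights are free to regrow between compression events; it is this interplay between gradient updates and occasional compression that lets the frequency act as a tunable regularization knob.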



Related Research

09/26/2019 · Convolutional Neural Networks with Dynamic Regularization
Regularization is commonly used in machine learning for alleviating over...

03/25/2020 · Volumization as a Natural Generalization of Weight Decay
We propose a novel regularization method, called volumization, for neura...

12/20/2014 · Neural Network Regularization via Robust Weight Factorization
Regularization is essential when training large neural networks. As deep...

07/08/2018 · Auto Deep Compression by Reinforcement Learning Based Actor-Critic Structure
Model-based compression is an effective, facilitating, and expanded mode...

09/03/2020 · A Partial Regularization Method for Network Compression
Deep Neural Networks have achieved remarkable success relying on the dev...

12/11/2019 · An Improving Framework of regularization for Network Compression
Deep Neural Networks have achieved remarkable success relying on the dev...

01/05/2020 · Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters
In this paper, we investigate the empirical impact of orthogonality regu...