Modulating Regularization Frequency for Efficient Compression-Aware Model Training

05/05/2021
by Dongsoo Lee, et al.

While model compression is increasingly important because of large neural network size, compression-aware training is challenging as it needs sophisticated model modifications and longer training time. In this paper, we introduce regularization frequency (i.e., how often compression is performed during training) as a new regularization technique for a practical and efficient compression-aware training method. For various regularization techniques, such as weight decay and dropout, optimizing the regularization strength is crucial to improving generalization in Deep Neural Networks (DNNs). While model compression also demands the right amount of regularization, the regularization strength incurred by model compression has so far been controlled only by the compression ratio. Through various experiments, we show that regularization frequency critically affects the regularization strength of model compression. By combining regularization frequency and compression ratio, the amount of weight updates performed by model compression per mini-batch can be optimized to achieve the best model accuracy. Modulating regularization frequency is implemented by occasional model compression, whereas conventional compression-aware training performs compression for every mini-batch.
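To make the idea concrete, the sketch below shows a training loop in which compression is applied only once every `frequency` mini-batches rather than after every step. This is a minimal illustration, not the authors' implementation: the magnitude-pruning routine, the `compression_ratio` and `frequency` parameters, and the layer-selection heuristic are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def prune_by_magnitude(model: nn.Module, compression_ratio: float) -> None:
    """Illustrative compression step: zero out the smallest-magnitude weights.

    `compression_ratio` is the fraction of weights set to zero. This is an
    assumed stand-in for whatever compression method is being trained for.
    """
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() < 2:  # skip biases and norm parameters
                continue
            k = int(param.numel() * compression_ratio)
            if k == 0:
                continue
            # k-th smallest absolute value serves as the pruning threshold.
            threshold = param.abs().flatten().kthvalue(k).values
            param.mul_((param.abs() > threshold).float())

def train(model, loader, optimizer, loss_fn,
          compression_ratio=0.5, frequency=100):
    """Compression-aware training with a modulated regularization frequency.

    frequency=1 corresponds to the conventional scheme (compress after every
    mini-batch); larger values apply compression only occasionally, which is
    the knob this paper proposes to tune alongside the compression ratio.
    """
    model.train()
    for step, (x, y) in enumerate(loader):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        # Occasional compression: the regularization-frequency knob.
        if (step + 1) % frequency == 0:
            prune_by_magnitude(model, compression_ratio)
```

Under this framing, `frequency` and `compression_ratio` together determine how much the weights are perturbed by compression per mini-batch, which is the quantity the abstract proposes to optimize.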
