Learning Low-Rank Representations for Model Compression

11/21/2022
by Zezhou Zhu, et al.

Vector Quantization (VQ) is an appealing model compression method for obtaining a tiny model with little accuracy loss. While methods for obtaining better codebooks and codes under a fixed clustering dimensionality have been extensively studied, optimizing the vectors themselves in favour of clustering performance, especially by reducing their dimensionality, has received little attention. This paper reports our recent progress on combining dimensionality compression with vector quantization, proposing a Low-Rank Representation Vector Quantization (LR^2VQ) method that outperforms previous VQ algorithms across various tasks and architectures. LR^2VQ joins low-rank representation with subvector clustering to construct a new kind of building block that is directly optimized through end-to-end training over the task loss. Our proposed design introduces three hyper-parameters: the number of clusters k, the subvector size m, and the clustering dimensionality d̃. In our method, the compression ratio is directly controlled by m, while the final accuracy is solely determined by d̃. We recognize d̃ as a trade-off between low-rank approximation error and clustering error, and provide both theoretical analysis and experimental observations that enable estimating a proper d̃ before fine-tuning. With a proper d̃, we evaluate LR^2VQ with ResNet-18/ResNet-50 on the ImageNet classification dataset, achieving 2.8%/1.0% top-1 accuracy improvements over the current state-of-the-art VQ-based compression algorithms at 43×/31× compression factors.
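For intuition, here is a minimal sketch (not the authors' released code) of the two ingredients the abstract combines: a low-rank projection of weight vectors down to dimensionality d̃, followed by k-means clustering of size-m subvectors into k centroids. Function names such as low_rank_project and quantize_subvectors are placeholders, and plain SVD plus k-means stands in for the end-to-end training over the task loss that LR^2VQ actually performs.

```python
# Illustrative sketch only: low-rank projection + subvector clustering.
# Hyper-parameters mirror the abstract: d_tilde (clustering dimensionality),
# m (subvector size), k (number of clusters).
import numpy as np
from sklearn.cluster import KMeans

def low_rank_project(W, d_tilde):
    """Project rows of the weight matrix W onto a d_tilde-dimensional subspace via SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    B = Vt[:d_tilde]              # (d_tilde, d) basis of the low-rank subspace
    Z = W @ B.T                   # (n, d_tilde) low-rank representations
    return Z, B

def quantize_subvectors(Z, m, k):
    """Split each low-rank vector into subvectors of size m and cluster them with k-means."""
    n, d_tilde = Z.shape
    assert d_tilde % m == 0, "d_tilde must be divisible by the subvector size m"
    sub = Z.reshape(n * (d_tilde // m), m)           # all subvectors, stacked
    km = KMeans(n_clusters=k, n_init=10).fit(sub)
    codebook = km.cluster_centers_                   # (k, m)
    codes = km.labels_.reshape(n, d_tilde // m)      # per-vector centroid indices
    return codebook, codes

def reconstruct(codebook, codes, B):
    """Decode: look up centroids, concatenate subvectors, and map back through the basis."""
    Z_hat = codebook[codes].reshape(codes.shape[0], -1)
    return Z_hat @ B               # approximate original weight matrix

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 64))               # toy weight matrix
    Z, B = low_rank_project(W, d_tilde=32)
    codebook, codes = quantize_subvectors(Z, m=4, k=256)
    W_hat = reconstruct(codebook, codes, B)
    err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
    print(f"relative reconstruction error: {err:.3f}")
```

In this toy setting the trade-off the paper analyzes is visible directly: a smaller d̃ increases the low-rank approximation error, while a larger d̃ spreads the same codebook budget over higher-dimensional subvectors and increases the clustering error.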


research · 10/30/2018
DeepTwist: Learning Model Compression via Occasional Weight Distortion
Model compression has been introduced to reduce the required hardware re...

research · 07/30/2022
Distilled Low Rank Neural Radiance Field with Quantization for Light Field Compression
In this paper, we propose a novel light field compression method based o...

research · 11/30/2021
A Highly Effective Low-Rank Compression of Deep Neural Networks with Modified Beam-Search and Modified Stable Rank
Compression has emerged as one of the essential deep learning research t...

research · 04/19/2018
Low Rank Structure of Learned Representations
A key feature of neural networks, particularly deep convolutional neural...

research · 06/09/2023
End-to-End Neural Network Compression via ℓ_1/ℓ_2 Regularized Latency Surrogates
Neural network (NN) compression via techniques such as pruning, quantiza...

research · 03/12/2019
Cascaded Projection: End-to-End Network Compression and Acceleration
We propose a data-driven approach for deep convolutional neural network ...

research · 07/09/2021
Model compression as constrained optimization, with application to neural nets. Part V: combining compressions
Model compression is generally performed by using quantization, low-rank...
