Automated Model Compression by Jointly Applied Pruning and Quantization

11/12/2020
by   Wenting Tang, et al.

In the traditional deep compression framework, iteratively applying network pruning and quantization can reduce the model size and computation cost to meet deployment requirements. However, such a step-wise application of pruning and quantization may lead to suboptimal solutions and unnecessary time consumption. In this paper, we tackle this issue by integrating network pruning and quantization into a unified joint compression problem and then using AutoML to solve it automatically. We observe that pruning can be regarded as channel-wise quantization with 0 bits, so the separate two-step pruning and quantization can be simplified into a single step of mixed-precision quantization. This unification not only simplifies the compression pipeline but also avoids compression divergence. To implement this idea, we propose automated model compression by jointly applied pruning and quantization (AJPQ). AJPQ has a hierarchical architecture: a layer controller sets the sparsity of each layer, and a channel controller decides the bit-width for each kernel. Following the same importance criterion, the two controllers collaboratively decide the compression strategy, and reinforcement learning makes the one-step compression fully automatic. Compared with state-of-the-art automated compression methods, our method obtains better accuracy while considerably reducing storage. With fixed-precision quantization, AJPQ reduces the model size by more than five times and the computation by two times with a slight performance increase for Skynet in remote sensing object detection. When mixed precision is allowed, AJPQ reduces the model size by five times with only a 1.06% accuracy decline for MobileNet in the classification task.
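The key observation, that channel pruning is just per-channel quantization with a bit-width of 0, can be sketched in a few lines. The snippet below is a minimal illustration under that assumption, not the authors' implementation: quantize_channel and compress_layer are hypothetical helpers, and the hard-coded per-channel bit-widths stand in for the decisions that AJPQ's layer and channel controllers would make via reinforcement learning.

```python
import numpy as np

def quantize_channel(w, bits):
    """Uniform symmetric quantization of one channel's weights.
    bits == 0 is treated as pruning: the whole channel becomes zero."""
    if bits == 0:
        return np.zeros_like(w)
    scale = float(np.max(np.abs(w)))
    if scale == 0.0:
        return w
    levels = max(2 ** (bits - 1) - 1, 1)  # signed symmetric range
    q = np.round(w / scale * levels)
    return (q / levels * scale).astype(w.dtype)

def compress_layer(weight, channel_bits):
    """weight: (out_ch, in_ch, kH, kW); channel_bits: one bit-width per output channel,
    e.g. proposed by a layer controller (sparsity) and a channel controller (precision)."""
    assert weight.shape[0] == len(channel_bits)
    return np.stack([quantize_channel(weight[c], b) for c, b in enumerate(channel_bits)])

# Example: an 8-channel layer where two channels are "quantized to 0 bits" (pruned)
# and the remaining channels use mixed precision.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16, 3, 3)).astype(np.float32)
bits = [0, 8, 4, 4, 0, 8, 2, 4]
w_q = compress_layer(w, bits)
print("pruned channels:", [c for c in range(8) if not w_q[c].any()])
```

Because pruning and quantization share this single per-channel bit-width decision, one search over bit-widths covers both compression steps at once, which is what allows AJPQ to replace the usual two-stage pipeline with one-step compression.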


Related research

05/23/2022 - OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization
As Deep Neural Networks (DNNs) usually are overparameterized and have mi...

12/04/2019 - Deep Model Compression via Deep Reinforcement Learning
Besides accuracy, the storage of convolutional neural networks (CNN) mod...

11/18/2020 - Layer-Wise Data-Free CNN Compression
We present an efficient method for compressing a trained neural network ...

03/11/2020 - Kernel Quantization for Efficient Network Compression
This paper presents a novel network compression framework Kernel Quantiz...

01/13/2021 - ABS: Automatic Bit Sharing for Model Compression
We present Automatic Bit Sharing (ABS) to automatically search for optim...

08/22/2020 - One Weight Bitwidth to Rule Them All
Weight quantization for deep ConvNets has shown promising results for ap...

02/15/2023 - Towards Optimal Compression: Joint Pruning and Quantization
Compression of deep neural networks has become a necessary stage for opt...
