Bag of Tricks with Quantized Convolutional Neural Networks for Image Classification

03/13/2023
by Jie Hu et al.

Deep neural networks have proven effective in a wide range of tasks, but their high computational and memory costs make them impractical to deploy on resource-constrained devices. To address this issue, quantization schemes have been proposed to reduce the memory footprint and improve inference speed. Although numerous quantization methods exist, their effectiveness has not been analyzed systematically. To bridge this gap, we collect and improve existing quantization methods and propose a gold-standard guideline for post-training quantization. We evaluate the guideline with two popular models, ResNet50 and MobileNetV2, on the ImageNet dataset. By following it, the models can be quantized directly to 8 bits without additional training and suffer no accuracy degradation. Quantization-aware training based on the guideline further improves accuracy at lower bit widths. Moreover, we integrate a multi-stage fine-tuning strategy that works harmoniously with existing pruning techniques to reduce costs even further. Remarkably, a quantized MobileNetV2 with 30% sparsity even surpasses the equivalent full-precision model, underscoring the effectiveness and robustness of the proposed scheme.
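The abstract does not spell out the guideline itself, so as a rough illustration of the basic building block behind 8-bit post-training quantization, here is a minimal sketch of symmetric per-tensor weight quantization in PyTorch. The helper names (quantize_tensor, dequantize_tensor) and the per-tensor symmetric scaling choice are illustrative assumptions, not the authors' actual method.

```python
import torch

def quantize_tensor(x: torch.Tensor, num_bits: int = 8):
    """Illustrative sketch of symmetric per-tensor PTQ (not the paper's
    actual guideline): map x to signed integers in [-qmax, qmax] using
    a single scale factor for the whole tensor."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max() / qmax                     # one scale per tensor
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)
    return q.to(torch.int8), scale

def dequantize_tensor(q: torch.Tensor, scale: torch.Tensor):
    """Recover an approximate float tensor from integers and scale."""
    return q.float() * scale

# Example: quantize a conv-layer-shaped weight tensor and
# measure the round-trip quantization error.
w = torch.randn(64, 3, 3, 3)
q, s = quantize_tensor(w, num_bits=8)
w_hat = dequantize_tensor(q, s)
print("max abs error:", (w - w_hat).abs().max().item())
```

In practice, post-training quantization schemes of this kind also need a calibration pass over sample data to pick activation scales; the paper's contribution is a systematic analysis of such design choices, which the sketch above does not attempt to reproduce.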

Related research

05/29/2018 · Retraining-Based Iterative Weight Quantization for Deep Neural Networks
Model compression has gained a lot of attention due to its ability to re...

01/09/2020 · Least squares binary quantization of neural networks
Quantizing weights and activations of deep neural networks results in si...

10/02/2019 · Quantized Reinforcement Learning (QUARL)
Recent work has shown that quantization can help reduce the memory, comp...

12/03/2022 · Make RepVGG Greater Again: A Quantization-aware Approach
The tradeoff between performance and inference speed is critical for pra...

02/10/2022 · Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment
To adopt convolutional neural networks (CNN) for a range of resource-con...

10/05/2020 · Joint Pruning Quantization for Extremely Sparse Neural Networks
We investigate pruning and quantization for deep neural networks. Our go...

05/15/2023 · Straightening Out the Straight-Through Estimator: Overcoming Optimization Challenges in Vector Quantized Networks
This work examines the challenges of training neural networks using vect...
