Quantization Mimic: Towards Very Tiny CNN for Object Detection

05/06/2018
by Yi Wei, et al.

In this paper, we propose a simple and general framework for training very tiny CNNs for object detection. Due to their limited representation ability, it is challenging to train very tiny networks for complicated tasks such as detection. To the best of our knowledge, our method, called Quantization Mimic, is the first to focus on very tiny networks. We utilize two types of acceleration methods: mimic and quantization. Mimic improves the performance of a student network by transferring knowledge from a teacher network. Quantization converts a full-precision network to a quantized one without large degradation of performance. If the teacher network is quantized, the search scope of the student network becomes smaller. Exploiting this property of quantization, we propose Quantization Mimic: it first quantizes the large teacher network, then trains a quantized small network to mimic it. We suggest that the quantization operation helps the student network match the feature maps of the teacher network. To evaluate the generality of our approach, we carry out experiments on popular CNNs including VGG and ResNet, as well as different detection frameworks including Faster R-CNN and R-FCN. Experiments on PASCAL VOC and WIDER FACE verify that our Quantization Mimic algorithm can be applied in various settings and outperforms state-of-the-art model acceleration methods under limited computing resources.
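The core idea, quantizing the teacher's feature maps so the student only has to match a discretized target space, can be illustrated with a short sketch. The snippet below is a minimal illustration rather than the authors' implementation: it assumes uniform quantization with a straight-through estimator and an L2 mimic loss, and the names quantize_uniform and mimic_loss, along with the step size, are hypothetical choices for exposition.

```python
import torch
import torch.nn.functional as F

def quantize_uniform(x, step=1.0):
    # Round activations to a uniform grid; the straight-through
    # estimator keeps gradients flowing to the student at train time.
    # (Hypothetical helper; the paper's exact quantizer may differ.)
    q = torch.round(x / step) * step
    return x + (q - x).detach()

def mimic_loss(student_feat, teacher_feat, step=1.0):
    # L2 distance between quantized student and quantized teacher
    # feature maps: discretizing both sides shrinks the target space
    # the tiny student network has to match.
    return F.mse_loss(quantize_uniform(student_feat, step),
                      quantize_uniform(teacher_feat, step))

# Toy usage with random feature maps of matching shape.
student = torch.randn(2, 256, 38, 38, requires_grad=True)
teacher = torch.randn(2, 256, 38, 38)
loss = mimic_loss(student, teacher)
loss.backward()
```

In practice a 1x1 regression layer is often inserted when the student's channel count differs from the teacher's; the sketch assumes matching shapes for brevity.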

Related research

07/20/2023 - Quantized Feature Distillation for Network Quantization
  Neural network quantization aims to accelerate and trim full-precision n...

12/03/2018 - Knowledge Distillation with Feature Maps for Image Classification
  The model reduction problem that eases the computation costs and latency...

12/05/2018 - Feature Matters: A Stage-by-Stage Approach for Knowledge Transfer
  Convolutional Neural Networks (CNNs) become deeper and deeper in recent ...

11/28/2019 - QKD: Quantization-aware Knowledge Distillation
  Quantization and Knowledge distillation (KD) methods are widely used to ...

11/13/2019 - DupNet: Towards Very Tiny Quantized CNN with Improved Accuracy for Face Detection
  Deploying deep learning based face detectors on edge devices is a challe...

06/14/2019 - Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks
  The deep layers of modern neural networks extract a rather rich set of f...

09/13/2022 - PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
  Data-free quantization can potentially address data privacy and security...
