Creating Lightweight Object Detectors with Model Compression for Deployment on Edge Devices

05/06/2019
by Yiwu Yao, et al.

To achieve lightweight object detectors for deployment on edge devices, this paper proposes an effective model compression pipeline. The pipeline consists of automatic channel pruning for the backbone, fixed channel deletion for the branch layers, and knowledge distillation for guidance learning. As a result, ResNet50-v1d is auto-pruned and fine-tuned on ImageNet to obtain a compact base model that serves as the backbone of the object detector. Lightweight object detectors are then built with the proposed compression pipeline. For instance, an SSD-300 with a model size of 16.3 MB, 2.31 GFLOPs, and 71.2 mAP is produced, outperforming SSD-300-MobileNet.
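The abstract does not specify the pruning criterion or the distillation objective. As an illustration only, the sketch below shows two common choices for such a pipeline: L1-norm scoring to select which backbone channels to prune, and a temperature-scaled distillation loss for guidance learning. The function names and hyperparameters (prune_ratio, T, alpha) are assumptions for this sketch, not the paper's actual method.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch: the paper's exact criteria are not given in the abstract.
# L1-norm channel selection and temperature-scaled KD are shown here as
# representative building blocks of such a compression pipeline.

def l1_channel_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output channel of a conv layer by the L1 norm of its filter."""
    # weight shape: (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def channels_to_prune(conv: nn.Conv2d, prune_ratio: float) -> torch.Tensor:
    """Return indices of the lowest-scoring output channels to delete."""
    scores = l1_channel_scores(conv)
    n_prune = int(prune_ratio * scores.numel())
    return torch.argsort(scores)[:n_prune]

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.5):
    """Blend hard-label cross-entropy with soft-target KL divergence (Hinton-style KD)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients match the hard-label magnitude
    return alpha * hard + (1.0 - alpha) * soft

In a pipeline of this shape, the backbone's conv layers would be scored and pruned, the pruned network fine-tuned on ImageNet, and the resulting student detector trained against the uncompressed detector as teacher via a loss like distillation_loss above.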


