Group channel pruning and spatial attention distilling for object detection

06/02/2023
by Yun Chu et al.

Due to the over-parameterization of neural networks, many model compression methods based on pruning and quantization have emerged. They are remarkable at reducing model size, parameter count, and computational complexity. However, most models compressed by such methods need the support of special hardware and software, which increases the deployment cost. Moreover, these methods are mainly used in classification tasks and are rarely applied directly to detection tasks. To address these issues, we introduce a three-stage model compression method for object detection networks: dynamic sparse training, group channel pruning, and spatial attention distilling. First, to identify the unimportant channels in the network while maintaining a good balance between sparsity and accuracy, we propose a dynamic sparse training method that introduces a variable sparse rate, which changes with the training process of the network. Second, to reduce the effect of pruning on network accuracy, we propose a novel pruning method called group channel pruning. In particular, we divide the network into multiple groups according to the scales of the feature layers and the similarity of module structures in the network, and then use a different pruning threshold to prune the channels in each group. Finally, to recover the accuracy of the pruned network, we apply an improved knowledge distillation method to the pruned network: we extract spatial attention information from the feature maps of specific scales in each group as knowledge for distillation. In our experiments, we use YOLOv4 as the object detection network and PASCAL VOC as the training dataset. Our method reduces the model's parameters by 64.7% and its computation by 34.9%.
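The three stages described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear sparse-rate schedule, the per-group pruning ratios, and the helper names (`sparse_rate`, `group_prune_masks`, `spatial_attention`) are all assumptions made for the example, and channel importance is approximated by batch-norm scale factors as in channel-pruning work generally.

```python
import numpy as np

def sparse_rate(epoch, total_epochs, max_rate=0.5):
    # Hypothetical variable sparse rate: grows linearly with training
    # progress instead of staying fixed (the paper's exact schedule may differ).
    return max_rate * epoch / total_epochs

def group_prune_masks(bn_scales_by_group, prune_ratio_by_group):
    # Group channel pruning sketch: each group of layers gets its OWN
    # threshold, taken as a percentile of that group's |gamma| (BN scale)
    # magnitudes, rather than one global threshold for the whole network.
    masks = []
    for scales, ratio in zip(bn_scales_by_group, prune_ratio_by_group):
        mags = np.abs(scales)
        thresh = np.percentile(mags, 100.0 * ratio)
        masks.append(mags > thresh)  # True = keep channel, False = prune
    return masks

def spatial_attention(feature_map):
    # Spatial attention distilling sketch: collapse a (C, H, W) feature map
    # to an (H, W) map by averaging absolute activations over channels,
    # then normalize so teacher and student maps are comparable.
    att = np.abs(feature_map).mean(axis=0)
    return att / (att.sum() + 1e-8)

# Toy usage: one group of four channels, pruning the weakest half.
masks = group_prune_masks([np.array([0.1, 0.9, 0.5, 0.05])], [0.5])
```

In a full pipeline, the masks would select which convolution channels to delete per group, and an L2 (or similar) loss between the teacher's and pruned student's `spatial_attention` maps at each group's chosen scale would drive the distillation stage.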

Related research

06/20/2019: GAN-Knowledge Distillation for one-stage Object Detection
Convolutional neural networks have a significant improvement in the accu...

01/15/2020: A "Network Pruning Network" Approach to Deep Model Compression
We present a filter pruning approach for deep model compression, using a...

11/06/2019: Localization-aware Channel Pruning for Object Detection
Channel pruning is one of the important methods for deep model compressi...

12/02/2020: An Once-for-All Budgeted Pruning Framework for ConvNets Considering Input Resolution
We propose an efficient once-for-all budgeted pruning framework (OFARPru...

04/27/2022: Channel Pruned YOLOv5-based Deep Learning Approach for Rapid and Accurate Outdoor Obstacles Detection
One-stage algorithm have been widely used in target detection systems th...

05/07/2023: YOLOCS: Object Detection based on Dense Channel Compression for Feature Spatial Solidification
In this study, we examine the associations between channel features and ...

01/30/2022: Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network Pruning
With the remarkable success of deep learning recently, efficient network...
