KCP: Kernel Cluster Pruning for Dense Labeling Neural Networks

01/17/2021
by   Po-Hsiang Yu, et al.

Pruning has become a promising technique for compressing and accelerating neural networks. Existing methods, however, are mainly evaluated on sparse labeling applications, whereas dense labeling applications are closer to real-world problems that require real-time processing on resource-constrained mobile devices. Pruning for dense labeling applications remains a largely unexplored field. The prevailing filter channel pruning method removes the entire filter channel and thus ignores the interaction between the kernels within a filter channel. In this study, we propose kernel cluster pruning (KCP) to prune dense labeling networks. We develop a clustering technique to identify the least representational kernels in each layer. By iteratively removing those kernels, the parameters that best represent the entire network are preserved; thus, we achieve better accuracy with a decent reduction in model size and computation. When evaluated on stereo matching and semantic segmentation neural networks, our method reduces more than 70% of FLOPs with negligible accuracy drop. Moreover, for ResNet-50 on ILSVRC-2012, KCP achieves more than 50% FLOPs reduction with only a 0.13% change in accuracy, on par with state-of-the-art pruning results.
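The idea of clustering kernels and iteratively removing the least representational ones can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes k-means clustering of the flattened kernels of one convolutional layer and treats the kernels closest to their cluster centroid as the most redundant, zeroing them out. The function name, the pruning criterion, and all parameters are assumptions for illustration; the actual KCP selection rule may differ.

```python
import numpy as np

def kcp_prune_layer(weights, n_clusters=4, prune_ratio=0.25, n_iter=20, seed=0):
    """Sketch of kernel cluster pruning for one conv layer (hypothetical helper).

    weights: array of shape (out_ch, in_ch, kh, kw); each (kh, kw) slice
    is one kernel. All kernels are clustered with a simple k-means, then
    the prune_ratio fraction of kernels nearest to their cluster centroid
    (assumed here to be the "least representational" ones) are zeroed.
    """
    out_ch, in_ch, kh, kw = weights.shape
    kernels = weights.reshape(out_ch * in_ch, kh * kw)

    # Plain k-means: random initial centroids, then assign/update iterations.
    rng = np.random.default_rng(seed)
    centroids = kernels[rng.choice(len(kernels), n_clusters, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(kernels[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(n_clusters):
            members = kernels[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)

    # Distance of each kernel to its own centroid; the closest kernels are
    # well represented by the cluster and are pruned (set to zero).
    dist_to_centroid = np.linalg.norm(kernels - centroids[assign], axis=1)
    n_prune = int(len(kernels) * prune_ratio)
    prune_idx = np.argsort(dist_to_centroid)[:n_prune]
    pruned = kernels.copy()
    pruned[prune_idx] = 0.0
    return pruned.reshape(weights.shape), prune_idx
```

In practice this step would be applied layer by layer and interleaved with fine-tuning, repeating until the target FLOPs reduction is reached.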


Related research

- Multi-Task Pruning for Semantic Segmentation Networks (07/16/2020)
- Efficient Inference of CNNs via Channel Pruning (08/08/2019)
- 2PFPCE: Two-Phase Filter Pruning Based on Conditional Entropy (09/06/2018)
- Dense Pruning of Pointwise Convolutions in the Frequency Domain (09/16/2021)
- Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks (10/01/2018)
- Dissecting Pruned Neural Networks (06/29/2019)
- Channel Pruning for Accelerating Very Deep Neural Networks (07/19/2017)
