Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution

06/22/2020
by Qiulin Zhang, et al.

Many effective solutions have been proposed to reduce model redundancy for inference acceleration. Nevertheless, common approaches mostly focus on eliminating less important filters or constructing efficient operations, while ignoring the pattern redundancy in feature maps. We reveal that many feature maps within a layer share similar but not identical patterns. However, it is difficult to identify whether features with similar patterns are redundant or contain essential details. Therefore, instead of directly removing uncertain redundant features, we propose a split-based convolutional operation, namely SPConv, that tolerates features with similar patterns while requiring less computation. Specifically, we split input feature maps into a representative part and an uncertain redundant part: intrinsic information is extracted from the representative part through relatively heavy computation, while tiny hidden details in the uncertain redundant part are processed with lightweight operations. To recalibrate and fuse these two groups of processed features, we propose a parameter-free feature fusion module. Moreover, SPConv is formulated as a plug-and-play replacement for vanilla convolution. Without any bells and whistles, experimental results on benchmarks demonstrate that SPConv-equipped networks consistently outperform state-of-the-art baselines in both accuracy and inference time on GPU, with FLOPs and parameter counts dropping sharply.
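The abstract describes the operation compactly; the sketch below makes the split-transform-fuse structure concrete. It is a minimal PyTorch illustration assuming a fixed channel split (the module name SPConvSketch and the split ratio alpha are ours, not the authors' reference implementation): a 3x3 convolution serves as the heavy path for the representative channels, a 1x1 convolution as the light path for the remaining channels, and global average pooling followed by a softmax over the two branches acts as a parameter-free fusion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPConvSketch(nn.Module):
    """Illustrative split-based convolution: a heavy 3x3 branch for the
    representative channels, a cheap 1x1 branch for the uncertain redundant
    channels, and a pooling-based soft gate that fuses the two branches
    without introducing any learnable parameters. A sketch of the idea,
    not the paper's reference implementation."""

    def __init__(self, in_channels, out_channels, alpha=0.5, stride=1):
        super().__init__()
        self.rep_channels = int(in_channels * alpha)          # representative part
        self.red_channels = in_channels - self.rep_channels   # uncertain redundant part
        # Heavy branch: 3x3 convolution extracts intrinsic information.
        self.conv3x3 = nn.Conv2d(self.rep_channels, out_channels, 3,
                                 stride=stride, padding=1, bias=False)
        # Light branch: 1x1 convolution processes the tiny hidden details.
        self.conv1x1 = nn.Conv2d(self.red_channels, out_channels, 1,
                                 stride=stride, bias=False)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x_rep, x_red = torch.split(
            x, [self.rep_channels, self.red_channels], dim=1)
        y_heavy = self.conv3x3(x_rep)
        y_light = self.conv1x1(x_red)
        # Parameter-free fusion: global average pooling yields one score per
        # branch and channel; a softmax across the two branches produces the
        # recalibration weights used to combine them.
        scores = torch.stack([self.pool(y_heavy), self.pool(y_light)], dim=0)
        weights = F.softmax(scores, dim=0)   # shape (2, N, C, 1, 1)
        return weights[0] * y_heavy + weights[1] * y_light
```

Dropped in place of a 3x3 convolution (for example, inside a ResNet block), such a module keeps the output shape of the layer it replaces while the cost of the heavy branch shrinks roughly in proportion to alpha, which is where the FLOPs and parameter savings come from.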


Related research

GhostNets on Heterogeneous Devices via Cheap Operations (01/10/2022)
Deploying convolutional neural networks (CNNs) on mobile devices is diff...

Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution (04/10/2019)
In natural images, information is conveyed at different frequencies wher...

SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping (03/16/2020)
The channel redundancy in feature maps of convolutional neural networks ...

Building Efficient ConvNets using Redundant Feature Pruning (02/21/2018)
This paper presents an efficient technique to prune deep and/or wide con...

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution (01/21/2021)
Modern single image super-resolution (SISR) system based on convolutiona...

DT-Net: A novel network based on multi-directional integrated convolution and threshold convolution (09/26/2020)
Since medical image data sets contain few samples and singular features,...

MoEVC: A Mixture-of-experts Voice Conversion System with Sparse Gating Mechanism for Accelerating Online Computation (12/27/2019)
With the recent advancements of deep learning technologies, the performa...
