PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer

07/13/2020
by Duo Li et al.

Despite their strong modeling capacity, Convolutional Neural Networks (CNNs) are often scale-sensitive. To enhance the robustness of CNNs to scale variance, existing solutions mostly pursue multi-scale feature fusion across different layers or filters, while the more granular kernel space is overlooked. We bridge this gap by exploiting multi-scale features at a finer granularity. The proposed convolution operation, named Poly-Scale Convolution (PSConv), mixes up a spectrum of dilation rates and tactfully allocates them to the individual convolutional kernels of each filter within a single convolutional layer. Specifically, dilation rates vary cyclically along the axes of input and output channels of the filters, aggregating features over a wide range of scales in a neat style. PSConv can serve as a drop-in replacement for the vanilla convolution in many prevailing CNN backbones, allowing better representation learning without introducing additional parameters or computational complexity. Comprehensive experiments on the ImageNet and MS COCO benchmarks validate the superior performance of PSConv. Code and models are available at https://github.com/d-li14/PSConv.
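The cyclic allocation of dilation rates described in the abstract can be sketched in plain NumPy. This is a minimal illustrative sketch, not the authors' implementation (see the linked repository for that): the function name `psconv2d` and the `(i + j) % len(rates)` assignment rule are assumptions chosen to show one way dilation rates could cycle along both the input- and output-channel axes of a single layer.

```python
import numpy as np

def psconv2d(x, weight, rates=(1, 2, 4)):
    """Illustrative sketch of a poly-scale convolution (not the official code).

    x:      input feature map, shape (C_in, H, W)
    weight: filters, shape (C_out, C_in, k, k) with odd k
    rates:  dilation rates cycled over the kernels

    Kernel (i, j) uses dilation rates[(i + j) % len(rates)] (assumed
    allocation rule), so rates vary cyclically along both the output-
    and input-channel axes, as the abstract describes.
    """
    C_out, C_in, k, _ = weight.shape
    _, H, W = x.shape
    out = np.zeros((C_out, H, W))
    for i in range(C_out):
        for j in range(C_in):
            d = rates[(i + j) % len(rates)]
            pad = d * (k // 2)                      # keep spatial size
            xp = np.pad(x[j], pad)
            for u in range(k):
                for v in range(k):
                    # Dilated tap: sample the padded input with stride d
                    out[i] += weight[i, j, u, v] * xp[
                        u * d : u * d + H, v * d : v * d + W]
    return out
```

Because every kernel keeps the same size and number of weights regardless of its dilation rate, mixing rates this way enlarges the set of receptive-field scales in one layer without adding parameters, which is the property the abstract highlights.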


Related research

09/16/2022  Omni-Dimensional Dynamic Convolution
Learning a single static convolutional kernel in each convolutional laye...

02/28/2019  Bi-Directional Cascade Network for Perceptual Edge Detection
Exploiting multi-scale representations is critical to improve edge detec...

04/20/2019  Data-Driven Neuron Allocation for Scale Aggregation Networks
Successful visual recognition networks benefit from aggregating informat...

04/05/2023  SMPConv: Self-moving Point Representations for Continuous Convolution
Continuous convolution has recently gained prominence due to its ability...

12/19/2019  Scale-wise Convolution for Image Restoration
While scale-invariant modeling has substantially boosted the performance...

10/29/2021  Gabor filter incorporated CNN for compression
Convolutional neural networks (CNNs) are remarkably successful in many c...

09/19/2022  Scale Attention for Learning Deep Face Representation: A Study Against Visual Scale Variation
Human face images usually appear with wide range of visual scales. The e...
