Dynamic Sampling Convolutional Neural Networks

03/20/2018
by Jialin Wu, et al.

We present Dynamic Sampling Convolutional Neural Networks (DSCNN), in which position-specific kernels learn not only from the current position but also from multiple sampled neighbour regions. During sampling, residual learning is introduced to ease training, and an attention mechanism is applied to fuse the features from different samples. The kernels are further factorized to reduce the parameter count. This multiple-sampling strategy enlarges the effective receptive field significantly without requiring more parameters. DSCNNs inherit the advantages of Dynamic Filter Networks (DFN), namely avoiding feature-map blurring through position-specific kernels while keeping translation invariance, and they also efficiently alleviate the overfitting caused by having far more parameters than normal CNNs. Our model is efficient and can be trained end-to-end via standard back-propagation. We demonstrate the merits of DSCNNs on both sparse and dense prediction tasks, namely object detection and flow estimation. Our results show that DSCNNs enjoy stronger recognition ability, achieving 81.7% mAP in object detection, and produce sharper responses in flow estimation on the FlyingChairs dataset than multiple FlowNet baselines.
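To make the mechanism concrete, here is a minimal PyTorch sketch of one dynamic-sampling layer, written from the abstract alone: a kernel-generating branch predicts a position-specific kernel, the layer evaluates it at several shifted neighbour regions, an attention branch fuses the per-sample responses, and a residual connection is added. The offset set, layer shapes, and the single shared spatial kernel (a stand-in for the paper's kernel factorization) are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSamplingConv(nn.Module):
    """Sketch of one dynamic-sampling layer (shapes are assumptions)."""

    def __init__(self, channels, kernel_size=3,
                 offsets=((0, 0), (-2, 0), (2, 0), (0, -2), (0, 2))):
        super().__init__()
        self.k = kernel_size
        self.offsets = offsets
        # Kernel-generating branch: one k*k spatial kernel per position,
        # shared across channels -- a simple stand-in for the paper's
        # kernel factorization, keeping the parameter count low.
        self.kernel_gen = nn.Conv2d(channels, kernel_size * kernel_size,
                                    3, padding=1)
        # Attention branch: one fusion weight per sampled neighbour region.
        self.attn = nn.Conv2d(channels, len(offsets), 3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = self.kernel_gen(x)               # (B, k*k, H, W)
        weights = torch.softmax(self.attn(x), 1)   # (B, S, H, W)
        out = torch.zeros_like(x)
        for i, (dy, dx) in enumerate(self.offsets):
            # Sample a neighbour region by shifting the feature map.
            shifted = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
            # Expose each position's k*k neighbourhood ...
            patches = F.unfold(shifted, self.k, padding=self.k // 2)
            patches = patches.view(b, c, self.k * self.k, h, w)
            # ... and apply the position-specific kernel to it.
            resp = (patches * kernels.unsqueeze(1)).sum(2)  # (B, C, H, W)
            out = out + weights[:, i:i + 1] * resp
        return x + out   # residual connection eases training

x = torch.randn(2, 16, 32, 32)
print(DynamicSamplingConv(16)(x).shape)   # torch.Size([2, 16, 32, 32])
```

Because the attention weights are a softmax over the sampled regions, the effective receptive field grows with the offset span while the only extra parameters are the two small generating convolutions, which mirrors the abstract's parameter-efficiency claim.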
