From Discrete to Continuous Convolution Layers

06/19/2020
by Assaf Shocher, et al.

A basic operation in Convolutional Neural Networks (CNNs) is spatial resizing of feature maps, done either by strided convolution (downscaling) or transposed convolution (upscaling). Such operations are limited to a fixed filter moving at predetermined integer steps (strides). Spatial sizes of consecutive layers are related by integer scale factors, predetermined at architectural design, and remain fixed throughout training and inference time. We propose a generalization of the common Conv-layer, from a discrete layer to a Continuous Convolution (CC) layer. CC layers naturally extend Conv-layers by representing the filter as a learned continuous function over sub-pixel coordinates. This allows learnable and principled resizing of feature maps to any size, dynamically and consistently across scales. Once trained, a CC layer can output any scale/size chosen at inference time; the scale can be non-integer and can differ between the axes. CC opens new freedoms for architectural design, such as dynamic layer shapes at inference time, or gradual architectures in which the size changes by a small factor at each layer. This yields many desired CNN properties, new architectural design capabilities, and useful applications. We further show that current Conv-layers suffer from inherent misalignments, which are ameliorated by CC layers.
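To make the core idea concrete, below is a minimal PyTorch sketch of a continuous-convolution-style layer. It is not the authors' implementation: the class name `ContinuousConv2d`, the choice of a small MLP as the continuous filter function, the fixed k x k tap support, and the border-clamp padding are all illustrative assumptions. What it does capture is the mechanism the abstract describes: filter weights are queried at continuous sub-pixel offsets, so the output grid (and hence the scale factor, possibly non-integer and different per axis) becomes a runtime argument rather than an architectural constant.

```python
import torch
import torch.nn as nn


class ContinuousConv2d(nn.Module):
    """Sketch of a CC-style layer (illustrative, not the paper's code).

    Filter weights are produced by a small MLP that is a continuous
    function of sub-pixel (dy, dx) offsets, so the same learned filter
    can be sampled on any output grid.
    """

    def __init__(self, in_ch, out_ch, support=3, hidden=64):
        super().__init__()
        self.in_ch, self.out_ch, self.support = in_ch, out_ch, support
        # Hypothetical parameterization: MLP maps one offset (dy, dx)
        # to a full (out_ch x in_ch) weight matrix for that tap.
        self.filter_mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, out_ch * in_ch),
        )

    def forward(self, x, out_size):
        B, C, H, W = x.shape
        H2, W2 = out_size
        k, dev = self.support, x.device
        # Continuous centre of each output pixel, in input pixel coordinates.
        ys = (torch.arange(H2, device=dev) + 0.5) * (H / H2) - 0.5
        xs = (torch.arange(W2, device=dev) + 0.5) * (W / W2) - 0.5
        y0, x0 = ys.floor().long(), xs.floor().long()
        r = torch.arange(k, device=dev) - k // 2       # integer tap offsets
        yt = y0[:, None] + r[None, :]                  # (H2, k) tap rows
        xt = x0[:, None] + r[None, :]                  # (W2, k) tap cols
        # Gather a k x k neighbourhood per output pixel (clamp = border pad).
        taps = x[:, :, yt.clamp(0, H - 1)][:, :, :, :, xt.clamp(0, W - 1)]
        taps = taps.permute(0, 1, 2, 4, 3, 5)          # (B, C, H2, W2, k, k)
        # Sub-pixel offset of every tap from its continuous centre.
        dy = yt.float() - ys[:, None]                  # (H2, k)
        dx = xt.float() - xs[:, None]                  # (W2, k)
        offs = torch.stack(
            (dy[:, None, :, None].expand(H2, W2, k, k),
             dx[None, :, None, :].expand(H2, W2, k, k)), dim=-1)
        # Query the continuous filter at those offsets and contract.
        w = self.filter_mlp(offs).view(H2, W2, k, k, self.out_ch, self.in_ch)
        return torch.einsum('bchwij,hwijoc->bohw', taps, w)


cc = ContinuousConv2d(3, 8)
x = torch.randn(2, 3, 32, 32)
y = cc(x, out_size=(45, 57))   # non-integer scale, different per axis
print(y.shape)                 # torch.Size([2, 8, 45, 57])
```

Because the offsets `(dy, dx)` are computed from the continuous output grid, the same trained weights serve every output size consistently, which is what allows the scale to be chosen at inference time. The paper's actual filter parameterization and sampling scheme may differ from this MLP-based sketch.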


