
Position, Padding and Predictions: A Deeper Look at Position Information in CNNs

01/28/2021
by Md Amirul Islam, et al.

In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters of finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. In this paper, we first test this hypothesis and reveal that a surprising degree of absolute position information is encoded in commonly used CNNs. We show that zero padding drives CNNs to encode position information in their internal representations, while a lack of padding precludes position encoding. This gives rise to deeper questions about the role of position information in CNNs: (i) What boundary heuristics enable optimal position encoding for downstream tasks? (ii) Does position encoding affect the learning of semantic representations? (iii) Does position encoding always improve performance? To provide answers, we perform the largest case study to date on the role that padding and border heuristics play in CNNs. We design novel tasks which allow us to quantify boundary effects as a function of the distance to the border. Numerous semantic objectives reveal the effect of the border on semantic representations. Finally, we demonstrate the implications of these findings on multiple real-world tasks, showing that position information can either help or hurt performance.
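The padding claim lends itself to a quick sanity check. The sketch below is hypothetical (not the authors' code) and assumes PyTorch: it feeds a constant image through a small convolutional stack and measures the spatial variation of the resulting feature maps. Since a constant input carries no content-based position cue, any spatial variation in the output can only come from the border treatment.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def conv_stack(padding):
    # Illustrative two-layer 3x3 conv stack; `padding` controls the border treatment.
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=padding),
        nn.ReLU(),
        nn.Conv2d(8, 8, kernel_size=3, padding=padding),
        nn.ReLU(),
    )

# Constant input: the image content itself carries no position signal.
x = torch.ones(1, 1, 32, 32)

with torch.no_grad():
    zero_pad = conv_stack(padding=1)(x)  # zero padding ("same"-style)
    no_pad = conv_stack(padding=0)(x)    # no padding ("valid")

# Per-channel spatial standard deviation, averaged over channels.
# Zero means the activation is identical at every spatial location.
print("zero padding, spatial std:", zero_pad.std(dim=(2, 3)).mean().item())
print("no padding,   spatial std:", no_pad.std(dim=(2, 3)).mean().item())
```

With no padding, every output position sees an identical input patch, so the spatial standard deviation is (numerically) zero; with zero padding, border positions see injected zeros and the feature maps vary spatially, consistent with the abstract's claim that padding, not the filters themselves, supplies the position signal.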

