Position, Padding and Predictions: A Deeper Look at Position Information in CNNs

by Md Amirul Islam, et al.

In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters of finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. In this paper, we first test this hypothesis and reveal that a surprising degree of absolute position information is encoded in commonly used CNNs. We show that zero padding drives CNNs to encode position information in their internal representations, while a lack of padding precludes position encoding. This gives rise to deeper questions about the role of position information in CNNs: (i) What boundary heuristics enable optimal position encoding for downstream tasks? (ii) Does position encoding affect the learning of semantic representations? (iii) Does position encoding always improve performance? To provide answers, we perform the largest case study to date on the role that padding and border heuristics play in CNNs. We design novel tasks which allow us to quantify boundary effects as a function of the distance to the border. Numerous semantic objectives reveal the effect of the border on semantic representations. Finally, we demonstrate the implications of these findings on multiple real-world tasks to show that position information can either help or hurt performance.
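The padding mechanism described above can be illustrated with a minimal NumPy sketch (a hypothetical demonstration, not code from the paper): a convolution over a constant image produces identical responses everywhere when no padding is used, but with zero padding the responses near the border differ from those in the interior, giving later layers a signal from which absolute position can be recovered.

```python
import numpy as np

def conv2d(img, kernel, pad):
    """Naive 2-D cross-correlation with optional zero padding."""
    if pad:
        img = np.pad(img, pad)
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A constant image carries no content-based position cue.
img = np.ones((8, 8))
kernel = np.ones((3, 3))

padded = conv2d(img, kernel, pad=1)    # 'same'-style conv with zero padding
unpadded = conv2d(img, kernel, pad=0)  # 'valid' conv, no padding

# Without padding, every response is identical: position is unrecoverable.
assert np.all(unpadded == unpadded[0, 0])

# With zero padding, border responses differ from interior ones, so a
# downstream layer can infer absolute position from the values alone.
print(padded[0, 0], padded[4, 4])  # corner: 4.0, interior: 9.0
```

This mirrors the paper's claim at toy scale: the zeros injected at the boundary are the only asymmetry in an otherwise translation-equivariant operation, and they are what makes absolute position linearly readable from the feature map.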

How Much Position Information Do Convolutional Neural Networks Encode?

In contrast to fully connected networks, Convolutional Neural Networks (...

How Can CNNs Use Image Position for Segmentation?

Convolution is an equivariant operation, and image position does not aff...

Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs

In this paper, we challenge the common assumption that collapsing the sp...

The Curious Case of Absolute Position Embeddings

Transformer language models encode the notion of word order using positi...

Understanding Deep Image Representations by Inverting Them

Image representations, from SIFT and Bag of Visual Words to Convolutiona...

On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location

In this paper we challenge the common assumption that convolutional laye...

Relative Position Prediction as Pre-training for Text Encoders

Meaning is defined by the company it keeps. However, company is two-fold...
