Position, Padding and Predictions: A Deeper Look at Position Information in CNNs

by Md Amirul Islam, et al.

In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters of finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. In this paper, we first test this hypothesis and reveal that a surprising degree of absolute position information is encoded in commonly used CNNs. We show that zero padding drives CNNs to encode position information in their internal representations, while a lack of padding precludes position encoding. This gives rise to deeper questions about the role of position information in CNNs: (i) What boundary heuristics enable optimal position encoding for downstream tasks? (ii) Does position encoding affect the learning of semantic representations? (iii) Does position encoding always improve performance? To provide answers, we perform the largest case study to date on the role that padding and border heuristics play in CNNs. We design novel tasks which allow us to quantify boundary effects as a function of the distance to the border. Numerous semantic objectives reveal the effect of the border on semantic representations. Finally, we demonstrate the implications of these findings on multiple real-world tasks to show that position information can either help or hurt performance.
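The padding effect described in the abstract can be illustrated with a minimal sketch (not the paper's experimental setup): on a spatially constant input, a convolution with zero padding produces different activations near the border than in the interior, giving the network a position cue, whereas an unpadded ("valid") convolution yields a uniform response that carries no such cue. The helper `conv2d` below is a hypothetical illustrative implementation.

```python
import numpy as np

def conv2d(img, kernel, pad):
    """Naive 2D cross-correlation; pad=True applies 1-pixel zero padding."""
    if pad:
        img = np.pad(img, 1)  # zero padding, as studied in the paper
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.ones((8, 8))     # constant input: no content-based position cue
k = np.ones((3, 3))

same = conv2d(img, k, pad=True)    # zero padding ("same"-style output size)
valid = conv2d(img, k, pad=False)  # no padding

# With zero padding, a corner window overlaps only 4 ones (output 4.0)
# while an interior window overlaps 9 (output 9.0): the activation value
# itself reveals distance to the border. Without padding, every window
# sees 9 ones, so the response is uniform and position-free.
print(same[0, 0], same[3, 3], valid.std())
```

This mirrors, in miniature, why the authors find that padding drives position encoding while its absence precludes it: the border artifact introduced by the zeros is a learnable, spatially anchored signal.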







Related papers:

- How Much Position Information Do Convolutional Neural Networks Encode?
- How Can CNNs Use Image Position for Segmentation?
- Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs
- Understanding Deep Image Representations by Inverting Them
- On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location
- Learning on the Edge: Explicit Boundary Handling in CNNs
- Rescaling CNN through Learnable Repetition of Network Parameters