Mind the Pad – CNNs can Develop Blind Spots

10/05/2020
by Bilal Alsallakh, et al.

We show how feature maps in convolutional networks are susceptible to spatial bias. Due to a combination of architectural choices, the activation at certain locations is systematically elevated or weakened. The major source of this bias is the padding mechanism. Depending on several aspects of convolution arithmetic, this mechanism can apply the padding unevenly, leading to asymmetries in the learned weights. We demonstrate how such bias can be detrimental to tasks such as small object detection: the activation is suppressed if the stimulus lies in the impacted area, leading to blind spots and misdetections. We propose solutions to mitigate spatial bias and demonstrate how they can improve model accuracy.
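The abstract states that convolution arithmetic can cause padding to be applied unevenly. As a minimal illustrative sketch (not the authors' code), the snippet below computes TensorFlow-style "SAME" padding for one spatial dimension of a strided convolution; the input size, kernel size, and stride values are hypothetical, chosen only to show when the zero padding ends up one pixel heavier on one border than the other.

```python
import math

def same_padding_1d(in_size: int, kernel: int, stride: int):
    """TensorFlow-style SAME padding: return (pad_before, pad_after)
    for one spatial dimension of a strided convolution."""
    out_size = math.ceil(in_size / stride)
    total_pad = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_before = total_pad // 2           # floor: smaller share on the left/top
    pad_after = total_pad - pad_before    # remainder goes to the right/bottom
    return pad_before, pad_after

# Hypothetical settings: 3x3 kernel, stride 2.
print(same_padding_1d(in_size=4, kernel=3, stride=2))   # (0, 1) -> uneven padding
print(same_padding_1d(in_size=5, kernel=3, stride=2))   # (1, 1) -> even padding
```

With the assumed even input size, one border receives an extra column of zeros while the opposite border receives none; this is the kind of asymmetry the paper links to skewed learned weights and spatially biased feature maps.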
