Convolutional Neural Networks Are Not Invariant to Translation, but They Can Learn to Be

10/12/2021
by   Valerio Biscione, et al.

When seeing a new object, humans can immediately recognize it across different retinal locations: the internal object representation is invariant to translation. It is commonly believed that Convolutional Neural Networks (CNNs) are architecturally invariant to translation thanks to the convolution and/or pooling operations they are endowed with. In fact, several studies have found that these networks systematically fail to recognize new objects at untrained locations. In this work, we test a wide variety of CNN architectures and show that, apart from DenseNet-121, none of the models tested was architecturally invariant to translation. Nevertheless, all of them could learn to be invariant to translation. We show that this invariance can be acquired by pretraining on ImageNet, and sometimes even on much simpler datasets, provided all items are fully translated across the input canvas. At the same time, the invariance can be disrupted by further training, due to catastrophic forgetting/interference. These experiments show how pretraining a network on an environment with the right 'latent' characteristics (a more naturalistic environment) can lead it to learn deep perceptual rules that dramatically improve subsequent generalization.
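The core test described in the abstract, probing whether a network's output changes when the same object is placed at different, possibly untrained canvas positions, can be illustrated with a short script. The following is a minimal sketch, not the authors' code: it assumes PyTorch/torchvision, uses an ImageNet-pretrained DenseNet-121 as a stand-in for the architectures studied in the paper, uses a random patch as the "object", and omits ImageNet normalization for brevity.

```python
# Minimal sketch (not the authors' code): probe a pretrained CNN for
# translation invariance by pasting the same patch at different canvas
# positions and comparing the network's outputs.
import torch
import torch.nn.functional as F
from torchvision import models

# ImageNet-pretrained DenseNet-121 as a stand-in for the tested architectures.
# The weights="IMAGENET1K_V1" argument requires torchvision >= 0.13.
model = models.densenet121(weights="IMAGENET1K_V1").eval()

def paste(patch, canvas_size=224, top=0, left=0):
    """Place a small patch on an otherwise blank canvas at (top, left)."""
    canvas = torch.zeros(3, canvas_size, canvas_size)
    _, h, w = patch.shape
    canvas[:, top:top + h, left:left + w] = patch
    return canvas

# A random 64x64 "object" used purely to probe the representation.
# (ImageNet mean/std normalization is omitted here for brevity.)
patch = torch.rand(3, 64, 64)

# The same patch pasted at several positions across the canvas.
positions = [(0, 0), (0, 160), (80, 80), (160, 0), (160, 160)]
with torch.no_grad():
    logits = torch.stack([model(paste(patch, top=t, left=l).unsqueeze(0))[0]
                          for t, l in positions])

# A network that is truly invariant to translation would produce
# (nearly) identical class distributions at every position.
probs = F.softmax(logits, dim=-1)
for i, (t, l) in enumerate(positions):
    drift = (probs[i] - probs[0]).abs().sum().item()
    print(f"position ({t:3d},{l:3d})  L1 drift from position (0,0): {drift:.4f}")
```

A drift close to zero at every position would indicate translation-invariant behavior; large drifts indicate that the output depends on where the object appears on the canvas.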


Related research:

04/13/2021 - Tracking translation invariance in CNNs
Although Convolutional Neural Networks (CNNs) are widely used, their tra...

03/18/2021 - Stride and Translation Invariance in CNNs
Convolutional Neural Networks have become the standard for image classif...

07/09/2021 - SITHCon: A neural network robust to variations in input scaling on the time dimension
In machine learning, convolutional neural networks (CNNs) have been extr...

11/25/2019 - Translation Insensitive CNNs
We address the problem that state-of-the-art Convolution Neural Networks...

11/28/2019 - Patch Reordering: a Novel Way to Achieve Rotation and Translation Invariance in Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art ...

11/25/2020 - Deep Convolutional Neural Networks: A survey of the foundations, selected improvements, and some current applications
Within the world of machine learning there exists a wide range of differ...

09/15/2020 - 3D_DEN: Open-ended 3D Object Recognition using Dynamically Expandable Networks
Service robots, in general, have to work independently and adapt to the ...
