On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks

09/19/2022
by Hubert Leterme, et al.

In this paper, we aim to improve the mathematical interpretability of convolutional neural networks for image classification. When trained on natural image datasets, such networks tend to learn first-layer parameters that closely resemble oriented Gabor filters. By leveraging the properties of discrete Gabor-like convolutions, we prove that, under specific conditions, feature maps computed by the subsequent max pooling operator tend to approximate the modulus of complex Gabor-like coefficients and, as such, are stable with respect to certain input shifts. We then compute a probabilistic measure of shift invariance for these layers. More precisely, we show that some filters, depending on their frequency and orientation, are more likely than others to produce stable image representations. We experimentally validate our theory by considering a deterministic feature extractor based on the dual-tree wavelet packet transform, a particular case of discrete Gabor-like decomposition. We demonstrate a strong correlation between shift invariance on the one hand and similarity to the complex modulus on the other hand.
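
To make the central claim concrete, here is a minimal numerical sketch, not taken from the paper: it convolves a test image with the real part of a complex Gabor filter, applies max pooling, and compares the result with the subsampled modulus of the complex Gabor coefficients. The filter parameters, pooling size, and random test image are illustrative assumptions; per the paper, how close the two representations get depends on the filter's frequency and orientation.

```python
# Minimal numerical sketch (not code from the paper): compare max pooling of a
# real Gabor feature map with the subsampled modulus of the corresponding
# complex Gabor coefficients. All parameter values below are illustrative.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, freq=0.25, theta=np.pi / 4, sigma=3.0):
    """Complex Gabor filter: Gaussian envelope times a complex plane wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    wave = np.exp(1j * 2.0 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * wave

def max_pool(x, k=2):
    """Non-overlapping k-by-k max pooling."""
    h, w = (x.shape[0] // k) * k, (x.shape[1] // k) * k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))  # stand-in for a natural image patch

g = gabor_kernel()
real_map = convolve2d(image, g.real, mode="valid")  # real Gabor-like feature map
cplx_map = convolve2d(image, g, mode="valid")       # complex Gabor-like coefficients

pooled = max_pool(real_map, k=2)      # max pooling output (downsampled by 2)
modulus = np.abs(cplx_map)[::2, ::2]  # complex modulus, subsampled by 2

# Simplified version of the comparison studied in the paper: when the two
# representations are close, the max pooling output inherits the (near)
# shift invariance of the complex modulus.
corr = np.corrcoef(pooled.ravel(), modulus.ravel())[0, 1]
print(f"correlation between max pooling output and complex modulus: {corr:.3f}")
```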

Related research

12/01/2022 · From CNNs to Shift-Invariant Twin Wavelet Models
We propose a novel antialiasing method to increase shift invariance in c...

07/01/2021 · Improving Sound Event Classification by Increasing Shift Invariance in Convolutional Neural Networks
Recent studies have put into question the commonly assumed shift invaria...

04/20/2020 · Improving correlation method with convolutional neural networks
We present a convolutional neural network for the classification of corr...

04/24/2020 · Understanding when spatial transformer networks do not support invariance, and what to do about it
Spatial transformer networks (STNs) were designed to enable convolutiona...

01/27/2020 · Depthwise-STFT based separable Convolutional Neural Networks
In this paper, we propose a new convolutional layer called Depthwise-STF...

09/24/2021 · Frequency Pooling: Shift-Equivalent and Anti-Aliasing Downsampling
Convolution utilizes a shift-equivalent prior of images, thus leading to...

08/23/2021 · Convolutional Filtering and Neural Networks with Non Commutative Algebras
In this paper we provide stability results for algebraic neural networks...
