Is Robustness To Transformations Driven by Invariant Neural Representations?

06/30/2020
by Syed Suleman Abbas Zaidi, et al.

Deep Convolutional Neural Networks (DCNNs) have demonstrated impressive robustness when recognizing objects under transformations (e.g. blur or noise), provided these transformations are included in the training set. A common hypothesis to explain such robustness is that DCNNs develop invariant neural representations that remain unaltered when the image is transformed. Yet, to what extent this hypothesis holds true is an open question, since including transformations in the training set could lead to properties other than invariance; for example, parts of the network could become specialized to recognize either transformed or non-transformed images. In this paper, we analyze the conditions under which invariance emerges. To do so, we leverage the fact that invariant representations facilitate robustness to transformations for object categories that are not seen transformed during training. Our results with state-of-the-art DCNNs indicate that invariant representations strengthen as the number of transformed categories in the training set is increased. This effect is much more prominent for local transformations, such as blurring and high-pass filtering, than for geometric transformations, such as rotation and thinning, which entail changes in the spatial arrangement of the object. Our results contribute to a better understanding of invariant representations in deep learning and of the conditions under which invariance spontaneously emerges.
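To make the protocol concrete, below is a minimal sketch (not the authors' code) of the two ingredients described above, assuming a PyTorch/torchvision setup: a transformation (here Gaussian blur) applied only to a chosen subset of training categories, and an invariance probe that compares a network's representations of clean and transformed images. The ResNet-18 backbone, the cosine-similarity score, and the random stand-in images are illustrative choices, not the paper's exact setup.

```python
# Sketch of the category-split protocol and an invariance probe.
# Assumptions: PyTorch + torchvision; Gaussian blur as the example transformation.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

blur = transforms.GaussianBlur(kernel_size=9, sigma=(2.0, 2.0))

class ClassConditionalBlur:
    """Training-side piece: apply the transformation only to samples whose
    label belongs to the set of categories seen transformed during training."""
    def __init__(self, transformed_classes):
        self.transformed_classes = set(transformed_classes)

    def __call__(self, image, label):
        return blur(image) if label in self.transformed_classes else image

def invariance_score(feature_extractor, clean_batch, transformed_batch):
    """Probe-side piece: cosine similarity between representations of clean and
    transformed images. A score near 1 indicates a representation that is
    (approximately) unaltered by the transformation."""
    with torch.no_grad():
        f_clean = feature_extractor(clean_batch).flatten(1)
        f_trans = feature_extractor(transformed_batch).flatten(1)
    return F.cosine_similarity(f_clean, f_trans, dim=1).mean().item()

if __name__ == "__main__":
    # Hypothetical usage: probe a pretrained ResNet-18 on random stand-in images
    # playing the role of a category never seen transformed during training.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()   # keep penultimate-layer features
    backbone.eval()

    clean = torch.rand(8, 3, 224, 224)
    blurred = torch.stack([blur(x) for x in clean])
    print(f"invariance score (blur): {invariance_score(backbone, clean, blurred):.3f}")
```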


Related research:

07/23/2015
Manitest: Are classifiers really invariant?
Invariance to geometric transformations is a highly desirable property o...

04/07/2023
Domain Generalization In Robust Invariant Representation
Unsupervised approaches for learning representations invariant to common...

04/12/2020
Feature Lenses: Plug-and-play Neural Modules for Transformation-Invariant Visual Representations
Convolutional Neural Networks (CNNs) are known to be brittle under vario...

04/23/2022
Transformation Invariant Cancerous Tissue Classification Using Spatially Transformed DenseNet
In this work, we introduce a spatially transformed DenseNet architecture...

11/18/2015
Unitary-Group Invariant Kernels and Features from Transformed Unlabeled Data
The study of representations invariant to common transformations of the ...

06/19/2020
Deep Transformation-Invariant Clustering
Recent advances in image clustering typically focus on learning better d...
