Neural Anisotropy Directions

06/17/2020 · by Guillermo Ortiz-Jiménez, et al.

In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers. To that end, we start from a very simple problem — classifying a family of linearly separable distributions — and show that, depending on the direction of the discriminative feature of the distribution, many state-of-the-art deep convolutional neural networks (CNNs) have a surprisingly hard time solving this simple task. We then define the neural anisotropy directions (NADs) as the vectors that encapsulate the directional inductive bias of an architecture. These vectors are specific to each architecture and hence act as a signature: they encode the preference of a network to separate the input data based on certain features. We provide an efficient method to identify the NADs of several CNN architectures and thus reveal their directional inductive biases. Furthermore, we show that, on the CIFAR-10 dataset, NADs characterize the features used by CNNs to discriminate between classes.
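The linearly separable task described above can be sketched concretely. The following is a minimal illustration (not the authors' code): samples carry their label only along a chosen unit direction `v`, while all noise lies in the orthogonal complement, so a linear probe along `v` separates the classes perfectly — yet, per the paper's findings, a CNN's ability to learn this task depends on how `v` aligns with the architecture's NADs. The names `eps`, `dim`, and `n` below are illustrative choices, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n, eps = 32, 1000, 0.5  # input dimension, sample count, class margin (illustrative)

# Choose a discriminative direction v (unit norm); the class label is
# carried only by the component of each sample along v.
v = rng.standard_normal(dim)
v /= np.linalg.norm(v)

y = rng.choice([-1.0, 1.0], size=n)          # binary labels
noise = rng.standard_normal((n, dim))
noise -= np.outer(noise @ v, v)              # project noise onto the orthogonal complement of v
X = eps * y[:, None] * v + noise             # samples: signal along v plus orthogonal noise

# A linear probe along v classifies the data perfectly by construction,
# since X @ v == eps * y exactly (the noise has no component along v).
acc = np.mean(np.sign(X @ v) == y)
```

Sweeping `v` over an architecture's NADs and retraining a network for each choice is, roughly, how one would probe the directional inductive bias the abstract describes.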


Related research

- Introducing topography in convolutional neural networks (10/28/2022) — Parts of the brain that carry sensory tasks are organized topographicall...
- Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training (12/07/2021) — Recently, vision Transformers (ViTs) are developing rapidly and starting...
- Maximum Class Separation as Inductive Bias in One Matrix (06/17/2022) — Maximizing the separation between classes constitutes a well-known induc...
- OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses (04/05/2022) — Dataset bias and spurious correlations can significantly impair generali...
- Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones? (04/27/2021) — Pruning methods can considerably reduce the size of artificial neural ne...
- Hold me tight! Influence of discriminative features on deep network boundaries (02/15/2020) — Important insights towards the explainability of neural networks and the...
- Theoretical Analysis of Inductive Biases in Deep Convolutional Networks (05/15/2023) — In this paper, we study the inductive biases in convolutional neural net...
