Simplicity Bias in 1-Hidden Layer Neural Networks

02/01/2023
by Depen Morwani, et al.

Recent works have demonstrated that neural networks exhibit extreme simplicity bias (SB): they learn only the simplest features to solve a task at hand, even in the presence of other, more robust but more complex features. Due to the lack of a general and rigorous definition of features, these works showcase SB on semi-synthetic datasets such as Color-MNIST and MNIST-CIFAR, where defining features is relatively easy. In this work, we rigorously define and thoroughly establish SB for one-hidden-layer neural networks. More concretely, (i) we define SB as the network essentially being a function of a low-dimensional projection of the inputs; (ii) theoretically, we show that when the data is linearly separable, the network primarily depends only on the linearly separable (1-dimensional) subspace, even in the presence of an arbitrarily large number of other, more complex features that could have led to a significantly more robust classifier; (iii) empirically, we show that models trained on real datasets such as Imagenette and Waterbirds-Landbirds indeed depend on a low-dimensional projection of the inputs, thereby demonstrating SB on these datasets; (iv) finally, we present a natural ensemble approach that encourages diversity in models by training successive models on features not used by earlier models, and demonstrate that it yields models that are significantly more robust to Gaussian noise.
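Below is a minimal sketch of what such a sequential ensemble could look like in code. It is not the authors' exact procedure: estimating the "used" subspace from the top singular directions of the first-layer weights, the projection step, and all names (train_one_hidden_layer, used_subspace, project_out, sequential_ensemble) are illustrative assumptions made for this example.

```python
# Sketch (assumed, not the paper's exact method): train a 1-hidden-layer
# network, estimate the low-dimensional input subspace it relies on,
# project that subspace out of the inputs, then train the next model
# on the projected data so it is forced to use different features.
import numpy as np
import torch
import torch.nn as nn


def train_one_hidden_layer(X, y, width=128, epochs=200, lr=1e-2):
    """Train a 1-hidden-layer ReLU network with logistic loss."""
    d = X.shape[1]
    model = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    X_t = torch.tensor(X, dtype=torch.float32)
    y_t = torch.tensor(y, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_t).squeeze(-1), y_t)
        loss.backward()
        opt.step()
    return model


def used_subspace(model, k=1):
    """Estimate a k-dimensional input subspace the network depends on,
    here (an assumption) via the top right-singular vectors of the
    first-layer weight matrix."""
    W = model[0].weight.detach().numpy()            # shape (width, d)
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[:k]                                    # (k, d), orthonormal rows


def project_out(X, V):
    """Remove the span of the rows of V from every input."""
    return X - X @ V.T @ V


def sequential_ensemble(X, y, n_models=3, k=1):
    """Train successive models, each on inputs with the directions used
    by earlier models projected out, to encourage feature diversity."""
    models, X_cur = [], X.copy()
    for _ in range(n_models):
        model = train_one_hidden_layer(X_cur, y)
        models.append(model)
        X_cur = project_out(X_cur, used_subspace(model, k=k))
    return models
```

The ensemble's prediction would then be obtained by averaging the logits of the trained models; under the abstract's claim, later models cannot fall back on the simple feature and so the combined predictor degrades more gracefully under Gaussian noise.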

Related research

05/24/2022  Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable
Recently, neural networks have been shown to perform exceptionally well ...

05/16/2023  A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks
Despite the capacity of neural nets to learn arbitrary functions, models...

07/08/2018  Separability is not the best goal for machine learning
Neural networks use their hidden layers to transform input data into lin...

07/25/2020  Economical ensembles with hypernetworks
Averaging the predictions of many independently trained neural networks ...

10/04/2022  Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks
Deep Neural Networks are known to be brittle to even minor distribution ...

06/09/2022  Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks
Adversarial examples, which are usually generated for specific inputs wi...