All Neural Networks are Created Equal

05/26/2019
by Guy Hacohen, et al.

One of the unresolved questions in the context of deep learning is the success of GD-based optimization, which is guaranteed to converge only to one of many local minima. To shed light on the nature of the solutions that are thus being discovered, we investigate the ensemble of solutions reached by the same network architecture, with different random initializations of weights and different random mini-batch orderings. Surprisingly, we observe that these solutions are in fact very similar - more often than not, each train and test example is either classified correctly by all the networks, or by none at all. Moreover, all the networks seem to share the same learning dynamics, whereby initially the same train and test examples are incorporated into the learnt model, followed by other examples which are learnt in roughly the same order. When different neural network architectures are compared, the same learning dynamics are observed even when one architecture is significantly stronger than the other and achieves higher accuracy. Finally, when investigating other methods that involve the gradual refinement of a solution, such as boosting, once again we see the same learning pattern. In all cases, it appears as if all the classifiers start by learning to classify correctly the same train and test examples, while the more powerful classifiers continue to learn to classify correctly additional examples. These results are remarkably robust, observed for a large variety of architectures, hyperparameters and different datasets of images. Thus we observe that different classification solutions may be discovered by different means, but typically they evolve in roughly the same manner and demonstrate a similar success and failure behavior. For a given dataset, such behavior seems to be strongly correlated with effective generalization, while the induced ranking of examples may reflect inherent structure in the data.
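The per-example agreement measurement described above can be sketched as follows. This is not the authors' code; it is a minimal illustration, with a fabricated correctness matrix standing in for the records one would collect by training an ensemble of networks from different random seeds and evaluating each on the same examples.

```python
import numpy as np

# Hypothetical per-example correctness records for an ensemble of
# networks trained from different random initializations: entry
# correct[m, i] is True iff model m classifies example i correctly.
# The matrix below is fabricated for illustration only, shaped to
# mimic the paper's observation: most examples are classified
# correctly by (almost) all models or by none, with a small
# ambiguous middle.
rng = np.random.default_rng(0)
n_models, n_examples = 10, 1000

easy = rng.random(n_examples) < 0.7            # learnt by every model
hard = ~easy & (rng.random(n_examples) < 0.8)  # learnt by no model
ambiguous = ~easy & ~hard                      # model-dependent outcome

correct = np.zeros((n_models, n_examples), dtype=bool)
correct[:, easy] = True
correct[:, ambiguous] = rng.random((n_models, int(ambiguous.sum()))) < 0.5

# Per-example agreement: the fraction of models classifying it correctly.
# A strongly bimodal distribution of this quantity (mass near 0 and 1)
# is the signature reported in the abstract.
agreement = correct.mean(axis=0)

# Fraction of examples with a unanimous outcome across the ensemble.
extreme = float(np.mean((agreement == 0.0) | (agreement == 1.0)))
print(f"fraction of examples with unanimous outcome: {extreme:.2f}")
```

Sorting examples by `agreement` also yields a ranking from "learnt by all" to "learnt by none", which is the induced ordering the abstract relates to inherent structure in the data.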


