Random initialisations performing above chance and how to find them

09/15/2022
by Frederik Benzing et al.

Neural networks trained with stochastic gradient descent (SGD) starting from different random initialisations typically find functionally very similar solutions, raising the question of whether there are meaningful differences between different SGD solutions. Entezari et al. recently conjectured that, despite different initialisations, the solutions found by SGD lie in the same loss valley once the permutation invariance of neural networks is taken into account. Concretely, they hypothesise that any two solutions found by SGD can be permuted such that the linear interpolation between their parameters forms a path without significant increases in loss. Here, we use a simple but powerful algorithm to find such permutations, which allows us to obtain direct empirical evidence that the hypothesis holds for fully connected networks. Strikingly, we find that two networks already live in the same loss valley at the time of initialisation: averaging their random but suitably permuted initialisations performs significantly above chance. In contrast, for convolutional architectures, our evidence suggests that the hypothesis does not hold. Especially in the large-learning-rate regime, SGD seems to discover diverse modes.
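
The abstract names the ingredients (a permutation-finding algorithm plus linear interpolation of parameters) but not the procedure itself, so the following is only a hedged sketch of one common way to align two fully connected networks: hidden units of network B are matched to network A layer by layer with the Hungarian algorithm (scipy.optimize.linear_sum_assignment), using weight similarity as the matching criterion, after which the parameters can be averaged or interpolated. The helper names and the weight-based cost are illustrative assumptions, not the authors' exact method.

```python
# Sketch: greedy layer-wise permutation alignment of two MLPs, then
# linear interpolation of their parameters. NOT the paper's algorithm,
# just a generic weight-matching baseline for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_units(params_a, params_b):
    """Permute the hidden units of network B to align with network A.

    params_* : list of (W, b) tuples per layer, W of shape (fan_out, fan_in).
    Returns a permuted copy of params_b that computes the same function.
    """
    permuted = [(W.copy(), b.copy()) for W, b in params_b]
    prev_perm = np.arange(permuted[0][0].shape[1])   # identity on the inputs

    for l in range(len(permuted) - 1):               # output layer stays fixed
        W_a, b_a = params_a[l]
        W_b, b_b = permuted[l]
        # Re-order incoming weights according to the previous layer's permutation.
        W_b_in = W_b[:, prev_perm]
        # Negative similarity between unit i of A and unit j of B (weights + bias).
        cost = -(W_a @ W_b_in.T + np.outer(b_a, b_b))
        _, perm = linear_sum_assignment(cost)        # Hungarian algorithm
        # Permute this layer's outputs and remember the permutation for the next layer.
        permuted[l] = (W_b_in[perm], b_b[perm])
        prev_perm = perm

    # The last layer only needs its inputs re-ordered.
    W_last, b_last = permuted[-1]
    permuted[-1] = (W_last[:, prev_perm], b_last)
    return permuted

def interpolate(params_a, params_b, alpha=0.5):
    """Convex combination of two parameter sets (alpha=0.5 is the average)."""
    return [((1 - alpha) * Wa + alpha * Wb, (1 - alpha) * ba + alpha * bb)
            for (Wa, ba), (Wb, bb) in zip(params_a, params_b)]
```

With network A's parameters and the aligned copy of B, one would then evaluate the loss at several interpolation points alpha in [0, 1]; the hypothesis predicts no significant barrier along this path, and the paper's "above chance at initialisation" observation corresponds to evaluating the alpha = 0.5 average of two suitably permuted random initialisations.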

Related research

- 02/11/2018 · SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Stochastic gradient descent (SGD) is the optimization algorithm of choic...
- 06/05/2018 · Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate
  Stochastic Gradient Descent (SGD) is a central tool in machine learning....
- 11/15/2022 · REPAIR: REnormalizing Permuted Activations for Interpolation Repair
  In this paper we look into the conjecture of Entezari et al. (2021) which...
- 06/07/2023 · Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning
  In this paper, we first present an explanation regarding the common occu...
- 10/12/2021 · The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
  In this paper, we conjecture that if the permutation invariance of neura...
- 05/28/2019 · SGD on Neural Networks Learns Functions of Increasing Complexity
  We perform an experimental study of the dynamics of Stochastic Gradient ...
- 11/08/2022 · Black Box Lie Group Preconditioners for SGD
  A matrix free and a low rank approximation preconditioner are proposed t...
