The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks

by Rahim Entezari, et al.

In this paper, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier along the linear interpolation between them. Although this is a bold conjecture, we show how extensive empirical attempts fall short of refuting it. We further provide a preliminary theoretical result to support our conjecture. Our conjecture has implications for the lottery ticket hypothesis, distributed training, and ensemble methods.
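The "barrier" in the abstract refers to the excess loss encountered along the straight line between two trained parameter vectors, compared to the linear interpolation of the endpoint losses. As a minimal sketch (the function names, the toy loss, and the number of interpolation points are illustrative assumptions, not the paper's exact protocol), the barrier can be estimated like this:

```python
import numpy as np

def interpolation_barrier(theta_a, theta_b, loss_fn, num_points=25):
    """Estimate the loss barrier along the linear path between two solutions.

    The barrier is the maximum of loss((1-a)*theta_a + a*theta_b) minus the
    linear interpolation of the endpoint losses, a common definition in the
    linear mode connectivity literature.
    """
    alphas = np.linspace(0.0, 1.0, num_points)
    losses = np.array([loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas])
    endpoint_line = (1 - alphas) * losses[0] + alphas * losses[-1]
    return float(np.max(losses - endpoint_line))

# Toy example: a loss with two symmetric minima, analogous to two SGD
# solutions related by a network symmetry. Interpolating the raw parameters
# crosses a barrier; aligning the solutions first (here, matching signs,
# standing in for matching a unit permutation) removes it.
loss = lambda t: (np.dot(t, t) - 1.0) ** 2   # minima on the unit circle
theta_a = np.array([1.0, 0.0])
theta_b = np.array([-1.0, 0.0])              # a symmetric copy of theta_a

barrier_raw = interpolation_barrier(theta_a, theta_b, loss)       # nonzero
barrier_aligned = interpolation_barrier(theta_a, -theta_b, loss)  # zero
```

In the paper's setting the alignment step is a permutation of hidden units rather than a sign flip, but the measurement of the barrier along the linear path is the same.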
