Git Re-Basin: Merging Models modulo Permutation Symmetries

09/11/2022
by Samuel K. Ainsworth et al.

The success of deep learning is due to our ability to solve certain massive non-convex optimization problems with relative ease. Despite non-convex optimization being NP-hard, simple algorithms, often variants of stochastic gradient descent, exhibit surprising effectiveness in fitting large neural networks in practice. We argue that neural network loss landscapes contain (nearly) a single basin after accounting for all possible permutation symmetries of hidden units. We introduce three algorithms to permute the units of one model to bring them into alignment with the units of a reference model. This transformation produces a functionally equivalent set of weights that lie in an approximately convex basin near the reference model. Experimentally, we demonstrate the single-basin phenomenon across a variety of model architectures and datasets, including the first (to our knowledge) demonstration of zero-barrier linear mode connectivity between independently trained ResNet models on CIFAR-10 and CIFAR-100. Additionally, we identify intriguing phenomena relating model width and training time to mode connectivity across a variety of models and datasets. Finally, we discuss shortcomings of the single-basin theory, including a counterexample to the linear mode connectivity hypothesis.
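The permutation-alignment step the abstract describes can be illustrated with a short sketch. The code below is a minimal, illustrative reading of the weight-matching idea for a one-hidden-layer MLP, not the authors' implementation; all function and variable names are hypothetical. For a single hidden layer, matching each hidden unit of model B to a unit of reference model A reduces to a linear assignment problem over a unit-similarity matrix, solved here with scipy.

import numpy as np
from scipy.optimize import linear_sum_assignment

def align_hidden_units(W1_a, b1_a, W2_a, W1_b, b1_b, W2_b):
    """Permute model B's hidden units to best match reference model A.

    Shapes: W1 is (hidden, in), b1 is (hidden,), W2 is (out, hidden).
    Returns weights for B that compute the identical function.
    """
    # Similarity of A's unit i to B's unit j, summed over the incoming
    # and outgoing weights that the hidden-unit permutation acts on.
    sim = W1_a @ W1_b.T + np.outer(b1_a, b1_b) + W2_a.T @ W2_b
    # Choose the one-to-one matching that maximizes total similarity.
    _, perm = linear_sum_assignment(sim, maximize=True)
    # Re-indexing W1's rows, b1, and W2's columns leaves B's outputs unchanged.
    return W1_b[perm], b1_b[perm], W2_b[:, perm]

With B re-based this way, the zero-barrier claim can be probed by evaluating the loss along the straight line (1 - lambda) * theta_A + lambda * theta_B_aligned for lambda in [0, 1]; a curve that stays near the endpoint losses indicates linear mode connectivity with (near) zero barrier. The paper's three algorithms generalize this assignment step to deep networks, where the per-layer permutations are coupled and must be solved jointly.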

Related Research

12/21/2017 · Non-convex Optimization for Machine Learning
A vast majority of machine learning algorithms train their models and pe...

03/24/2021 · Why Do Local Methods Solve Nonconvex Problems?
Non-convex optimization is ubiquitous in modern machine learning. Resear...

09/03/2015 · Train faster, generalize better: Stability of stochastic gradient descent
We show that parametric models trained by a stochastic gradient method (...

08/22/2023 · Mode Combinability: Exploring Convex Combinations of Permutation Aligned Models
We explore element-wise convex combinations of two permutation-aligned n...

01/24/2019 · AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks
ShuffleNet is a state-of-the-art light weight convolutional neural netwo...

05/18/2023 · Mode Connectivity in Auction Design
Optimal auction design is a fundamental problem in algorithmic game theo...

10/13/2022 · Wasserstein Barycenter-based Model Fusion and Linear Mode Connectivity of Neural Networks
Based on the concepts of Wasserstein barycenter (WB) and Gromov-Wasserst...
