On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent

by Shahar Azulay et al.

Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to solutions in the neural tangent kernel (NTK) regime, whereas small initialization leads to so-called "rich regimes". However, initialization structure is richer than overall scale alone: it also involves the relative magnitudes of different weights and layers in the network. Here we show that these relative scales, which we refer to as the initialization shape, play an important role in determining the learned model. We develop a novel technique for deriving the inductive bias of gradient flow and use it to obtain closed-form implicit regularizers for multiple cases of interest.
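The abstract's central claim — that the relative magnitudes of the weights at initialization, and not just their overall scale, select which solution gradient descent reaches — can be illustrated on a toy factorized linear model. The sketch below is our own illustration under assumed hyperparameters, not the authors' construction: we fit an underdetermined linear regression with the product model f(x) = (u ⊙ v)·x and compare a balanced initialization of u and v against an unbalanced one with the same product scale.

```python
import numpy as np

# Toy sketch (illustrative only, not the paper's derivation): gradient
# descent on a factorized linear model f(x) = (u * v) . x, fit to an
# underdetermined regression problem. Changing only the *relative*
# magnitudes of u and v at initialization (the "shape") changes which
# interpolating solution is reached. All hyperparameters are assumptions.

def train(u0, v0, X, y, lr=0.01, steps=50_000):
    u, v = u0.copy(), v0.copy()
    for _ in range(steps):
        r = X @ (u * v) - y      # residual of the product model
        g = X.T @ r              # gradient w.r.t. the product w = u * v
        u, v = u - lr * g * v, v - lr * g * u
    return u * v

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 10))   # 3 equations, 10 unknowns
y = X @ rng.random(10)             # target generated by a positive vector

a = 0.1
# Balanced shape: u and v start with equal magnitudes.
w_bal = train(a * np.ones(10), a * np.ones(10), X, y)
# Unbalanced shape: same product scale a**2, split very unevenly.
w_unbal = train(a**2 * np.ones(10), np.ones(10), X, y)

# Both runs fit the data, yet land on different global minima.
print(np.linalg.norm(X @ w_bal - y))     # near zero
print(np.linalg.norm(X @ w_unbal - y))   # near zero
print(np.linalg.norm(w_bal - w_unbal))   # clearly nonzero
```

Both initializations interpolate the training data, but the balanced small-scale run approaches a sparse, "rich-regime"-like solution, while the unbalanced run (one factor at unit scale) stays closer to a minimum-norm, kernel-like solution — the qualitative effect the paper attributes to initialization shape.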

