Neural Networks as Kernel Learners: The Silent Alignment Effect

10/29/2021
by Alexander Atanasov, et al.

Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen due to a phenomenon we term silent alignment, which requires that the tangent kernel of a network evolves in eigenstructure while small and before the loss appreciably decreases, and grows only in overall scale afterwards. We show that such an effect takes place in homogeneous neural networks with small initialization and whitened data. We provide an analytical treatment of this effect in the linear network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training, and then evolves only in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The early spectral learning of the kernel depends on both the depth of the network and the relative learning rates in each layer. We also demonstrate that non-whitened data can weaken the silent alignment effect.
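To make the claim concrete, here is a minimal sketch in JAX (not the authors' code: the two-layer linear architecture matches the analytically treated case, but the width, the 1e-2 initialization scale, the learning rate, and the step counts are illustrative assumptions). It trains from a small initialization on whitened inputs, logs the alignment of the empirical tangent kernel with its final eigenstructure alongside the loss, and then compares the trained network on fresh test points against the kernel regression predictor built from the final kernel, f(x) = k(x)^T K^{-1} y.

```python
import jax
import jax.numpy as jnp

n, d, h = 20, 10, 64                                 # samples, input dim, width
k_x, k_w, k1, k2, k_t = jax.random.split(jax.random.PRNGKey(0), 5)

# Whitened inputs: X^T X / n = I_d (set all singular values to sqrt(n)).
U, _, Vt = jnp.linalg.svd(jax.random.normal(k_x, (n, d)), full_matrices=False)
X = jnp.sqrt(n) * U @ Vt
y = X @ jax.random.normal(k_w, (d,)) / jnp.sqrt(d)   # linear teacher targets

def f(params, x):                      # two-layer linear network, scalar output
    return params["W2"] @ (params["W1"] @ x)

scale = 1e-2                           # small initialization -> rich regime
params = {"W1": scale * jax.random.normal(k1, (h, d)) / jnp.sqrt(d),
          "W2": scale * jax.random.normal(k2, (h,)) / jnp.sqrt(h)}

def tangent_features(p, Xs):
    """Rows are flattened per-example parameter gradients grad_theta f(x_i)."""
    g = jax.vmap(lambda x: jax.grad(f)(p, x))(Xs)
    return jnp.concatenate(
        [leaf.reshape(len(Xs), -1) for leaf in jax.tree_util.tree_leaves(g)],
        axis=1)

def ntk(p, Xs):                        # empirical neural tangent kernel Gram
    phi = tangent_features(p, Xs)
    return phi @ phi.T

def alignment(K, K_ref):               # cosine similarity of Gram matrices
    return jnp.sum(K * K_ref) / (jnp.linalg.norm(K) * jnp.linalg.norm(K_ref))

def loss(p):
    return 0.5 * jnp.mean((jax.vmap(lambda x: f(p, x))(X) - y) ** 2)

grad_loss = jax.jit(jax.grad(loss))
lr, steps = 0.1, 2048
ckpts = [(0, params)]
for t in range(1, steps + 1):
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g,
                                    params, grad_loss(params))
    if (t & (t - 1)) == 0:             # log at powers of two
        ckpts.append((t, params))

# Silent alignment signature: alignment rises toward 1 while the loss is
# still near its initial value; afterwards the kernel mostly grows in scale.
K_final = ntk(params, X)
for t, p in ckpts:
    K = ntk(p, X)
    print(f"step {t:5d}  loss={loss(p):.3e}  "
          f"align={alignment(K, K_final):.4f}  "
          f"scale={jnp.trace(K) / jnp.trace(K_final):.3e}")

# Kernel regression with the final tangent kernel should reproduce the
# trained network on fresh test points (min-norm solve: K_final is low rank).
alpha = jnp.linalg.lstsq(K_final, y)[0]
X_test = jax.random.normal(k_t, (5, d))
K_test = tangent_features(params, X_test) @ tangent_features(params, X).T
print("network:          ", jax.vmap(lambda x: f(params, x))(X_test))
print("kernel regression:", K_test @ alpha)
```

If silent alignment holds here, the align column should saturate near 1 while the loss is still close to its initial value, the scale column should only then climb toward 1, and the last two printed vectors should agree closely.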
