Implicit Regularization in Deep Learning: A View from Function Space

08/03/2020
by Aristide Baratin et al.

We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a possible regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al. (2018) along a small number of task-relevant directions. By extrapolating a new analysis of Rademacher complexity bounds in linear models, we propose and study a new heuristic complexity measure for neural networks which captures this phenomenon, in terms of the sequence of tangent kernel classes along the learning trajectory.
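The central objects here are the neural tangent features (the Jacobian of the network output with respect to its parameters) and the resulting tangent kernel. A minimal sketch of how these can be computed and compared against the task is shown below, assuming a toy one-hidden-layer tanh network and the standard centered kernel-target alignment score; this is an illustration of the general concept, not the paper's code or its proposed complexity measure.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): tangent features of a
# tiny one-hidden-layer network f(x) = v^T tanh(W x), the empirical tangent
# kernel K = J J^T, and its alignment with the label kernel y y^T.

rng = np.random.default_rng(0)
n, d, h = 8, 3, 5                          # samples, input dim, hidden width
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])                       # toy labels along one direction

W = rng.standard_normal((h, d)) / np.sqrt(d)
v = rng.standard_normal(h) / np.sqrt(h)

def tangent_features(X, W, v):
    """Rows are the gradients of f(x_i) w.r.t. all parameters (W, v)."""
    act = np.tanh(X @ W.T)                 # (n, h) hidden activations
    dv = act                               # df/dv_k = tanh(w_k . x)
    # df/dW_{kj} = v_k * (1 - tanh^2(w_k . x)) * x_j
    dW = (v * (1 - act**2))[:, :, None] * X[:, None, :]   # (n, h, d)
    return np.concatenate([dW.reshape(len(X), -1), dv], axis=1)

J = tangent_features(X, W, v)              # (n, #params) tangent features
K = J @ J.T                                # empirical neural tangent kernel
# Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F), in [0, 1]
A = (y @ K @ y) / (np.linalg.norm(K, 'fro') * (y @ y))
print(round(A, 3))
```

Tracking such an alignment score over training steps is one simple way to probe whether the tangent kernel concentrates on task-relevant directions as learning proceeds.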


Related research

- Limitation of characterizing implicit regularization by data-independent functions (01/28/2022)
- The Equilibrium Hypothesis: Rethinking implicit regularization in Deep Neural Networks (10/22/2021)
- Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks (10/20/2021)
- Why neural networks find simple solutions: the many regularizers of geometric complexity (09/27/2022)
- Implicit Regularization in ReLU Networks with the Square Loss (12/09/2020)
- Implicit Regularization in Tensor Factorization (02/19/2021)
- An Adaptive Tangent Feature Perspective of Neural Networks (08/29/2023)
