Implicit Regularization in Deep Learning: A View from Function Space

08/03/2020
by   Aristide Baratin, et al.

We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a possible regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al. (2018) along a small number of task-relevant directions. By extrapolating a new analysis of Rademacher complexity bounds in linear models, we propose and study a new heuristic complexity measure for neural networks that captures this phenomenon, in terms of sequences of tangent kernel classes along the learning trajectory.
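To make the alignment picture concrete, below is a minimal sketch (not the authors' code) of the quantities the abstract refers to: the empirical tangent kernel of a small MLP, built from the per-example tangent features (Jacobians of the output with respect to the parameters), and its centered alignment with the label kernel yy^T as a simple proxy for "alignment along task-relevant directions". The helper names (init_mlp, forward, tangent_features, centered_alignment), the network size, and the toy data are all assumptions for illustration.

```python
# Hedged sketch (assumed helpers, not the paper's implementation): compute the
# empirical neural tangent kernel K_ij = <df(x_i)/dtheta, df(x_j)/dtheta> of a
# small scalar-output MLP, then its centered alignment with the label kernel
# y y^T -- a simple proxy for alignment along task-relevant directions.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree


def init_mlp(key, dims=(10, 64, 1)):
    """Random weights for a toy fully-connected network."""
    params = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in),
                       jnp.zeros(d_out)))
    return params


def forward(params, x):
    """Scalar output per example."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)


def tangent_features(params, X):
    """Rows are the flattened Jacobians d f(x_i) / d theta (tangent features)."""
    flat, unravel = ravel_pytree(params)

    def f_single(flat_params, x):
        return forward(unravel(flat_params), x[None, :])[0]

    return jax.vmap(lambda x: jax.grad(f_single)(flat, x))(X)


def centered_alignment(K, y):
    """Centered kernel alignment between K and the label kernel y y^T."""
    n = K.shape[0]
    H = jnp.eye(n) - jnp.ones((n, n)) / n
    Kc, Yc = H @ K @ H, H @ jnp.outer(y, y) @ H
    return jnp.sum(Kc * Yc) / (jnp.linalg.norm(Kc) * jnp.linalg.norm(Yc))


key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (32, 10))   # toy inputs
y = jnp.sign(X[:, 0])                  # toy binary labels
params = init_mlp(key)

Phi = tangent_features(params, X)      # shape (n, n_params)
K = Phi @ Phi.T                        # empirical tangent kernel
print("kernel-target alignment:", centered_alignment(K, y))
```

Re-evaluating the alignment at parameter checkpoints taken along gradient descent would trace how the tangent kernel rotates toward the labels during training, which is the dynamical effect the abstract highlights.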

Related Research

Limitation of characterizing implicit regularization by data-independent functions (01/28/2022)
In recent years, understanding the implicit regularization of neural net...

The Equilibrium Hypothesis: Rethinking implicit regularization in Deep Neural Networks (10/22/2021)
Modern Deep Neural Networks (DNNs) exhibit impressive generalization pro...

Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks (10/20/2021)
We theoretically analyze the Feedback Alignment (FA) algorithm, an effic...

On implicit regularization: Morse functions and applications to matrix factorization (01/13/2020)
In this paper, we revisit implicit regularization from the ground up usi...

Implicit Regularization with Polynomial Growth in Deep Tensor Factorization (07/18/2022)
We study the implicit regularization effects of deep learning in tensor ...

On the training dynamics of deep networks with L_2 regularization (06/15/2020)
We study the role of L_2 regularization in deep learning, and uncover si...

On Generalization and Regularization in Deep Learning (04/05/2017)
Why do large neural networks generalize so well on complex tasks such as ...
