Implicit regularization in AI meets generalized hardness of approximation in optimization – Sharp results for diagonal linear networks

07/13/2023
by Johan S. Wind, et al.

Understanding the implicit regularization imposed by neural network architectures and gradient-based optimization methods is a key challenge in deep learning and AI. In this work we provide sharp results for the implicit regularization imposed by the gradient flow of Diagonal Linear Networks (DLNs) in the over-parameterized regression setting and, perhaps surprisingly, link this to the phenomenon of phase transitions in generalized hardness of approximation (GHA). GHA generalizes the phenomenon of hardness of approximation from computer science to, among other areas, continuous and robust optimization. It is well known that the ℓ^1-norm of the gradient flow of DLNs with tiny initialization converges to the optimal value of the basis pursuit objective. We improve upon these results by showing that the gradient flow of DLNs with tiny initialization approximates minimizers of the basis pursuit optimization problem (as opposed to merely its objective value), and we obtain new and sharp convergence bounds with respect to the initialization size. If our bounds were not sharp, the GHA phenomenon would not occur for the basis pursuit optimization problem, which is a contradiction; hence the bounds are sharp. Moreover, we characterize which ℓ^1 minimizer of the basis pursuit problem is chosen by the gradient flow whenever the minimizer is not unique. Interestingly, this choice depends on the depth of the DLN.
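To make the setting concrete, here is a minimal numerical sketch (not the authors' code) of the phenomenon described in the abstract: a depth-2 diagonal linear network with coefficients beta = u*u - v*v, trained by plain gradient descent as a discretization of the gradient flow, starting from a tiny initialization alpha. The problem sizes, step size, iteration count, and the use of SciPy's linprog to compute a basis pursuit solution are illustrative assumptions; as alpha shrinks, the ℓ^1-norm of the learned coefficients is expected to approach the basis pursuit optimum.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's code):
# a depth-2 diagonal linear network beta = u*u - v*v trained by gradient descent
# (a discretization of the gradient flow) with tiny initialization alpha.
# As alpha -> 0, the l1-norm of the learned coefficients should approach the
# basis pursuit optimum, computed here via a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, s = 20, 50, 3                      # over-parameterized regression: n < d, s-sparse target
X = rng.standard_normal((n, d)) / np.sqrt(n)
beta_true = np.zeros(d)
beta_true[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
y = X @ beta_true                        # noiseless, so exact interpolation is possible

def train_dln(alpha, steps=200_000, lr=1e-2):
    """Depth-2 DLN: beta = u*u - v*v, gradient descent on 0.5*||X beta - y||^2."""
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(steps):
        beta = u * u - v * v
        r = X.T @ (X @ beta - y)         # gradient of the loss w.r.t. beta
        u -= lr * 2 * u * r              # chain rule through the reparameterization
        v += lr * 2 * v * r
    return u * u - v * v

# Basis pursuit: min ||beta||_1 s.t. X beta = y, written as an LP in (p, q) >= 0
# with beta = p - q, so the optimal value equals the minimal l1-norm.
res = linprog(c=np.ones(2 * d),
              A_eq=np.hstack([X, -X]), b_eq=y,
              bounds=[(0, None)] * (2 * d))
bp_l1 = res.fun

for alpha in (1e-1, 1e-2, 1e-3):
    beta_dln = train_dln(alpha)
    print(f"alpha={alpha:.0e}  l1(DLN)={np.abs(beta_dln).sum():.4f}  "
          f"l1(basis pursuit)={bp_l1:.4f}")
```

One way to experiment with the depth dependence mentioned in the abstract is to replace the depth-2 reparameterization u*u - v*v with higher elementwise powers of u and v, which is how deeper diagonal linear networks are typically parameterized.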


Related research

08/12/2021  Implicit Sparse Regularization: The Impact of Depth and Early Stopping
In this paper, we study the implicit bias of gradient descent for sparse...

08/09/2023  How to induce regularization in generalized linear models: A guide to reparametrizing gradient flow
In this work, we analyze the relation between reparametrizations of grad...

10/20/2021  Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks
We theoretically analyze the Feedback Alignment (FA) algorithm, an effic...

08/31/2022  Incremental Learning in Diagonal Linear Networks
Diagonal linear networks (DLNs) are a toy simplification of artificial n...

05/13/2021  On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks
Neural networks trained via gradient descent with random initialization ...

06/24/2023  G-TRACER: Expected Sharpness Optimization
We propose a new regularization scheme for the optimization of deep lear...

06/09/2021  From inexact optimization to learning via gradient concentration
Optimization was recently shown to control the inductive bias in a learn...
