Theoretical Interpretation of Learned Step Size in Deep-Unfolded Gradient Descent

01/15/2020
by Satoshi Takabe, et al.

Deep unfolding is a promising deep-learning technique in which an iterative algorithm is unrolled into a deep network architecture with trainable parameters. In the case of gradient descent algorithms, the training process often yields accelerated convergence with learned, non-constant step-size parameters whose behavior is neither intuitive nor interpretable from conventional theory. In this paper, we provide a theoretical interpretation of the learned step sizes of deep-unfolded gradient descent (DUGD). We first prove that the training process of DUGD reduces not only the mean squared error loss but also the spectral radius governing the convergence rate. Next, we show that minimizing an upper bound of the spectral radius naturally leads to the Chebyshev step, a step-size sequence based on Chebyshev polynomials. Numerical experiments confirm that Chebyshev steps qualitatively reproduce the learned step-size parameters in DUGD, which provides a plausible interpretation of the learned parameters. In addition, we show that Chebyshev steps achieve the lower bound on the convergence rate of first-order methods in a specific limit, without learning or momentum terms.
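
To make the idea concrete, below is a minimal sketch (not the authors' code) of a Chebyshev-type step-size schedule applied to unrolled gradient descent on a least-squares objective. The function names, the least-squares setup, and the exact affine scaling of the Chebyshev roots are illustrative assumptions; the paper's precise definition of the Chebyshev step may differ in details.

```python
import numpy as np

def chebyshev_steps(lam_min, lam_max, T):
    # Roots of the degree-T Chebyshev polynomial, affinely mapped onto the
    # eigenvalue interval [lam_min, lam_max] of the Hessian (assumed known).
    t = np.arange(T)
    roots = 0.5 * (lam_max + lam_min) \
          + 0.5 * (lam_max - lam_min) * np.cos((2 * t + 1) * np.pi / (2 * T))
    # Reciprocals of the mapped roots give a non-constant step-size sequence.
    return 1.0 / roots

def unfolded_gradient_descent(A, b, step_sizes, x0=None):
    # T unrolled gradient-descent iterations on f(x) = 0.5 * ||A x - b||^2,
    # with one step size per layer, as in a deep-unfolded network.
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for gamma in step_sizes:
        x = x - gamma * (A.T @ (A @ x - b))
    return x

# Example: the Hessian A^T A has eigenvalues in [lam_min, lam_max].
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
eigs = np.linalg.eigvalsh(A.T @ A)
x_hat = unfolded_gradient_descent(A, b, chebyshev_steps(eigs[0], eigs[-1], T=15))
```

In DUGD itself, the per-iteration step sizes are trainable parameters fitted by backpropagation through the unrolled iterations; the closed-form schedule above is the Chebyshev-step construction that the paper uses to interpret what such training learns.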
