Theoretical Interpretation of Learned Step Size in Deep-Unfolded Gradient Descent

by Satoshi Takabe, et al.
Nagoya Institute of Technology

Deep unfolding is a promising deep-learning technique in which an iterative algorithm is unrolled into a deep network architecture with trainable parameters. For gradient descent algorithms, training often yields accelerated convergence with learned non-constant step sizes whose behavior is neither intuitive nor interpretable from conventional theory. In this paper, we provide a theoretical interpretation of the learned step sizes of deep-unfolded gradient descent (DUGD). We first prove that the training process of DUGD reduces not only the mean squared error loss but also the spectral radius governing the convergence rate. We then show that minimizing an upper bound of the spectral radius naturally leads to the Chebyshev step, a sequence of step sizes based on the roots of Chebyshev polynomials. Numerical experiments confirm that Chebyshev steps qualitatively reproduce the learned step sizes in DUGD, providing a plausible interpretation of the learned parameters. In addition, we show that Chebyshev steps achieve the lower bound of the convergence rate of first-order methods in a specific limit, without learning or momentum terms.
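As a concrete illustration (not the authors' code), the classical Chebyshev step sequence for a quadratic objective whose Hessian eigenvalues lie in [λ_min, λ_max] can be sketched as follows. The step sizes are the reciprocals of the roots of the degree-T Chebyshev polynomial mapped onto the eigenvalue interval; the helper names `chebyshev_steps` and `gradient_descent` are hypothetical:

```python
import numpy as np

def chebyshev_steps(lam_min, lam_max, T):
    """Step sizes 1/r_k, where r_k are the Chebyshev roots mapped to [lam_min, lam_max]."""
    k = np.arange(T)
    roots = (lam_max + lam_min) / 2 \
        + (lam_max - lam_min) / 2 * np.cos((2 * k + 1) * np.pi / (2 * T))
    return 1.0 / roots

def gradient_descent(A, b, steps):
    """Run gradient descent on f(x) = (1/2) x^T A x - b^T x with the given step sizes."""
    x = np.zeros_like(b)
    for gamma in steps:
        x = x - gamma * (A @ x - b)  # gradient of f is A x - b
    return x
```

After T such steps the error polynomial has its zeros at the Chebyshev roots, which minimizes the worst-case contraction over the spectrum; for a well-separated spectrum this converges far faster than any constant step size, at the cost of the sequence looking "non-monotone" when plotted, much like the learned DUGD step sizes.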



