Improving Computational Complexity in Statistical Models with Second-Order Information

02/09/2022
by Tongzheng Ren, et al.

It is known that when a statistical model is singular, i.e., the Fisher information matrix at the true parameter is degenerate, the fixed step-size gradient descent algorithm takes a polynomial number of steps in the sample size n to converge to a final statistical radius around the true parameter, which can be unsatisfactory in applications. To improve this computational complexity, we consider using second-order information in the design of optimization algorithms. Specifically, we study the normalized gradient descent (NormGD) algorithm for parameter estimation in parametric statistical models, a variant of gradient descent whose step size is scaled by the maximum eigenvalue of the Hessian matrix of the empirical loss function. When the population loss function, i.e., the limit of the empirical loss function as n goes to infinity, is homogeneous in all directions, we demonstrate that the NormGD iterates reach the final statistical radius around the true parameter after a logarithmic number of iterations in n. Therefore, for fixed dimension d, the NormGD algorithm achieves the optimal overall computational complexity 𝒪(n) to reach the final statistical radius. This is cheaper than the complexity of the fixed step-size gradient descent algorithm, which is of order 𝒪(n^τ) for some τ > 1, to reach the same statistical radius. We illustrate the general theory with two statistical models, generalized linear models and mixture models, and experimental results are consistent with the theoretical predictions.
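For a concrete picture of the update rule, here is a minimal sketch of NormGD in Python. The function names and the quartic test loss are illustrative assumptions rather than code from the paper; the sketch only shows the step-size rescaling by the largest Hessian eigenvalue described above.

```python
import numpy as np

def norm_gd(grad, hess, theta0, eta=1.0, n_iters=50, eps=1e-12):
    """Sketch of normalized gradient descent (NormGD).

    Each update rescales the gradient by the largest eigenvalue of the
    Hessian of the empirical loss, so the effective step size adapts to
    the local curvature of the loss surface.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        H = hess(theta)
        lam_max = max(np.max(np.linalg.eigvalsh(H)), eps)  # guard against a degenerate Hessian
        theta = theta - eta * grad(theta) / lam_max
    return theta

# Illustrative test problem (an assumption for this sketch): the quartic
# loss L(theta) = (theta^T theta)^2, whose Hessian degenerates at the
# minimizer theta = 0, mimicking a singular model with a homogeneous
# population loss.
grad = lambda t: 4.0 * np.dot(t, t) * t
hess = lambda t: 4.0 * np.dot(t, t) * np.eye(t.size) + 8.0 * np.outer(t, t)

theta_hat = norm_gd(grad, hess, theta0=np.array([1.0, -2.0]))
print(theta_hat)  # close to the minimizer at the origin
```

On this homogeneous quartic loss the curvature-normalized step contracts the iterates by a constant factor per iteration, which is the mechanism behind the logarithmic iteration count claimed in the abstract, in contrast to fixed step-size gradient descent, whose steps shrink with the vanishing gradient near a degenerate minimizer.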

Related research

10/15/2021 · Towards Statistical and Computational Complexities of Polyak Step Size Gradient Descent
We study the statistical and computational complexities of the Polyak st...

05/16/2022 · An Exponentially Increasing Step-size for Parameter Estimation in Statistical Models
Using gradient descent (GD) with fixed or decaying step-size is standard...

05/23/2022 · Beyond EM Algorithm on Over-specified Two-Component Location-Scale Gaussian Mixtures
The Expectation-Maximization (EM) algorithm has been predominantly used ...

05/22/2020 · Instability, Computational Efficiency and Statistical Accuracy
Many statistical estimators are defined as the fixed point of a data-dep...

02/01/2020 · The Statistical Complexity of Early Stopped Mirror Descent
Recently there has been a surge of interest in understanding implicit re...

01/15/2020 · Theoretical Interpretation of Learned Step Size in Deep-Unfolded Gradient Descent
Deep unfolding is a promising deep-learning technique in which an iterat...

05/31/2020 · Tree-Projected Gradient Descent for Estimating Gradient-Sparse Parameters on Graphs
We study estimation of a gradient-sparse parameter vector θ^* ∈ ℝ^p, havi...
