MSTGD: A Memory Stochastic sTratified Gradient Descent Method with an Exponential Convergence Rate

02/21/2022
by Aixiang et al.

The fluctuation effect of gradient expectation and variance caused by parameter updates between consecutive iterations is neglected or conflated by current mainstream gradient optimization algorithms. Exploiting this fluctuation effect, combined with a stratified sampling strategy, this paper designs a novel Memory Stochastic sTratified Gradient Descent (MSTGD) algorithm with an exponential convergence rate. Specifically, MSTGD uses two strategies for variance reduction: the first performs variance reduction according to the proportion p of the historical gradient that is reused, estimated from the mean and variance of the sample gradients before and after an iteration; the second is stratified sampling by category. The statistic G̅_mst designed under these two strategies is adaptively unbiased, and its variance decays at a geometric rate. This enables MSTGD, built on G̅_mst, to attain an exponential convergence rate of the form λ^{2(k−k_0)}, where λ ∈ (0,1) is a variable related to the proportion p and k is the number of iteration steps. Unlike most other algorithms that claim an exponential convergence rate, this rate is independent of parameters such as the dataset size N and the batch size n, and it is achieved with a constant step size. Theoretical and experimental results demonstrate the effectiveness of MSTGD.
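
The abstract does not spell out the update rule, but the two strategies it names (reusing a proportion p of a remembered historical gradient, and stratified sampling by category) can be illustrated with a minimal NumPy sketch. Everything below is an assumed reading, not the paper's actual method: the convex mixing of memory and fresh gradients, the fixed mixing weight p, and the names stratified_batch, mstgd, and grad_fn are all hypothetical.

```python
import numpy as np

def stratified_batch(X, y, n_per_class, rng):
    """Draw an equal number of samples from each class (stratified sampling)."""
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_per_class, replace=False)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

def mstgd(grad_fn, w, X, y, p=0.5, lr=0.1, n_per_class=8, steps=100, seed=0):
    """Illustrative MSTGD-style loop (a sketch, not the paper's algorithm).

    grad_fn(w, Xb, yb) -> gradient of the loss on a stratified mini-batch.
    p is the proportion of the remembered historical gradient that is reused.
    """
    rng = np.random.default_rng(seed)
    g_mem = np.zeros_like(w)                # historical (memory) gradient
    for _ in range(steps):
        Xb, yb = stratified_batch(X, y, n_per_class, rng)
        g_new = grad_fn(w, Xb, yb)          # fresh stratified-sample gradient
        g_mst = p * g_mem + (1.0 - p) * g_new  # memory/fresh mix, a G̅_mst analogue
        w = w - lr * g_mst                  # constant step size, as the abstract claims
        g_mem = g_mst                       # update the gradient memory
    return w
```

Note one deliberate simplification: the sketch fixes p, whereas the paper estimates p adaptively from the mean and variance of sample gradients before and after an iteration, which is what makes G̅_mst adaptively unbiased.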
