
L-SVRG and L-Katyusha with Adaptive Sampling

01/31/2022
by   Boxin Zhao, et al.

Stochastic gradient-based optimization methods, such as L-SVRG and its accelerated variant L-Katyusha [12], are widely used to train machine learning models. The theoretical and empirical performance of L-SVRG and L-Katyusha can be improved by sampling observations from a non-uniform distribution [17]. However, to design a desirable sampling distribution, Qian et al. [17] rely on prior knowledge of smoothness constants that can be computationally intractable to obtain in practice when the dimension of the model parameter is high. We propose an adaptive sampling strategy for L-SVRG and L-Katyusha that learns the sampling distribution with little computational overhead, while allowing it to change with the iterates, and that requires no prior knowledge of the problem parameters. We prove convergence guarantees for L-SVRG and L-Katyusha for convex objectives when the sampling distribution changes with the iterates. These results show that, even without prior information, the proposed adaptive sampling strategy matches, and in some cases surpasses, the performance of the sampling scheme of Qian et al. [17]. Extensive simulations support our theory, and experiments on real data demonstrate the practical utility of the proposed sampling scheme.
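To make the mechanism described in the abstract concrete, below is a minimal Python sketch of L-SVRG with a non-uniform sampling distribution that is updated on the fly, on a toy least-squares objective. The specific adaptation rule used here (running per-example gradient-norm estimates, mixed with the uniform distribution for stability) is an assumption for illustration only, not the update analyzed in the paper; the importance-weighted gradient estimator and the loopless reference-point refresh are the standard L-SVRG ingredients.

```python
# Illustrative sketch: L-SVRG with an adaptively updated non-uniform
# sampling distribution, on f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2.
# The adaptation rule for p is an assumed stand-in, not the paper's scheme.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    """Gradient of the i-th component f_i(x) = 0.5*(a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    """Gradient of the full objective F(x) = (1/n) * sum_i f_i(x)."""
    return A.T @ (A @ x - b) / n

eta, q = 1e-3, 1.0 / n        # step size; reference-update probability
x = np.zeros(d)               # current iterate
w = x.copy()                  # reference point
mu = full_grad(w)             # full gradient at the reference point
g_norms = np.ones(n)          # running per-example gradient-norm estimates
p = np.full(n, 1.0 / n)      # sampling distribution, updated as we go

for k in range(5000):
    i = rng.choice(n, p=p)
    gi_x, gi_w = grad_i(x, i), grad_i(w, i)
    # Importance-weighted estimator: unbiased for grad F(x) under any p > 0.
    g = (gi_x - gi_w) / (n * p[i]) + mu
    x -= eta * g
    # Assumed adaptive step: track gradient norms and bias p toward
    # examples with larger gradients, mixed with uniform for stability.
    g_norms[i] = 0.9 * g_norms[i] + 0.1 * np.linalg.norm(gi_x)
    p = 0.5 * g_norms / g_norms.sum() + 0.5 / n
    # Loopless part: refresh the reference point with probability q,
    # paying one full-gradient evaluation instead of a fixed inner loop.
    if rng.random() < q:
        w, mu = x.copy(), full_grad(x)

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

Dividing by n * p[i] keeps the gradient estimator unbiased whatever distribution is in use, which is why the sampling weights can change freely across iterations; mixing with the uniform distribution bounds 1 / (n * p[i]) and so keeps the estimator's variance under control.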

