Reducing Runtime by Recycling Samples

02/05/2016
by Jialei Wang, et al.

Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it can be beneficial to reuse previously seen samples instead of fresh samples, even when fresh samples are available. We demonstrate this empirically for SDCA, SAG and SVRG, studying the optimal sample size one should use, and also uncover behavior suggesting that running SDCA for an integer number of epochs could be wasteful.
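To make the setting concrete, below is a minimal sketch of SVRG, one of the variance-reduced methods named in the abstract, run on a fixed pool of samples rather than on fresh draws. This is an illustrative toy (1-D least squares with hypothetical data, step size, and epoch counts chosen for the example), not the paper's experimental setup: the inner loop repeatedly recycles samples from the same finite pool, which is the regime the abstract argues can outperform using fresh samples.

```python
import random

# Hypothetical synthetic data: a fixed pool of n samples from y = 2*x + noise.
random.seed(0)
n = 200
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [2.0 * x + 0.1 * random.gauss(0.0, 1.0) for x in xs]

def grad_i(w, i):
    # Gradient of the i-th component loss (w*x_i - y_i)^2 / 2.
    return (w * xs[i] - ys[i]) * xs[i]

def full_grad(w):
    # Full-batch gradient over the sample pool.
    return sum(grad_i(w, i) for i in range(n)) / n

def loss(w):
    # Average squared error over the pool.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * n)

def svrg(w0, eta=0.5, epochs=10, m=None):
    # Standard SVRG loop: a snapshot full gradient per epoch, then m
    # variance-reduced stochastic steps reusing samples from the pool.
    m = m if m is not None else n
    w = w0
    for _ in range(epochs):
        w_snap = w
        mu = full_grad(w_snap)           # full gradient at the snapshot
        for _ in range(m):
            i = random.randrange(n)      # recycle a sample from the fixed pool
            w -= eta * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w

w_hat = svrg(0.0)
```

The variance-reduction term `grad_i(w, i) - grad_i(w_snap, i) + mu` only makes sense because the same sample `i` is revisited at both `w` and the snapshot, which is exactly why these methods are tied to a reusable sample pool in a way plain SGD is not.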

research
10/07/2018

Accelerating Stochastic Gradient Descent Using Antithetic Sampling

(Mini-batch) Stochastic Gradient Descent is a popular optimization metho...
research
12/05/2015

Variance Reduction for Distributed Stochastic Gradient Descent

Variance reduction (VR) methods boost the performance of stochastic grad...
research
12/09/2017

Cost-Sensitive Approach to Batch Size Adaptation for Gradient Descent

In this paper, we propose a novel approach to automatically determine th...
research
01/16/2013

Adaptive Importance Sampling for Estimation in Structured Domains

Sampling is an important tool for estimating large, complex sums and int...
research
03/24/2022

Local optimisation of Nyström samples through stochastic gradient descent

We study a relaxed version of the column-sampling problem for the Nyströ...
research
10/13/2022

We need to talk about nonprobability samples

It is well known that, in most circumstances, probability sampling is th...
