Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems

07/02/2020
by   Zhan Gao, et al.

Stochastic gradient descent (SGD) is a canonical tool for addressing stochastic optimization problems, and it forms the bedrock of modern machine learning and statistics. In this work, we seek to balance the fact that an attenuating step-size is required for exact asymptotic convergence with the fact that a constant step-size learns faster in finite time, up to an error floor. To do so, rather than fixing the mini-batch size and the step-size at the outset, we propose a strategy that allows these parameters to evolve adaptively. Specifically, the batch-size is set to a piecewise-constant increasing sequence, where each increase occurs when a suitable error criterion is satisfied, and the step-size is selected as the one that yields the fastest convergence. The overall algorithm, the two-scale adaptive (TSA) scheme, is developed for both convex and non-convex stochastic optimization problems. It inherits the exact asymptotic convergence of the stochastic gradient method; more importantly, it attains the optimal rate of error decrease in theory, together with an overall reduction in computational cost. Experimentally, we observe that TSA achieves a favorable tradeoff relative to standard SGD with a fixed mini-batch size and step-size, and relative to schemes that only increase the batch-size or only decrease the step-size.
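The sketch below illustrates the general idea described in the abstract: run SGD with a fixed step-size and mini-batch size, and grow the batch-size (piecewise-constant increases) when an error criterion indicates that progress has stalled at the variance floor. The specific criterion (a plateau in recent gradient norms), the batch-doubling rule, and the fixed step-size are illustrative assumptions, not the paper's exact TSA rules.

```python
import numpy as np

def adaptive_batch_sgd(grad_fn, x0, data, n_iters=1000, batch0=8, step=0.1,
                       plateau_window=20, plateau_tol=1e-3, seed=None):
    """Adaptive batch-size SGD sketch.

    grad_fn(x, batch) -> stochastic gradient estimate computed on a mini-batch.
    data               -> array of samples indexed along the first axis.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    batch_size = batch0
    recent_norms = []

    for _ in range(n_iters):
        # Draw a mini-batch of the current (piecewise-constant) size.
        idx = rng.choice(len(data), size=min(batch_size, len(data)), replace=False)
        g = grad_fn(x, data[idx])
        x -= step * g

        # Error criterion (assumed): if the recent gradient norms have
        # plateaued, the iterate is at the noise floor for this batch-size,
        # so double the batch-size to reduce variance instead of shrinking
        # the step-size.
        recent_norms.append(np.linalg.norm(g))
        if len(recent_norms) >= plateau_window:
            window = recent_norms[-plateau_window:]
            if abs(window[0] - window[-1]) < plateau_tol * (window[0] + 1e-12):
                batch_size *= 2
                recent_norms.clear()
    return x
```

Keeping the step-size constant while the batch-size grows is what trades variance reduction against per-iteration cost; the actual TSA scheme also adapts the step-size, which this sketch omits.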


