Adaptive Stochastic Optimization

01/18/2020
by Frank E. Curtis, et al.

Optimization lies at the heart of machine learning and signal processing. Contemporary approaches based on the stochastic gradient method are non-adaptive in the sense that their implementation employs prescribed parameter values that need to be tuned for each application. This article summarizes recent research and motivates future work on adaptive stochastic optimization methods, which have the potential to offer significant computational savings when training large-scale systems.
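
The article surveys adaptive methods broadly (e.g., adaptive step sizes, sample sizes, and curvature information). As a rough, self-contained sketch of the non-adaptive versus adaptive distinction, the toy comparison below pits fixed-step SGD, whose step size alpha must be tuned by hand, against an AdaGrad-style per-coordinate scheme on a synthetic least-squares problem. This is only an illustration of the general idea, not the specific methods discussed in the article; the data and hyperparameters are invented for the demo.

```python
import numpy as np

# Toy stochastic objective: least squares over random mini-batches.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 10))
x_true = rng.normal(size=10)
b = A @ x_true + 0.1 * rng.normal(size=1000)

def stochastic_grad(x, batch=32):
    """Mini-batch estimate of the gradient of 0.5*||A x - b||^2 / n."""
    idx = rng.integers(0, A.shape[0], size=batch)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ x - bb) / batch

def sgd(alpha=0.01, iters=500):
    """Non-adaptive SGD: a prescribed step size that must be tuned per application."""
    x = np.zeros(10)
    for _ in range(iters):
        x -= alpha * stochastic_grad(x)
    return x

def adagrad(alpha=0.5, iters=500, eps=1e-8):
    """AdaGrad-style update: per-coordinate step sizes adapt to observed gradients."""
    x = np.zeros(10)
    g2 = np.zeros(10)  # running sum of squared gradient components
    for _ in range(iters):
        g = stochastic_grad(x)
        g2 += g * g
        x -= alpha * g / (np.sqrt(g2) + eps)
    return x

for name, x in [("SGD", sgd()), ("AdaGrad", adagrad())]:
    print(f"{name}: ||x - x_true|| = {np.linalg.norm(x - x_true):.4f}")
```

In this toy setting the adaptive scheme tends to be far less sensitive to the choice of alpha, which is the practical point the abstract makes about the cost of per-application tuning.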

Related research

12/25/2015 · Bridging the Gap between Stochastic Gradient MCMC and Stochastic Optimization
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayes...

07/05/2013 · Stochastic Optimization of PCA with Capped MSG
We study PCA as a stochastic optimization problem and propose a novel st...

01/19/2015 · Microscopic Advances with Large-Scale Learning: Stochastic Optimization for Cryo-EM
Determining the 3D structures of biological molecules is a key problem f...

04/02/2022 · Application of Stochastic Optimization Techniques to the Unit Commitment Problem – A Review
Due to the established energy production methods' contribution to the cli...

09/05/2023 · PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
This paper introduces PROMISE (Preconditioned Stochastic Optimization Me...

01/26/2021 · Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization
We introduce MADGRAD, a novel optimization method in the family of AdaGr...

09/02/2016 · SEBOOST - Boosting Stochastic Learning Using Subspace Optimization Techniques
We present SEBOOST, a technique for boosting the performance of existing...
