A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints

06/25/2017
by Bin Hu, et al.

We develop a simple routine unifying the analysis of several important recently-developed stochastic optimization methods, including SAGA, Finito, and stochastic dual coordinate ascent (SDCA). First, we show an intrinsic connection between stochastic optimization methods and dynamic jump systems, and propose a general jump system model for stochastic optimization methods. Our proposed model recovers SAGA, SDCA, Finito, and SAG as special cases. We then combine jump system theory with several simple quadratic inequalities to derive sufficient conditions certifying convergence rates of the proposed jump system model under various assumptions (with or without individual convexity, etc.). The derived conditions are linear matrix inequalities (LMIs) whose sizes roughly scale with the size of the training set. We exploit the symmetry in the stochastic optimization methods to reduce these LMIs to equivalent small LMIs of size at most 3 by 3. Solving these small LMIs yields analytical proofs of new convergence rates for SAGA, Finito, and SDCA (with or without individual convexity). We also explain why our proposed LMI fails in analyzing SAG, reveal a key difference between SAG and the other methods, and briefly discuss how to extend our LMI analysis to SAG. An advantage of our approach is that the analysis can be automated for a large class of stochastic methods under various assumptions.
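To give a concrete flavor of the LMI-based rate certification described above, the following is a minimal sketch (not the paper's actual LMIs, which additionally include quadratic-constraint terms for the gradient oracle): it certifies a mean-square convergence rate rho for a generic jump system x_{k+1} = A_{i_k} x_k with i.i.d. uniform mode switching, by searching for a quadratic Lyapunov function via CVXPY. The matrices A_i, the solver choice, and the bisection tolerance are all illustrative placeholders.

```python
# Sketch: certify a mean-square rate rho for a jump system x_{k+1} = A_{i_k} x_k
# (i_k uniform over the modes) by finding P >= I with
#     (1/n) * sum_i A_i' P A_i  <=  rho^2 * P.
# This is only the bare jump-system Lyapunov LMI; the paper's LMIs also couple
# in quadratic constraints describing the stochastic gradient oracle.
import numpy as np
import cvxpy as cp

def rate_lmi_feasible(A_list, rho):
    """Return True if the Lyapunov LMI is feasible for the given rate rho."""
    d = A_list[0].shape[0]
    n = len(A_list)
    P = cp.Variable((d, d), symmetric=True)
    avg = sum(A.T @ P @ A for A in A_list) / n
    constraints = [P >> np.eye(d), avg << rho**2 * P]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

def certify_rate(A_list, lo=0.0, hi=1.0, tol=1e-3):
    """Bisect over rho and return the smallest certified rate, or None."""
    if not rate_lmi_feasible(A_list, hi):
        return None  # no rate <= hi could be certified
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rate_lmi_feasible(A_list, mid):
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder jump modes standing in for the method-dependent matrices.
    A_list = [0.3 * rng.standard_normal((3, 3)) for _ in range(5)]
    print("certified mean-square rate rho ~", certify_rate(A_list))
```

In the paper's setting, the analogous (symmetry-reduced) LMIs are at most 3 by 3, so they can be solved analytically rather than numerically as in this sketch.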


