Katyusha Acceleration for Convex Finite-Sum Compositional Optimization
Structured problems arise in many applications. To solve these problems efficiently, it is important to leverage the structure information. This paper focuses on convex problems with a finite-sum compositional structure. Finite-sum problems appear as the sample average approximation of a stochastic optimization problem and also arise in machine learning with a huge amount of training data. One popular numerical approach for finite-sum problems is the stochastic gradient method (SGM). However, the additional compositional structure prohibits easy access to an unbiased stochastic approximation of the gradient, so directly applying the SGM to a finite-sum compositional optimization problem (COP) is often inefficient. We design new algorithms for solving strongly convex and also convex two-level finite-sum COPs. Our design incorporates the Katyusha acceleration technique and adopts mini-batch sampling from both the outer-level and inner-level finite sums. We first analyze the algorithm for strongly convex finite-sum COPs. Similar to a few existing works, we obtain a linear convergence rate in terms of the expected objective error, and from the convergence rate result we then establish complexity results for the algorithm to produce an ε-solution. Our complexity results have the same dependence on the number of component functions as existing works. However, due to the use of Katyusha acceleration, our results have better dependence on the condition number κ, improving to κ^2.5 from the best-known κ^3. Finally, we analyze the algorithm for convex finite-sum COPs, which uses the algorithm for strongly convex finite-sum COPs as a subroutine. Again, we obtain better complexity results than existing works in terms of the dependence on ε, improving to ε^(-2.5) from the best-known ε^(-3).
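The claim that the compositional structure "prohibits easy access to an unbiased stochastic approximation of the gradient" can be seen in a toy example. The sketch below (not from the paper; the instance f(y) = y² with linear inner maps g_i(x) = a_i·x is a hypothetical choice) shows that sampling a single inner component and differentiating f(g_i(x)) gives an estimator whose expectation differs from the true gradient of f(g(x)) whenever f is nonlinear:

```python
import numpy as np

# Toy illustration (assumptions, not the paper's setup):
#   F(x) = f(g(x)),  f(y) = y**2,  g(x) = (1/n) * sum_i g_i(x),  g_i(x) = a_i * x
# so F(x) = (a_bar * x)**2 and F'(x) = 2 * a_bar**2 * x, with a_bar = mean(a).

rng = np.random.default_rng(0)
n = 1000
a = rng.normal(loc=2.0, scale=1.0, size=n)  # coefficients of the inner components
x = 3.0

a_bar = a.mean()
true_grad = 2.0 * a_bar**2 * x  # exact gradient of F at x

# Naive SGM estimator: sample an inner component i and use
#   d/dx f(g_i(x)) = 2 * a_i**2 * x
# Its expectation over i is 2 * E[a_i**2] * x = 2 * (a_bar**2 + Var(a)) * x,
# which overshoots the true gradient by 2 * Var(a) * x because f is nonlinear.
naive_grads = 2.0 * a**2 * x       # estimator value for every possible draw of i
naive_mean = naive_grads.mean()    # exact expectation under uniform sampling

bias = naive_mean - true_grad      # equals 2 * Var(a) * x > 0
print(f"true gradient       : {true_grad:.3f}")
print(f"naive estimator mean: {naive_mean:.3f}")
print(f"bias                : {bias:.3f}")
```

The bias 2·Var(a)·x does not vanish with more iterations or smaller step sizes, which is why compositional methods need dedicated estimators (and, as in this paper, variance reduction with inner- and outer-level mini-batch sampling) rather than a direct application of the SGM.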