Related research

- Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex Composite Optimization
  Here we study non-convex composite optimization: first, a finite-sum of ...
- Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning
  Recently, research on accelerated stochastic gradient descent methods (e...
- Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization
  This paper considers stochastic optimization problems for a large class ...
- Stochastic Non-convex Ordinal Embedding with Stabilized Barzilai-Borwein Step Size
  Learning representation from relative similarity comparisons, often call...
- Stochastic Conditional Gradient++
  In this paper, we develop Stochastic Continuous Greedy++ (SCG++), the fi...
- Linear Convergence of Adaptive Stochastic Gradient Descent
  We prove that the norm version of the adaptive stochastic gradient metho...
- Simple and optimal methods for stochastic variational inequalities, II: Markovian noise and policy evaluation in reinforcement learning
  The focus of this paper is on stochastic variational inequalities (VI) u...
One Sample Stochastic Frank-Wolfe
One of the beauties of the projected gradient descent method lies in its rather simple mechanism and yet stable behavior with inexact, stochastic gradients, which has led to its widespread use in many machine learning applications. However, once we replace the projection operator with a simpler linear program, as is done in the Frank-Wolfe method, both simplicity and stability take a serious hit. The aim of this paper is to bring them back without sacrificing efficiency. We propose the first one-sample stochastic Frank-Wolfe algorithm, called 1-SFW, which avoids the need to carefully tune the batch size, step size, learning rate, and other complicated hyperparameters. In particular, 1-SFW achieves the optimal convergence rate of O(1/ϵ^2) for reaching an ϵ-suboptimal solution in the stochastic convex setting, and a (1-1/e)-ϵ approximate solution for a stochastic monotone DR-submodular maximization problem. Moreover, in a general non-convex setting, 1-SFW finds an ϵ-first-order stationary point after at most O(1/ϵ^3) iterations, matching the best known convergence rate. All of this is made possible by a novel unbiased momentum estimator that governs the stability of the optimization process while using only a single sample at each iteration.
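The abstract describes the estimator only at a high level. The sketch below, written in Python with NumPy, illustrates the general shape of a one-sample, momentum-averaged stochastic Frank-Wolfe loop: one stochastic gradient per iteration, a running momentum estimate of the gradient, and a linear minimization oracle in place of a projection. It is a simplified illustration under stated assumptions, not the paper's exact 1-SFW method (which adds a correction term so that the momentum estimator stays unbiased); the names one_sample_fw, grad_oracle, and lmo, and the particular momentum and step-size schedules, are hypothetical choices made for this example.

    import numpy as np

    def one_sample_fw(grad_oracle, lmo, x0, T, seed=0):
        """Simplified one-sample stochastic Frank-Wolfe sketch (not the paper's exact estimator).

        grad_oracle(x, rng) -> stochastic gradient computed from a single sample
        lmo(d)              -> argmin over the feasible set of <d, v> (linear minimization oracle)
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        d = np.zeros_like(x)                  # running momentum estimate of the gradient
        for t in range(1, T + 1):
            rho = 1.0 / (t + 1) ** (2.0 / 3)  # momentum weight (illustrative schedule)
            g = grad_oracle(x, rng)           # exactly one stochastic gradient sample
            d = (1 - rho) * d + rho * g       # momentum-averaged gradient estimate
            v = lmo(d)                        # Frank-Wolfe direction from the linear oracle
            eta = 2.0 / (t + 2)               # standard Frank-Wolfe step size
            x = (1 - eta) * x + eta * v       # convex combination keeps x feasible
        return x

    # Example usage: least squares over the probability simplex, one data point per iteration.
    A = np.random.default_rng(1).normal(size=(200, 10))
    b = A @ np.full(10, 0.1) + 0.01 * np.random.default_rng(2).normal(size=200)

    def grad_oracle(x, rng):
        i = rng.integers(len(b))              # draw a single data point
        return A[i] * (A[i] @ x - b[i])       # gradient of 0.5 * (a_i^T x - b_i)^2

    def simplex_lmo(d):
        v = np.zeros_like(d)
        v[np.argmin(d)] = 1.0                 # best vertex of the probability simplex
        return v

    x_hat = one_sample_fw(grad_oracle, simplex_lmo, np.full(10, 0.1), T=5000)

The appeal highlighted in the abstract shows up directly in this structure: each iteration touches a single sample, there is no batch size to tune, and feasibility is maintained by the convex combination of iterate and oracle output rather than by a projection.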