Distributed stochastic projection-free solver for constrained optimization

04/22/2022
by Xia Jiang, et al.

This paper proposes a distributed stochastic projection-free algorithm for large-scale constrained finite-sum optimization in which the constraint set is complicated, so that projecting onto it can be expensive. The global cost function is distributed over multiple agents, each of which computes local stochastic gradients and communicates with its neighbors to solve the global problem. Stochastic gradient methods keep the per-iteration computational cost low, but the variance introduced by random sampling makes them slow to converge. To obtain a convergent distributed stochastic projection-free algorithm, this paper incorporates variance reduction and gradient tracking into the Frank-Wolfe update. We develop a sampling rule for the variance reduction technique that controls the variance of the stochastic gradients. Complete and rigorous proofs show that the proposed algorithm converges at a sublinear rate and enjoys superior complexity guarantees for both convex and non-convex objective functions. Comparative simulations demonstrate the convergence and computational efficiency of the proposed algorithm.
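To give a concrete sense of the projection-free idea, the following is a minimal, hypothetical sketch of a single-agent Frank-Wolfe loop on a simplex-constrained least-squares problem (not the paper's algorithm: it omits the distributed communication, variance reduction, and gradient tracking, and uses a full rather than stochastic gradient). The point it illustrates is that the linear minimization oracle over the simplex is just a vertex pick, so no projection is ever computed.

```python
import numpy as np

# Hypothetical problem: min_x ||Ax - b||^2  s.t.  x in the probability simplex.
rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:5] = 0.2                      # sparse simplex point generating b
b = A @ x_true

x = np.ones(d) / d                    # feasible start: uniform distribution
for t in range(200):
    grad = 2 * A.T @ (A @ x - b)      # full gradient (stochastic in the paper)
    s = np.zeros(d)
    s[np.argmin(grad)] = 1.0          # LMO: best vertex of the simplex
    gamma = 2.0 / (t + 2)             # classic Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s   # convex combination stays feasible

# Every iterate is a convex combination of simplex points, hence feasible.
```

In the paper's distributed stochastic setting, `grad` would be replaced by each agent's variance-reduced, gradient-tracked estimate, but the feasibility-by-convex-combination mechanism is the same.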
