Swarming for Faster Convergence in Stochastic Optimization

06/11/2018
by   Shi Pu, et al.

We study a distributed framework for stochastic optimization, inspired by models of collective motion found in nature (e.g., swarming), that has mild communication requirements. Specifically, we analyze a scheme in which each of N > 1 independent threads implements, in a distributed and unsynchronized fashion, a stochastic gradient-descent algorithm perturbed by a swarming potential. Assuming the overhead caused by synchronization is not negligible, we show that the swarming-based approach exhibits better (real-time) convergence speed than a centralized algorithm based on the average of N observations. We also derive an error bound that is monotonically decreasing in network size and connectivity. Finally, we characterize the scheme's finite-time performance for both convex and non-convex objective functions.
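To make the scheme concrete, here is a minimal sketch (not the authors' exact algorithm) of N threads each running SGD perturbed by an attraction ("swarming") term toward neighboring threads' iterates. The quadratic objective, Gaussian gradient noise, ring communication graph, and the parameters `alpha`, `beta`, and `sigma` are all illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming a toy quadratic objective and a ring graph.
# Each "thread" i runs SGD plus a swarming (attraction) term that pulls
# its iterate toward its neighbors' iterates.
import numpy as np

rng = np.random.default_rng(0)
N, d = 10, 2                  # number of threads, problem dimension
x_star = np.ones(d)           # minimizer of the toy objective (assumed)
x = rng.normal(size=(N, d))   # each thread's current iterate
alpha, beta, sigma = 0.05, 0.5, 0.1  # step size, attraction strength, noise level

def noisy_grad(xi):
    """Stochastic gradient of f(x) = 0.5 * ||x - x_star||^2 (assumed objective)."""
    return (xi - x_star) + sigma * rng.normal(size=d)

# Ring communication graph: thread i reads threads i-1 and i+1.
neighbors = [((i - 1) % N, (i + 1) % N) for i in range(N)]

for t in range(500):
    x_old = x.copy()  # threads read neighbor states as-is, no barrier
    for i in range(N):
        attraction = sum(x_old[i] - x_old[j] for j in neighbors[i])
        x[i] -= alpha * (noisy_grad(x_old[i]) + beta * attraction)

print("mean distance to optimum:", np.linalg.norm(x - x_star, axis=1).mean())
```

In this sketch the swarming term acts as a consensus-like perturbation: it averages out independent gradient noise across threads without requiring a synchronized global average, which is the intuition behind the paper's comparison with a centralized scheme.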


