Distributed Random Reshuffling over Networks

12/31/2021
by Kun Huang, et al.

In this paper, we consider the distributed optimization problem where n agents, each possessing a local cost function, collaboratively minimize the average of the local cost functions over a connected network. To solve the problem, we propose a distributed random reshuffling (D-RR) algorithm that combines the classical distributed gradient descent (DGD) method with Random Reshuffling (RR). We show that D-RR inherits the favorable convergence behavior of RR for both smooth strongly convex and smooth nonconvex objective functions. In particular, for smooth strongly convex objectives, D-RR achieves an 𝒪(1/T^2) convergence rate (here, T counts the total number of iterations) in terms of the squared distance between the iterate and the unique minimizer. When the objective function is smooth nonconvex with Lipschitz continuous component functions, we show that D-RR drives the squared norm of the gradient to zero at a rate of 𝒪(1/T^{2/3}). These convergence results match those of centralized RR (up to constant factors).
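To make the update rule concrete, below is a minimal sketch of one way a D-RR-style iteration can be implemented, assuming the network is described by a doubly stochastic mixing matrix W, each agent holds the same number m of component functions, and a constant stepsize is used. The names d_rr, component_grads, and alpha are illustrative, not the paper's notation.

```python
import numpy as np

def d_rr(component_grads, W, x0, num_epochs, alpha):
    """Sketch of a distributed random reshuffling (D-RR) style loop:
    DGD-style mixing combined with per-epoch reshuffling of each
    agent's local component functions.

    component_grads[i][l](x) -> gradient of agent i's l-th component at x.
    W: (n, n) doubly stochastic mixing matrix of the connected network.
    x0: shared initial point, shape (d,).
    alpha: stepsize (constant here, for simplicity of the sketch).
    """
    n = len(component_grads)        # number of agents
    m = len(component_grads[0])     # components per agent (assumed equal)
    X = np.tile(x0, (n, 1))         # row i is agent i's current iterate
    rng = np.random.default_rng()
    for _ in range(num_epochs):
        # Each agent draws its own independent permutation of its data.
        perms = [rng.permutation(m) for _ in range(n)]
        for l in range(m):
            G = np.stack([component_grads[i][perms[i][l]](X[i])
                          for i in range(n)])
            # Consensus step with neighbors, then a local gradient step
            # on the component selected by this epoch's shuffle.
            X = W @ X - alpha * G
    return X.mean(axis=0)           # average of the agents' iterates
```

Note that the decaying-stepsize schedule typically needed for the 𝒪(1/T^2) strongly convex rate is not shown; the constant-stepsize loop above is only meant to expose the structure of the update, in which each agent mixes with its neighbors and then steps along the gradient of the component picked by its own shuffle.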


Related research

- 06/21/2023: Distributed Random Reshuffling Methods with Improved Convergence
  This paper proposes two distributed random reshuffling methods, namely G...
- 09/01/2021: A Gradient Sampling Algorithm for Stratified Maps with Applications to Topological Data Analysis
  We introduce a novel gradient descent algorithm extending the well-known...
- 06/02/2021: Gradient Assisted Learning
  In distributed settings, collaborations between different entities, such...
- 09/22/2019: Distributed Conjugate Gradient Tracking for Resource Allocation in Unbalanced Networks
  This paper proposes a distributed conjugate gradient tracking algorithm ...
- 02/25/2020: Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis
  We study distributed composite optimization over networks: agents minimi...
- 08/28/2022: Asynchronous Training Schemes in Distributed Learning with Time Delay
  In the context of distributed deep learning, the issue of stale weights ...
