Distributed Random Reshuffling Methods with Improved Convergence

06/21/2023
by Kun Huang, et al.

This paper proposes two distributed random reshuffling methods, namely Gradient Tracking with Random Reshuffling (GT-RR) and Exact Diffusion with Random Reshuffling (ED-RR), for solving the distributed optimization problem over a connected network, where a set of agents aims to minimize the average of their local cost functions. Both algorithms apply the random reshuffling (RR) update at each agent, inherit the favorable characteristics of RR for minimizing smooth nonconvex objective functions, and improve on previous distributed random reshuffling methods both theoretically and empirically. Specifically, both GT-RR and ED-RR achieve a convergence rate of O(1/((1-λ)^{1/3} m^{1/3} T^{2/3})) in driving the (minimum) expected squared norm of the gradient to zero, where T denotes the number of epochs, m is the sample size of each agent, and 1-λ represents the spectral gap of the mixing matrix. When the objective functions further satisfy the Polyak-Łojasiewicz (PL) condition, we show that GT-RR and ED-RR both achieve an O(1/((1-λ) m T^2)) convergence rate in terms of the averaged expected difference between the agents' function values and the global minimum value. Notably, both results are comparable to the convergence rates of centralized RR methods (up to constant factors depending on the network topology) and outperform those of previous distributed random reshuffling algorithms. Moreover, we support the theoretical findings with a set of numerical experiments.
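The paper's algorithms are not reproduced in this abstract; as a rough illustration of the gradient-tracking-with-reshuffling template it describes, the following Python sketch runs tracking-style updates with a fresh per-epoch shuffle at each agent, on synthetic quadratic losses over a ring network. The step size, mixing matrix, update order, and all variable names here are assumptions made for illustration, not the authors' specification of GT-RR.

```python
# Minimal sketch of a gradient-tracking update with random reshuffling
# (illustrative only; step size, mixing matrix, and update order are
# assumptions, not the GT-RR algorithm as specified in the paper).
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 4, 8, 5      # agents, samples per agent, dimension
T = 200                # epochs
alpha = 0.01           # step size (assumed, not tuned per the paper)

# Synthetic local data: agent i holds samples (A[i, j], b[i, j]) and its
# local cost is the average of 0.5 * ||A_ij x - b_ij||^2 over j.
A = rng.normal(size=(n, m, d, d))
b = rng.normal(size=(n, m, d))

def grad(i, j, x):
    """Gradient of agent i's j-th sample loss at x."""
    Aij = A[i, j]
    return Aij.T @ (Aij @ x - b[i, j])

# Doubly stochastic mixing matrix for a ring of n agents.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                                  # agent iterates
y = np.array([grad(i, 0, x[i]) for i in range(n)])    # tracking variables
g_prev = y.copy()

for t in range(T):
    perms = [rng.permutation(m) for _ in range(n)]    # reshuffle each epoch
    for k in range(m):
        # Local step along the tracked direction, then one gossip round.
        x = W @ (x - alpha * y)
        g = np.array([grad(i, perms[i][k], x[i]) for i in range(n)])
        # Tracking update: mix trackers, correct by the gradient change.
        y = W @ y + g - g_prev
        g_prev = g

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
```

Roughly speaking, an exact-diffusion variant such as ED-RR would replace the tracking variable with a correction built from the previous iterate, keeping a similar local-step-plus-gossip structure; see the paper for the precise recursions.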
