Distributed stochastic proximal algorithm with random reshuffling for non-smooth finite-sum optimization

11/06/2021
by   Xia Jiang, et al.

Non-smooth finite-sum minimization is a fundamental problem in machine learning. This paper develops a distributed stochastic proximal-gradient algorithm with random reshuffling to solve finite-sum minimization over time-varying multi-agent networks. The objective function is a sum of differentiable convex functions plus a non-smooth regularization term. Each agent in the network updates its local variables with a constant step-size using local information and cooperates with its neighbors to seek an optimal solution. We prove that the local variable estimates generated by the proposed algorithm achieve consensus and are attracted to a neighborhood of the optimal solution in expectation at an 𝒪(1/T+1/√(T)) convergence rate. In addition, we show that the steady-state error of the objective function can be made arbitrarily small by choosing sufficiently small step-sizes. Finally, comparative simulations are provided to verify the convergence performance of the proposed algorithm.
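The abstract does not spell out the update rule, so the following is only a minimal sketch of the algorithm family it describes: each agent reshuffles its local sample indices once per epoch, takes constant step-size stochastic proximal-gradient steps, and then mixes its iterate with neighbors through a doubly stochastic weight matrix that may change over time. The ℓ1 regularizer (handled by soft-thresholding), the names prox_l1, distributed_prox_rr, and mixing, and the exact ordering of the proximal and consensus steps are illustrative assumptions, not the paper's stated method.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def distributed_prox_rr(grads, X0, mixing, step, lam, epochs, rng):
    """Illustrative sketch of distributed proximal-gradient with random reshuffling.

    grads[i][s] -- callable returning the gradient of agent i's s-th smooth
                   component function at a point x
    X0          -- (n_agents, dim) array of initial local variables
    mixing(t)   -- doubly stochastic weight matrix of the (possibly
                   time-varying) network at epoch t
    """
    X = X0.copy()
    n_agents = X.shape[0]
    for t in range(epochs):
        W = mixing(t)  # network weights at epoch t
        for i in range(n_agents):
            x = X[i]
            # Random reshuffling: one pass over a fresh permutation of local samples.
            for s in rng.permutation(len(grads[i])):
                # Constant step-size stochastic proximal-gradient step.
                x = prox_l1(x - step * grads[i][s](x), step * lam)
            X[i] = x
        X = W @ X  # consensus step: each agent averages with its neighbors
    return X
```

A toy usage on hypothetical data, where each agent holds a few least-squares terms and the network is a static complete graph with uniform weights:

```python
rng = np.random.default_rng(0)
dim, n_agents, m = 5, 4, 10
A = [rng.normal(size=(m, dim)) for _ in range(n_agents)]
b = [rng.normal(size=m) for _ in range(n_agents)]
# Gradient of 0.5 * (a.x - y)^2 is (a.x - y) * a.
grads = [[(lambda x, a=A[i][s], y=b[i][s]: (a @ x - y) * a)
          for s in range(m)] for i in range(n_agents)]
W = np.full((n_agents, n_agents), 1.0 / n_agents)
X = distributed_prox_rr(grads, np.zeros((n_agents, dim)),
                        lambda t: W, step=0.01, lam=0.1, epochs=200, rng=rng)
```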

Related research

03/21/2018 · A Distributed Stochastic Gradient Tracking Method
In this paper, we study the problem of distributed multi-agent optimizat...

02/12/2021 · Proximal and Federated Random Reshuffling
Random Reshuffling (RR), also known as Stochastic Gradient Descent (SGD)...

10/08/2012 · A Fast Distributed Proximal-Gradient Method
We present a distributed proximal-gradient method for optimizing the ave...

09/20/2019 · Regularized Diffusion Adaptation via Conjugate Smoothing
The purpose of this work is to develop and study a distributed strategy ...

05/21/2018 · Adaptive Neighborhood Resizing for Stochastic Reachability in Multi-Agent Systems
We present DAMPC, a distributed, adaptive-horizon and adaptive-neighborh...

03/19/2017 · A Passivity-Based Distributed Reference Governor for Constrained Robotic Networks
This paper focuses on a passivity-based distributed reference governor (...

12/20/2021 · Decentralized Stochastic Proximal Gradient Descent with Variance Reduction over Time-varying Networks
In decentralized learning, a network of nodes cooperate to minimize an o...
