A Fast Distributed Proximal-Gradient Method

10/08/2012
by Annie I. Chen, et al.

We present a distributed proximal-gradient method for minimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components, but they share a common nondifferentiable component whose favorable structure allows efficient computation of the proximal operator. In our method, each agent iteratively updates its estimate of the global minimum by performing a proximal-gradient step on its local objective and exchanging estimates with other agents over the network. Using Nesterov-type acceleration techniques and multiple communication steps per iteration, we show that this method converges at rate O(1/k), where k is the number of communication rounds between the agents, which is faster than the convergence rates of existing distributed methods for this problem. The superior convergence rate of our method is also verified by numerical experiments.
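To make the structure of such an iteration concrete, the sketch below simulates one possible instantiation: agents take gradient steps on their distinct smooth terms, mix their estimates through several consensus (communication) rounds encoded by a doubly stochastic matrix W, apply the shared proximal operator (here an l1 term with soft-thresholding, chosen only for illustration), and add Nesterov-type momentum. The function names, the specific momentum schedule, and the choice of l1 regularizer are assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1, standing in for the shared
    # nondifferentiable component assumed in this sketch.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def distributed_accel_prox_grad(grads, W, x0, step, lam,
                                n_iters=100, n_consensus=5):
    """Sketch of a distributed accelerated proximal-gradient loop.

    grads       : list of callables, grads[i](x) = gradient of agent i's smooth term
    W           : doubly stochastic mixing matrix encoding the communication graph
    x0          : common initial estimate (d-dimensional array)
    step        : gradient step size
    lam         : weight of the shared l1 term (illustrative)
    n_consensus : communication (averaging) rounds performed per iteration
    """
    n = len(grads)
    x = np.tile(x0, (n, 1))      # each agent's current estimate
    y = x.copy()                 # extrapolated (momentum) points
    x_prev = x.copy()
    for k in range(1, n_iters + 1):
        # Local gradient step at each agent's extrapolated point.
        z = np.stack([y[i] - step * grads[i](y[i]) for i in range(n)])
        # Multiple consensus rounds to mix estimates across the network.
        for _ in range(n_consensus):
            z = W @ z
        # Shared proximal step, computable in closed form by every agent.
        x_new = soft_threshold(z, step * lam)
        # Nesterov-type momentum on the local estimates.
        beta = (k - 1) / (k + 2)
        y = x_new + beta * (x_new - x_prev)
        x_prev, x = x_new, x_new
    return x
```

In this toy setup, increasing n_consensus makes the agents' estimates agree more closely before the proximal step, which is the mechanism the abstract credits (together with acceleration) for the improved O(1/k) dependence on communication rounds.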


