A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization

05/20/2019, by Sulaiman A. Alghunaim et al.

Decentralized optimization is a promising paradigm with applications across engineering and machine learning. This work studies decentralized composite optimization problems with a non-smooth regularization term. Most existing gradient-based proximal decentralized methods are only shown to converge to the desired solution at sublinear rates, and it remains unclear how to establish linear convergence for this family of methods when the objective function is strongly convex. To tackle this problem, this work considers the non-smooth regularization term to be common across all networked agents, which is the case in most centralized machine learning implementations. Under this scenario, we design a proximal gradient decentralized algorithm whose fixed point coincides with the desired minimizer, and we provide a concise proof that establishes its linear convergence. In the absence of the non-smooth term, our analysis technique also covers well-known decentralized algorithms such as EXTRA and DIGing.
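To make the setting concrete, the following is a minimal sketch of a generic decentralized proximal gradient iteration with a common non-smooth term — not the paper's specific algorithm (which adds a correction to achieve linear convergence), just the adapt-combine-prox pattern it builds on. The mixing matrix `W`, the quadratic local costs, and the ℓ1 regularizer are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1, the common non-smooth term
    # assumed here for illustration.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def decentralized_prox_grad(grads, W, x0, step=0.05, reg=0.01, iters=300):
    """Generic decentralized proximal gradient sketch (not the paper's method).

    grads : list of per-agent gradient callables grad_i(x)
    W     : doubly stochastic mixing matrix, shape (n_agents, n_agents)
    x0    : common initial iterate, shape (d,)
    """
    n = len(grads)
    X = np.tile(x0, (n, 1))  # one row of X per agent
    for _ in range(iters):
        X = W @ X                                                   # combine with neighbors
        X = X - step * np.stack([g(X[i]) for i, g in enumerate(grads)])  # local gradient step
        X = soft_threshold(X, step * reg)                           # prox of the shared regularizer
    return X
```

For example, with quadratic local costs f_i(x) = 0.5 (x - b_i)^2 and full averaging, all agents' iterates settle near the soft-thresholded network average; without a correction term such iterations generally carry an O(step) bias, which is the gap the paper's fixed-point design closes.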





