A general framework for decentralized optimization with first-order methods

09/12/2020
by Ran Xin et al.

Decentralized optimization to minimize a finite sum of functions over a network of nodes has been a significant focus within control and signal processing research, due to its natural relevance to optimal control and signal estimation problems. More recently, the emergence of sophisticated computing and large-scale data science needs has led to a resurgence of activity in this area. In this article, we discuss decentralized first-order gradient methods, which have found tremendous success in control, signal processing, and machine learning problems; because of their simplicity, such methods are often the first choice for many complex inference and training tasks. In particular, we provide a general framework of decentralized first-order methods that applies to undirected and directed communication networks alike, and we show that much of the existing work on optimization and consensus can be related explicitly to this framework. We further extend the discussion to decentralized stochastic first-order methods that rely on stochastic gradients at each node, and we describe how local variance-reduction schemes, previously shown to have promise in centralized settings, can improve the performance of decentralized methods when combined with what is known as gradient tracking. We motivate and demonstrate the effectiveness of the corresponding methods in the context of machine learning and signal processing problems that arise in decentralized environments.
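To make the setting concrete: the canonical problem is to minimize F(x) = (1/n) Σ_{i=1}^n f_i(x), where each node i holds a private cost f_i and exchanges information only with its neighbors. Below is a minimal NumPy sketch of decentralized gradient descent with gradient tracking over an undirected ring network; the quadratic local costs, the mixing matrix W, the step size, and the iteration count are illustrative assumptions, not the article's experiments.

```python
import numpy as np

# Hypothetical setup: n nodes, each holding a local quadratic cost
# f_i(x) = 0.5 * ||A_i x - b_i||^2, so grad f_i(x) = A_i^T (A_i x - b_i).
rng = np.random.default_rng(0)
n, d = 5, 3
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring: each node averages
# with itself and its two neighbors (assumed weights, not tuned).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.01                                     # step size (assumed)
x = np.zeros((n, d))                             # x[i] is node i's iterate
y = np.array([grad(i, x[i]) for i in range(n)])  # tracker, y_i^0 = grad f_i(x_i^0)

for k in range(2000):
    # Gradient-tracking update:
    #   x_i^{k+1} = sum_j w_ij x_j^k - alpha * y_i^k
    #   y_i^{k+1} = sum_j w_ij y_j^k + grad f_i(x_i^{k+1}) - grad f_i(x_i^k)
    x_new = W @ x - alpha * y
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

# Each y_i tracks the average gradient, so all local iterates converge
# to the minimizer of the global average cost.
print(np.linalg.norm(x - x.mean(axis=0)))  # consensus error, should be ~0
```

On directed graphs, where a doubly stochastic W may be unavailable, gradient-tracking methods of this family replace W with separate row-stochastic and column-stochastic weight matrices; in the stochastic setting, the exact gradients above are replaced by (variance-reduced) stochastic gradients at each node.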


