On the Divergence of Decentralized Non-Convex Optimization

06/20/2020
by Mingyi Hong, et al.

We study a generic class of decentralized algorithms in which N agents jointly optimize the non-convex objective f(u) := (1/N)∑_{i=1}^N f_i(u) while communicating only with their neighbors. Such problems have become popular for modeling many signal processing and machine learning applications, and many efficient algorithms have been proposed. However, by constructing counter-examples, we show that when certain local Lipschitz conditions (LLC) on the local function gradients ∇f_i are not satisfied, most of the existing decentralized algorithms diverge, even if the global Lipschitz condition (GLC) is satisfied, i.e., the sum function f has a Lipschitz gradient. This observation raises an important open question: how can we design decentralized algorithms when the LLC, or even the GLC, is not satisfied? To address this question, we design a first-order method called the Multi-stage gradient tracking algorithm (MAGENTA), which is capable of computing stationary solutions even when neither the LLC nor the GLC holds. In particular, we show that the proposed algorithm converges sublinearly to an ϵ-stationary solution, where the precise rate depends on various algorithmic and problem parameters. For example, if the local functions f_i are Qth-order polynomials, the rate becomes O(1/ϵ^{Q-1}). This rate is tight for the special case Q=2, in which each f_i satisfies the LLC. To our knowledge, this is the first work to study decentralized non-convex optimization without assuming either the LLC or the GLC.
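
To make the class of algorithms concrete, below is a minimal sketch of a standard decentralized gradient tracking iteration, which is the kind of method the paper studies (this is not the paper's MAGENTA algorithm). The ring topology, the quadratic local objectives f_i, the mixing matrix W, the step size, and the iteration count are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of standard decentralized gradient tracking (NOT MAGENTA):
# each agent i holds a local iterate x_i and a tracker y_i that estimates
# the global average gradient (1/N) * sum_j grad f_j. Assumptions
# (illustrative only): ring network, doubly stochastic mixing matrix W,
# quadratic local objectives f_i(u) = 0.5*a_i*u^2 + b_i*u, fixed step size.

N = 5                                    # number of agents
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=N)        # local curvatures
b = rng.uniform(-1.0, 1.0, size=N)       # local linear terms

def grad(i, u):
    """Gradient of the local objective f_i(u) = 0.5*a_i*u^2 + b_i*u."""
    return a[i] * u + b[i]

# Doubly stochastic mixing matrix for a ring: average with both neighbors.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros(N)                                   # local iterates
g_old = np.array([grad(i, x[i]) for i in range(N)])
y = g_old.copy()                                  # gradient trackers

step = 0.1
for _ in range(200):
    x = W @ x - step * y                          # consensus + descent step
    g_new = np.array([grad(i, x[i]) for i in range(N)])
    y = W @ y + g_new - g_old                     # tracker update
    g_old = g_new

# All agents should agree on the minimizer of (1/N) * sum_i f_i,
# which for these quadratics is u* = -sum(b) / sum(a).
print(x, -b.sum() / a.sum())
```

The tracker update y ← W y + ∇f_new − ∇f_old keeps the average of the y_i equal to the average of the current local gradients; the paper's counter-examples show that iterations of this kind can diverge once the ∇f_i lose Lipschitz continuity, which is what motivates MAGENTA.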

04/08/2018
Distributed Non-Convex First-Order Optimization and Information Processing: Lower Complexity Bounds and Rate Optimal Algorithms
We consider a class of distributed non-convex optimization problems ofte...

04/27/2022
Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective
Distributed algorithms have been playing an increasingly important role ...

03/27/2019
Decomposition of non-convex optimization via bi-level distributed ALADIN
Decentralized optimization algorithms are important in different context...

06/28/2021
Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization
This paper proposes a distributed stochastic algorithm with variance red...

11/07/2020
A fast randomized incremental gradient method for decentralized non-convex optimization
We study decentralized non-convex finite-sum minimization problems descr...

10/30/2019
Linear Speedup in Saddle-Point Escape for Decentralized Non-Convex Optimization
Under appropriate cooperation protocols and parameter choices, fully dec...