D-SPIDER-SFO: A Decentralized Optimization Algorithm with Faster Convergence Rate for Nonconvex Problems

11/28/2019
by Taoxing Pan, et al.

Decentralized optimization algorithms have attracted intensive interest recently, as they exhibit a more balanced communication pattern than centralized ones, which is especially valuable when solving large-scale machine learning problems. The Stochastic Path-Integrated Differential Estimator stochastic first-order method (SPIDER-SFO) nearly achieves the algorithmic lower bound in certain regimes for nonconvex problems. However, whether a decentralized algorithm can achieve a convergence rate similar to that of SPIDER-SFO has remained unclear. To tackle this problem, we propose a decentralized variant of SPIDER-SFO, called decentralized SPIDER-SFO (D-SPIDER-SFO). We show that D-SPIDER-SFO achieves the same order of gradient computation cost as its centralized counterpart, namely O(ϵ^-3) stochastic gradient evaluations for finding an ϵ-approximate first-order stationary point. To the best of our knowledge, D-SPIDER-SFO achieves state-of-the-art computational cost for solving nonconvex optimization problems over decentralized networks. Experiments on different network configurations demonstrate the efficiency of the proposed method.
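For intuition, below is a minimal Python sketch of a decentralized SPIDER-style loop of the kind the abstract describes: each node periodically restarts a variance-reduced gradient estimator from a fresh stochastic gradient, otherwise applies the path-integrated correction evaluated on the same minibatch at the current and previous iterates, and mixes its iterate with its neighbors' through a doubly stochastic gossip matrix. The oracle `grad`, the mixing matrix `W`, and all hyperparameters (`gamma`, `q`, batch sizes) are hypothetical placeholders; this is a sketch of the general SPIDER-plus-gossip template under those assumptions, not the paper's exact D-SPIDER-SFO update or its parameter choices.

```python
import numpy as np

def d_spider_sfo_sketch(grad, x0, W, gamma=0.01, q=10, T=100,
                        batch_size=32, n_samples=1000, seed=0):
    """Illustrative decentralized SPIDER-style loop (a sketch, not the paper's exact method).

    grad(i, x, batch) -- stochastic gradient of node i's local loss at x
    x0 -- (n_nodes, dim) initial iterates, one row per node
    W  -- (n_nodes, n_nodes) doubly stochastic gossip (mixing) matrix
    """
    rng = np.random.default_rng(seed)
    n, _ = x0.shape
    x, x_prev = x0.copy(), x0.copy()
    v = np.zeros_like(x0)  # per-node variance-reduced gradient estimators
    for t in range(T):
        for i in range(n):
            batch = rng.integers(0, n_samples, size=batch_size)
            if t % q == 0:
                # Restart: re-estimate the gradient from a fresh batch.
                v[i] = grad(i, x[i], batch)
            else:
                # Path-integrated correction on the *same* minibatch,
                # evaluated at both the current and the previous iterate.
                v[i] += grad(i, x[i], batch) - grad(i, x_prev[i], batch)
        x_prev = x.copy()
        # Gossip averaging with neighbors, then a local descent step.
        x = W @ x - gamma * v
    return x
```

The O(ϵ^-3) rate of SPIDER-type methods stems from the path-integrated correction: between restarts, the estimator reuses past gradient information, so each iteration only pays for a small-minibatch gradient difference rather than a large variance-controlling batch.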


