Stochastic Gradient Methods with Compressed Communication for Decentralized Saddle Point Problems

by Chhavi Sharma, et al.

We propose two stochastic gradient algorithms to solve a class of saddle-point problems in a decentralized setting (without a central server). The proposed algorithms are the first to achieve sub-linear/linear computation and communication complexities, using stochastic gradient and stochastic variance-reduced gradient oracles respectively, with compressed information exchange, for non-smooth strongly-convex strongly-concave saddle-point problems in the decentralized setting. Our first algorithm is a Restart-based Decentralized Proximal Stochastic Gradient method with Compression (C-RDPSG) for general stochastic settings. We provide rigorous theoretical guarantees for C-RDPSG, with gradient computation and communication complexities of order 𝒪((1+δ)^4 (1/L^2) κ_f^2 κ_g^2 (1/ϵ)) to achieve an ϵ-accurate saddle-point solution, where δ denotes the compression factor, κ_f and κ_g denote the condition numbers of the objective function and the communication graph respectively, and L denotes the smoothness parameter of the smooth part of the objective function. Next, we present a Decentralized Proximal Stochastic Variance Reduced Gradient algorithm with Compression (C-DPSVRG) for the finite-sum setting, which exhibits gradient computation and communication complexities of order 𝒪((1+δ) κ_f^2 κ_g log(1/ϵ)). Extensive numerical experiments show competitive performance of the proposed algorithms and support the theoretical results obtained.
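To make the "compressed information exchange" idea concrete, the sketch below shows one decentralized gradient step in which nodes gossip-average *compressed* copies of their iterates instead of full vectors. This is a generic illustration under assumed choices (top-k sparsification as the contractive compressor, a doubly stochastic mixing matrix `W`), not the paper's specific C-RDPSG/C-DPSVRG updates; the compression factor δ in the abstract quantifies the error such an operator introduces.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsification: keep the k largest-magnitude entries, zero the rest.

    A standard contractive compressor used for illustration; the paper's
    delta parameter bounds the relative error of compressors of this kind.
    """
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest |entries|
    out[idx] = v[idx]
    return out

def decentralized_compressed_step(X, W, grads, lr, k):
    """One round: gossip-average compressed iterates, then take a local step.

    X     : (n_nodes, dim) local iterates
    W     : (n_nodes, n_nodes) doubly stochastic mixing matrix of the graph
    grads : (n_nodes, dim) local (stochastic) gradients
    """
    X_compressed = np.stack([top_k(x, k) for x in X])  # what neighbors actually receive
    X_mixed = W @ X_compressed                         # gossip averaging over the graph
    return X_mixed - lr * grads                        # local gradient update
```

With `k` equal to the full dimension the compressor is the identity and the update reduces to plain decentralized gradient descent; shrinking `k` trades communication volume against the extra error that the restart/variance-reduction machinery in the paper is designed to control.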
