Switch and Conquer: Efficient Algorithms By Switching Stochastic Gradient Oracles For Decentralized Saddle Point Problems

09/02/2023
by   Chhavi Sharma, et al.

We consider a class of non-smooth strongly convex-strongly concave saddle point problems in a decentralized setting without a central server. To solve a consensus formulation of problems in this class, we develop an inexact primal-dual hybrid gradient (inexact PDHG) procedure that allows generic gradient computation oracles to update the primal and dual variables. We first investigate the performance of inexact PDHG with the stochastic variance reduced gradient (SVRG) oracle. Our numerical study uncovers a notable phenomenon: the iterates of inexact PDHG with the SVRG oracle make only conservative progress during the initial phase of updates. To tackle this, we develop a simple and effective switching idea: a generalized stochastic gradient (GSG) computation oracle is employed to hasten the iterates' progress toward a saddle point solution during the initial phase, followed by a switch to the SVRG oracle at an appropriate juncture. The proposed algorithm, named Decentralized Proximal Switching Stochastic Gradient method with Compression (C-DPSSG), is proven to converge to an ε-accurate saddle point solution at a linear rate. Beyond delivering highly accurate solutions, our study reveals that exploiting the best convergence phases of the GSG and SVRG oracles makes C-DPSSG well suited for obtaining low- to medium-accuracy solutions faster, which is useful in certain applications. Numerical experiments on two benchmark machine learning applications show C-DPSSG's competitive performance, validating our theoretical findings. The code used in the experiments is available at https://github.com/chhavisharma123/C-DPSSG-CDC2023.
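To make the switching idea concrete, below is a minimal single-machine sketch in Python of oracle switching on a toy strongly convex-strongly concave problem. This is not the authors' C-DPSSG: it omits the decentralized communication, compression, and proximal/consensus machinery, and the problem data, step sizes, and switching point (`switch_at`) are hypothetical choices made purely for illustration. It shows only the core mechanism described in the abstract: plain (generalized) stochastic gradient steps in the initial phase, then SVRG-style variance-reduced gradient estimates built from periodically refreshed snapshots.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy strongly convex-strongly concave saddle point problem (hypothetical):
#   min_x max_y  (1/n) * sum_i 0.5*(a_i @ x - b_i)**2
#                + 0.5*lam*||x||^2 + y @ (B @ x) - 0.5*mu*||y||^2
n, d, m = 200, 10, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
B = rng.normal(size=(m, d)) / np.sqrt(m)
lam, mu = 1.0, 1.0

def grad_i(i, x, y):
    """Per-sample gradients of the saddle objective w.r.t. x and y."""
    gx = (A[i] @ x - b[i]) * A[i] + B.T @ y + lam * x
    gy = B @ x - mu * y
    return gx, gy

def full_grad(x, y):
    """Full-batch gradients, used for SVRG snapshots."""
    gx = A.T @ (A @ x - b) / n + B.T @ y + lam * x
    gy = B @ x - mu * y
    return gx, gy

def switch_and_conquer(T=3000, switch_at=500, eta=0.05, snapshot_every=n):
    """Sketch of the oracle-switching idea: GSG steps early, SVRG after.

    `switch_at` stands in for the switching point; in the paper it comes
    from the convergence analysis, here it is a fixed constant.
    """
    x, y = np.zeros(d), np.zeros(m)
    x_snap, y_snap = x.copy(), y.copy()
    fgx, fgy = full_grad(x_snap, y_snap)
    for t in range(T):
        i = rng.integers(n)
        if t < switch_at:
            # Phase 1: generalized stochastic gradient (GSG) oracle --
            # plain stochastic gradients with a decaying step size.
            gx, gy = grad_i(i, x, y)
            step = eta / np.sqrt(t + 1)
        else:
            # Phase 2: SVRG oracle -- variance-reduced estimates built
            # from a periodically refreshed snapshot.
            if (t - switch_at) % snapshot_every == 0:
                x_snap, y_snap = x.copy(), y.copy()
                fgx, fgy = full_grad(x_snap, y_snap)
            gx_i, gy_i = grad_i(i, x, y)
            sx_i, sy_i = grad_i(i, x_snap, y_snap)
            gx, gy = gx_i - sx_i + fgx, gy_i - sy_i + fgy
            step = eta  # constant step size in the linear-rate phase
        x -= step * gx  # primal descent step
        y += step * gy  # dual ascent step
    return x, y

if __name__ == "__main__":
    x, y = switch_and_conquer()
    gx, gy = full_grad(x, y)
    print("gradient norms at output:", np.linalg.norm(gx), np.linalg.norm(gy))
```

The two phases mirror the behavior reported above: the cheap GSG steps move the iterates quickly at the start, while the SVRG phase with its constant step size supplies the fast local convergence needed for high accuracy.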

