Revisiting the Primal-Dual Method of Multipliers for Optimisation over Centralised Networks

07/19/2021
by Guoqiang Zhang, et al.

The primal-dual method of multipliers (PDMM) was originally designed for solving a decomposable optimisation problem over a general network. In this paper, we revisit PDMM for optimisation over a centralised network. We first note that the recently proposed FedSplit algorithm [1] implements PDMM over a centralised network. In [1], Inexact FedSplit (i.e., gradient-based FedSplit) was also studied both empirically and theoretically. We identify the cause of the reported poor performance of Inexact FedSplit: improper initialisation of the gradient operations at the client side. To fix this issue, we propose two versions of Inexact PDMM, referred to as gradient-based PDMM (GPDMM) and accelerated GPDMM (AGPDMM). AGPDMM accelerates GPDMM at the cost of transmitting twice as many parameters from the server to each client per iteration. We provide a new convergence bound for GPDMM for a class of convex optimisation problems; our bounds are tighter than those derived for Inexact FedSplit. We also examine the update expressions of AGPDMM and SCAFFOLD to identify their similarities. We find that when the number K of gradient steps per iteration at the client side is K=1, both AGPDMM and SCAFFOLD reduce to vanilla gradient descent under a proper parameter setup. Experimental results indicate that AGPDMM converges faster than SCAFFOLD when K>1, while GPDMM converges slightly more slowly than SCAFFOLD.
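To make the client-server structure concrete, here is a minimal sketch of a FedSplit/PDMM-style iteration over a centralised network, in which each client's proximal step is approximated by K warm-started gradient steps, in the spirit of the inexact variants discussed above. The quadratic local objectives, step sizes, and variable names are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch (illustrative, not the paper's exact algorithm) of a
# FedSplit/PDMM-style iteration on a centralised (server-client) network.
# Each client's prox is approximated by K gradient steps ("inexact").
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, s, K = 5, 10, 0.5, 5

# Each client i holds a local objective f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((20, dim)) for _ in range(n_clients)]
b = [rng.standard_normal(20) for _ in range(n_clients)]

def grad_f(i, x):
    # Gradient of the local objective f_i.
    return A[i].T @ (A[i] @ x - b[i])

def inexact_prox(i, v, x0, lr=0.01):
    # Approximate prox_{s f_i}(v) = argmin_x f_i(x) + (1/(2s))||x - v||^2
    # with K gradient steps, warm-started at x0. The warm start (rather than
    # restarting from scratch each outer iteration) stands in for the
    # "proper initialisation" fix; this reading is an assumption.
    x = x0.copy()
    for _ in range(K):
        x -= lr * (grad_f(i, x) + (x - v) / s)
    return x

z = [np.zeros(dim) for _ in range(n_clients)]      # per-client auxiliary variables
x_loc = [np.zeros(dim) for _ in range(n_clients)]  # warm-start memory per client
for t in range(300):
    z_bar = np.mean(z, axis=0)                     # server: average, then broadcast
    for i in range(n_clients):                     # clients (conceptually in parallel):
        x_loc[i] = inexact_prox(i, 2 * z_bar - z[i], x_loc[i])
        z[i] += 2 * (x_loc[i] - z_bar)             # Peaceman-Rachford reflection step

print("consensus estimate:", np.round(np.mean(x_loc, axis=0)[:4], 3))
```

The warm start inside inexact_prox, which reuses each client's previous local iterate instead of reinitialising the K gradient steps at every outer iteration, is one plausible reading of the initialisation issue described in the abstract.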
