Communication-Censored Linearized ADMM for Decentralized Consensus Optimization

09/15/2019
by Weiyu Li, et al.

In this paper, we propose a communication- and computation-efficient algorithm to solve a convex consensus optimization problem defined over a decentralized network. A notable existing algorithm for this problem is the alternating direction method of multipliers (ADMM), in which at every iteration every node updates its local variable by combining neighboring variables and solving an optimization subproblem. The proposed algorithm, called COmmunication-censored Linearized ADMM (COLA), leverages a linearization technique to reduce the per-iteration computation cost of ADMM and uses a communication-censoring strategy to alleviate the communication cost. Specifically, COLA introduces successive linearization approximations of the local cost functions so that the resulting computation is first-order and lightweight. Since the linearization technique slows down the convergence speed, COLA further adopts the communication-censoring strategy to avoid transmitting less informative messages: a node is allowed to transmit only if the distance between its current local variable and the one it previously transmitted exceeds a censoring threshold. COLA is proven to converge when the local cost functions have Lipschitz continuous gradients and the censoring thresholds are summable. When the local cost functions are further strongly convex, we establish the linear (sublinear) convergence rate of COLA, given that the censoring threshold decays to 0 linearly (sublinearly). Numerical experiments corroborate the theoretical findings and demonstrate the favorable communication-computation tradeoff of COLA.
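
As a reading aid, the sketch below combines the two mechanisms the abstract describes: first-order (linearized) local updates in place of an ADMM subproblem, and a censoring rule that suppresses a node's transmission when its state has moved less than a decaying threshold since its last broadcast. This is a simplified illustration under stated assumptions (an averaging-based consensus penalty, a hypothetical grads/adjacency interface, and a geometrically decaying threshold), not the exact COLA recursion from the paper.

```python
import numpy as np

def censored_linearized_consensus(grads, x0, adjacency, steps=200,
                                  alpha=0.05, c=1.0, tau0=1.0, rho=0.9):
    """Illustrative communication-censored, linearized consensus loop.

    grads     : list of callables; grads[i](x) is node i's local gradient.
    x0        : common initial point (1-D numpy array).
    adjacency : adjacency[i] is the (nonempty) list of neighbors of node i;
                the graph is assumed connected.
    tau0*rho^k : censoring threshold at iteration k (linear decay, so the
                 thresholds are summable, matching the convergence condition
                 stated in the abstract).
    """
    n = len(grads)
    x = np.tile(x0, (n, 1)).astype(float)    # local primal variables
    x_hat = x.copy()                          # last transmitted copies
    lam = np.zeros_like(x)                    # dual-like disagreement trackers
    for k in range(steps):
        tau = tau0 * rho ** k
        x_new = np.empty_like(x)
        for i in range(n):
            avg = x_hat[adjacency[i]].mean(axis=0)  # combine *transmitted* neighbor states
            # Linearized step: one gradient evaluation replaces the ADMM subproblem.
            x_new[i] = x[i] - alpha * (grads[i](x[i]) + lam[i] + c * (x[i] - avg))
        for i in range(n):
            # Censoring: broadcast only if the state moved more than tau since
            # the last transmission; otherwise neighbors keep reusing x_hat[i].
            if np.linalg.norm(x_new[i] - x_hat[i]) > tau:
                x_hat[i] = x_new[i].copy()
        for i in range(n):
            avg = x_hat[adjacency[i]].mean(axis=0)
            lam[i] += c * (x_new[i] - avg)    # accumulate consensus disagreement
        x = x_new
    return x.mean(axis=0)
```

For instance, setting grads[i] = lambda x, A=A_i, b=b_i: A.T @ (A @ x - b) runs a censored decentralized least-squares solve; with strongly convex local costs and the geometric threshold decay above, the censored iterates track the uncensored ones, which is the regime in which the paper establishes linear convergence.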

Related research

02/05/2022
Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting
The challenge of communication-efficient distributed optimization has at...

09/29/2020
Distributed ADMM with Synergetic Communication and Computation
In this paper, we propose a novel distributed alternating direction meth...

09/14/2020
Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM
In this paper, we propose a communication-efficiently decentralized mach...

05/28/2019
Distributed Linear Model Clustering over Networks: A Tree-Based Fused-Lasso ADMM Approach
In this work, we consider improving the model estimation efficiency by ...

10/07/2018
Recycled ADMM: Improve Privacy and Accuracy with Less Computation in Distributed Algorithms
Alternating direction method of multipliers (ADMM) is a powerful method t...

05/11/2017
Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization
We consider the stochastic composition optimization problem proposed in ...

08/04/2022
QC-ODKLA: Quantized and Communication-Censored Online Decentralized Kernel Learning via Linearized ADMM
This paper focuses on online kernel learning over a decentralized networ...
