
A Block-wise, Asynchronous and Distributed ADMM Algorithm for General Form Consensus Optimization

by Rui Zhu et al. (Wuhan University)

Many machine learning models, including those with non-smooth regularizers, can be formulated as consensus optimization problems and solved by the alternating direction method of multipliers (ADMM). Many recent efforts have been made to develop asynchronous distributed ADMM to handle large amounts of training data. However, all existing asynchronous distributed ADMM methods are based on full model updates and require locking all global model parameters to handle concurrency, which essentially serializes the updates from different workers. In this paper, we present a novel block-wise, asynchronous and distributed ADMM algorithm that allows different blocks of model parameters to be updated in parallel. The lock-free, block-wise algorithm may greatly speed up sparse optimization problems, a common scenario in practice, in which most model updates modify only a subset of all decision variables. We theoretically prove the convergence of the proposed algorithm to stationary points for non-convex general form consensus problems with possibly non-smooth regularizers. We implement the proposed ADMM algorithm on the Parameter Server framework and demonstrate its convergence and near-linear speedup as the number of workers increases.
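To make the consensus formulation concrete, the following is a minimal sketch of classical (synchronous, full-model) consensus ADMM applied to a distributed lasso problem, the kind of non-smooth baseline the paper's block-wise asynchronous algorithm generalizes. This is not the paper's algorithm; all names (`A_blocks`, `b_blocks`, `lam`, `rho`) and the lasso instance are illustrative assumptions. Each worker i holds a local copy x_i and data (A_i, b_i), a server maintains the global variable z, and the constraint x_i = z couples them.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def consensus_admm_lasso(A_blocks, b_blocks, lam=0.1, rho=1.0, iters=200):
    """Synchronous consensus ADMM for
       min_x  sum_i 1/2 ||A_i x - b_i||^2 + lam * ||x||_1,
    rewritten with local copies x_i and a global z, subject to x_i = z.
    Illustrative sketch only, not the paper's block-wise algorithm."""
    N = len(A_blocks)
    d = A_blocks[0].shape[1]
    z = np.zeros(d)
    xs = [np.zeros(d) for _ in range(N)]   # local primal copies
    us = [np.zeros(d) for _ in range(N)]   # scaled dual variables
    # Cache inverses of (A_i^T A_i + rho I) for the closed-form x-updates.
    facts = [np.linalg.inv(A.T @ A + rho * np.eye(d)) for A in A_blocks]
    for _ in range(iters):
        # Worker updates: x_i = argmin 1/2||A_i x - b_i||^2
        #                              + rho/2 ||x - z + u_i||^2
        for i, (A, b) in enumerate(zip(A_blocks, b_blocks)):
            xs[i] = facts[i] @ (A.T @ b + rho * (z - us[i]))
        # Server update: proximal step on the l1 regularizer at the average.
        z = soft_threshold(
            np.mean([x + u for x, u in zip(xs, us)], axis=0),
            lam / (N * rho))
        # Dual updates track violation of the consensus constraint x_i = z.
        for i in range(N):
            us[i] += xs[i] - z
    return z

# Tiny noiseless example: three workers, sparse ground truth.
rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
A_blocks = [rng.normal(size=(20, 5)) for _ in range(3)]
b_blocks = [A @ x_true for A in A_blocks]
z = consensus_admm_lasso(A_blocks, b_blocks)
```

In this synchronous baseline the server must gather every x_i before updating the full z, which is exactly the serialization the paper targets: its block-wise scheme instead lets workers update (and the server aggregate) individual blocks of z asynchronously and without a global lock.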




Related research:

- Asynchronous Stochastic Proximal Methods for Nonconvex Nonsmooth Optimization
- Asynchronous ADMM for Distributed Non-Convex Optimization in Power Systems
- A Model Parallel Proximal Stochastic Gradient Algorithm for Partially Asynchronous Systems
- A Distributed Algorithm for Measure-valued Optimization with Additive Objective
- DJAM: distributed Jacobi asynchronous method for learning personal models
- Adaptive Uncertainty-Weighted ADMM for Distributed Optimization
- Asynchronous Distributed Learning with Sparse Communications and Identification