A Provably Communication-Efficient Asynchronous Distributed Inference Method for Convex and Nonconvex Problems

03/16/2019
by   Jineng Ren, et al.

This paper proposes and analyzes a communication-efficient distributed optimization framework for general nonconvex nonsmooth signal processing and machine learning problems under an asynchronous protocol. At each iteration, worker machines compute gradients of a known empirical loss function using their own local data, and a master machine solves a related minimization problem to update the current estimate. We prove that for nonconvex nonsmooth problems, the proposed algorithm converges at a sublinear rate in the number of communication rounds, matching the best theoretical rate achievable for this class of problems. Linear convergence is established without any statistical assumptions on the local data for problems whose composite loss functions have strongly convex smooth parts. Extensive numerical experiments verify that the proposed approach indeed improves, sometimes significantly, over other state-of-the-art algorithms in terms of total communication efficiency.
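The master/worker protocol described above is easy to picture in code. Below is a minimal single-process sketch in Python with NumPy: workers report gradients of their local empirical losses at possibly stale iterates, and the master updates the estimate by solving a small regularized minimization problem (here a proximal step with a closed-form soft-thresholding solution). The least-squares-plus-l1 problem instance, the staleness model, and all parameter values are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch only: the problem instance (least squares + l1),
# the staleness model, and all parameters below are assumptions made for
# this example, not details from the paper.

rng = np.random.default_rng(0)
n_workers, n_samples, dim = 4, 50, 20
lam, mu = 0.1, 5.0    # l1 weight and master's proximal penalty (assumed;
                      # mu chosen large enough for a stable step size)
max_delay, n_rounds = 3, 300

# Each worker i privately holds a local empirical loss 0.5*||A_i x - b_i||^2.
A = [rng.standard_normal((n_samples, dim)) for _ in range(n_workers)]
x_true = np.zeros(dim)
x_true[:5] = 1.0
b = [Ai @ x_true + 0.1 * rng.standard_normal(n_samples) for Ai in A]

def local_gradient(i, x):
    """Gradient of worker i's local empirical loss at iterate x."""
    return A[i].T @ (A[i] @ x - b[i]) / n_samples

def soft_threshold(v, tau):
    """Closed-form prox of tau*||.||_1; stands in for the master's
    'related minimization problem' when the nonsmooth part is l1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(dim)
history = [x.copy()]          # past iterates, used to simulate staleness
for t in range(n_rounds):
    # Asynchronous protocol: each worker evaluates its gradient at a
    # randomly delayed (stale) copy of the master's iterate.
    grads = []
    for i in range(n_workers):
        delay = rng.integers(0, min(max_delay, len(history)))
        grads.append(local_gradient(i, history[-1 - delay]))
    g_avg = np.mean(grads, axis=0)
    # Master step: argmin_z <g_avg, z> + lam*||z||_1 + (mu/2)*||z - x||^2,
    # which reduces to soft-thresholding around a gradient step.
    x = soft_threshold(x - g_avg / mu, lam / mu)
    history.append(x.copy())

print("recovered support:", np.nonzero(np.abs(x) > 0.05)[0])
```

The master's inner problem is deliberately simple here; the point is the communication pattern the abstract describes: one gradient-sized message per worker per round and one updated estimate back, which is what the communication-round accounting in the paper's rates counts.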

Related research

03/28/2018
ASY-SONATA: Achieving Geometric Convergence for Distributed Asynchronous Optimization
Can one obtain a geometrically convergent algorithm for distributed asyn...

10/17/2018
A Proximal Zeroth-Order Algorithm for Nonconvex Nonsmooth Problems
In this paper, we focus on solving an important class of nonconvex optim...

09/02/2017
Communication-efficient Algorithm for Distributed Sparse Learning via Two-way Truncation
We propose a communicationally and computationally efficient algorithm f...

12/10/2018
Asynchronous Distributed Learning with Sparse Communications and Identification
In this paper, we present an asynchronous optimization algorithm for dis...

07/21/2023
Robust Fully-Asynchronous Methods for Distributed Training over General Architecture
Perfect synchronization in distributed machine learning problems is inef...

09/22/2020
Improving Convergence for Nonconvex Composite Programming
High-dimensional nonconvex problems are popular in today's machine learn...

03/26/2018
DJAM: distributed Jacobi asynchronous method for learning personal models
Processing data collected by a network of agents often boils down to sol...
