A Model Parallel Proximal Stochastic Gradient Algorithm for Partially Asynchronous Systems

10/19/2018
by Rui Zhu, et al.

Large models with millions or even billions of parameters are prevalent in modern machine learning applications such as deep learning and recommender systems, and parallel algorithms have become an essential tool for large-scale machine learning jobs. In this paper, we propose a model parallel proximal stochastic gradient algorithm, AsyB-ProxSGD, which handles large models through model parallel blockwise updates while simultaneously handling large amounts of training data through proximal stochastic gradient descent (ProxSGD). In our algorithm, worker nodes communicate with the parameter servers asynchronously, and each worker performs a proximal stochastic gradient update for only one block of model parameters during each iteration. Our proposed algorithm generalizes ProxSGD to the asynchronous and model parallel setting. We prove that AsyB-ProxSGD achieves a convergence rate of O(1/√K) to stationary points of nonconvex problems under constant minibatch sizes, where K is the total number of block updates. This rate matches the best-known convergence rates for a wide range of gradient-like algorithms. Furthermore, we show that when the number of workers is bounded by O(K^(1/4)), we can expect AsyB-ProxSGD to achieve linear speedup as the number of workers increases. We implement the proposed algorithm on MXNet and demonstrate its convergence behavior and near-linear speedup on a real-world dataset involving both a large model size and a large amount of data.
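To make the per-block update concrete, below is a minimal single-process sketch in Python of a blockwise proximal stochastic gradient step. It is not the paper's MXNet implementation: the least-squares loss, the L1 regularizer (whose proximal operator is elementwise soft-thresholding), the minibatch size, and the dict standing in for the parameter server are all illustrative assumptions.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def block_gradient(x, X, y, block, batch=32):
    # Minibatch gradient of 0.5 * ||X x - y||^2 / batch, restricted to one block.
    idx = np.random.choice(X.shape[0], size=batch, replace=False)
    residual = X[idx] @ x - y[idx]
    return X[idx][:, block].T @ residual / batch

def blockwise_prox_sgd_step(params, X, y, block, lr=0.01, lam=1e-3):
    # One worker iteration: gradient step on a single block, then the prox.
    g = block_gradient(params["x"], X, y, block)
    params["x"][block] = soft_threshold(params["x"][block] - lr * g, lr * lam)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1000, 200)), rng.normal(size=1000)
    params = {"x": np.zeros(200)}                # stand-in for the parameter server
    blocks = np.array_split(np.arange(200), 4)   # four parameter blocks
    for k in range(400):
        blockwise_prox_sgd_step(params, X, y, blocks[k % len(blocks)])

In the asynchronous setting described above, each worker would instead pull a possibly stale copy of the parameters from the servers, compute the stochastic gradient for its assigned block on a minibatch, and push only that block's proximal update back.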


