Stochastic Optimization from Distributed, Streaming Data in Rate-limited Networks

04/25/2017
by Matthew Nokleby, et al.

Motivated by machine learning applications in networks of sensors, internet-of-things (IoT) devices, and autonomous agents, we propose techniques for distributed stochastic convex learning from high-rate data streams. The setup involves a network of nodes, each of which receives a stream of data at a constant rate, that collaboratively solve a stochastic convex optimization problem over rate-limited communication links. To this end, we present and analyze two algorithms, termed distributed stochastic approximation mirror descent (D-SAMD) and accelerated distributed stochastic approximation mirror descent (AD-SAMD), which are based on two stochastic variants of mirror descent. The main collaborative step in the proposed algorithms is approximate averaging of the local, noisy subgradients using distributed consensus. While distributed consensus is well suited to collaborative learning, its use over rate-limited links perturbs the subgradient averages, which may slow or prevent convergence. Our main contributions are: (i) bounds on the convergence rates of D-SAMD and AD-SAMD in terms of the number of nodes, the network topology, and the ratio of the data streaming rate to the communication rate, and (ii) sufficient conditions for order-optimum convergence of D-SAMD and AD-SAMD. In particular, we show that there exist regimes in which AD-SAMD achieves order-optimum convergence at slower communication rates than D-SAMD. This contrasts with the centralized setting, in which accelerated mirror descent yields only a modest improvement over regular mirror descent for stochastic composite optimization. Finally, we demonstrate the effectiveness of the proposed algorithms through numerical experiments.
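To make the main collaborative step concrete, below is a minimal Python sketch of one D-SAMD-style round under simplifying assumptions not taken from the paper: a Euclidean mirror map (so the prox step reduces to a gradient step), a fixed doubly stochastic mixing matrix W, and a toy quadratic loss. The function names (consensus_average, d_samd_step), the mixing weights, and the step-size schedule are illustrative choices, not the authors' exact algorithm, which additionally accounts for quantized, rate-limited links.

import numpy as np

def consensus_average(grads, W, num_rounds):
    # grads: (n_nodes, dim) array of local noisy subgradients, one row per node.
    # W: (n_nodes, n_nodes) doubly stochastic mixing matrix matched to the
    #    network topology (W[i, j] > 0 only for connected node pairs).
    # num_rounds: consensus iterations permitted between data arrivals,
    #    governed by the communication-to-streaming rate ratio.
    for _ in range(num_rounds):
        grads = W @ grads  # each node mixes its estimate with its neighbors'
    return grads

def d_samd_step(x, grads, W, num_rounds, step_size):
    # One round: consensus-average the local subgradients, then take a mirror
    # step (Euclidean mirror map, so the prox reduces to a gradient step).
    avg = consensus_average(grads, W, num_rounds)
    return x - step_size * avg

# Toy usage: 4 nodes on a ring, quadratic loss with a common minimizer.
rng = np.random.default_rng(0)
n_nodes, dim = 4, 3
ring = np.roll(np.eye(n_nodes), 1, axis=0) + np.roll(np.eye(n_nodes), -1, axis=0)
W = 0.5 * np.eye(n_nodes) + 0.25 * ring  # doubly stochastic ring weights
x = np.zeros((n_nodes, dim))             # per-node iterates
x_star = np.ones(dim)                    # minimizer of the expected loss
for t in range(1, 201):
    # Each arriving sample yields a noisy subgradient at the local iterate.
    grads = (x - x_star) + 0.1 * rng.standard_normal((n_nodes, dim))
    x = d_samd_step(x, grads, W, num_rounds=2, step_size=1.0 / np.sqrt(t))
print(np.linalg.norm(x - x_star, axis=1))  # per-node error after 200 samples

In this sketch, num_rounds plays the role of the rate ratio analyzed in the paper: more consensus rounds per arriving sample yield better approximations of the network-wide subgradient average, at the cost of requiring faster communication links.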

