Online Distributed Learning with Quantized Finite-Time Coordination

07/13/2023
by Nicola Bastianello, et al.

In this paper we consider online distributed learning problems, in which a set of agents must cooperatively train a learning model from streaming data distributed across the network. Unlike federated learning, the proposed approach does not rely on a central server, but only on peer-to-peer communications among the agents. This setting arises when data cannot be moved to a centralized location for privacy, security, or cost reasons. To overcome the absence of a central server, we propose a distributed algorithm that relies on a quantized, finite-time coordination protocol to aggregate the locally trained models. Furthermore, our algorithm allows for the use of stochastic gradients during local training, computed on a randomly sampled subset of the local training data, which makes the proposed algorithm more efficient and scalable than traditional gradient descent. We analyze the performance of the proposed algorithm in terms of the mean distance from the online solution, and we present numerical results for a logistic regression task.
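To make the setup concrete, here is a minimal Python sketch of the kind of scheme the abstract describes: each agent takes a local stochastic-gradient step on a freshly sampled mini-batch of streaming logistic-regression data, then the agents aggregate their models over a peer-to-peer graph. Everything here is an illustrative assumption rather than the paper's algorithm: a plain quantized consensus loop (`quantized_average`, with a uniform quantizer and a fixed number of rounds) stands in for the quantized finite-time coordination protocol analyzed in the paper, and the ring graph, step sizes, and synthetic data model are arbitrary choices.

```python
import numpy as np

def quantize(x, step=1e-3):
    """Uniform quantizer: round each entry to the nearest multiple of `step`."""
    return step * np.round(x / step)

def quantized_average(models, neighbors, rounds=30, eps=0.3, step=1e-3):
    """Approximate average consensus over a peer-to-peer graph with quantized
    messages; a simple stand-in for the paper's finite-time protocol.
    `models[i]` is agent i's parameter vector, `neighbors[i]` its peers."""
    x = [m.copy() for m in models]
    for _ in range(rounds):
        q = [quantize(xi, step) for xi in x]   # agents exchange quantized models
        x = [xi + eps * sum(q[j] - q[i] for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

def logistic_grad(w, X, y):
    """Stochastic gradient of the logistic loss on a sampled mini-batch."""
    s = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (s - y) / len(y)

rng = np.random.default_rng(0)
n_agents, dim, batch, lr = 5, 10, 8, 0.1
# Ring communication graph: each agent talks only to its two neighbors.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}
w = [np.zeros(dim) for _ in range(n_agents)]
w_true = np.ones(dim)                           # ground-truth model for the stream

for t in range(100):
    for i in range(n_agents):
        # New streaming mini-batch of synthetic logistic-regression data.
        X = rng.normal(size=(batch, dim))
        y = (rng.random(batch) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)
        w[i] -= lr * logistic_grad(w[i], X, y)  # local stochastic-gradient step
    w = quantized_average(w, neighbors)         # peer-to-peer aggregation

print(np.linalg.norm(w[0] - w_true))            # agent 0's distance from the target
</pre>
```

The per-agent distance printed at the end loosely mirrors the mean-distance-from-the-online-solution metric analyzed in the paper, computed here against a static target for simplicity rather than a time-varying online solution.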

Related research

12/31/2020 · Bayesian Federated Learning over Wireless Networks
Federated learning is a privacy-preserving and distributed training meth...

06/09/2023 · Communication-Efficient Zeroth-Order Distributed Online Optimization: Algorithm, Theory, and Applications
This paper focuses on a multi-agent zeroth-order online optimization pro...

08/19/2021 · On Accelerating Distributed Convex Optimizations
This paper studies a distributed multi-agent convex optimization problem...

04/07/2021 · Optimal CPU Scheduling in Data Centers via a Finite-Time Distributed Quantized Coordination Mechanism
In this paper we analyze the problem of optimal task scheduling for data...

09/17/2019 · Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
The present paper develops a novel aggregated gradient approach for dist...

12/05/2022 · Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
This study considers a federated learning setup where cost-sensitive and...

04/14/2018 · When Edge Meets Learning: Adaptive Control for Resource-Constrained Distributed Machine Learning
Emerging technologies and applications including Internet of Things (IoT...
