Collective Online Learning via Decentralized Gaussian Processes in Massive Multi-Agent Systems

05/23/2018
by Trong Nghia Hoang, et al.

Distributed machine learning (ML) is a modern computation paradigm that divides its workload into independent tasks that can be executed simultaneously by multiple machines (i.e., agents) for better scalability. However, a typical distributed system is usually implemented with a central server that collects data statistics from multiple independent machines operating on different subsets of data to build a global analytic model. This centralized communication architecture exposes a single point of failure and places severe bottlenecks on the server's communication and computation capacities, since the server must process a growing volume of communication from a crowd of learning agents. To mitigate these bottlenecks, this paper introduces a novel Collective Online Learning Gaussian Process framework for massive distributed systems that allows each agent to build its own local model, which can be exchanged and combined efficiently with others via peer-to-peer communication to converge on a global model of higher quality. Empirical results on both synthetic and real-world datasets consistently demonstrate the efficiency of our framework.
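For intuition, here is a minimal, hypothetical sketch of the kind of decentralized fusion the abstract describes: each agent summarizes its private data as additive sufficient statistics of a sparse Gaussian process built on a shared set of inducing inputs, and agents then average those statistics via pairwise gossip until every agent (approximately) holds the network-wide model. The squared-exponential kernel, the DTC-style sparse approximation, the ring gossip schedule, and all names below are illustrative assumptions, not the paper's actual COOL-GP algorithm.

```python
# Sketch: decentralized sparse-GP fusion via gossip averaging (illustrative,
# not the paper's COOL-GP algorithm). Agents share inducing inputs Z and
# exchange additive sufficient statistics (A_i, b_i) peer-to-peer.
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential kernel matrix between 1-D input vectors a, b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
n_agents, n_per_agent, noise = 8, 50, 0.1
Z = np.linspace(-5, 5, 20)                 # shared inducing inputs (assumed)
Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))    # jitter for numerical stability

# Each agent summarizes its private data as (A_i, b_i). These statistics are
# additive: their sum over all agents defines the centralized sparse GP.
A, b = [], []
for _ in range(n_agents):
    x = rng.uniform(-5, 5, n_per_agent)
    y = np.sin(x) + noise * rng.standard_normal(n_per_agent)
    Kzx = rbf(Z, x)
    A.append(Kzx @ Kzx.T / noise**2)
    b.append(Kzx @ y / noise**2)

# Randomized gossip on a ring topology: each round, two neighboring agents
# average their statistics. Pairwise averaging preserves the network-wide
# sum, so every agent converges to the global average.
for _ in range(200):
    i = rng.integers(n_agents)
    j = (i + 1) % n_agents
    A[i] = A[j] = 0.5 * (A[i] + A[j])
    b[i] = b[j] = 0.5 * (b[i] + b[j])

# Any agent can now predict with the fused model (average * n_agents = sum).
x_test = np.linspace(-5, 5, 100)
Kxz = rbf(x_test, Z)
Sigma = np.linalg.inv(Kzz + n_agents * A[0])   # DTC posterior precision
mu = Kxz @ (Sigma @ (n_agents * b[0]))         # fused predictive mean
print("max |error| vs sin(x):", np.abs(mu - np.sin(x_test)).max())
```

Because the local summaries are fixed-size and additive, each peer-to-peer exchange costs the same regardless of how much raw data an agent holds, which is what removes the central server's communication bottleneck in this style of architecture.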
