Communication-Efficient and Decentralized Multi-Task Boosting while Learning the Collaboration Graph

01/24/2019
by   Valentina Zantedeschi, et al.

We study the decentralized machine learning scenario where many users collaborate to learn personalized models based on (i) their local datasets and (ii) a similarity graph over the users' learning tasks. Our approach trains nonlinear classifiers in a multi-task boosting manner without exchanging personal data and with low communication costs. When background knowledge about task similarities is not available, we propose to jointly learn the personalized models and a sparse collaboration graph through an alternating optimization procedure. We analyze the convergence rate, memory consumption and communication complexity of our decentralized algorithms, and demonstrate the benefits of our approach compared to competing techniques on synthetic and real datasets.
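The alternating optimization idea in the abstract (fix the graph, update the personalized models; fix the models, update a sparse collaboration graph) can be illustrated with a toy sketch. This is not the paper's algorithm: for simplicity it uses linear least-squares models instead of boosted nonlinear classifiers, a graph-smoothness penalty, and a top-k nearest-neighbor rule for the sparse graph update; all function names and parameters are illustrative.

```python
import numpy as np

def alternating_mtl(Xs, ys, n_rounds=5, lam=0.5, topk=1, lr=0.1, steps=100):
    """Toy alternating optimization: personalized linear models plus a
    sparse collaboration graph (illustrative sketch, not the paper's method)."""
    n, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((n, d))                               # one weight vector per user
    G = (np.ones((n, n)) - np.eye(n)) / max(n - 1, 1)  # start from a uniform graph
    for _ in range(n_rounds):
        # Step 1: fix the graph, update each personalized model by gradient
        # descent on local squared loss + lam * sum_j G[i,j] ||w_i - w_j||^2.
        for _ in range(steps):
            W_prev = W.copy()
            for i in range(n):
                resid = Xs[i] @ W_prev[i] - ys[i]
                grad = Xs[i].T @ resid / len(ys[i])
                grad += 2 * lam * ((W_prev[i] - W_prev) * G[i][:, None]).sum(axis=0)
                W[i] = W_prev[i] - lr * grad
        # Step 2: fix the models, rebuild a sparse graph linking each user
        # to the top-k users with the most similar model parameters.
        dist = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)
        G = np.zeros((n, n))
        for i in range(n):
            G[i, np.argsort(dist[i])[:topk]] = 1.0 / topk
    return W, G

# Three users: 0 and 1 share the same underlying task, 2 has the opposite one.
rng = np.random.default_rng(0)
w_shared, w_other = np.array([1.0, -1.0]), np.array([-1.0, 1.0])
Xs = [rng.normal(size=(30, 2)) for _ in range(3)]
ys = [Xs[0] @ w_shared, Xs[1] @ w_shared, Xs[2] @ w_other]
W, G = alternating_mtl(Xs, ys)
# the learned sparse graph links users 0 and 1, who share a task
```

In a truly decentralized deployment, Step 1 would run locally on each user's device with only model parameters (never raw data) exchanged along graph edges, which is what keeps communication costs low.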

Related research

- 12/21/2022, "Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs": Decentralized and federated learning algorithms face data heterogeneity ...
- 04/25/2019, "Decentralized Multi-Task Learning Based on Extreme Learning Machines": In multi-task learning (MTL), related tasks learn jointly to improve gen...
- 02/10/2022, "Adaptive and Robust Multi-task Learning": We study the multi-task learning problem that aims to simultaneously ana...
- 03/24/2022, "FedGradNorm: Personalized Federated Gradient-Normalized Multi-Task Learning": Multi-task learning (MTL) is a novel framework to learn several tasks si...
- 11/09/2020, "BayGo: Joint Bayesian Learning and Information-Aware Graph Optimization": This article deals with the problem of distributed machine learning, in ...
- 05/27/2022, "A Decentralized Collaborative Learning Framework Across Heterogeneous Devices for Personalized Predictive Analytics": In this paper, we propose a Similarity-based Decentralized Knowledge Dis...
- 06/16/2023, "Structured Cooperative Learning with Graphical Model Priors": We study how to train personalized models for different tasks on decentr...
