Federated Multi-Task Learning

05/30/2017
by Virginia Smith, et al.

Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for distributed multi-task learning. The resulting method achieves significant speedups compared to alternatives in the federated setting, as we demonstrate through simulations on real-world federated datasets.
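To make the setting concrete, the sketch below illustrates the general idea of federated multi-task learning described above: each device keeps its own personalized model (one task per device), the models are coupled through a shared regularizer maintained by the server, and devices contribute uneven amounts of local work per round, standing in for stragglers and heterogeneous hardware. This is a minimal NumPy illustration under simplifying assumptions (a mean-coupling regularizer, synthetic linear-regression data, and the names lam, w_bar, local step counts are all illustrative); it is not the authors' MOCHA method, which instead solves per-device dual subproblems to a device-specific approximation quality with convergence guarantees.

    import numpy as np

    rng = np.random.default_rng(0)
    n_devices, d, n_per_device, lam, lr = 5, 10, 50, 0.5, 0.05

    # Synthetic per-device data: related but distinct linear models,
    # standing in for non-IID data across devices.
    base = rng.normal(size=d)
    data = []
    for _ in range(n_devices):
        w_true = base + 0.3 * rng.normal(size=d)
        X = rng.normal(size=(n_per_device, d))
        y = X @ w_true + 0.1 * rng.normal(size=n_per_device)
        data.append((X, y))

    W = np.zeros((n_devices, d))   # one personalized model per device (task)

    for rnd in range(200):
        w_bar = W.mean(axis=0)     # server-side summary broadcast to devices
        for t, (X, y) in enumerate(data):
            # Variable local work per round mimics systems heterogeneity:
            # slow devices (stragglers) contribute fewer local steps.
            for _ in range(rng.integers(1, 6)):
                resid = X @ W[t] - y
                grad = X.T @ resid / n_per_device + lam * (W[t] - w_bar)
                W[t] -= lr * grad

    # Each device ends with its own model, coupled to the others through
    # the shared regularizer rather than forced into one global model.
    mse = np.mean([np.mean((X @ W[t] - y) ** 2)
                   for t, (X, y) in enumerate(data)])
    print(f"average per-device training MSE: {mse:.4f}")

The key design point the example is meant to convey is that multi-task learning fits the statistical side of federated learning (each device learns a distinct but related model), while tolerating uneven per-round work is what the systems-aware side of the paper addresses.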


Related research

- Variational Federated Multi-Task Learning (06/14/2019)
- Data Poisoning Attacks on Federated Machine Learning (04/19/2020)
- Federated Multi-Task Learning for Competing Constraints (12/08/2020)
- Efficient Reinforcement Learning in Resource Allocation Problems Through Permutation Invariant Multi-task Learning (02/18/2021)
- Multi-Task Distributed Learning using Vision Transformer with Random Patch Permutation (04/07/2022)
- RHFedMTL: Resource-Aware Hierarchical Federated Multi-Task Learning (06/01/2023)
- Distributed Primal-Dual Optimization for Online Multi-Task Learning (04/02/2020)
