Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation

01/04/2021
by Su Wang, et al.

The conventional federated learning (FedL) architecture distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server. FedL ignores two important characteristics of contemporary wireless networks, however: (i) the network may contain heterogeneous communication/computation resources, and (ii) there may be significant overlaps in devices' local data distributions. In this work, we develop a novel optimization methodology that jointly accounts for these factors via intelligent device sampling complemented by device-to-device (D2D) offloading. Our optimization aims to select the best combination of sampled nodes and data offloading configuration to maximize FedL training accuracy subject to realistic constraints on the network topology and device capabilities. Theoretical analysis of the D2D offloading subproblem yields new FedL convergence bounds and an efficient sequential convex optimizer. Building on this result, we develop a sampling methodology based on graph convolutional networks (GCNs) that learns the relationship between network attributes, sampled nodes, and resulting offloading to maximize FedL accuracy. Through evaluation on real-world datasets and network measurements from our IoT testbed, we find that, while sampling fewer than 5 devices, our methodology substantially outperforms conventional FedL in both trained model accuracy and required resource utilization.
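As a rough illustration of the sampling idea described above (not the paper's exact method), the sketch below scores candidate devices with a single graph-convolutional layer over the D2D network graph and selects the top-k scores as the sampled set. The weight matrix here is a random placeholder; in a full pipeline it would be trained so that the selected devices and their offloading configuration maximize FedL accuracy.

```python
import numpy as np

def gcn_sample(adj, feats, k, seed=0):
    """Score devices with one GCN layer and return the k highest-scoring.

    adj   : (n, n) adjacency matrix of the D2D network
    feats : (n, d) per-device attributes (e.g., compute capacity, channel quality)
    k     : number of devices to sample for FedL
    """
    n, d = feats.shape
    # Normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
    a_hat = adj + np.eye(n)
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt

    rng = np.random.default_rng(seed)
    w = rng.standard_normal((d, 1))          # placeholder weights (untrained)
    scores = np.tanh(a_norm @ feats @ w)     # (n, 1) per-device scores
    return np.argsort(scores.ravel())[-k:]   # indices of the top-k devices

# Example: 6 devices with 3 attributes each; sample 3 of them.
adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 1, 1, 0]], dtype=float)
feats = np.arange(18, dtype=float).reshape(6, 3)
selected = gcn_sample(adj, feats, k=3)
print(sorted(selected.tolist()))
```

The normalization step is the standard GCN propagation rule; the top-k selection stands in for the learned sampling policy, and the offloading configuration would be optimized separately for the chosen set.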


Related research

- Two Timescale Hybrid Federated Learning with Cooperative D2D Local Model Aggregations (03/18/2021)
- Federated Learning Beyond the Star: Local D2D Model Consensus with Global Cluster Sampling (09/07/2021)
- From Federated Learning to Fog Learning: Towards Large-Scale Distributed Machine Learning in Heterogeneous Wireless Networks (06/07/2020)
- Embedding Alignment for Unsupervised Federated Learning via Smart Data Exchange (08/04/2022)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks (02/07/2022)
- Network-Aware Optimization of Distributed Learning for Fog Computing (04/17/2020)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning (11/26/2021)
