Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks

02/07/2022
by Seyyedali Hosseinalipour, et al.

Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices, via iterative local updates (at the devices) and global aggregations (at the server). In this paper, we develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions: (i) Network, allowing decentralized cooperation among the devices via device-to-device (D2D) communications. (ii) Heterogeneity, interpreted at three levels: (ii-a) Learning: PSL considers a heterogeneous number of stochastic gradient descent (SGD) iterations with different mini-batch sizes at the devices; (ii-b) Data: PSL presumes a dynamic environment with data arrival and departure, where the distributions of the local datasets evolve over time, captured via a new metric for model/concept drift; (ii-c) Device: PSL considers devices with different computation and communication capabilities. (iii) Proximity, where devices have different distances to one another and to the access point. PSL considers the realistic scenario in which global aggregations are conducted with idle times between them to improve resource efficiency, and incorporates data dispersion and model dispersion with local model condensation into FedL. Our analysis sheds light on the notions of cold vs. warmed-up models and model inertia in distributed machine learning. We then propose network-aware dynamic model tracking to optimize the trade-off between model learning and resource efficiency, which we show is an NP-hard signomial programming problem, and we solve it by proposing a general optimization solver. Our numerical results reveal new findings on the interdependencies between the idle times separating consecutive global aggregations, model/concept drift, and the D2D cooperation configuration.
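To make the moving parts concrete, below is a minimal, self-contained sketch of the FedL skeleton that the abstract describes PSL extending: a heterogeneous number of local SGD iterations with per-device mini-batch sizes, a gossip-style D2D averaging step standing in for cooperative D2D aggregation, and a FedAvg-style weighted global aggregation at the server. This is an illustrative assumption, not the paper's algorithm or API; the names (`local_sgd`, `d2d_consensus`, `global_aggregate`) and the quadratic toy loss are hypothetical.

```python
import numpy as np

def local_sgd(w, X, y, num_iters, batch_size, lr=0.01):
    """Device-side update: `num_iters` mini-batch SGD steps on a toy
    least-squares loss. Both num_iters and batch_size differ per device."""
    w = w.copy()
    n = len(X)
    for _ in range(num_iters):
        idx = np.random.choice(n, size=min(batch_size, n), replace=False)
        Xb, yb = X[idx], y[idx]
        grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)  # mini-batch gradient
        w -= lr * grad
    return w

def d2d_consensus(models, adjacency, steps=1, alpha=0.5):
    """Gossip averaging over the D2D graph (a hypothetical stand-in for
    PSL's cooperative D2D aggregations): each device mixes its model with
    the mean of its neighbors' models."""
    models = [m.copy() for m in models]
    for _ in range(steps):
        mixed = []
        for i, m in enumerate(models):
            nbrs = [models[j] for j in adjacency[i]]
            if nbrs:
                m = (1 - alpha) * m + alpha * sum(nbrs) / len(nbrs)
            mixed.append(m)
        models = mixed
    return models

def global_aggregate(models, weights):
    """Server-side weighted average of device models (FedAvg-style)."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(wt * m for wt, m in zip(weights, models))

# Toy run: 3 devices with heterogeneous data sizes, local iteration
# counts K, and mini-batch sizes B, connected in a fully meshed D2D ring.
rng = np.random.default_rng(0)
d = 5
w_global = np.zeros(d)
datasets = [(rng.normal(size=(n, d)), rng.normal(size=n)) for n in (40, 80, 120)]
K = [2, 5, 10]   # heterogeneous local SGD iteration counts
B = [4, 16, 32]  # heterogeneous mini-batch sizes

for _round in range(20):  # global aggregation rounds
    locals_ = [local_sgd(w_global, X, y, k, b)
               for (X, y), k, b in zip(datasets, K, B)]
    locals_ = d2d_consensus(locals_, adjacency=[[1, 2], [0, 2], [0, 1]])
    w_global = global_aggregate(locals_, [len(X) for X, _ in datasets])
```

In PSL terms, the knobs this sketch exposes (how many local steps and D2D mixing rounds run between `global_aggregate` calls, and how long the server idles between them) are the kind of learning-vs.-resource-efficiency parameters the paper's network-aware dynamic model tracking optimizes.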

