Multi-Stage Hybrid Federated Learning over Large-Scale Wireless Fog Networks

07/18/2020
by Seyyedali Hosseinalipour, et al.

One of the popular methods for distributed machine learning (ML) is federated learning, in which devices train local models based on their datasets, which are in turn aggregated periodically by a server. In large-scale fog networks, the "star" learning topology of federated learning poses several challenges in terms of resource utilization. We develop multi-stage hybrid model training (MH-MT), a novel learning methodology for distributed ML in these scenarios. Leveraging the hierarchical structure of fog systems, MH-MT combines multi-stage parameter relaying with distributed consensus formation among devices in a hybrid learning paradigm across network layers. We theoretically derive the convergence bound of MH-MT with respect to the network topology, ML model, and algorithm parameters such as the rounds of consensus employed in different clusters of devices. We obtain a set of policies for the number of consensus rounds at different clusters to guarantee either a finite optimality gap or convergence to the global optimum. Subsequently, we develop an adaptive distributed control algorithm for MH-MT to tune the number of consensus rounds at each cluster of local devices over time to meet convergence criteria. Our numerical experiments validate the performance of MH-MT in terms of convergence speed and resource utilization.
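The hybrid round structure described above (local updates on each device, a tunable number of D2D consensus rounds within each cluster, then upward relaying of cluster models for global aggregation) can be illustrated with a minimal simulation. This is a hedged sketch, not the paper's implementation: the least-squares loss, the uniform mixing matrix `W`, the choice of device 0 as cluster head, and all function names (`local_sgd_step`, `consensus_rounds`, `mh_mt_round`) are illustrative assumptions, and only a single device layer with one aggregation stage is modeled.

```python
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    # One gradient step on a least-squares loss (illustrative stand-in
    # for each device's local ML update).
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def consensus_rounds(models, W, theta):
    # theta rounds of linear consensus: each device replaces its model
    # with a W-weighted mix of its neighbors' models. W is assumed
    # doubly stochastic so repeated mixing approaches the cluster average.
    M = np.stack(models)              # shape: (num_devices, dim)
    for _ in range(theta):
        M = W @ M
    return [M[i] for i in range(len(models))]

def mh_mt_round(clusters, w_global, theta_per_cluster, lr=0.1):
    # One hybrid round: local updates -> intra-cluster D2D consensus
    # -> cluster heads relay upward -> server aggregates.
    head_models, sizes = [], []
    for cluster, theta in zip(clusters, theta_per_cluster):
        models = [local_sgd_step(w_global.copy(), X, y, lr)
                  for X, y in cluster["data"]]
        models = consensus_rounds(models, cluster["W"], theta)
        head_models.append(models[0])   # device 0 acts as cluster head
        sizes.append(len(models))
    # Server aggregates relayed models, weighted by cluster size.
    return sum(s * m for s, m in zip(sizes, head_models)) / sum(sizes)
```

With a uniform mixing matrix (`W[i, j] = 1/n`), a single consensus round already yields the exact cluster average, so the relayed head model equals the cluster mean; sparser, topology-dependent mixing matrices would need more rounds, which is the trade-off the paper's per-cluster consensus-round policies control.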
