Multi-Stage Hybrid Federated Learning over Large-Scale Wireless Fog Networks

07/18/2020
by Seyyedali Hosseinalipour, et al.

One of the popular methods for distributed machine learning (ML) is federated learning, in which devices train local models based on their datasets, which are in turn aggregated periodically by a server. In large-scale fog networks, the "star" learning topology of federated learning poses several challenges in terms of resource utilization. We develop multi-stage hybrid model training (MH-MT), a novel learning methodology for distributed ML in these scenarios. Leveraging the hierarchical structure of fog systems, MH-MT combines multi-stage parameter relaying with distributed consensus formation among devices in a hybrid learning paradigm across network layers. We theoretically derive the convergence bound of MH-MT with respect to the network topology, ML model, and algorithm parameters such as the rounds of consensus employed in different clusters of devices. We obtain a set of policies for the number of consensus rounds at different clusters to guarantee either a finite optimality gap or convergence to the global optimum. Subsequently, we develop an adaptive distributed control algorithm for MH-MT to tune the number of consensus rounds at each cluster of local devices over time to meet convergence criteria. Our numerical experiments validate the performance of MH-MT in terms of convergence speed and resource utilization.
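The hybrid scheme described above — devices run local training, perform a few rounds of device-to-device consensus averaging within their cluster, and relay the resulting models up to a server for global aggregation — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's MH-MT algorithm: the synthetic linear-regression data, the ring consensus topology, the learning rate, and the fixed consensus counts are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5
w_true = np.arange(1.0, dim + 1.0)  # ground-truth model shared by all devices

def make_device_data(n_samples=20):
    """Synthetic least-squares data local to one device (illustrative)."""
    X = rng.normal(size=(n_samples, dim))
    y = X @ w_true + 0.01 * rng.normal(size=n_samples)
    return X, y

def local_sgd(w, X, y, lr=0.05, steps=5):
    """A few gradient steps on the device's local loss."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def consensus_rounds(models, rounds):
    """D2D consensus: each device averages with its ring neighbors."""
    models = [m.copy() for m in models]
    n = len(models)
    for _ in range(rounds):
        models = [(models[i] + models[(i - 1) % n] + models[(i + 1) % n]) / 3.0
                  for i in range(n)]
    return models

# 3 clusters of 4 devices each (sizes chosen arbitrarily for the sketch)
clusters = [[make_device_data() for _ in range(4)] for _ in range(3)]
w_global = np.zeros(dim)

for _ in range(30):  # global aggregation rounds
    heads = []
    for cluster in clusters:
        local = [local_sgd(w_global.copy(), X, y) for X, y in cluster]
        local = consensus_rounds(local, rounds=3)  # in-cluster D2D averaging
        heads.append(local[0])  # cluster head relays its near-consensus model
    w_global = np.mean(heads, axis=0)  # server-side aggregation
```

After consensus, any device's model approximates the cluster average, so relaying a single cluster head's parameters stands in for collecting every device's model — the resource-utilization benefit the abstract alludes to. MH-MT additionally tunes the number of consensus rounds per cluster over time, which this fixed-`rounds=3` sketch does not attempt.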

Related research:

03/18/2021
Two Timescale Hybrid Federated Learning with Cooperative D2D Local Model Aggregations
Federated learning has emerged as a popular technique for distributing m...

09/07/2021
Federated Learning Beyond the Star: Local D2D Model Consensus with Global Cluster Sampling
Federated learning has emerged as a popular technique for distributing m...

04/07/2022
Decentralized Event-Triggered Federated Learning with Heterogeneous Communication Thresholds
A recent emphasis of distributed learning research has been on federated...

07/04/2021
FedFog: Network-Aware Optimization of Federated Learning over Wireless Fog-Cloud Systems
Federated learning (FL) is capable of performing large distributed machi...

04/17/2020
Network-Aware Optimization of Distributed Learning for Fog Computing
Fog computing promises to enable machine learning tasks to scale to larg...

03/24/2021
The Gradient Convergence Bound of Federated Multi-Agent Reinforcement Learning with Efficient Communication
The paper considers a distributed version of deep reinforcement learning...