
To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices

by Liang Li et al.

Recent advances in machine learning, wireless communication, and mobile hardware promise to enable federated learning (FL) over massive numbers of mobile edge devices, opening new horizons for intelligent mobile applications. Despite these potential benefits, FL imposes heavy communication and computation burdens on participating devices, owing to periodic global synchronization and continuous local training, which poses great challenges for battery-constrained mobile devices. In this work, we aim to improve the energy efficiency of FL over mobile edge networks so as to accommodate heterogeneous participating devices without sacrificing learning performance. To this end, we develop a convergence-guaranteed FL algorithm that enables flexible communication compression. Guided by the derived convergence bound, we design a compression control scheme that balances the energy consumption of local computing (i.e., "working") and wireless communication (i.e., "talking") from a long-term learning perspective. In particular, the compression parameters are carefully chosen for each FL participant, adapting to its computing and communication environment. Extensive simulations on various datasets validate our theoretical analysis, and the results demonstrate the efficacy of the proposed scheme in energy saving.
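The abstract does not specify the compression operator used; as an illustrative sketch only, top-k gradient sparsification is one common compressor of the kind such schemes tune per device, where a client with a tight communication ("talking") budget transmits fewer entries. The function names `topk_compress` and `topk_decompress` below are hypothetical, not from the paper.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries of a gradient; zero the rest.

    Returns (indices, values), so only 2k numbers are transmitted
    instead of the full dense vector.
    """
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, vals, shape):
    """Rebuild a dense gradient from the sparse (indices, values) pair."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

# Example: a client compresses its local gradient before uploading.
# A smaller k means cheaper communication but a lossier update.
g = np.array([0.1, -2.0, 0.05, 3.0, -0.2])
idx, vals = topk_compress(g, k=2)           # transmit 2 of 5 entries
g_hat = topk_decompress(idx, vals, g.shape)  # server-side reconstruction
```

In a scheme like the one described, k (or an analogous compression parameter) would be chosen per participant from the convergence bound, trading the energy of extra local computing against the energy of transmitting more coordinates.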


Towards Energy Efficient Federated Learning over 5G+ Mobile Devices

The continuous convergence of machine learning algorithms, 5G and beyond...

FedGreen: Federated Learning with Fine-Grained Gradient Compression for Green Mobile Edge Computing

Federated learning (FL) enables devices in mobile edge computing (MEC) t...

Energy-Aware Federated Learning with Distributed User Sampling and Multichannel ALOHA

Distributed learning on edge devices has attracted increased attention w...

Distributed Learning Meets 6G: A Communication and Computing Perspective

With the ever-improving computing capabilities and storage capacities of...

Service Delay Minimization for Federated Learning over Mobile Devices

Federated learning (FL) over mobile devices has fostered numerous intrig...

AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices

In this work, we investigate the challenging problem of on-demand federa...

Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding

Mobile devices are indispensable sources of big data. Federated learning...