Energy-Efficient Federated Edge Learning with Joint Communication and Computation Design

02/29/2020
by Xiaopeng Mo, et al.

This paper studies a federated edge learning system in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the distributed training, we exploit a joint communication and computation design to improve the system energy efficiency, in which both the communication resource allocation for global ML-parameter aggregation and the computation resource allocation for local ML-parameter updates are jointly optimized. In particular, we consider two transmission protocols for edge devices to upload ML parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time-division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption at all edge devices over a finite training duration subject to a given training accuracy, by jointly optimizing the transmission power and rates at the edge devices for uploading ML parameters and their central processing unit (CPU) frequencies for local updates. We propose efficient algorithms that optimally solve the formulated energy minimization problems using techniques from convex optimization. Numerical results show that, compared with other benchmark schemes, the proposed joint communication and computation design significantly improves the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
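To make the structure of such a design concrete, the sketch below writes out a TDMA-style per-round energy-minimization problem using standard models from this literature: local-computation energy that scales with the square of the CPU frequency, and uplink energy given by transmit power times transmission time, with a Shannon-rate constraint for uploading the local model. All symbols ($K$ devices, effective capacitance $\kappa_k$, per-sample workload $C_k$ in CPU cycles, local data size $D_k$, model size $S$ in bits, bandwidth $B$, channel gain $g_k$, noise power spectral density $N_0$, round duration $T$) are generic placeholders and do not reflect the paper's exact notation or formulation.

% Hedged sketch of a TDMA-style per-round energy-minimization problem;
% the notation is illustrative and not taken from the paper itself.
\begin{align}
\min_{\{f_k,\, p_k,\, t_k\}} \quad
  & \sum_{k=1}^{K} \Big( \kappa_k C_k D_k f_k^{2} \;+\; p_k t_k \Big)
  && \text{(computation + communication energy)} \nonumber \\
\text{s.t.} \quad
  & t_k B \log_2\!\Big( 1 + \frac{p_k g_k}{N_0 B} \Big) \;\ge\; S,
  && \forall k \quad \text{(upload $S$ bits in slot $t_k$)} \nonumber \\
  & \frac{C_k D_k}{f_k} + \sum_{j=1}^{K} t_j \;\le\; T,
  && \forall k \quad \text{(local update + TDMA uploads fit in $T$)} \nonumber \\
  & f_k \ge 0, \quad p_k \ge 0, \quad t_k \ge 0,
  && \forall k. \nonumber
\end{align}

As written, the uplink terms are not jointly convex in $(p_k, t_k)$; a standard change of variables $q_k = p_k t_k$ makes both the uplink energy and the rate constraint jointly convex in $(q_k, t_k)$ (the rate becomes the perspective of a concave function), which is the kind of reformulation that allows such problems to be solved optimally with convex-optimization tools, as the abstract describes.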


