Wireless Distributed Edge Learning: How Many Edge Devices Do We Need?

11/22/2020
by Jaeyoung Song, et al.

We consider distributed machine learning at the wireless edge, where a parameter server builds a global model with the help of multiple wireless edge devices that perform computations on local dataset partitions. Edge devices transmit the results of their computations (updates of the current global model) to the server at a fixed rate over an error-prone wireless channel using orthogonal multiple access. In case of a transmission error, the undelivered packet is retransmitted until it is successfully decoded at the receiver. Leveraging the fundamental tradeoff between computation and communication in distributed systems, our aim is to determine how many edge devices are needed to minimize the average completion time while guaranteeing convergence. We provide upper and lower bounds on the average completion time and derive a necessary condition for adding edge devices in two asymptotic regimes, namely the large-dataset and high-accuracy regimes. Experiments on real datasets and numerical results confirm our analysis and substantiate our claim that the number of edge devices should be carefully selected for timely distributed edge learning.
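To make the computation-communication tradeoff concrete, the following Monte Carlo sketch (mine, not the paper's analysis) estimates the average per-round completion time as the number of edge devices grows. All quantities (dataset size, per-sample computation time, per-attempt transmission time, packet error probability) are hypothetical; the model simply assumes local computation shrinks as the dataset is split across more devices, while each device's update needs a geometric number of transmission attempts over the error-prone channel and the server must wait for the slowest device.

```python
import numpy as np

# Illustrative sketch of the computation-communication tradeoff, under my own
# assumptions rather than the paper's system model:
#  - a dataset of D samples is split evenly across n edge devices,
#  - local computation time is proportional to the partition size D/n,
#  - each update is sent at a fixed rate; a transmission attempt fails with
#    probability p_err, and failed packets are retransmitted until decoded,
#    so the number of attempts per device is geometric with mean 1/(1 - p_err),
#  - with orthogonal multiple access the devices transmit in parallel, and the
#    round ends when the slowest device's update is finally decoded.

rng = np.random.default_rng(0)

D = 60_000        # total dataset size (hypothetical)
t_comp = 1e-4     # seconds of local computation per sample (hypothetical)
t_tx = 0.05       # seconds per transmission attempt of one update (hypothetical)
p_err = 0.1       # packet error probability (hypothetical)
trials = 2_000    # Monte Carlo trials per configuration

def avg_round_time(n_devices: int) -> float:
    """Average time for the server to collect all n updates in one round."""
    comp = (D / n_devices) * t_comp                       # same for every device
    attempts = rng.geometric(1.0 - p_err, size=(trials, n_devices))
    comm = attempts * t_tx
    # Round completion is gated by the slowest successful transmission.
    return float(np.mean(comp + comm.max(axis=1)))

for n in [1, 2, 4, 8, 16, 32, 64, 128]:
    print(f"n = {n:3d} devices: avg round time ~ {avg_round_time(n):.3f} s")
```

Under these assumptions the computation term falls as 1/n while the wait for the slowest retransmitting device grows with n, so the average round time is minimized at an intermediate number of devices, which is the qualitative behavior the abstract points to.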

