Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air

01/03/2019
by Mohammad Mohammadi Amiri, et al.

We study collaborative machine learning at the wireless edge, where power- and bandwidth-limited devices (workers), each with a limited local dataset, implement distributed stochastic gradient descent (DSGD) over-the-air with the help of a parameter server (PS). Standard approaches inherently assume separate computation and communication, where local gradient estimates are compressed and communicated to the PS over the wireless multiple access channel (MAC). Following this digital approach, we introduce D-DSGD, in which the workers operate on the boundary of the MAC capacity region at each iteration of the DSGD algorithm, and employ gradient quantization and error accumulation to transmit their gradient estimates within the bit budget allowed by the employed power allocation. We then introduce an analog scheme, called A-DSGD, motivated by the additive nature of the wireless MAC. In A-DSGD, the workers first sparsify their gradient estimates (with error accumulation), and then project them to a lower-dimensional space imposed by the available channel bandwidth. These projections are transmitted directly over the MAC without employing any digital code. Numerical results show that A-DSGD converges much faster than D-DSGD thanks to its more efficient use of the limited bandwidth and the natural alignment of the gradient estimates over the MAC. The improvement is particularly compelling in the low-power and low-bandwidth regimes. We also observe that the performance of A-DSGD improves with the number of workers, while that of D-DSGD deteriorates, limiting the latter's ability to harness the computation power of a large number of edge devices. We highlight that the lack of quantization and channel encoding/decoding operations in A-DSGD further speeds up communication, making it very attractive for low-latency machine learning applications at the wireless network edge.
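The A-DSGD transmitter described above (error accumulation, top-k sparsification, then a linear projection matched to the channel bandwidth, with the MAC summing the workers' analog signals for free) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the i.i.d. Gaussian projection matrix, and the least-squares recovery at the PS are all assumptions made here for concreteness (a compressive-sensing solver would typically replace the least-squares step).

```python
import numpy as np

rng = np.random.default_rng(0)

d = 1000          # gradient dimension (assumed for illustration)
s = 50            # sparsity level: entries kept after top-k sparsification
bandwidth = 200   # available channel uses, i.e. the projected dimension
num_workers = 4

# Projection matrix shared by all workers (hypothetical choice:
# i.i.d. Gaussian entries, a standard compressive measurement).
A = rng.standard_normal((bandwidth, d)) / np.sqrt(bandwidth)

def a_dsgd_transmit(grad, error):
    """One worker's A-DSGD step: add accumulated error, keep the s
    largest-magnitude entries, project to the channel bandwidth."""
    g = grad + error                      # error accumulation
    idx = np.argsort(np.abs(g))[-s:]      # indices of top-s entries
    sparse = np.zeros_like(g)
    sparse[idx] = g[idx]
    new_error = g - sparse                # remainder carried to next round
    return A @ sparse, new_error          # analog symbols sent over the MAC

# Each worker transmits; the additive MAC superimposes the analog signals,
# so the PS receives (a noisy version of) their sum at no extra bandwidth cost.
grads = [rng.standard_normal(d) for _ in range(num_workers)]
errors = [np.zeros(d) for _ in range(num_workers)]
received = np.zeros(bandwidth)
for w in range(num_workers):
    tx, errors[w] = a_dsgd_transmit(grads[w], errors[w])
    received += tx

# The PS recovers an estimate of the *sum* gradient from the projection;
# min-norm least squares is used here as a stand-in for sparse recovery.
est_sum, *_ = np.linalg.lstsq(A, received, rcond=None)
```

Note that the PS only ever needs the sum of the workers' gradients, which is exactly what the additive MAC delivers; this is the "natural alignment" the abstract credits for A-DSGD's bandwidth efficiency.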


