Revisiting Analog Over-the-Air Machine Learning: The Blessing and Curse of Interference

07/25/2021
by Howard H. Yang, et al.

We study a distributed machine learning problem carried out by an edge server and multiple agents in a wireless network. The objective is to minimize a global function that is the sum of the agents' local loss functions, and the optimization is conducted via analog over-the-air model training. Specifically, each agent modulates its local gradient onto a set of waveforms and transmits them to the edge server simultaneously. From the received analog signal, the edge server extracts a noisy aggregated gradient, distorted by channel fading and interference, uses it to update the global model, and feeds the result back to all the agents for another round of local computation. Since electromagnetic interference generally exhibits a heavy-tailed nature, we model its statistics with the α-stable distribution. As a consequence, the aggregated gradient has infinite variance, which precludes conventional convergence analyses that rely on the existence of second-order moments. To circumvent this challenge, we take a new route to establish the convergence rate, as well as the generalization error, of the algorithm. Our analyses reveal a two-sided effect of interference on the overall training procedure. On the negative side, heavy-tailed noise slows down model training: the heavier the tail of the interference distribution, the slower the algorithm converges. On the positive side, heavy-tailed noise has the potential to increase the generalization power of the trained model: the heavier the tail, the better the model generalizes. This perhaps counterintuitive conclusion implies that the prevailing view of interference, namely that it is purely detrimental to the edge learning system, is outdated, and that we should seek new techniques that exploit, rather than merely mitigate, interference for better machine learning in wireless networks.
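To make the setting concrete, below is a minimal Python sketch of a single round of the analog over-the-air aggregation the abstract describes. It is not the authors' implementation: the agent count, model dimension, learning rate, Rayleigh fading model, and the quadratic stand-in for local gradients are all illustrative assumptions. The heavy-tailed interference is drawn from a symmetric α-stable distribution via scipy.stats.levy_stable; for a tail index α < 2 such noise has infinite variance, which is exactly why second-moment-based convergence arguments break down.

import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

NUM_AGENTS = 10   # assumed number of edge agents
DIM = 5           # assumed model dimension
ALPHA = 1.6       # tail index of the interference; ALPHA < 2 means
                  # infinite variance, smaller ALPHA means heavier tail
LR = 0.05         # assumed learning rate

def local_gradient(model, seed):
    """Stand-in for an agent's local gradient (a noisy quadratic here)."""
    g_rng = np.random.default_rng(seed)
    return model + 0.1 * g_rng.standard_normal(DIM)

model = rng.standard_normal(DIM)

# Each agent modulates its local gradient onto analog waveforms; the
# multiple-access channel superposes the transmissions "over the air",
# each scaled by an (assumed Rayleigh) fading coefficient.
fading = rng.rayleigh(scale=1.0, size=NUM_AGENTS)
grads = np.stack([local_gradient(model, s) for s in range(NUM_AGENTS)])
superposed = (fading[:, None] * grads).sum(axis=0)

# Electromagnetic interference modeled as symmetric alpha-stable noise
# (skewness beta = 0). Its realizations occasionally produce very large
# spikes, reflecting the heavy tail.
interference = levy_stable.rvs(ALPHA, 0.0, scale=0.1, size=DIM,
                               random_state=0)

# The server treats the received signal as a noisy aggregated gradient,
# rescales it, updates the global model, and would then broadcast the
# new model back to the agents for the next round.
noisy_grad = (superposed + interference) / NUM_AGENTS
model -= LR * noisy_grad

Running this for many rounds with decreasing ALPHA makes the trade-off in the abstract visible in miniature: the occasional large interference spikes slow the descent, yet they also perturb the iterates in a way the paper links to better generalization.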

Related research

04/15/2022
Server Free Wireless Federated Learning: Architecture, Algorithm, and Analysis
We demonstrate that merely analog transmissions and match filtering can ...

01/16/2020
One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning: Design and Convergence Analysis
Federated edge learning (FEEL) is a popular framework for model training...

08/20/2019
On Analog Gradient Descent Learning over Multiple Access Fading Channels
We consider a distributed learning problem over multiple access channel ...

07/26/2021
Accelerated Gradient Descent Learning over Multiple Access Fading Channels
We consider a distributed learning problem in a wireless network, consis...

02/20/2021
Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
Recent studies have provided both empirical and theoretical evidence ill...

06/17/2023
Edge Intelligence Over the Air: Two Faces of Interference in Federated Learning
Federated edge learning is envisioned as the bedrock of enabling intelli...

12/22/2022
Nonlinear consensus+innovations under correlated heavy-tailed noises: Mean square convergence rate and asymptotics
We consider distributed recursive estimation of consensus+innovations ty...
