Federated Over-the-Air Subspace Learning from Incomplete Data

02/28/2020
by   Praneeth Narayanamurthy, et al.

Federated learning refers to a distributed learning scenario in which users/nodes keep their data private and share only intermediate, locally computed iterates with the master node. The master, in turn, shares a global aggregate of these iterates with all the nodes at each iteration. In this work, we consider a wireless federated learning scenario where the nodes communicate to and from the master node via a wireless channel. Current and upcoming technologies such as 5G (and beyond) will operate mostly in a non-orthogonal multiple access (NOMA) mode, where transmissions from the users occupy the same bandwidth and interfere at the access point. These technologies naturally lend themselves to an "over-the-air" superposition whereby information received from the user nodes can be directly summed at the master node. However, over-the-air aggregation also means that the channel noise can corrupt the algorithm iterates at the time of aggregation at the master. This iteration noise introduces a novel set of challenges not previously studied in the literature, and it needs to be treated differently from the well-studied setting of noise or corruption in the dataset itself. In this work, we first study the subspace learning problem in a federated over-the-air setting. Subspace learning involves computing the subspace spanned by the top r singular vectors of a given matrix. We develop a federated over-the-air version of the power method (FedPM) and show that its iterates converge as long as (i) the channel noise is very small compared to the r-th singular value of the matrix, and (ii) the ratio between its (r+1)-th and r-th singular values is bounded by a constant less than one. The second important contribution of this work is to show how over-the-air FedPM can be used to obtain a provably accurate federated solution for subspace tracking in the presence of missing data.
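The setting above can be sketched numerically. In the sketch below, a data matrix with a clear spectral gap is split column-wise across nodes; each node computes its local product with the current subspace estimate, over-the-air aggregation sums these transmissions (with additive channel noise injected at the master), and the master orthonormalizes and rebroadcasts. This is a minimal illustration assuming a standard power-method update; the noise level `sigma_c`, the data split, and all parameter values are assumptions for illustration, not the paper's exact algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: an n x m matrix whose top-r singular values dominate,
# split column-wise across K nodes. Each node keeps its block private.
n, m, r, K = 50, 400, 3, 4
sigma_c = 1e-6  # channel-noise std, assumed small vs. the r-th singular value

# Build A with a spectral gap: sigma_{r+1}/sigma_r well below one.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
svals = np.zeros(n)
svals[:r] = np.linspace(10.0, 8.0, r)          # dominant singular values
svals[r:] = 0.5 * rng.random(n - r)            # small tail
A = U @ np.diag(svals) @ V[:, :n].T
node_data = np.array_split(A, K, axis=1)       # node k holds block A_k

# Master initializes a random orthonormal n x r estimate Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
for _ in range(30):
    # Each node computes A_k (A_k^T Q) locally; over-the-air superposition
    # sums the K transmissions, and channel noise corrupts the aggregate.
    agg = sum(Ak @ (Ak.T @ Q) for Ak in node_data)
    agg += sigma_c * rng.standard_normal(agg.shape)
    Q, _ = np.linalg.qr(agg)                   # master orthonormalizes, rebroadcasts

# Distance between the estimate and the true top-r left singular subspace.
U_true = U[:, :r]
err = np.linalg.norm(Q @ Q.T - U_true @ U_true.T, 2)
print(f"subspace error: {err:.2e}")
```

With the noise this small relative to the spectral gap, the iterates settle near the true subspace; raising `sigma_c` toward the r-th singular value visibly degrades the final error, matching condition (i) in the abstract.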


