User Scheduling for Federated Learning Through Over-the-Air Computation

by   Xiang Ma, et al.
Utah State University

A new machine learning (ML) technique termed federated learning (FL) aims to preserve data at the edge devices and to exchange only ML model parameters in the learning process. FL not only reduces the communication needs but also helps to protect local data privacy. Despite these advantages, FL can still experience large communication latency when there are massive edge devices connected to the central parameter server (PS) and/or millions of model parameters involved in the learning process. Over-the-air computation (AirComp) computes while transmitting by allowing multiple devices to send data simultaneously using analog modulation. To achieve good performance in FL through AirComp, user scheduling plays a critical role. In this paper, we investigate and compare different user scheduling policies, which are based on criteria such as wireless channel conditions and the significance of model updates. Receiver beamforming is applied to minimize the mean-square-error (MSE) distortion of the function aggregation result via AirComp. Simulation results show that scheduling based on the significance of model updates has smaller fluctuations in the training process, while scheduling based on channel conditions has an advantage in energy efficiency.






I Introduction

The availability of big data makes data-driven artificial intelligence applications such as image recognition and autonomous driving increasingly practical. Nowadays, advanced machine learning (ML) techniques usually comprise training and inference processes that work in a centralized manner. However, distributed devices such as smart sensors or unmanned aerial vehicles (UAVs) hold massive locally generated data and need to make real-time decisions, which makes it extremely difficult to transmit data for central processing through wireless channels. Thanks to their growing computation, storage, and power capacities, edge devices can perform ML tasks on locally collected raw data, which largely reduces the communication overhead and latency.

Although raw data is preserved and used locally and does not have to be uploaded to a central parameter server (PS), edge devices still need to coordinate with the PS to establish the global model. A new machine learning technique named federated learning (FL) helps address this issue [1]. FL keeps the collected data local and trains the ML model on edge devices. Only model parameters are transmitted to the PS, which aggregates them into the global model through averaging. There are usually a large number of edge devices connected to the PS, all contending for limited wireless bandwidth, so FL selects only a small subset of edge devices for the model update in each communication round [2, 3, 4]. Since the devices collect data from their local environments, the data on different devices can be heterogeneous, i.e., non-i.i.d. (not independent and identically distributed). It is therefore important to select the most relevant devices for the model update in each round based on certain scheduling criteria. In [4], three scheduling policies are proposed, i.e., random scheduling, round robin, and proportional fairness, which select users based on probability, group, and channel condition, respectively. That work considered channel conditions but neglected the data distribution across devices.

To achieve spectrum efficiency, advanced transmission techniques can be used for model parameter uploading. Non-orthogonal multiple access (NOMA) [5] allows multiple devices to share the channel and transmit data simultaneously, which reduces the aggregation latency compared with conventional time-based schemes. NOMA users transmit at different powers, and successive interference cancellation (SIC) is applied at the PS side. The authors in [6] investigated the performance of FL under NOMA and reported a 7x performance gain without loss of accuracy. However, NOMA does not provide a security feature, and the number of users that can transmit simultaneously is still limited by the decoding at the receiver side. In [7], FL via over-the-air computation (AirComp) is presented. It employs the superposition nature of the wireless multiple-access channel to aggregate the model parameters while transmitting. The PS deals with the aggregated model rather than individual ones and does not have to decode the received signals as in conventional NOMA transmission. AirComp is therefore not only communication efficient but also computation efficient. Additionally, since the PS cannot decode the received signal, it provides a security feature for FL: a dishonest PS cannot infer local data from the aggregated model.

Power control for AirComp in fading channels that minimizes the computation error is presented in [8, 9]. The authors in [10] evaluated the performance of AirComp under both digital and analog approaches. In [11], the learning rate optimization of federated learning under AirComp is explored. However, no existing work has considered user scheduling schemes for FL under AirComp, which are significant for improving FL performance.

In this work, we focus on FL via AirComp to improve both communication and computation efficiency. The scheme employs the superposition nature of the wireless multiple-access channel so that multiple edge devices can transmit their model parameters simultaneously, and the PS does not need to decode the aggregated analog signal. To minimize the aggregated signal error, a receiver beamforming design is applied. We explore different scheduling schemes, including a channel based one, a model update based one, and a hybrid one based on both channel conditions and model updates.

The rest of the paper is organized as follows. Section II introduces the system model, the AirComp scheme, and the problem formulation. Section III presents several different user scheduling policies. Simulation results are shown in Section IV. Section V concludes the paper.

II System Model

We consider an AirComp system with M edge devices, each with a single antenna, connected to a PS equipped with N antennas. Multiple edge devices are allowed to transmit simultaneously on the same channel. The number of edge devices participating in the model update in each communication round is limited in order to minimize the distortion error and maximize the testing model performance. Assume the maximum number of devices selected for transmission in each round is K under AirComp [7]. The main notations used in the paper are summarized in Table I.

Notation              Definition
M; K; W               Total number of edge devices connected to the PS; maximum number of edge devices participating in FL in each round; intermediate number of edge devices when considering both model update and channel condition
N; T; S               Number of antennas at the PS; total number of communication rounds; selected edge device set
x_{m,i}; y_{m,i}; w   Features of a data point sample on device m; corresponding label of the data point; parameter set describing the mapping from x_{m,i} to y_{m,i}
F(w); F_m(w); λ       Global loss function; local loss function; learning rate
D_m; |D_m|            Dataset on user m; cardinality of the dataset
h_m                   Channel vector of user m
b_m; s_m              Transmitter scaling factor of user m; normalized local update at one time slot
P_0; φ_m; ψ           Maximum transmit power; pre-processing function of user m; post-processing function at the PS
y; a; n               Received signal vector; receiver beamforming vector; additive noise
g; ĝ                  Summation result before post-processing; estimate of g
η                     Normalizing factor
TABLE I: Summary of Notations

II-A FL System

In FL, each edge device performs machine learning tasks using locally collected and stored data. For device m, data sample x_{m,i} has a label y_{m,i}. The model parameter set w captures the mapping from x_{m,i} to y_{m,i}. Each device executes stochastic gradient descent (SGD) updates to minimize the loss function f(w; x_{m,i}, y_{m,i}) that describes the loss of model parameters w at sample (x_{m,i}, y_{m,i}). The loss function at device m is given by

F_m(w) = (1/|D_m|) Σ_{i ∈ D_m} f(w; x_{m,i}, y_{m,i}),    (1)

where D_m is the local dataset on device m, |D_m| is the cardinality of D_m, and f(·) is the empirical loss function. The entire empirical loss function across the dataset D = ∪_{m=1}^{M} D_m can be written as

F(w) = Σ_{m=1}^{M} (|D_m|/|D|) F_m(w),    (2)

where |D| = Σ_{m=1}^{M} |D_m|. The global model parameters are obtained by averaging the aggregation result

w^{t+1} = Σ_{m ∈ S} (|D_m| / Σ_{m' ∈ S} |D_{m'}|) w_m^{t+1}.    (3)

To reduce the communication overhead, the local model update rather than the local model itself is uploaded. Thus the aggregation result can be written as

w^{t+1} = w^t + Σ_{m ∈ S} (|D_m| / Σ_{m' ∈ S} |D_{m'}|) Δw_m^t,    (4)

where Δw_m^t = w_m^{t+1} - w^t is defined as the local model update at device m.

Fig. 1: FL Model Update

Fig. 1 shows the FL update model.
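The weighted averaging of local model updates described above can be sketched in a few lines of NumPy. The function and variable names below are our own illustration, not from the paper; parameters are flattened into vectors for simplicity.

```python
import numpy as np

def aggregate_updates(w_global, updates, sizes):
    """FedAvg-style aggregation: weight each local update by its
    dataset cardinality |D_m| and add the weighted sum to the global model."""
    total = float(sum(sizes))
    delta = sum((n / total) * d for n, d in zip(sizes, updates))
    return w_global + delta

# toy usage: two devices with dataset sizes 10 and 30 (weights 0.25 and 0.75)
w = np.zeros(3)
u1 = np.array([1.0, 0.0, 2.0])   # local update of device 1
u2 = np.array([0.0, 3.0, 1.0])   # local update of device 2
w_new = aggregate_updates(w, [u1, u2], [10, 30])
```

Because the weights are normalized over the selected set only, the scheme works unchanged when just a subset of devices reports in a given round.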

II-B AirComp Scheme

AirComp performs transmission and computation simultaneously over the air. Unlike traditional orthogonal multiple access schemes, AirComp allows multiple transmissions via the same channel at the same time. It performs analog modulation and waveform superposition, and no individual decoding is needed at the receiver side. Since AirComp does not decode the signal at the PS side, the PS does not know the model parameters of any individual user. Thus it cannot infer local data information of individual users, providing a more secure transmission scheme.

Since the aggregation takes place during over-the-air transmission, the received signal at the PS is given by

y = Σ_{m ∈ S} h_m b_m s_m + n,    (5)

where h_m is the channel vector between device m and the PS, b_m is the transmitter scaling factor, s_m is the normalized local update at one time slot with zero mean and unit variance, i.e., E[s_m] = 0 and E[|s_m|²] = 1, and n is the noise vector. The transmit power constraint at device m is E[|b_m s_m|²] = |b_m|² ≤ P_0, where P_0 is the maximum transmit power.

The target function at the PS side that is computable over the air can be written as f = ψ(Σ_{m ∈ S} φ_m(s_m)), where φ_m(·) is the pre-processing function of user m, and ψ(·) is the post-processing function at the PS side. The weighted summation of the transmitted signals is

g = Σ_{m ∈ S} φ_m(s_m).    (6)

The received signal at the PS is y. The estimated value of g after receiver beamforming is

ĝ = (1/√η) a^H y,    (7)

where η is the normalizing factor and a is the receiver beamforming vector. The distortion of ĝ with respect to the target value g, which quantifies the AirComp performance, is measured by the mean-square-error (MSE) given by

MSE = E[|ĝ - g|²].    (8)

We choose the parameters b_m and a to minimize the MSE. Supposing the receiver beamforming vector a is given, the transmitter scaling factor can be selected by using a uniform-forcing transmitter as [12]

b_m = √η (a^H h_m)^H / |a^H h_m|².    (9)

The normalizing factor η can be calculated as

η = P_0 min_{m ∈ S} |a^H h_m|².    (10)

Then the corresponding MSE can be calculated as

MSE = σ² ‖a‖² / η = (σ²/P_0) max_{m ∈ S} ‖a‖² / |a^H h_m|².    (11)

To achieve the best performance, the following minimum mean square error problem is solved:

min_a max_{m ∈ S} ‖a‖² / |a^H h_m|².    (12)

It can be reformulated in a more tractable way as

min_a ‖a‖²  subject to  |a^H h_m|² ≥ 1, ∀ m ∈ S.    (13)

Eq. (13) is a quadratically constrained quadratic programming (QCQP) problem with non-convex constraints, which is still hard to solve. In [12], the same problem is solved by semidefinite programming (SDP), improved by successive convex approximation (SCA). After the receiver vector a is obtained, all other parameters can be calculated, and the minimum MSE can be obtained.
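The uniform-forcing design can be checked numerically. The NumPy sketch below (all variable names are our own, and the beamformer a is a fixed, non-optimized choice rather than the SDP/SCA solution) draws random channels, builds the transmitter scaling factors, and then runs one AirComp round; by construction every device's signal arrives with the same effective gain √η while respecting the power constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P0, sigma2 = 4, 8, 1.0, 0.01           # antennas, devices, max power, noise power

# i.i.d. Rayleigh channels h_m in C^N; a fixed (not optimized) receive beamformer a
h = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
a = np.ones(N) / np.sqrt(N)

f = h @ a.conj()                              # f_m = a^H h_m for each device m
eta = P0 * np.min(np.abs(f) ** 2)             # normalizing factor
b = np.sqrt(eta) * f.conj() / np.abs(f) ** 2  # uniform-forcing transmitter scaling

# every scaled signal arrives with identical effective gain: a^H h_m b_m = sqrt(eta),
# and |b_m|^2 <= P0 with equality for the worst channel
effective = f * b

# one AirComp round: y = sum_m h_m b_m s_m + n, estimate g_hat = a^H y / sqrt(eta)
s = rng.standard_normal(K)                    # unit-variance normalized updates
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = (b[:, None] * s[:, None] * h).sum(axis=0) + n
g_hat = np.vdot(a, y) / np.sqrt(eta)          # noisy estimate of g = sum_m s_m
```

With this scaling the residual error is the scaled receiver noise alone, which is why the objective reduces to minimizing ‖a‖² relative to the weakest effective channel gain.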

Algorithm 1 summarizes the SDP and SCA method to optimize the receiver vector.

1:  Solve the SDP relaxation of problem (13) to obtain A*
2:  if rank(A*) > 1 then
3:     Set iteration index l = 0
4:     Set a^(0) = √λ₁ v₁
5:     repeat
6:        Solve the SCA subproblem to obtain a^(l+1) and γ^(l+1); set l = l + 1
7:     until the convergence criterion is satisfied
8:  else
9:     Recover a* = √λ₁ v₁
10:  end if
Algorithm 1 Receiver Optimization by SDP and SCA

Here, A* is the matrix solution of the SDP relaxation, λ₁ is the largest eigenvalue of A*, v₁ is the corresponding eigenvector, and γ is the auxiliary variable introduced in the SCA subproblem.

Algorithm 2 summarizes the proposed FL process under the AirComp setting.

1:  Initialization: global model w^0, round t = 0.
2:  for each FL update round t = 1, ..., T do
3:     PS sends w^{t-1} to all users
4:     for each user m in parallel do
5:        Calculate local gradients ∇F_m(w^{t-1})
6:     end for
7:     PS selects K users based on the scheduling algorithm
8:     Selected users send gradients to the PS simultaneously via AirComp
9:     PS samples the received signal to get the aggregated model w^t
10:  end for
Algorithm 2 FL in AirComp

III User Scheduling Policies

There are usually a large number of edge devices connected to the PS. Although AirComp allows multiple users to upload their models simultaneously, the maximum number of users participating in the model update in each round is normally still smaller than the total number of users [7]. Here, we consider an FL system with a total of M devices connected to the PS, while K devices can be scheduled in each round, K < M. We propose three user scheduling policies: one considers channel conditions from the communication perspective, one considers the significance of the local model update from the computation perspective, and one considers both. Correspondingly, the three policies are named channel based scheduling, model update based scheduling, and hybrid scheduling.

III-A Channel Based Scheduling

Channel based scheduling selects the K users that have the highest channel gains, i.e.,

S = arg max_{S ⊂ {1,...,M}, |S| = K} Σ_{m ∈ S} ‖h_m‖₂,

where ‖h_m‖₂ is the ℓ₂-norm channel gain of device m. Before scheduling, each client needs to send a small amount of information to the PS so that the PS can perform channel estimation. Compared with the model gradient transmission, the time to transmit this small amount of information can be safely ignored.

Since multiple antennas are equipped at the PS, the channel gain is in vector form. From Eq. (11), a larger channel gain results in a smaller MSE when other parameters are fixed.

In this scheduling scheme, users do not start local computation unless they are selected. Thus, energy-constrained edge devices such as IoT devices can be more power efficient.

III-B Model Update Based Scheduling

This scheduling scheme uses the significance of the model update as the user selection criterion, evaluated by the ℓ₂-norm. Each edge device m, m = 1, ..., M, first computes its model update Δw_m and then sends the ℓ₂-norm of the update, ‖Δw_m‖₂, to the PS. The PS then selects the K devices with the largest values, that is,

S = arg max_{S ⊂ {1,...,M}, |S| = K} Σ_{m ∈ S} ‖Δw_m‖₂.

This scheme requires all users to perform local computation and send the ℓ₂-norms of their model updates to the PS. It causes energy dissipation at the unselected devices, and the transmissions from all users can also cause channel congestion. Devices with low computation capability may take a long time to finish the local computation and upload their model updates; such stragglers reduce system performance.

III-C Hybrid Scheduling

Channel gain and the significance of the model update can both affect the performance of FL, so both are considered in hybrid scheduling. The PS first selects W devices with the highest channel gains and then selects the K devices with the largest model updates among these W devices, K ≤ W ≤ M. In this strategy, the energy of unselected devices can be saved since only the W channel-selected devices need to perform local computation.

Channel based scheduling can help reduce computation needs at local devices while model update based scheduling can help improve the FL training performance. Hybrid scheduling intends to balance the tradeoff between the two.
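All three policies reduce to simple top-K selections. The NumPy sketch below captures only the selection logic, without the surrounding FL loop; function names and array layouts (one device per row) are our own illustration.

```python
import numpy as np

def channel_based(h, K):
    """Pick the K devices with the largest l2-norm channel gains ||h_m||_2."""
    return np.argsort(np.linalg.norm(h, axis=1))[-K:]

def update_based(updates, K):
    """Pick the K devices with the largest l2-norm model updates."""
    return np.argsort(np.linalg.norm(updates, axis=1))[-K:]

def hybrid(h, updates, K, W):
    """Keep the W best channels first, then the K largest updates among them."""
    cand = channel_based(h, W)
    best = np.argsort(np.linalg.norm(updates[cand], axis=1))[-K:]
    return cand[best]

# toy usage: 4 devices whose channel / update norms are easy to rank by eye
h = np.diag([1.0, 3.0, 2.0, 4.0])   # channel norms per device: 1, 3, 2, 4
u = np.diag([5.0, 1.0, 4.0, 2.0])   # update norms per device:  5, 1, 4, 2
```

Note how hybrid selection can return a different set than either pure policy: a device with the largest update may be excluded if its channel is too weak to survive the first stage.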

III-D Complexity Analysis

For each client, suppose the computation time to finish the ML task is t_cp, the communication time for PS channel estimation is t_ce, and the communication time to upload model gradients is t_up. The corresponding time complexity is summarized in Table II.

Channel Based Scheduling | Model Update Based Scheduling | Hybrid Scheduling
TABLE II: Complexity Analysis

IV Simulation Results

In this section, we present the performance of federated learning under AirComp with different user scheduling schemes. The channel parameters are given as follows. The M users are uniformly distributed in a disk-shaped cell, the transmit signal-to-noise ratio and the channel path loss exponent are fixed, and the PS is equipped with N antennas. In each communication round, the channel vector keeps constant for the same user while it varies across different users and/or different communication rounds. The learning task is trained using the MNIST (Modified National Institute of Standards and Technology) dataset with a fully connected neural network called LeNet-300-100, whose first hidden layer consists of 300 neurons and second hidden layer consists of 100 neurons. The hyperparameters are summarized in Table III. The learning stages are divided into two phases, namely the training phase and the testing phase. Similarly, the dataset is also split into two parts: 90% of it forms the training set and the rest forms the testing set. The testing accuracy is used to evaluate the learning performance. To make the proposed scheduling schemes more convincing, non-i.i.d. data [14] is used here, i.e., every user has a varying data size and distribution.
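A common way to emulate such non-i.i.d. data, in the spirit of [14], is a label-sorted shard partition: samples are sorted by label, cut into shards, and each user receives only a few shards, so each user sees only a couple of classes. Below is a NumPy sketch with illustrative names and a toy label array standing in for MNIST.

```python
import numpy as np

def noniid_shards(labels, num_users, shards_per_user=2, seed=0):
    """Sort sample indices by label, cut them into equal shards, and hand each
    user a few random shards, so every user sees only a few classes."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels, kind="stable")        # indices grouped by label
    num_shards = num_users * shards_per_user
    shards = np.array_split(order, num_shards)
    shard_ids = rng.permutation(num_shards)          # deal shards out at random
    return [np.concatenate([shards[i] for i in
            shard_ids[u * shards_per_user:(u + 1) * shards_per_user]])
            for u in range(num_users)]

# toy usage: 12 samples over 3 classes, split across 3 users
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
parts = noniid_shards(labels, num_users=3)
```

Every index is assigned exactly once, and with 2 shards per user each user holds at most 2 distinct classes, which is the heterogeneity the experiments rely on.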

Learning rate        0.01
Size                 10
Rounds (T)           60
Training set size    90%
Testing set size     10%
TABLE III: Hyperparameters
Fig. 2: Channel Based Scheduling

Fig. 2 shows the testing accuracy for channel based scheduling. Random channel scheduling selects clients with different channel conditions uniformly at random. Compared with random channel scheduling, channel based scheduling achieves a much higher testing accuracy during the updating process, but it experiences much larger fluctuations. This is because the MSE (defined in Eq. (8)) achieved by channel based scheduling is much smaller than that achieved by random scheduling, while the non-i.i.d. data causes the testing accuracy to drop in some rounds due to the inconsistency of the updated data. For random channel scheduling, the testing accuracy experiences smaller fluctuations because the impact of channel conditions outweighs the impact of the non-i.i.d. data distribution.

Fig. 3: Model Update Based Scheduling

Fig. 3 gives the testing results when the users with the largest model update values are selected. Compared with the scheduling that randomly selects model updates, the testing results of the scheduling that selects the largest model updates are much smoother, and the results of random scheduling are quite close to those of model update based scheduling. In FL, the gradients rather than the model parameters are uploaded here. As most of the gradient values are close to 0, there is not much difference between model update based scheduling and random scheduling [15].

Fig. 4: Hybrid Scheduling

In Fig. 4, the three scheduling schemes are compared. Model update based scheduling makes the testing result smoother, while channel based scheduling shows the lowest testing accuracy. Hybrid scheduling achieves performance that falls in between the two. In model update based scheduling, all edge devices perform local computing, and the devices with the largest model update values are scheduled for uploading. Thus the impact of the non-i.i.d. data is mitigated and the testing accuracy curve is quite smooth. However, model update based scheduling consumes more computation than the other two scheduling policies, since all edge devices need to perform local computing for the ML tasks and thus consume energy. Hybrid scheduling gives a good trade-off between testing accuracy performance and local device energy consumption.

V Conclusion

AirComp based FL is not only communication efficient, by allowing multiple devices to transmit simultaneously, but also computation efficient, since the FL server only needs the aggregated model parameters rather than the individual model parameters. To further investigate AirComp based FL performance, in this paper we proposed three different user scheduling policies, i.e., channel based scheduling, model update based scheduling, and a hybrid scheduling that considers both channel conditions and model update significance. Simulation results show that channel based scheduling has the lowest device computation needs but gives the lowest testing accuracy, while model update based scheduling gives the best testing accuracy but has the highest computation needs. Hybrid scheduling strikes a trade-off between the two.


This work was supported by the National Science Foundation under the grants NSF CNS-2007995 and EEC-1941524.