Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach

03/19/2020 · by Yi Liu et al., Nanyang Technological University

Existing traffic flow forecasting approaches based on deep learning models achieve excellent success on the large volumes of data gathered by governments and organizations. However, these datasets may contain a large amount of users' private data, which challenges current prediction approaches, as user privacy has become a growing public concern in recent years. Therefore, how to develop accurate traffic prediction while preserving privacy is a significant problem to be solved, and there is a trade-off between these two objectives. To address this challenge, we introduce a privacy-preserving machine learning technique named federated learning and propose a Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction. FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism rather than directly sharing raw data among organizations. In the secure parameter aggregation mechanism, we adopt a Federated Averaging algorithm to reduce the communication overhead during the model parameter transmission process. Furthermore, we design a Joint-Announcement Protocol to improve the scalability of FedGRU. We also propose an ensemble clustering-based scheme for traffic flow prediction by grouping the organizations into clusters before applying the FedGRU algorithm. Through extensive case studies on a real-world dataset, it is shown that FedGRU achieves a prediction accuracy of 90.96%, comparable to that of state-of-the-art centralized learning models, which confirms that FedGRU can achieve accurate and timely traffic prediction without compromising the privacy and security of raw data.


I Introduction

Contemporary urban residents, taxi drivers, business sectors, and government agencies have a strong need for accurate and timely traffic flow information [ref-1], as these road users can utilize such information to alleviate traffic congestion, control traffic lights appropriately, and improve the efficiency of traffic operations [ref-2, ref-66, ref-67]. Traffic flow information can also be used by people to develop better travel plans. Traffic Flow Prediction (TFP) provides such information by using historical traffic flow data to predict future traffic flow. TFP is regarded as a critical element for the successful deployment of Intelligent Transportation System (ITS) subsystems, particularly the advanced traveler information, online car-hailing, and traffic management systems.

In TFP, centralized machine learning methods are typically utilized to predict traffic flow by training with sufficient sensor data, e.g., from mobile phones, cameras, radars, etc. For example, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variants have achieved gratifying results in predicting traffic flow in the literature. Such learning methods typically require collaboratively sharing data among public agencies and private companies. Indeed, in recent years, the general public has witnessed partnerships between public agencies and mobile service providers such as DiDi Chuxing, Uber, and Hellobike. These partnerships extend the capability and services of companies that provide real-time traffic flow forecasting, traffic management, car sharing, and personal travel applications [ref-64].

Nonetheless, it is often overlooked that the data may contain sensitive private information, which leads to potential privacy leakage. As shown in Fig. 1, several privacy issues arise in the traffic flow prediction context. For example, road surveillance cameras capture vehicle license plate information when monitoring traffic flow, which may leak private user information [ref-61]. When different organizations use data collected by sensors to predict traffic flow, the collected data are stored in different clouds and should not be exchanged, for privacy preservation. This makes it challenging to train an effective model with this valuable data: while the literature widely assumes that massive user data is freely available, acquiring such data is not possible in real applications that respect privacy. Furthermore, Tesla Motors leaked vehicles' location information when using GPS data for traffic prediction, which poses many security risks to the vehicle owners (https://www.anquanke.com/post/id/197750). In the EU, Cooperative ITS (C-ITS; https://new.qq.com/omn/20180411/20180411A1W9FI.html) service providers must provide clear terms to end-users in a concise and accessible form so that users can agree to the processing of their personal data [ref-63]. Therefore, it is important to protect privacy while predicting traffic flow.

Fig. 1: Privacy and security problems in traffic flow prediction.

To predict traffic flow in ITS without compromising privacy, reference [ref-7] introduced a privacy control mechanism based on "k-anonymous diffusion," which can complete taxi order scheduling without leaking user privacy. Le Ny et al. proposed a differentially private real-time traffic state estimator system to predict traffic flow in [ref-43]. However, these privacy-preserving methods cannot achieve a good trade-off between accuracy and privacy, rendering subpar performance. Therefore, we need to seek an effective method to accurately predict traffic flow under the constraint of privacy protection.

To address the data privacy leakage issue, we incorporate a privacy-preserving machine learning technique named Federated Learning (FL) [ref-16] for TFP in this work. In FL, distributed organizations cooperatively train a globally shared model on their local data without exchanging the raw data. To accurately predict traffic flow, we propose an enhanced federated learning algorithm with a Gated Recurrent Unit neural network (FedGRU), where GRU is an advanced time-series prediction model well suited to traffic flow. Through FL and its aggregation mechanism [ref-11], FedGRU aggregates model parameters from geographically distributed organizations to build a global deep learning model while keeping privacy well preserved. Furthermore, owing to the outstanding regression capability of GRU neural networks, FedGRU can achieve accurate and timely traffic flow prediction for different organizations. The major contributions of this paper are summarized as follows:

  • Unlike existing algorithms, we propose a novel privacy-preserving algorithm that integrates emerging federated learning with a practical GRU neural network for traffic flow prediction. The algorithm provides reliable data privacy preservation by training models locally, without raw data exchange.

  • To improve the scalability of federated learning in traffic flow prediction, we design an improved Federated Averaging (FedAVG) algorithm with a Joint-Announcement protocol in the aggregation mechanism. This protocol uses random sub-sampling of the participating organizations to reduce the communication overhead of the algorithm, which makes it particularly suitable for large-scale and distributed prediction.

  • Based on the FedGRU algorithm, we develop an ensemble clustering-based FedGRU scheme that integrates the optimal global models and captures the spatio-temporal correlation of traffic flow data, thereby further improving the prediction accuracy.

  • We conduct extensive experiments on a real-world dataset to demonstrate the performance of the proposed schemes for traffic flow prediction compared to non-federated learning methods.

The remainder of this paper is organized as follows. Section II reviews the literature on short-term TFP and privacy research in ITS. Section III defines the centralized and federated TFP learning problems and proposes a secure parameter aggregation mechanism. Section IV presents the FedGRU algorithm and the ensemble clustering-based FedGRU algorithm; within FedGRU, we introduce the FedAVG algorithm and the Joint-Announcement protocol in detail. Section V presents and discusses the experimental results, with further discussion in Section V-F. Concluding remarks are given in Section VI.

II Related Work

II-A Traffic Flow Prediction

Traffic flow forecasting has long been a hot issue in ITS, as it supports real-time traffic control and urban planning. Although researchers have proposed many new methods, they can generally be divided into two categories: parametric models and non-parametric models.

II-A1 Parametric models

Parametric models predict future data by capturing the features of existing data within their parameters. M. S. Ahmed et al. in [ref-24] proposed the Autoregressive Integrated Moving Average (ARIMA) model in the 1970s to predict short-term freeway traffic. Since then, many researchers have proposed variants of ARIMA such as Kohonen-ARIMA (KARIMA) [ref-25], subset ARIMA [ref-26], seasonal ARIMA [ref-27], etc. These models further improve the accuracy of TFP by focusing on the statistical correlation of the data. Parametric models have several advantages. First of all, such models are highly transparent and interpretable, making them easy for humans to understand. Second, they usually take less time to fit than non-parametric models. However, these solutions suffer from limited model expressiveness and rarely achieve accurate and timely TFP.

II-A2 Non-parametric models

With the improvement of data storage and computing, non-parametric models have achieved great success in TFP [ref-28]. Davis and Nihan in [ref-29] proposed a k-NN model for short-term traffic flow prediction. Lv et al. in [ref-1] first applied the stacked autoencoder (SAE) model to TFP; SAE adopts a hierarchical greedy network structure to learn non-linear features and performs better than Support Vector Machines (SVM) [ref-30] and Feed-forward Neural Networks (FFNN) [ref-31]. Considering the temporal nature of the data, Ma et al. in [ref-32] and Tian et al. in [ref-33] applied Long Short-Term Memory (LSTM) to achieve accurate and timely TFP. Fu et al. in [ref-15] first proposed GRU neural network methods for TFP. In recent years, owing to the success of convolutional and graph networks, Yu et al. in [ref-35, ref-34] proposed a graph convolutional generative autoencoder to address the real-time traffic speed estimation problem.

II-B Privacy Issues for Intelligent Transportation Systems

In ITS, many models and methods rely on training data from users or organizations. However, with increasing privacy awareness, direct data exchange among users and organizations is no longer permitted by law. Matchi et al. in [ref-36] developed a privacy-preserving service that computes meeting points in ride-sharing based on secure multi-party computation, so that each user remains in control of his or her location data. Brian et al. [ref-37] designed a data sharing algorithm based on information-theoretic k-anonymity, which implements secure data sharing through k-anonymous encryption of the data. The authors in [ref-55] proposed a privacy-preserving transportation traffic measurement scheme for cyber-physical road systems that uses maximum-likelihood estimation (MLE) to obtain the prediction result. Reference [ref-56] presents a system based on virtual trip lines and an associated cloaking technique to achieve privacy-protected traffic flow monitoring. To avoid leaking the location of vehicles while monitoring traffic flow, [ref-57] proposed a privacy-preserving data gathering scheme based on encryption methods. Nevertheless, these approaches have two problems: 1) they respect privacy at the expense of accuracy; 2) they cannot properly handle a large amount of data within a limited time [ref-10]. Besides, the EU has promulgated the General Data Protection Regulation (GDPR), which means that as long as an organization risks revealing private information in the data sharing process, the data transaction violates the law [ref-65]. Therefore, we need to develop new methods that suit a general public with a growing sense of privacy.

In recent years, Federated Learning (FL) models have been used to analyze private data because of their privacy-preserving features. FL builds machine learning models based on datasets that are distributed across multiple devices while preventing data leakage [ref-42]. Bonawitz et al. in [ref-38] first applied FL to decentralized learning on mobile phone devices without compromising privacy. To ensure the confidentiality of users' local gradients during the federated learning process, the authors in [ref-58] proposed VerifyNet, the first framework for privacy-preserving and verifiable federated learning. Reference [ref-59] used a multi-weight subjective logic model to design a reputation-based device selection scheme for reliable federated learning. The authors in [ref-60] applied the federated learning framework to edge computing to achieve privacy-protected data analysis. Nishio et al. in [ref-39] applied FL to mobile edge computing and proposed the FedCS protocol to reduce the training time. Chen et al. in [ref-40] combined transfer learning [ref-41] with FL and proposed FedHealth for healthcare applications. Yang et al. in [ref-42] introduced federated machine learning, which can be applied to multiple applications in smart cities such as energy demand forecasting.

Although researchers have proposed some privacy-preserving methods to predict traffic flow, these methods do not comply with the requirements of GDPR. In this paper, we explore FL combined with GRU as a privacy-preserving method for traffic flow prediction.

III Problem Definition

We use the term "organization" throughout the paper to describe entities in TFP, such as urban agencies, private companies, and detector stations. We use the term "client" to describe computing nodes that correspond to one or multiple sensors in FL, and the term "device" to describe a sensor in an organization. Let $C$ and $O$ denote the client set and organization set in ITS, respectively. In the context of traffic flow prediction, we treat organizations as clients in the definition of federated learning. This equivalence does not undermine the privacy-preservation constraint of the problem or the federated learning framework. Each organization $o_i \in O$ has devices and a respective database $D_i$. We aim to predict the number of vehicles using historical traffic flow information from different organizations without sharing raw data or leaking privacy. We design a secure parameter aggregation mechanism as follows:

Secure Parameter Aggregation Mechanism: Detector station $o_i$ has devices whose collected traffic flow data constitute a database $D_i$. The deep learning model constructed in $o_i$ computes updated model parameters $\omega_i$ using the local training data from $D_i$. When all detector stations finish the same operation, they upload their respective $\omega_i$ to the cloud, which aggregates them into a new global model.

Fig. 2: Secure parameter aggregation mechanism.

According to the secure parameter aggregation mechanism, no traffic flow data is exchanged among different detector stations; the cloud aggregates the organizations' submitted parameters into a new global model without exchanging raw data (as shown in Fig. 2).
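As a minimal illustration, the following Python sketch (all names hypothetical; encryption and networking are abstracted away) shows how a cloud could combine the stations' uploaded parameters into a new global model by weighted averaging, without ever touching their raw traffic data:

    import numpy as np

    def aggregate_parameters(local_params, local_sizes):
        # Weighted-average aggregation of per-station model parameters.
        # local_params: one dict per station mapping layer name -> np.ndarray
        #               (only parameters are uploaded, never raw data).
        # local_sizes:  number of local training samples per station.
        total = sum(local_sizes)
        global_params = {}
        for name in local_params[0]:
            # Each station's contribution is weighted by its share of the data.
            global_params[name] = sum(
                (n / total) * p[name] for p, n in zip(local_params, local_sizes)
            )
        return global_params

    # Two stations upload updates; the cloud never touches their databases.
    station_a = {"w": np.array([0.2, 0.4]), "b": np.array([0.1])}
    station_b = {"w": np.array([0.6, 0.0]), "b": np.array([0.3])}
    print(aggregate_parameters([station_a, station_b], [300, 100]))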

In this paper, $t_i$ and $y_i$ represent the $i$-th timestamp in the time series and the traffic flow at the $i$-th timestamp, respectively. Let $f$ be the traffic flow prediction function; the definitions of information-based privacy and of the centralized and federated TFP learning problems are as follows:

Information-based Privacy: Information-based privacy defines privacy as preventing direct access to private data, i.e., data associated with the user's personal information or location. For example, a mobile device that records location data allows weather applications to directly access the location of the current smartphone, which violates the information-based privacy definition [ref-11]. In this work, every device trains its local model on its local dataset instead of sharing the dataset, and uploads only the updated gradients (i.e., parameters) to the cloud.

Centralized TFP Learning: Given organizations $O = \{o_1, \ldots, o_N\}$, each organization's devices, and an aggregated database $D = D_1 \cup \cdots \cup D_N$, the centralized TFP problem is to calculate $y_{t+\Delta} = f(D)$, where $\Delta$ is the prediction window after timestamp $t$.

Federated TFP Learning: Given organizations $O$, each organization's devices, and their respective databases $D_i$, the federated TFP problem is to calculate $y^i_{t+\Delta} = f_i(D_i)$, where $f_i$ is the local version of $f$ and $\Delta$ is the prediction window after timestamp $t$. Subsequently, the produced results are aggregated by the secure parameter aggregation mechanism.

IV Methodology

Traditional centralized learning methods consist of three steps: data processing, data fusion, and model building. In the traditional centralized learning context, data processing means that data features and data labels need to be extracted from the original data (e.g., text, images, and application data) before performing the data fusion operation. Specifically, data processing includes sampling, outlier removal, feature normalization, and feature combination. For the data fusion step, traditional learning models directly share data among all parties to obtain a global database for training. Such a centralized learning approach faces the challenge of new data privacy laws and regulations, as organizations may disclose private information and violate laws such as GDPR when sharing data. FL is introduced into this context to address the above challenges. However, existing FL frameworks typically employ simple machine learning models such as XGBoost and decision trees rather than complicated deep learning models [ref-16, ref-18], because deep models need to upload a large number of parameters to the cloud in the FL framework, which leads to expensive communication overhead and can cause training failures for a single model or the global model [ref-12, ref-8]. Therefore, the FL framework needs a new parameter aggregation mechanism for deep learning models to reduce communication overhead.

In this section, we present two approaches to predict traffic flow: the FedGRU and ensemble clustering-based FedGRU algorithms. Specifically, we describe an improved Federated Averaging (FedAVG) algorithm with a Joint-Announcement protocol in the aggregation mechanism to reduce the communication overhead, which is particularly useful in large-scale scenarios with many participating organizations.

IV-A Federated Learning and Gated Recurrent Unit

Federated Learning (FL) [ref-16] is a distributed machine learning (ML) paradigm designed to train ML models without compromising privacy. With this scheme, different organizations can contribute to the overall model training while keeping the training data local.

Particularly, the FL problem involves learning a single, globally predictive model from databases separately stored in dozens or even hundreds of organizations [ref-17, ref-50]. We assume that each device $k$ in a device set $K$ stores a local dataset $D_k$ of size $n_k$, so the total training dataset size is $n = \sum_{k} n_k$. In a typical deep learning setting, we are given a set of input-output pairs $\{(x_j, y_j)\}$, where $x_j$ is an input sample vector with $d$ features and $y_j$ is the labeled output value for sample $x_j$. If we input a training sample vector (e.g., the traffic flow data), we need to find the model parameter vector $w$ that characterizes the output $y_j$ (e.g., the predicted traffic flow value) with loss function $f_j(w)$ (e.g., $f_j(w) = \frac{1}{2}(x_j^{\top} w - y_j)^2$). Our goal is to learn this model under the constraints of local data storage and processing by devices in the organizations with a secure parameter aggregation mechanism. The loss function on the dataset of device $k$ is defined as:

$F_k(w) = \frac{1}{n_k} \sum_{j \in D_k} f_j(w) + \lambda h(w)$,   (1)

where $w$ is the local model parameter, $\lambda \in [0, 1]$, and $h(\cdot)$ is a regularizer function. The sum over all samples $j \in D_k$ in Eq. (1) illustrates that the local model in device $k$ needs to learn every sample in the local dataset.

At the cloud, the global prediction model problem can be represented as follows:

$\min_{w \in \mathbb{R}^d} F(w)$, where $F(w) = \sum_{k \in K} \frac{n_k}{n} F_k(w)$.   (2)

For gradient-based training, we recast the global prediction model problem in (2) as follows:

$w_{t+1} = w_t - \eta \sum_{k \in K} \frac{n_k}{n} \nabla F_k(w_t)$,   (3)

where $\eta$ is the learning rate. Eqs. (2)-(3) illustrate that the global model is obtained by aggregating the model updates uploaded by each device.

For the TFP problem, we regard the GRU neural network model as the local model in Eq. (1). Cho et al. in [ref-14] proposed the GRU neural network in 2014 as a variant of RNN for handling time-series data. GRU differs from the standard RNN in that it adds a "processor" to judge whether information is useful or not; the structure of the processor is called a "cell." A typical GRU cell uses two data "gates" to control the data flow through the processor: a reset gate $r_t$ and an update gate $z_t$.

Let $x_t$, $y_t$, and $h_t$ be the input time series, output time series, and the hidden state of the cells, respectively. At time step $t$, the value of the update gate $z_t$ is expressed as:

$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$,   (4)

where $x_t$ is the input vector of the $t$-th time step, $W_z$ is the weight matrix, and $h_{t-1}$ holds the cell state of the previous time step $t-1$. The update gate aggregates $x_t$ and $h_{t-1}$, then maps the result into $(0, 1)$ through a Sigmoid activation function. The Sigmoid activation transforms the data into gate signals and transfers them to the hidden layer. The reset gate $r_t$ is computed similarly to the update gate:

$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$.   (5)

The candidate activation $\tilde{h}_t$ is denoted as:

$\tilde{h}_t = \tanh(W \cdot [r_t \odot h_{t-1}, x_t])$,   (6)

where $r_t \odot h_{t-1}$ represents the Hadamard product of $r_t$ and $h_{t-1}$. The $\tanh$ activation function maps the data to the range $(-1, 1)$, which reduces the amount of computation and prevents gradient explosion.

The final memory $h_t$ of the current time step is calculated as follows:

$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$.   (7)
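To make Eqs. (4)-(7) concrete, here is a minimal NumPy sketch of a single GRU cell step; the weight shapes and names are illustrative (biases are omitted), and production code would instead use a library cell such as the GRU layers in TensorFlow or PyTorch:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x_t, h_prev, W_z, W_r, W_h):
        # One GRU time step following Eqs. (4)-(7); biases omitted for brevity.
        concat = np.concatenate([h_prev, x_t])
        z_t = sigmoid(W_z @ concat)                                  # update gate, Eq. (4)
        r_t = sigmoid(W_r @ concat)                                  # reset gate, Eq. (5)
        h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))  # candidate, Eq. (6)
        return (1 - z_t) * h_prev + z_t * h_cand                     # final memory, Eq. (7)

    d, m = 1, 4                      # one flow reading per step, hidden size 4
    rng = np.random.default_rng(0)
    W_z, W_r, W_h = (rng.standard_normal((m, m + d)) for _ in range(3))
    h = np.zeros(m)
    for x in ([0.3], [0.5], [0.8]):  # three consecutive 5-min flow samples (scaled)
        h = gru_step(np.array(x), h, W_z, W_r, W_h)
    print(h)

Note how Eq. (7) blends the old state and the candidate through the update gate: a near-zero $z_t$ lets the cell carry long-range traffic patterns across many 5-min steps unchanged.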

IV-B Privacy-preserving Traffic Flow Prediction Algorithm

Since centralized learning methods use a central database to merge data from organizations and upload the data to the cloud, they may incur expensive communication overhead and raise data privacy concerns. To address these issues, we propose a privacy-preserving traffic flow prediction algorithm, FedGRU, as shown in Fig. 3. First, we introduce the FedAVG algorithm as the core of the secure parameter aggregation mechanism to collect gradient information from different organizations. Second, we design an improved FedAVG algorithm with a Joint-Announcement protocol in the aggregation mechanism; this protocol uses random sub-sampling of participating organizations to reduce the communication overhead of the algorithm, which is particularly suitable for large-scale and distributed prediction. Finally, we give the details of FedGRU, a prediction algorithm combining FedAVG and the Joint-Announcement protocol.

Fig. 3: Federated learning-based traffic flow prediction architecture. Note that when the participating organizations are small-scale, we use the FedAVG algorithm to calculate the update; when they are large-scale, we use the Joint-Announcement protocol to calculate the update by sub-sampling the organizations. Details are described in sub-subsection IV-B3.

IV-B1 FedAVG algorithm

A recognized problem in federated learning is that limited network bandwidth bottlenecks the cloud's aggregation of local updates from the organizations. To reduce the communication overhead, each client uses its local data to perform gradient descent optimization on the current model, and the central cloud then performs a weighted-average aggregation of the model updates uploaded by the clients. As shown in Algorithm 1, FedAVG consists of three steps:

  1. The cloud selects a set of volunteers from the organizations to participate in this round of training and broadcasts the global model to the selected organizations;

  2. Each organization trains locally for E epochs of SGD with mini-batch size B to obtain its update $\omega_i$;

  3. The cloud aggregates each organization's $\omega_i$ through the secure parameter aggregation mechanism.

Input: Organizations $O$; the local mini-batch size B, the number of local epochs E, the learning rate $\eta$, and the gradient optimization function $\nabla L$.
Output: Parameter $\omega$.
Initialize $\omega_0$ (pre-trained on a public dataset);
foreach round t = 1, 2, ... do
      $S_t \leftarrow$ (select m volunteers from the organizations to participate in this round of training);
      Broadcast the global model $\omega_t$ to each organization in $S_t$;
      foreach organization $i \in S_t$ in parallel do
            Initialize $\omega_t^i \leftarrow \omega_t$;
            $\omega_{t+1}^i \leftarrow$ ClientUpdate($i$, $\omega_t^i$);
      $\omega_{t+1} \leftarrow \sum_{i \in S_t} \frac{n_i}{n} \omega_{t+1}^i$;

ClientUpdate($i$, $\omega$): // Run on organization $i$
      $\mathcal{B} \leftarrow$ (split $D_i$ into batches of size B);
      foreach local epoch e from 1 to E do
            foreach batch $b \in \mathcal{B}$ do
                  $\omega \leftarrow \omega - \eta \nabla L(\omega; b)$;
      return $\omega$ to cloud
Algorithm 1: Federated Averaging (FedAVG) Algorithm.

The FedAVG algorithm is a critical mechanism in FedGRU for reducing the communication overhead in the process of transmitting parameters. The algorithm is an iterative process: in the t-th round of training, the models of the organizations participating in the training are updated to the new global one.
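A compact Python sketch of this loop is given below; local_update and the least-squares toy task are hypothetical stand-ins for the E epochs of mini-batch SGD that each organization runs in Algorithm 1:

    import random
    import numpy as np

    def local_update(global_w, data, epochs=1, batch_size=32, lr=0.01):
        # ClientUpdate: E epochs of mini-batch SGD on a least-squares toy task.
        w = global_w.copy()
        X, y = data
        for _ in range(epochs):
            for i in range(0, len(X), batch_size):
                xb, yb = X[i:i + batch_size], y[i:i + batch_size]
                grad = 2 * xb.T @ (xb @ w - yb) / len(xb)
                w -= lr * grad
        return w

    def fedavg(datasets, rounds=5, frac=1.0):
        w = np.zeros(datasets[0][0].shape[1])  # initialized (or pre-trained) model
        for _ in range(rounds):
            chosen = random.sample(range(len(datasets)),
                                   max(1, int(frac * len(datasets))))
            updates = [local_update(w, datasets[k]) for k in chosen]  # in parallel
            sizes = [len(datasets[k][0]) for k in chosen]
            # Weighted average of the uploaded updates (secure aggregation omitted).
            w = sum(n * u for u, n in zip(updates, sizes)) / sum(sizes)
        return w

    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(5):                 # five organizations with private data
        X = rng.standard_normal((200, 2))
        clients.append((X, X @ true_w))
    print(fedavg(clients))             # should approach true_w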

IV-B2 Federated Learning-based Gated Recurrent Unit neural network algorithm

FedGRU aims to achieve accurate and timely TFP through FL and GRU without compromising privacy. The overview of FedGRU is shown in Fig. 3. It consists of four steps:

  1. The cloud model is initialized through pre-training on domain-specific public datasets without privacy concerns;

  2. The cloud distributes a copy of the global model to all organizations, and each organization trains its copy on local data;

  3. Each organization uploads its model updates to the cloud; the entire process shares no private data, only the encrypted parameters;

  4. The cloud aggregates the updated parameters uploaded by all organizations through the secure parameter aggregation mechanism to build a new global model, and then distributes the new global model to each organization.

Given a voluntary organization $o_i$ and its local data, referring to the GRU neural network in Section IV-A, we have:

$z_t^i = \sigma(W_z^i \cdot [h_{t-1}^i, x_t^i])$   (8)
$r_t^i = \sigma(W_r^i \cdot [h_{t-1}^i, x_t^i])$   (9)
$\tilde{h}_t^i = \tanh(W^i \cdot [r_t^i \odot h_{t-1}^i, x_t^i])$   (10)
$h_t^i = (1 - z_t^i) \odot h_{t-1}^i + z_t^i \odot \tilde{h}_t^i$   (11)

where $x_t^i$, $y_t^i$, and $h_t^i$ denote $o_i$'s input time series, output time series, and the hidden state of the cells, respectively. According to Eq. (3), the objective function of FedGRU is as follows:

$\min_w \sum_{i \in O} \frac{n_i}{n} F_i(w)$,   (12)

where $F_i(\cdot)$ is the local loss of organization $o_i$ as in Eq. (1).

The pseudocode of FedGRU is presented in Algorithm 2:

Input: $x_t^i$, $y_t^i$, and $h_{t-1}^i$; the mini-batch size B, the number of iterations E, and the learning rate $\eta$; the optimizer SGD.
Output: The global model $\omega$ with its gates $z_t$ and $r_t$.
According to $x_t^i$, $y_t^i$, and Eqs. (8)-(12), initialize the cloud model $\omega_0$ and the gate states;
foreach round t = 1, 2, ... do
      $S_t \leftarrow$ (select m volunteers from the organizations to participate in this round of training);
      while the global model $\omega_t$ has not converged do
            foreach organization $i \in S_t$ in parallel do
                  Construct a mini-batch of input time steps $x_t^i$;
                  Construct a mini-batch of true traffic flow values $y_t^i$;
                  Initialize $\omega_t^i \leftarrow \omega_t$;
                  Compute the local GRU outputs via Eqs. (8)-(11);
                  Compute the local loss and its gradient via Eq. (12);
                  Update the parameters $W_z^i$, $W_r^i$, and $W^i$ with the optimizer SGD;
                  Update the reset gate $r_t^i$ and the update gate $z_t^i$;
      Collect all parameters $\omega_t^i$ from $S_t$ to update $\omega_{t+1}$ (referring to Algorithm 1);
return $\omega$, $z_t$, and $r_t$
Algorithm 2: Federated Learning-based Gated Recurrent Unit neural network (FedGRU) algorithm.

IV-B3 Joint-Announcement Protocol

Generally, the number of participants in FedGRU is small. For example, WeBank worked with 7 auto insurance companies to build a traffic violation insurance pricing model using FL (https://www.fedai.org/cases/a-case-of-traffic-violations-insurance-using-federated-learning/). In this case, since there are only 8 participants, we can define it as a small-scale federated learning model. However, a large number of participants may join FedGRU for traffic flow forecasting. When FedGRU is expanded to a large-scale scenario with many participants, the FedAVG algorithm struggles to converge because of the expensive communication overhead, and the accuracy of FedGRU thereby decreases. To address this issue, we design an improved FedAVG algorithm with a Joint-Announcement protocol in the aggregation mechanism, which randomly selects a certain proportion of organizations from the large number of participants in the i-th round of training.

The participants in the Joint-Announcement protocol are the organizations and the cloud, which provides a cloud-based distributed service [ref-19]. For the i-th round of training, the protocol consists of three phases: preparation, training, and aggregation. The phases are implemented as follows:

  1. Phase 1, Preparation: Given an FL task (i.e., the traffic flow prediction task in this paper), the organizations that voluntarily participate check in with the cloud (as shown in Fig. 4–1⃝). Organizations that reject the task are either unwilling to participate or suffer from other failures.

  2. Phase 2, Training: First, the cloud loads the pre-trained model (as shown in Fig. 4–2⃝). Then the cloud sends the model checkpoint (i.e., gradient information) to the organizations (as shown in Fig. 4–3⃝). The cloud randomly selects a fixed proportion (e.g., 10%, 20%, ...) of organizations to participate in this round of training (as shown in Fig. 4–4⃝); a sketch of this sub-sampling follows Fig. 4. Each selected organization trains on its data locally and sends the parameters to the cloud.

  3. Phase 3, Aggregation: Subsequently, the cloud aggregates the parameters uploaded by the organizations to update the global model through the secure parameter aggregation mechanism (as shown in Fig. 4–5⃝). In this mechanism, the cloud executes the FedAVG algorithm (presented in Section IV-B1) to reduce the uplink communication costs. The cloud updates the global model by sending checkpoints to persistent storage (as shown in Fig. 4–6⃝). Finally, the updated global model parameters are sent to each organization.

Fig. 4: Federated learning joint-announcement protocol.
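Below is a minimal sketch of the sub-sampling step referenced in Phase 2 (names are hypothetical, and check-in failures are simulated with a coin flip):

    import random

    def select_participants(checked_in, proportion=0.1, seed=None):
        # Randomly sub-sample a fixed proportion of checked-in organizations.
        # checked_in: ids of organizations that completed Phase 1 (preparation).
        # proportion: fraction chosen for this round, e.g. 0.1 for 10%.
        rng = random.Random(seed)
        k = max(1, int(proportion * len(checked_in)))
        return rng.sample(checked_in, k)

    # 100 organizations volunteer; a few drop out before training starts.
    volunteers = list(range(100))
    checked_in = [o for o in volunteers if random.random() > 0.05]
    round_participants = select_participants(checked_in, proportion=0.2)
    print(len(round_participants), "organizations train in this round")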

IV-C Ensemble Clustering Federated Learning-Based Traffic Flow Prediction Algorithm

Since organizations select FL tasks based on their location information in the federated traffic flow prediction learning problem, for the same FL task, better spatio-temporal correlation of the data leads to better performance. Based on this hypothesis, we propose the ensemble clustering-based FedGRU algorithm, which obtains better prediction accuracy and handles scenarios in which a large number of clients jointly train a traffic flow prediction model, by grouping organizations into K clusters before implementing FedGRU. It then integrates the global models of the cluster centers using an ensemble learning scheme, thereby obtaining the best accuracy. In this scheme, the clustering decision is determined by the latitude and longitude information of the organizations. We use the constrained K-Means algorithm proposed in [ref-21]. According to the definition of the constrained K-Means clustering algorithm, our goal is to determine the cluster centers that minimize the Sum of Squared Errors (SSE). More formally, given a set of $m$ points $x^1, \ldots, x^m$ (i.e., the organizations' location information), minimum cluster membership values $\tau_h \geq 0$ for $h = 1, \ldots, K$, and cluster centers $C_1^{(t)}, \ldots, C_K^{(t)}$ at iteration $t$, we compute $C_1^{(t+1)}, \ldots, C_K^{(t+1)}$ at iteration $t+1$ in the following three steps:

  1. Sum of the Squared Euclidean Distances:

     $\text{SSE} = \sum_{h=1}^{K} \sum_{i=1}^{m} T_{i,h} \left\| x^i - C_h^{(t)} \right\|^2$,   (13)

     where $T_{i,h}$ is a solution to the following linear program with $C_h^{(t)}$ fixed.

  2. Cluster Assignment: To minimize the SSE, we solve

     $\min_{T} \sum_{h=1}^{K} \sum_{i=1}^{m} T_{i,h} \| x^i - C_h^{(t)} \|^2$ s.t. $\sum_{i=1}^{m} T_{i,h} \geq \tau_h$, $\sum_{h=1}^{K} T_{i,h} = 1$, $T_{i,h} \geq 0$.   (14)

  3. Cluster Update: Update $C_h^{(t+1)}$ as follows:

     $C_h^{(t+1)} = \frac{\sum_{i=1}^{m} T_{i,h}^{(t)} x^i}{\sum_{i=1}^{m} T_{i,h}^{(t)}}$ if $\sum_{i=1}^{m} T_{i,h}^{(t)} > 0$, and $C_h^{(t+1)} = C_h^{(t)}$ otherwise.   (15)

If and only if the SSE is minimized and $C_h^{(t+1)} = C_h^{(t)}$, we obtain the optimal cluster centers and the optimal partition $S^* = \{S_1, \ldots, S_K\}$. Let $M_h$ denote the global model of cluster $S_h$. As shown in Fig. 5, after executing the constrained K-Means algorithm, we utilize an ensemble learning scheme to find the optimal ensemble model, integrating the global models from $\mathcal{M} = \{M_1, \ldots, M_K\}$ to obtain the best accuracy. More formally, such a scheme needs to find the optimal global model subset

$\mathcal{M}^* = \arg\min_{\mathcal{M}' \subseteq \mathcal{M}} \text{Loss}(\mathcal{M}')$,   (16)

where $\text{Loss}(\mathcal{M}')$ denotes the prediction loss of the ensemble of the models in $\mathcal{M}'$.

The ensemble clustering-based FedGRU is presented in Algorithm 3 (a simplified sketch of the clustering step follows Fig. 5). It consists of three steps:

  1. Given the organization set O, we randomly initialize K cluster centers and execute the constrained K-Means algorithm;

  2. With the optimal cluster centers and the optimal partition S*, the cloud executes the ensemble scheme to find the optimal global model subset M* (i.e., a subset of M);

  3. The cloud sends the new global model to each organization.

Input: Organization set O.
Output: The optimal global model subset $\mathcal{M}^*$ and the new global model.
Initialize random cluster centers $C_1^{(0)}, \ldots, C_K^{(0)}$;
while the cluster centers have not converged ($C^{(t+1)} \neq C^{(t)}$) do
      Execute steps 1 and 2 of the constrained K-Means algorithm (referring to Eqs. (13)-(14));
      Update $C_h^{(t+1)}$ according to Eq. (15) in step 3 of the constrained K-Means algorithm;
foreach cluster center $C_h$ and its cluster $S_h$ do
      Execute the FedGRU algorithm;
Obtain the global model set $\mathcal{M} = \{M_1, \ldots, M_K\}$;
Execute the ensemble learning scheme to find the optimal global model subset $\mathcal{M}^*$ (referring to Eq. (16));
The cloud sends the new global model to each organization;
return the new global model.
Algorithm 3: Ensemble clustering federated learning-based FedGRU algorithm.
Fig. 5: Ensemble clustering federated learning-based traffic flow prediction scheme.
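The following simplified sketch illustrates the clustering step; for brevity it replaces the linear program of Eq. (14) with plain nearest-center assignment, so the minimum-membership constraints $\tau_h$ are not enforced here, and all names are illustrative:

    import numpy as np

    def kmeans_clusters(locations, k, iters=50, seed=0):
        # Group organizations by (latitude, longitude) into k clusters.
        # Simplification: nearest-center assignment instead of the LP of
        # Eq. (14), so minimum cluster sizes are NOT enforced.
        rng = np.random.default_rng(seed)
        centers = locations[rng.choice(len(locations), k, replace=False)]
        labels = np.zeros(len(locations), dtype=int)
        for _ in range(iters):
            # Cluster assignment: each organization joins its nearest center.
            d = np.linalg.norm(locations[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Cluster update, Eq. (15): move each center to its members' mean.
            for h in range(k):
                if np.any(labels == h):
                    centers[h] = locations[labels == h].mean(axis=0)
        return labels, centers

    # 20 organizations scattered over a California-like bounding box.
    orgs = np.random.default_rng(1).uniform([32, -124], [42, -114], size=(20, 2))
    labels, centers = kmeans_clusters(orgs, k=3)
    print(labels)

Each resulting cluster would then run FedGRU independently before the cloud ensembles the cluster models according to Eq. (16).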

V Experiments

In this experiment, the proposed FedGRU and clustering-based FedGRU algorithms are applied to real-world data collected from the Caltrans Performance Measurement System (PeMS) [ref-13] database for performance demonstration. The traffic flow data in the PeMS database were collected in real time from over 39,000 individual detectors. These sensors span the freeway system across all major metropolitan areas of the State of California [ref-1]. In this paper, traffic flow data collected during the first three months of 2013 are used for the experiments. We select the traffic flow data of the first two months as the training dataset and that of the third month as the testing dataset. Furthermore, since the traffic flow data is a time series, we use the data at the previous time intervals, i.e., $v_{t-1}, v_{t-2}, \ldots, v_{t-r}$, to predict the traffic flow at time interval $t$, where $r$ is the length of the history data window.
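As an illustration of this windowing, a hypothetical helper that turns a flow series into supervised (history, target) pairs might look like:

    import numpy as np

    def make_windows(flow, r):
        # Build (history, target) pairs: predict v_t from v_{t-r}, ..., v_{t-1}.
        X = np.stack([flow[i:i + r] for i in range(len(flow) - r)])
        y = flow[r:]
        return X, y

    flow = np.arange(10, dtype=float)   # stand-in for a 5-min PeMS flow series
    X, y = make_windows(flow, r=4)
    print(X.shape, y.shape)             # (6, 4) (6,)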

We adopt the Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE) to measure the prediction accuracy:

$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|$,   (17)
$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$,   (18)
$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$,   (19)
$\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|$,   (20)

where $y_i$ is the observed traffic flow and $\hat{y}_i$ is the predicted traffic flow.
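These four metrics translate directly into code; a minimal NumPy version (assuming no zero observations for MAPE) is:

    import numpy as np

    def metrics(y_true, y_pred):
        # MAE, MSE, RMSE, and MAPE per Eqs. (17)-(20).
        err = y_true - y_pred
        mae = np.mean(np.abs(err))
        mse = np.mean(err ** 2)
        rmse = np.sqrt(mse)
        mape = 100.0 * np.mean(np.abs(err / y_true))  # assumes no zero flows
        return mae, mse, rmse, mape

    y_true = np.array([120.0, 80.0, 100.0])
    y_pred = np.array([110.0, 85.0, 98.0])
    print(metrics(y_true, y_pred))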

Without loss of generality, we assume that the detector stations are distributed and independent, and that data cannot be exchanged arbitrarily among them. In the secure parameter aggregation mechanism, the PySyft framework [ref-23] is adopted to encrypt the parameters (https://github.com/OpenMined/PySyft). The FedGRU code is available at https://github.com/niklausliu/TF_FedGRU_demo.

For the cloud and each organization, we use mini-batch SGD for model optimization. The PeMS dataset is split equally and assigned to 20 organizations. During the simulation, the learning rate and mini-batch size are held fixed across organizations. Note that the client number of the FedGRU model follows the default setting in FL [ref-16]. All experiments are conducted using TensorFlow and PyTorch [pytorch] on Ubuntu 18.04.

V-A FedGRU Model Architecture

In the context of deep learning, proper hyperparameter selection, e.g., the size of the input layer, the number of hidden layers, and the number of hidden units in each hidden layer, is a notable factor that determines model performance. In this section, we investigate the performance of FedGRU with different hyperparameter configurations and determine the best-performing neural network architecture for it. Additionally, we obtain the optimal length of the history data window $r$. In particular, we vary $r$, the number of hidden layers, and the number of hidden units per layer [ref-1] to adjust the structure of FedGRU, and we utilize the grid search approach to find its best architecture.

We first evaluate the performance of FedGRU on a 5-min traffic flow prediction task through MAE, MSE, RMSE, and MAPE. After performing the grid search, we obtain the best architecture of FedGRU, as shown in Table I. The optimal architecture consists of two hidden layers, each with 50 hidden units. The results show that the optimal number of hidden layers in our experiment is two: from a model perspective, the number of hidden layers of the FedGRU model should be neither too small nor too large, and our results confirm this.
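The grid search itself can be sketched as follows, where train_and_evaluate is a hypothetical stand-in for training FedGRU with a given configuration and returning its validation MAE:

    from itertools import product

    def grid_search(train_and_evaluate):
        # Try every combination of time steps, hidden layers, and hidden units.
        time_steps = [5]
        layer_counts = [1, 2, 3]
        unit_options = [50, 100, 150]
        best = None
        for t, layers, units in product(time_steps, layer_counts, unit_options):
            config = {"time_steps": t, "hidden_layers": [units] * layers}
            mae = train_and_evaluate(config)   # e.g. validation MAE of FedGRU
            if best is None or mae < best[0]:
                best = (mae, config)
        return best

    # Toy stand-in: pretend two 50-unit layers work best, as in Table I.
    fake = lambda c: 7.96 if c["hidden_layers"] == [50, 50] else 9.0
    print(grid_search(fake))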

Time steps  Hidden layers  Hidden units    MAE   MSE     RMSE   MAPE
5           1              50              9.03  103.24  13.26  19.12
5           1              100             8.96  102.36  14.32  18.94
5           1              150             8.46  101.05  14.98  18.46
5           2              50, 50          7.96  101.49  11.04  17.82
5           2              100, 100        8.64  102.21  15.06  19.22
5           2              150, 150        8.75  102.91  14.93  19.35
5           3              50, 50, 50      8.29  102.17  12.05  18.62
5           3              100, 100, 100   8.41  103.01  13.45  18.96
5           3              150, 150, 150   8.75  103.98  13.74  19.24
TABLE I: Structure of FedGRU (default setting) for Traffic Flow Prediction
Metrics                    MAE   MSE     RMSE   MAPE
FedGRU (default setting)   7.96  101.49  11.04  17.82%
GRU [ref-15]               7.20  99.32   9.97   17.78%
SAE [ref-1]                8.26  99.82   11.60  19.80%
LSTM [ref-32]              8.28  107.16  11.45  20.32%
SVM [ref-54]               8.68  115.52  13.24  22.73%
TABLE II: Performance Comparison of MAE, MSE, RMSE, and MAPE for FedGRU, GRU, SAE, LSTM, and SVM

V-B Traffic Flow Prediction Accuracy

We compare the performance of the proposed FedGRU model with that of GRU, SAE, LSTM, and support vector machine (SVM) under an identical simulation configuration. Among these five competing methods, FedGRU is a federated machine learning model, and the rest are centralized ones. GRU is a widely adopted baseline model with good performance on traffic flow forecasting tasks, as discussed in Section IV, and SVM is a popular machine learning model for general prediction applications [ref-1]. In all investigations, we use the same PeMS dataset. Table II gives the prediction results for 5-min-ahead traffic flow prediction. From the simulation results, it can be observed that the MAE of FedGRU is lower than those of SAE, LSTM, and SVM but higher than that of GRU. Specifically, the MAE of FedGRU is 9.04% lower than that of the worst case (i.e., SVM) in this experiment. This result stems from the fact that FedGRU inherits GRU's outstanding performance in prediction tasks.

Fig. 6(a) shows a comparison between GRU and FedGRU on a 5-min traffic flow prediction task. The prediction results of the FedGRU model are very close to those of GRU, because the core predictive structure of FedGRU is the GRU model; hence its performance is comparable to the GRU model's. Furthermore, FedGRU protects data privacy by keeping the training dataset local. Fig. 6(b) illustrates the loss of the GRU and FedGRU models. From the results, the loss of the FedGRU model does not differ significantly from that of the GRU model, which shows that the FedGRU model has good convergence and stability. In short, FedGRU can achieve accurate and timely traffic flow prediction without compromising privacy.

Fig. 6: (a) Traffic flow prediction of GRU model and FedGRU model. (b) Loss of GRU model and FedGRU model.

V-C Performance Comparison of FedGRU Model Under Different Client Numbers

In Section V-B, the default client number is set to 2. However, it is highly plausible that traffic data can be gathered by more than two entities, e.g., organizations and companies. In this experiment, we explore the impact of different client numbers on the performance of FedGRU. The simulation results are presented in Fig. 7, where we observe that the number of clients has an adverse influence on the performance of FedGRU. The reason is that more clients introduce increasing communication overhead to the underlying communication infrastructure, which makes it more difficult for the cloud to simultaneously aggregate gradient information. Furthermore, such overhead may cause communication failures in some clients, causing them to fail to upload their gradient information and thereby reducing the accuracy of the global model.

Fig. 7: The prediction error of FedGRU model with different client numbers.

In this paper, we initially use the FedAVG algorithm to alleviate the expensive communication overhead issue. FedAVG reduces communication overhead by i) computing the average gradient over a mini-batch of samples on each client and ii) computing the weighted-average aggregation of the gradients from all clients. Fig. 7 shows that FedAVG performs well when the number of clients is less than 8, but when the number of clients exceeds 8, the performance of FedAVG starts to decline. The reason is that when the number of clients exceeds a certain threshold (e.g., 8), the probability of client failure increases, which causes FedAVG to calculate wrong gradient information. Nevertheless, FedAVG remains significant for reducing communication overhead because the number of entities involved in real-life traffic flow prediction tasks is usually small. Therefore, we need a new communication protocol for large-scale organizations to solve the communication overhead problem.

Fig. 8: Communication overhead between large-scale FedGRU and FedGRU.
Fig. 9: The prediction results of models with different participation ratios.
Fig. 10: Loss of the FedGRU model with different participation ratios.

V-D Traffic Flow Prediction With Large-scale FedGRU Model

In Section V-C, the experimental results show that FedAVG is no longer suitable for large-scale organizations once the number of clients exceeds 8. However, in real life, we sometimes cannot avoid large-scale organization participation in the FedGRU model. To solve this problem, we design the Joint-Announcement protocol, which randomly selects a certain proportion of organizations from a large number of participating organizations to participate in the i-th round of training. In this experiment, we set four different participation ratios and compare these four cases with the ones of Section V-C.

In this experiment, we first focus on the communication overhead of a large-scale FedGRU model. Fig. 8 shows that FedGRU with the Joint-Announcement protocol significantly reduces the communication overhead: it is reduced by 64.10% compared to FedGRU with the FedAVG algorithm. The Joint-Announcement protocol first sub-samples the participating organizations, which reduces the number of participants, and then uses the FedAVG algorithm to calculate the average gradient information, which guarantees the reliability of model training. Furthermore, the experimental results show that the protocol is robust to the number of participants; that is, its performance is not affected by the number of participants.

Fig. 9 shows the prediction results of models with different participation ratios. At the largest participation ratio, the prediction results of large-scale FedGRU differ most from those of FedGRU: its MAE is 29.08% higher than the MAE of FedGRU. This is because the performance of FedAVG starts to decrease as participation grows (as shown in Fig. 9), while FedGRU with the Joint-Announcement protocol can control the number of participants through sub-sampling to maintain the performance of FedAVG. Fig. 10 shows the loss of models with different participation ratios: the larger the ratio, the greater the loss in the early training period, but the ratio does not affect the convergence of the models. Therefore, FedGRU with the Joint-Announcement protocol maintains good stability, robustness, and efficiency.

V-E Traffic Flow Prediction With Ensemble Clustering-Based FedGRU Model

In this subsection, we evaluate the ensemble clustering-based FedGRU in scenarios where a large number of clients jointly train a traffic flow prediction model. In particular, we examine the effect of the number of cluster centers K on the proposed ensemble clustering-based mechanism. Table III shows the accuracy of the clustering-based FedGRU model for different numbers of cluster centers. The results indicate that the proposed ensemble clustering-based FedGRU model achieves the best prediction accuracy and can further improve the performance of the original FedGRU model. For larger values of K, the clustering-based FedGRU model can even outperform the centralized GRU model, which still compromises data privacy. The reason is that the ensemble clustering-based scheme improves the prediction accuracy by classifying similar spatio-temporal features into the same cluster and integrating the advantages of the optimal global model set. Furthermore, in such a scheme, it is easy for FedGRU to find the optimal global model subset because K is relatively small. Therefore, our proposed ensemble clustering-based federated learning scheme can further improve prediction accuracy, thereby achieving accurate and timely traffic flow prediction.

Metrics MAE MSE RMSE MAPE
FedGRU 7.96 101.49 11.04 17.82%
FedGRU () 7.89 100.98 10.82 17.16%
FedGRU () 7.42 99.98 10.01 16.85%
FedGRU () 7.17 99.16 9.86 16.22%
FedGRU () 6.85 97.77 9.49 14.69%
TABLE III: Performance comparison of FedGRU algorithm and Clustering-Based FedGRU algorithm

V-F Discussion

In this subsection, we further discuss the advantages and limitations of FedGRU in different traffic flow prediction scenarios. In the previous subsections, we carried out a series of comprehensive case studies to show the effectiveness of the proposed method. Based on the above empirical results, the following observations can be derived:

  1. Communication overhead is the bottleneck of the FedGRU model. For a large-scale FedGRU model, the Joint-Announcement protocol helps solve the communication overhead problem by reducing the number of participants in each round of communication.

  2. Global model updates for the clients in the FedGRU model are not synchronized. For example, FedGRU may fail to synchronize global model updates due to the failure of some clients. This may make the local models of some clients deviate from the current global one, which affects the next global model update. To address this problem, we use random sub-sampling to select the organizations that participate in the i-th round of training, since reducing the number of participants reduces the probability of client failure, thereby alleviating the out-of-sync issue.

  3. Statistical heterogeneity remains a problem to be solved. Due to the large number of organizations involved in training, the large-scale FedGRU model faces a challenge: the local data are not i.i.d. [ref-47, ref-52]. Organizations often generate and collect data across the network in completely different ways [ref-48]. This data generation paradigm violates the i.i.d. assumption commonly used in distributed optimization, increasing the likelihood of stragglers and possibly increasing the complexity of modeling, analysis, and evaluation [ref-49].

V-G Privacy Analysis

According to the definition of information-based privacy, we discuss the privacy protection capabilities of the proposed FedGRU model from the following aspects:

  1. Data Access: The proposed model is developed based on the federated learning framework, whose core idea is distributed privacy protection. Specifically, the proposed model achieves accurate traffic flow prediction by aggregating encrypted parameters rather than accessing the original data, which guarantees the model's privacy protection for the data.

  2. Model Performance: Experimental results show that the performance of the proposed model is comparable to that of the GRU model, a centralized machine learning model that needs to aggregate a large amount of raw data to achieve high-precision traffic flow prediction. Given the trade-off between traffic flow prediction accuracy and privacy, the proposed model achieves results comparable to a centralized machine learning approach under the constraint of privacy preservation, which demonstrates its superiority.

VI Conclusion

In this paper, we propose the FedGRU algorithm for traffic flow prediction with federated learning for privacy preservation. FedGRU does not directly access distributed organizational data but instead employs a secure parameter aggregation mechanism to train a global model in a distributed manner: it aggregates in the cloud the gradient information uploaded by all locally trained models to construct the global model for traffic flow forecasting. We evaluate the performance of FedGRU on the PeMS dataset and compare it with GRU, LSTM, SAE, and SVM, all of which potentially compromise user privacy during forecasting. The results show that the proposed method performs comparably to the competing methods, with minuscule accuracy degradation and privacy well preserved. Furthermore, we apply an ensemble clustering-based FedGRU for TFP to further improve model performance. We also demonstrate through empirical studies that the proposed Joint-Announcement protocol is efficient, reducing the communication overhead of FedGRU by 64.10% compared with FedGRU using the plain FedAVG algorithm.

To the best of our knowledge, this is among the pioneering works on traffic flow forecasting with federated deep learning. In the future, we plan to apply Graph Convolutional Networks (GCN) [ref-44] within the federated learning framework to better capture the spatio-temporal dependencies among traffic flow data and further improve the prediction accuracy.

References