PerFED-GAN: Personalized Federated Learning via Generative Adversarial Networks

02/18/2022
by   Xingjian Cao, et al.
IEEE

Federated learning is gaining popularity as a distributed machine learning method that can be used to deploy AI-dependent IoT applications while protecting client data privacy and security. Because of the differences among clients, a single global model may not perform well on all of them, so personalized federated learning, which trains a personalized model for each client that better suits its individual needs, has become a research hotspot. Most personalized federated learning research, however, focuses on data heterogeneity while ignoring the need for heterogeneity in model architecture. Most existing federated learning methods impose a uniform model architecture on all participating clients, which cannot accommodate each client's individual model and local data distribution requirements and also increases the risk of client model leakage. This paper proposes a federated learning method based on co-training and generative adversarial networks (GANs) that allows each client to design its own model and participate in federated learning training independently, without sharing any model architecture or parameter information with other clients or a center. In our experiments, the proposed method significantly outperforms existing methods, improving mean test accuracy by up to 42%.


I Introduction

The number of Internet of Things (IoT) devices is growing dramatically. It is estimated that in the next 10 years, the total number of IoT devices will reach hundreds of billions [31], and these devices are used in fields such as intelligent transportation [20], intelligent industry [13], healthcare and intelligent monitoring [14], [28]. These IoT devices generate massive and rapidly growing data and present attractive opportunities for data-driven artificial intelligence techniques [26]. Centralized machine learning schemes can be used for these smart IoT applications. However, centralized machine learning techniques have an inherent problem of leaking user privacy, because end-device data needs to be transferred to a centralized third-party server for training. Furthermore, centralized machine learning may not be feasible when the amount of data is very large and located in multiple locations. Although distributed machine learning can solve some problems of centralized machine learning, traditional distributed machine learning methods still lack protection of the data security and privacy of participants.

Federated learning is a distributed machine learning method proposed in [22], and it can train machine learning models using distributed data stored in different clients while avoiding client data leakage. The clients in the scenario where federated learning was first proposed are various edge devices such as mobile phones [15], [20]. The center trains a neural network global model using the data stored in each client device. The center creates the global model architecture, which is shared by all clients. The need for personalized federated learning was then proposed [12], [1], [25], [21], because the needs and data distributions of individual clients are typically quite different. For example, each client may need to train a shopping recommendation model for its customers, and each client's customer groups are geographically diverse. A single global model can hardly fit all clients due to factors such as different consumer preferences and spending power in different regions, and the data distributions of different clients may also be non-independent and identically distributed (Non-IID). As a result, the goal of personalized federated learning is to train a personalized model for each individual client that is more tailored to its specific needs, rather than a single global model that serves all clients.

Although many personalized federated learning methods have been proposed, the majority of these methods require all clients' model architectures to be consistent, and model personalization is reflected only in differences in model parameters. However, when the clients participating in federated learning belong to different organizations, the differences between client models are likely not limited to model parameters but also include different model architectures. Because the machine learning model architectures deployed by different organizations on their client devices are designed for their specific task requirements, the client model architectures of different organizations are typically not the same. We believe that requiring consistent model architectures limits the scope of personalized models. Additionally, sharing the model architecture leaks the model design, which may be the intellectual property of participating organizations. Moreover, if the model architecture is known, malicious participants can attack more easily: with the model parameters and training process it is simple to infer other clients' private training data [24], [27], which invalidates the role of federated learning in protecting client privacy.

The effectiveness of federated learning stems from knowledge sharing among clients, and personalized federated learning aims to share common knowledge among clients while maintaining their personalized knowledge. Existing federated learning methods share knowledge via model parameters and necessitate consistent model architectures, which not only limits the personalization of client models but also increases the risk of model leakage. This paper proposes a federated learning strategy based on sharing samples generated by generative adversarial networks (GANs) to overcome this issue. The proposed strategy allows each client to independently design its neural network model architecture without disclosing it to other clients or the center. The knowledge of each client's training data does not need to be shared in the form of model parameters, but can be obtained by sharing the samples generated by GANs. Our main contributions are as follows:

  • We propose a personalized federated learning method where clients train personalized models for their individual needs.

  • The proposed method is applicable to federated learning client neural network models of various architectures and outperforms existing methods for Non-IID data.

  • We theoretically analyze the convergence of the proposed method and derive its convergence conditions.

  • We conduct many experiments to validate the efficacy of the proposed method and analysis conclusions.

The remainder of this paper is organized as follows. We briefly review related studies in Section II. We formulate the problem statement in Section III. In Section IV, we describe the proposed PerFED-GAN method in detail. Section V presents the experimental details and results, and Section VI concludes this paper.

II Related Work

II-A Personalized Federated Learning

In 2017, McMahan et al. [22] proposed the Federated averaging (FedAvg) algorithm to solve the federated optimization problem on edge devices. In the FedAvg method, a central server orchestrates the following training process. The central server broadcasts the global model to the selected clients (edge devices).

Then, the selected clients locally update their models by running stochastic gradient descent (SGD) on their local private datasets.

After that, the central server collects the client updates and updates the global model by averaging them.

This training process template, which encompasses FedAvg and many of its variants such as those by Wang et al. [34] and Li et al. [18], works well for many federated learning settings, where all clients usually serve a unified task and model designed by the center, and the center finely controls local training options (e.g., learning rate, number of epochs and mini-batch size). However, these methods are not compatible with personalized tasks, models, or training of different clients. For example, FedProx by Li et al. [18] introduces proximal terms to improve FedAvg in the face of system heterogeneity (i.e., many stragglers) and statistical heterogeneity, but it still inherits FedAvg's parameter aggregation scheme and is therefore not compatible with heterogeneous models, especially models with different architectures.

In recent years, many efforts have been made to tackle the personalized tasks of clients in federated learning settings. Wang et al. [33] and Jiang et al. [12] use the federated learning trained model as a pretrained or meta-learning model, and fine-tune the model to learn the local task of each client. Arivazhagan et al. [1] use personalized layers to specialize the global model trained by federated learning to the personalized tasks of clients. Recently, Hanzely et al. [9] and Deng et al. [5] modify the original FedAvg: instead of aggressively averaging all the model parameters, they find that merely steering the local parameters of all clients toward their average helps each client train its personalized model. Besides, Smith et al. [29] propose a federated multi-task learning framework, MOCHA, which clusters tasks based on their relationships estimated through a matrix. Huang et al. [11] propose a personalized cross-silo federated learning method, FedAMP, based on a novel attentive message passing mechanism that adaptively facilitates pairwise collaboration between clients by iteratively encouraging similar clients to collaborate more.

However, all these methods require clients to upload their model parameters for global aggregation, which may leak client models. Recently, several secure computing techniques have been introduced to protect data and model parameters in federated learning, including differential privacy [37], [10], [38], secure multi-party computing [39], [19], [41], homomorphic encryption [6], [30], and trusted execution environments [23], [4], but these methods still have disadvantages, such as significant communication or computational cost, or reliance on specific hardware for implementation.

Another restriction of these federated learning approaches for personalized tasks is that the architecture of each client's model must be consistent, because model parameters are typically aggregated or aligned during the federated learning training processes of these methods. This prevents clients from independently designing unique model architectures. To address this issue, FedMD [17], proposed by Li et al., leverages knowledge distillation to convey the knowledge of each client's local data by aligning the logits of multiple neural networks on a public dataset. FedMD's client models can be neural networks of various architectures, as long as they are consistent in their logits output layers. However, FedMD requires a large amount of labeled data as a public dataset for client model alignment. Collecting labeled data is difficult, and open large-scale labeled datasets are usually rare, which limits the application scenarios of FedMD. Similarly, Guha et al. [8] use distillation models to generate a global ensemble model, which needs to share the clients' local models or their distillation models. However, this exposes the clients' private data or local model information to the central server, which makes the solution unsuitable for clients that want to avoid model leakage.

II-B Co-training

Co-training, originally proposed by Blum et al. [2], is a semi-supervised learning technique that exploits unlabeled data with multiple views, where a view refers to a subset of attributes of an instance. It trains two different classifiers on two views; after that, each classifier pseudo-labels some unlabeled instances for the other; then each classifier is retrained on its original training set plus the new set pseudo-labeled by the other. Co-training repeats the above steps to improve the performance of the classifiers until it stops. Blum et al. prove that when the two views are sufficient (i.e., each view contains enough information for an optimal classifier), redundant and conditionally independent, co-training can improve weak classifiers to arbitrarily high accuracy by exploiting unlabeled data. Wang et al. [36] prove that if the two views have large diversity, co-training suffers little from label noise and sampling bias, and can output an approximation of the optimal classifier by exploiting unlabeled data even with insufficient views.

For the single-view setting, Zhou et al. [40] propose tri-training, which uses three classifiers with large diversity to vote for the most confident pseudo-labels of unlabeled data. Furthermore, Wang et al. [35] prove that classifiers with large diversity can also improve model performance in a single-view setting. At the same time, they also point out the reason why the performance of classifiers in co-training saturates after some rounds: the classifiers learn from each other and gradually become similar, and their differences become insufficient to support further performance improvement.

Different clients in personalized federated learning settings typically have limited local data that covers only part of the overall distribution, especially in Non-IID data settings. As a result, models trained on the local data of different clients are likely to have high diversity, which is precisely the condition under which co-training works.
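To make the procedure concrete, the following is a minimal, illustrative sketch of a two-view co-training loop in the spirit of Blum and Mitchell; the classifier choice, the confidence threshold, and the use of a single shared pseudo-labeled pool (rather than separate pools per classifier) are simplifying assumptions, not details from [2].

```python
# Illustrative two-view co-training loop (in the spirit of Blum & Mitchell).
# Simplifications: both pseudo-labeled sets go into one shared labeled pool,
# and instances are selected by a fixed confidence threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training(Xa_l, Xb_l, y_l, Xa_u, Xb_u, rounds=5, k=20, threshold=0.9):
    """Xa_*/Xb_* hold the two views of the labeled (l) and unlabeled (u) data."""
    clf_a, clf_b = LogisticRegression(max_iter=1000), LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf_a.fit(Xa_l, y_l)
        clf_b.fit(Xb_l, y_l)
        if len(Xa_u) == 0:
            break
        picked_idx, picked_y = [], []
        # Each classifier pseudo-labels its k most confident unlabeled instances for the other.
        for clf, X_view in ((clf_a, Xa_u), (clf_b, Xb_u)):
            proba = clf.predict_proba(X_view)
            conf = proba.max(axis=1)
            top = np.argsort(-conf)[:k]
            top = top[conf[top] >= threshold]
            picked_idx.append(top)
            picked_y.append(clf.classes_[proba[top].argmax(axis=1)])
        idx = np.concatenate(picked_idx)
        # Add the pseudo-labeled instances to the labeled pool and retrain next round.
        Xa_l = np.vstack([Xa_l, Xa_u[idx]])
        Xb_l = np.vstack([Xb_l, Xb_u[idx]])
        y_l = np.concatenate([y_l, np.concatenate(picked_y)])
        keep = np.setdiff1d(np.arange(len(Xa_u)), idx)
        Xa_u, Xb_u = Xa_u[keep], Xb_u[keep]
    return clf_a, clf_b
```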

II-C Generative Adversarial Network

The generative adversarial network (GAN) [7] is a machine learning framework designed by Ian Goodfellow and his colleagues in 2014. Given a training set, the technique learns to generate new data samples with the same distribution as the training set.

A GAN is mainly composed of two parts: a discriminator network and a generator network. The core idea of the GAN is to train the generator indirectly through the discriminator network, which is itself dynamically updated. This means that the generator network is not trained to minimize the distance to a particular target sample, but to fool the discriminator network.

The generator network produces candidates, and the discriminator network evaluates them. The competition is conducted in terms of data distributions: the generator network learns to map from a latent space to the data distribution of interest, while the discriminator network distinguishes the candidates produced by the generator from the real data distribution. The training goal of the generator network is to increase the error rate of the discriminator network.

The discriminator is trained on the initial training dataset by showing it samples from that dataset, while the generator is trained according to whether it succeeds in fooling the discriminator. Usually, the seed of the generator is a random input sampled from a predefined latent space, and the candidates synthesized by the generator are then evaluated by the discriminator.
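As a minimal illustration of this adversarial game, the following PyTorch sketch alternates a discriminator step and a generator step; the toy fully connected architectures and hyper-parameters are placeholders and not those used in this paper.

```python
# Minimal GAN training step: the discriminator learns to separate real from
# generated samples, and the generator learns to fool the discriminator.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    real_label, fake_label = torch.ones(b, 1), torch.zeros(b, 1)

    # 1) Discriminator step: distinguish real samples from generated candidates.
    z = torch.randn(b, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(real_batch), real_label) + bce(D(fake), fake_label)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator step: maximize the discriminator's error on generated samples.
    z = torch.randn(b, latent_dim)
    loss_g = bce(D(G(z)), real_label)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```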

III Problem Statement

Consider a personalized federated learning setting with $N$ clients, where each client $C_i$ ($i = 1, \dots, N$) has a private dataset $D_i$ drawn from a distribution $P_i$. Since the data of different clients in personalized federated learning scenarios are generally Non-IID, we have

$P_i \neq P_j, \quad i \neq j$   (1)

Each client $C_i$ also has its own model $M_i = M(A_i, W_i)$, where $A_i$ is the model architecture and $W_i$ is the corresponding set of model parameters of $M_i$. In a personalized federated learning scenario compatible with heterogeneous model architectures, we assume that the model architectures of different clients can be different, that is,

$A_i \neq A_j, \quad i \neq j$   (2)

In this case, the parameters $W_i$ of $M_i$ and $W_j$ of $M_j$ also differ. The difference lies not only in the parameter values but possibly also in the number of parameters, because different model architectures are likely to have different numbers of parameters. Even in the coincidental case where $W_i$ and $W_j$ contain the same number of parameters, their values have different meanings, so comparing them directly makes no sense.

Define a loss function $l(M_i, s)$ to evaluate the performance of the model $M_i$ on a sample $s$; the smaller the value of $l$, the better the performance of $M_i$ on $s$. For a classification task, $l(M_i, s)$ is 0 when $s$ is correctly classified by $M_i$, and 1 otherwise. For a client $C_i$, its task objective is to optimize $W_i$ for its given model architecture $A_i$ so as to minimize the expectation of $l$ on $P_i$:

$\min_{W_i} \; \mathbb{E}_{s \sim P_i} \left[ l(M(A_i, W_i), s) \right]$   (3)

Here $\mathbb{E}$ denotes expectation. For classification tasks, Formula (3) means optimizing $W_i$ to minimize the generalization classification error of the model $M_i$. Assume that the optimal model parameters obtained by local training of client $C_i$ on $D_i$ are $W_i^{\mathrm{local}}$:

$W_i^{\mathrm{local}} = \arg\min_{W_i} \frac{1}{|D_i|} \sum_{s \in D_i} l(M(A_i, W_i), s)$   (4)

The aim of personalized federated learning is to train a personalized model with parameters $W_i^{\mathrm{fed}}$ for each client $C_i$ collaboratively, without exposing $D_i$, $A_i$, or $W_i$ to any other client $C_j$ ($j \neq i$) or the center:

$W_i^{\mathrm{fed}} = \arg\min_{W_i} \; \mathbb{E}_{s \sim P_i} \left[ l(M(A_i, W_i), s) \right]$   (5)

The performance of the model obtained by federated training should be sufficiently high. Specifically, the model performance of each client in federated learning should not be lower than that of the locally trained model on $P_i$:

$\mathbb{E}_{s \sim P_i} \left[ l(M(A_i, W_i^{\mathrm{fed}}), s) \right] \leq \mathbb{E}_{s \sim P_i} \left[ l(M(A_i, W_i^{\mathrm{local}}), s) \right]$   (6)

For classification tasks, this means that the generalization classification accuracy of each participant's federated training model is not lower than that of its locally trained model.

IV PerFED-GAN

In this section, we introduce the main ideas of the PerFED-GAN method for personalized federated learning training, and analyze its algorithmic complexity and convergence.

IV-A The Overall Steps of PerFED-GAN

The primary motivation for federated learning is to improve client model performance. The reason for the poor performance of a locally trained model is frequently a lack of local data, which prevents the model from learning enough task knowledge by training only on the local dataset. As a result, in order to improve the performance of the client models, they must be able to learn from other clients. The most popular and direct ways to communicate knowledge are to share data or models; however, these are disallowed in our federated learning environment, so we need to develop alternative methods.

Studies related to co-training show that, to improve the performance of classification task models, a large enough diversity between the models is required [35]. In general, generating models with substantial divergence in single-view settings is difficult. In federated learning contexts, however, client models may have quite distinct architectures and be trained on individualized private datasets that are very likely to be Non-IID. All of these factors may result in large differences between the models of different clients. As a result, our central idea is to use the model trained on each client's local data as the discriminator of a GAN, and to use the corresponding generator to produce fresh samples. After receiving these generated datasets, other clients use them to further train their local models, thereby boosting the performance of their tailored models. We give the overall steps of PerFED-GAN in Fig. 1 and as follows.

Fig. 1: Framework of PerFED-GAN (3 clients example).
  1. Local model training: Each client independently trains its model on its local dataset.

  2. GAN training: Each client uses the local model trained in the previous step as a discriminator network to train a generator network, and uses it to generate a new dataset.

  3. Communication: After that, the center aggregates the generated data samples collected from the clients according to the label, and sends the results back to each client.

  4. Client model updating: Each client updates its model by training it on the new dataset merged from its local dataset and the received new dataset. After that, Step 2) to Step 4) are executed in a loop to further improve the performance of the personalized federated learning model.

It should be pointed out that the local model training method and the GAN training method used in PerFED-GAN are modular and replaceable. In theory, almost all neural network training methods and GAN training methods can be applied to PerFED-GAN.

After the GAN training stage, the central server collects data of all categories from the data generated by each client during the aggregation process. For example, suppose there are 3 categories, A, B, and C. Client 1 generates a dataset {A1, A2, B1, B2, C1, C2}, Client 2 generates a dataset {A3, A4, B3, B4, C3, C4}, and Client 3 generates a dataset {A5, A6, B5, B6, C5, C6}. The central server aggregates these into a large dataset {A1, A2, …, A6, B1, B2, …, B6, C1, C2, …, C6}, and then randomly selects some samples from this large dataset to send to each client. The subsequent training step trains each client model on its private dataset together with the aggregated dataset received from the center. The aggregated dataset received by each client from the center contains generated samples from other clients. These samples and their corresponding labels can be regarded as other clients' labeling of unlabeled samples, because the samples are generated by other clients according to the target labels. At this point, training with the aggregated dataset is equivalent to training with the results of labeling unlabeled data with another classifier in co-training.
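To summarize the above steps in code, the following is a schematic sketch of one PerFED-GAN communication round. The three callables stand in for the modular, replaceable local-training and GAN-training routines mentioned above, and the dictionary-based client representation and sample budget are purely illustrative.

```python
# Schematic sketch of one PerFED-GAN communication round (Steps 2-4 above).
# The callables are placeholders for whatever concrete training routines are
# plugged in; this is not the paper's exact implementation.
import random
from collections import defaultdict
from typing import Callable, List, Tuple

Sample = Tuple[object, int]   # (input, class label)

def perfed_gan_round(clients: List[dict],
                     train_generator: Callable,   # (discriminator, data) -> generator
                     generate: Callable,          # (generator, n) -> List[Sample]
                     train_local: Callable,       # (model, data) -> model
                     alpha: int = 5) -> None:
    # Client side: train a GAN whose discriminator is the local model, generate samples.
    pool_by_label = defaultdict(list)
    for c in clients:
        gen = train_generator(c["model"], c["data"])
        for x, y in generate(gen, alpha * len(c["data"])):
            pool_by_label[y].append((x, y))       # center aggregates uploads by label

    # Center side: pool the generated samples and redistribute random subsets.
    pool = [s for group in pool_by_label.values() for s in group]
    for c in clients:
        k = min(alpha * len(c["data"]), len(pool))
        received = random.sample(pool, k)
        # Client side: update the personalized model on local data plus received samples.
        c["model"] = train_local(c["model"], c["data"] + received)
```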

IV-B Algorithm Analysis

PerFED-GAN is a personalized federated learning scheme characterized by its support for clients with different model architectures; its detailed algorithm flow is shown in Algorithm 1. $A_i$ is the model architecture and $W_i$ the corresponding model parameters of client $C_i$. $G_i$ is the generator network of $C_i$ with parameters $\theta_i$. $D_i$ is the local training dataset of client $C_i$. MAXROUND is the maximum number of communication rounds. $S_i$ is the dataset containing the samples generated by $G_i$ together with their generating labels. $S_c$ is a dataset maintained on the central server for storing the samples in $S_i$ uploaded by each client. $T_i$ is a dataset containing all samples in $D_i$ and the samples randomly selected from $S_c$ for client $C_i$.

Some parts of the PerFED-GAN algorithm are independently replaceable, including the local model initialization training method, the GAN training method, and the local model update training method. In theory, any method that can be used to train a neural network can serve as a replacement. Therefore, PerFED-GAN can take advantage of the latest neural network optimization research results to enhance the training effect with hardly any additional changes.

The client-side part of the algorithm can be executed by different clients in parallel and asynchronously. In the task executed by the central server, the time complexity of merging the datasets uploaded by each client is mainly proportional to the total number of samples uploaded by all clients. The number of uploaded samples can be adjusted according to actual conditions, and more samples can be uploaded when the communication bandwidth is sufficient. In addition, directly uploading the generator network to the center and generating samples at the center is also a feasible strategy, but it may bring additional risk of model privacy leakage. Moreover, the impact of uploading a generative model versus uploading generated samples on training time should be evaluated according to factors such as communication capacity, sample size, the size of the generative model, and the computing resources of the central server.

Input: local datasets $D_1, \dots, D_N$; model architectures $A_1, \dots, A_N$
Output: personalized model parameters $W_1, \dots, W_N$
// Clients
for $i = 1$ to $N$ in parallel do
    Update $W_i$ by local training of $M(A_i, W_i)$ on $D_i$.
for round $= 1$ to MAXROUND do
    // Clients
    for $i = 1$ to $N$ in parallel do
        Update $\theta_i$ by training the GAN consisting of $M(A_i, W_i)$ (discriminator) and $G_i$ (generator) on $D_i$.
        Generate dataset $S_i$ from random seeds by $G_i$.
        Upload $S_i$ to the center server.
    // Center
    $S_c$ = empty set containing all classes.
    for $i = 1$ to $N$ do
        Merge $S_i$ into $S_c$ according to the class of each sample of $S_i$.
    for $i = 1$ to $N$ do
        Randomly select samples from $S_c$ and send them to client $C_i$.
    // Clients
    for $i = 1$ to $N$ in parallel do
        Receive and merge the samples from the center with $D_i$ to get $T_i$.
        Update $W_i$ by local training on $T_i$.
return $W_1, \dots, W_N$
Algorithm 1 PerFED-GAN

IV-C Convergence Analysis

Let $\mathcal{X}$ be an input space with distribution $\mathcal{D}$ and $\mathcal{Y}$ an output space. For simplicity, suppose $\mathcal{Y} = \{0, 1\}$. Let $\mathcal{H}$ denote the hypothesis space, $h \in \mathcal{H}$, and let $h^{*} \in \mathcal{H}$ be the ground truth, which means the generalization error of $h^{*}$ is 0.

Definition 1. Suppose we have two classifiers $h_i, h_j \in \mathcal{H}$; we define the generalization disagreement between them as

$d(h_i, h_j) = \Pr_{x \sim \mathcal{D}} \left[ h_i(x) \neq h_j(x) \right]$   (7)

Obviously, for two classifiers $h_i, h_j$, $d(h_i, h_j) = d(h_j, h_i)$, and for any $h \in \mathcal{H}$, its generalization error can be expressed as $d(h, h^{*})$. If

$\Pr \left[ d(h, h^{*}) \geq \epsilon \right] \leq \delta$   (8)

where $0 < \delta < 1$, we say that the generalization error of the classifier $h$ is less than $\epsilon$ with confidence parameter $\delta$. That is, the difference between the classifier $h$ and the oracle $h^{*}$ is bounded by $\epsilon$ with high probability (more than $1 - \delta$). For a certain $\delta$, a better classifier corresponds to a smaller $\epsilon$, and vice versa.

Theorem 1. Given two initial training datasets $\sigma_1$ of size $l_1$ and $\sigma_2$ of size $l_2$, large enough to train two classifiers $h_1$ and $h_2$ whose generalization errors are bounded by $\epsilon_1$ and $\epsilon_2$ respectively with confidence parameter $\delta$, suppose a GAN using $h_2$ as its discriminator network generates a dataset $\sigma_g$ of samples, which is merged with $\sigma_1$ to form $\sigma_1'$. Then $h_1'$ is trained from $\sigma_1'$ by minimizing the empirical risk. If conditions

(9)
(10)
(11)

are satisfied, then

(12)

holds.

Proof. Let $\hat{d}$ denote the expected rate of disagreement between the classifier being trained and the labels of $\sigma_1'$; then

(13)

where $\sigma_1'$ consists of $\sigma_1$ and $\sigma_g$, and $\sigma_g$ is generated by the GAN whose discriminator is $h_2$, so that

(14)

By minimizing the empirical risk, the training process generates the classifier that has the smallest observed rate of disagreement on the training dataset. This means that training $h_1'$ is equivalent to minimizing this observed disagreement. In order to train a better classifier whose generalization error is bounded by a smaller value, the dataset should guarantee that the probability that a classifier whose generalization error is no less than this bound has a lower observed rate of disagreement on $\sigma_1'$ than $h_1'$ is small enough (less than $\delta$).

The generalization error of the classifier $h_2$ is upper bounded by $\epsilon_2$ with confidence parameter $\delta$, so $d(h_2, h^{*})$ is no bigger than $\epsilon_2$. The probability that a classifier with generalization error no less than the target bound has a lower observed rate of disagreement on $\sigma_1'$ than $h_1'$ is less than

(15)

Given that

(16)

If , then

(17)

Since Formula (15) is monotonically decreasing as increases when , and , the value of Formula (15) is less than

(18)

That is, the probability that such a classifier has a lower observed rate of disagreement on $\sigma_1'$ than $h_1'$ is less than Formula (18). The value of Formula (18) can be approximately calculated by the Poisson theorem:

(19)

Assuming , then

(20)

Since , then

(21)

Considering that there are at most a bounded number of classifiers having generalization error no less than the required bound whose observed rate of disagreement with $h_1'$ is lower, we therefore have

(22)

and Theorem 1 is proved.

From Formulas (16) and (12), we can see that the larger the disagreement term is, the lower the bound on the generalization error of $h_1'$ is at the same confidence level. Since $h_1'$ is trained on $\sigma_1$ and $\sigma_g$, and $\sigma_g$ is generated by the GAN whose discriminator is $h_2$, this disagreement mainly depends on how large the divergence between $h_1$ and $h_2$ is, or equivalently how different the training datasets of $h_1$ and $h_2$ are. Large diversity between training datasets is very common in personalized federated learning settings, so the condition for performance improvement is usually met. By symmetry, if we swap $h_1$ and $h_2$, the same conclusion applies to the improved version of $h_2$.

In multiple rounds of training, the classifiers obtained from the previous round are used as new initial classifiers to repeat the above process, which can further improve the performance of each client's personalized model. It should be mentioned, however, that during this process the disparity between the models of different clients will gradually diminish. According to Theorem 1, this results in a reduced model performance improvement until the models no longer improve. Continued training may cause the models of the various clients to eventually converge and lose their individualized features, which is counterproductive to the goal of obtaining high-performance tailored models.

PerFED-GAN typically has considerably more than two clients participating in federated training, but our convergence analysis still applies. This is because, for a given client, all other clients can be viewed as a whole, i.e., an ensemble classifier, which is then equivalent to the second classifier in the preceding theoretical analysis. We can determine the convergence of PerFED-GAN by performing the above analysis for each client.

An intuitive explanation for the PerFED-GAN method is that when the models of different clients are highly diverse, the gap in the information possessed by the models is larger, so the information each client model obtains from the others contains more knowledge that is new to it, and more fresh knowledge leads to more significant performance gains. Conversely, when substantial mutual learning between the models has already taken place, the knowledge diversity between them almost disappears, and at this point mutual learning is unlikely to provide any client with new information.

IV-D Hyper-parameter

In Theorem 1, the left side of Formula (11) increases with the size $l_1$ of the initial training dataset, which means that, for a fixed $\delta$ and a given classifier with generalization error bounded as above, a large enough generated dataset $\sigma_g$ is needed.

In real applications of PerFED-GAN, the size of each client's private training dataset, $|D_i|$, may differ. It can be seen from Formula (11) that when $|D_i|$ is larger, the generated dataset needs to be larger to meet the requirement in Formula (11). We therefore use a hyperparameter $\alpha$ to determine the size of the generated dataset required by each client, that is,

number of generated samples provided to $C_i$ $= \alpha \cdot |D_i|$   (23)

In this way, a reasonable value of $\alpha$ can be set to provide clients with differently sized private datasets a sufficient number of generated samples, while avoiding the high communication cost caused by excessive sample-generation requirements.

IV-E PerFED-GAN with Differential Privacy

GANs may generate samples that are similar to the original training samples, so the generated data uploaded in PerFED-GAN carries a potential risk of data privacy leakage. In the experimental section, we show that the quality of the generated samples can be reduced by limiting the number of GAN training rounds to reduce the risk of leaking raw data, and the experimental results show that reducing the GAN training rounds causes only a small degradation in model quality. In addition, a further measure to protect the privacy of raw data is the introduction of differentially private GANs. Torkzadehmahani et al. [32] proposed a differentially private conditional GAN, DP-CGAN, which can be used to generate data of specified classes while preserving the privacy of the training data. Therefore, the GAN training module in PerFED-GAN can be replaced with DP-CGAN or a GAN training algorithm with similar DP technology, which can effectively prevent the generated samples from leaking training data privacy.

V Experiments

In this section, we apply the PerFED-GAN method in a variety of federated learning settings to investigate the impact of various conditions and compare it with existing federated learning methods.

V-A Datasets

CIFAR10 and CIFAR100 [16]: The CIFAR10 dataset consists of 32x32 RGB images in 10 classes, with 6,000 images per class. There are 5,000 training images and 1,000 test images per class. The CIFAR100 is just like the CIFAR10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR100 are grouped into 20 superclasses. Each superclass consists of 5 classes, so there are 2,500 training images and 500 testing images per superclass. The classes in CIFAR10 are completely mutually exclusive, as well as in CIFAR100.

FEMNIST[3]: LEAF is a modular benchmark framework for learning in federated settings. FEMNIST is an image dataset for the classification task of LEAF. FEMNIST consists of 805,263 samples of handwritten characters (62 different classes including 10 digits, 26 uppercase and lowercase English characters) from 3550 users.

V-B Data Settings

IID Data Setting: In the IID context, it is typically assumed that the data distributions of different clients’ local datasets are similar and independent, i.e., they are independent and identically distributed.

Using the CIFAR10 dataset as an example, the private dataset of each client under the IID setting is constructed by sampling uniformly at random from the CIFAR10 training set. To avoid interference from overlapping samples, sampling without replacement is used to ensure that no overlapping samples exist among the clients' training datasets.
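A minimal sketch of such an IID partition at the index level; the function name and seed are illustrative, and the 100-client, 500-samples-per-client configuration matches the CIFAR experiments described later.

```python
# Illustrative IID partition: each client draws samples uniformly at random,
# without replacement, from the full training set (index level only).
import numpy as np

def iid_split(num_train=50_000, num_clients=100, samples_per_client=500, seed=0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_train)            # no sample appears twice
    return [perm[i * samples_per_client:(i + 1) * samples_per_client]
            for i in range(num_clients)]
```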

Non-IID Data Setting: The IID environment is an overly ideal federated learning environment, and existing federated machine learning approaches typically perform well in this context. In practice, however, the IID assumption is far too idealistic, particularly in personalized federated learning settings. In contrast to the IID configuration, the distributions of the clients' local private data are not independent and identically distributed under the Non-IID setting, and there may be significant discrepancies between them. As an example, consider the CIFAR10 dataset: in the Non-IID setting, some clients may have very few samples of a specific category, while other clients have many samples of that category. In extreme cases, there may even be pathological distribution differences, e.g., some clients have no samples of category A at all, while other clients have no samples of category B at all. Existing federated learning methods usually perform poorly in Non-IID situations: the convergence speed is greatly reduced and the final results are poor. Therefore, the Non-IID data setting is generally considered more challenging than the IID setting, and it is also a problem that has to be faced in personalized federated learning.

V-C Model Settings

The PerFED-GAN method enables clients to independently design neural networks with different architectures. We use convolutional neural networks (CNNs) with different architectures as the personalized model of each client. The model architecture of each client is a randomly generated 2-layer or 3-layer convolutional neural network, using ReLU as the activation function, with each convolutional layer followed by a 2×2 max pooling layer. The numbers of filters in the convolutional layers are randomly selected from {20, 24, 32, 40, 48, 56, 80, 96} in ascending order. A global average pooling layer and a fully connected (dense) layer are inserted before the softmax layer of each network. In our experiments, 100 clients participate in federated learning training. Table I shows the design parameters of 10 of the network structures as examples.

Model | 1st conv layer  | 2nd conv layer  | 3rd conv layer
1     | 24 3×3 filters  | 40 3×3 filters  | none
2     | 24 3×3 filters  | 32 3×3 filters  | 56 3×3 filters
3     | 20 3×3 filters  | 32 3×3 filters  | none
4     | 24 3×3 filters  | 40 3×3 filters  | 56 3×3 filters
5     | 20 3×3 filters  | 32 3×3 filters  | 64 3×3 filters
6     | 24 3×3 filters  | 32 3×3 filters  | 64 3×3 filters
7     | 32 3×3 filters  | 32 3×3 filters  | none
8     | 40 3×3 filters  | 56 3×3 filters  | none
9     | 32 3×3 filters  | 48 3×3 filters  | none
10    | 48 3×3 filters  | 56 3×3 filters  | 96 3×3 filters
TABLE I: Network Architectures
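As an illustration of how architectures like those in Table I can be drawn at random, the sketch below builds a 2- or 3-layer CNN following the description above (filter counts from the stated set in ascending order, ReLU, 2×2 max pooling, global average pooling and a dense softmax head). It uses Keras only for concreteness; the paper does not tie the method to any specific framework.

```python
# Illustrative random client CNN following the description in Sec. V-C.
import random
from tensorflow import keras
from tensorflow.keras import layers

FILTER_CHOICES = [20, 24, 32, 40, 48, 56, 80, 96]   # filter counts given in the text

def random_client_cnn(input_shape=(32, 32, 3), num_classes=10, seed=None):
    rng = random.Random(seed)
    depth = rng.choice([2, 3])                        # 2- or 3-layer CNN
    filters = sorted(rng.choices(FILTER_CHOICES, k=depth))   # ascending filter counts
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for f in filters:
        model.add(layers.Conv2D(f, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=2))   # 2x2 max pooling after each conv
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes, activation="softmax"))   # dense + softmax head
    return model
```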

V-D GAN Settings

In each set of experiments, we equip each client with a corresponding GAN model to generate samples representing the characteristics of its data distribution. The structures of these GANs are independent of each other and do not need to be shared with other clients. In the GAN of a client, the discriminator is the client's own task model, and the generator is similar in structure to the discriminator.

V-E Performance Evaluation

The motivation of federated learning is to obtain a higher-quality model than local training does, so relative test accuracy (RTA) can be used to measure the improvement in model quality. For example, if the test classification accuracy of the local model is 60% and the test accuracy of the model obtained through federated learning on the same test set is 80%, then the relative test accuracy of the federated learning method for that client is 80%/60% ≈ 1.33. In our experiments, the performance of a federated learning method is evaluated by the mean relative test accuracy (MRTA), which is the average of the relative test accuracies of the federated learning models of all clients:

$\mathrm{MRTA} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{RTA}_i$   (24)

where $N$ is the number of clients and $\mathrm{RTA}_i$ is the relative test accuracy of client $C_i$.
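For clarity, a direct transcription of the RTA/MRTA computation (purely illustrative; accuracies are fractions in [0, 1]):

```python
# RTA_i is the ratio of a client's federated-model test accuracy to its
# locally trained model's accuracy; MRTA averages RTA over all clients.
def mean_relative_test_accuracy(local_acc, federated_acc):
    rta = [f / l for l, f in zip(local_acc, federated_acc)]
    return sum(rta) / len(rta)

# Example from the text: a local accuracy of 60% and a federated accuracy of 80%
# give an RTA of roughly 1.33.
print(mean_relative_test_accuracy([0.60], [0.80]))   # ~1.333
```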

V-F CIFAR Dataset Experiments

In the CIFAR experiments, we partition the CIFAR10 and CIFAR100 datasets in two ways, i.e., the above-mentioned IID setting and Non-IID setting. In the experiments, the local training dataset size of each client is 500. Under the IID setting, the training sets of CIFAR10 and CIFAR100 are randomly and evenly distributed to 100 clients. Under the Non-IID setting, the training dataset of a client is constructed as follows: randomly select 60% of all categories (6 categories for CIFAR10, 60 categories for CIFAR100), uniformly sample 450 samples at random from these categories, and randomly select the remaining 50 samples from the remaining 40% of the categories.

The result of this configuration is that each client has a relatively large number of training samples for 60% of the categories, while the number of training samples for the remaining 40% of the categories is insufficient (or they are even missing). It should be noted that in both the IID setting and the Non-IID setting, since sampling without replacement is used, it is ensured that there are no overlapping training samples among clients.
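A rough sketch of this Non-IID partition; the per-class pooling and the way exhausted classes are handled are illustrative bookkeeping choices, not the paper's exact procedure.

```python
# Illustrative Non-IID partition: each client draws 450 samples from a random
# 60% of the classes and 50 from the remaining 40%, without replacement
# across clients (per-class pools guarantee no overlap).
import numpy as np

def non_iid_split(labels, num_clients=100, major_frac=0.6, n_major=450, n_minor=50, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    # Per-class pools of still-unused sample indices.
    pools = {c: rng.permutation(np.where(labels == c)[0]).tolist() for c in classes}

    def draw(cls_subset, n):
        picked = []
        for _ in range(n):
            c = rng.choice(cls_subset)
            if pools[c]:                      # skip classes whose pool is exhausted
                picked.append(pools[c].pop())
        return picked

    client_indices = []
    for _ in range(num_clients):
        major = rng.choice(classes, size=int(major_frac * len(classes)), replace=False)
        minor = np.setdiff1d(classes, major)
        client_indices.append(draw(major, n_major) + draw(minor, n_minor))
    return client_indices
```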

For the test set used for evaluation, in the IID case it can be considered that the task demands of all clients are the same because their training dataset distributions are similar, so we use all samples in the test sets of CIFAR10 and CIFAR100 to test the model of each client. In the Non-IID case, the needs of each client are personalized, which is reflected by the differences in the distributions of their training datasets. Therefore, in order to test the performance of a personalized model, the distribution of its test dataset should be similar to that of its training dataset. This means that for a specific client, all test samples cannot be used directly; instead, the numbers of test samples of some classes are reduced so that the distribution of the test dataset is consistent with that of the training dataset.

In addition, in these experiments we set the hyperparameter $\alpha$ to 5, that is, each client obtains a generated dataset with 5 times the number of its local training samples.

(a) IID setting
(b) Non-IID setting
Fig. 2: Results of CIFAR10 experiments.
(a) IID setting
(b) Non-IID setting
Fig. 3: Results of CIFAR100 experiments.

We compare the accuracies of the federated models with those of the locally trained models on CIFAR10 and CIFAR100 under the IID and Non-IID settings, and the results are shown in Fig. 2 and Fig. 3. We find that the PerFED-GAN method improves the test accuracies of client models by 8%-33% in the IID setting, with an average of 17.2%, in the CIFAR10 experiments. This shows that PerFED-GAN can bring significant performance improvement to client models even when the model differences are small. In the CIFAR100 experiments, the performance of the PerFED-GAN models relative to the local models improves by 15%-41% under the IID setting, with an average of 24%. This improvement is more pronounced than in the CIFAR10 experiments because the number of samples available for training in each category is smaller, so the benefit of federated learning is greater. For the Non-IID data setting, the improvement brought by PerFED-GAN is even more significant due to the greater diversity among client models, since their training datasets come from different distributions. In our experiments, PerFED-GAN achieves a relative improvement in average test accuracy of 35% for CIFAR10 and 49% for CIFAR100. For individual clients, the improvement ranges from 14% to 67% for CIFAR10 and from 22% to 85% for CIFAR100. These results are significantly better than those under the IID setting, indicating that PerFED-GAN is less affected by statistical heterogeneity. Therefore, PerFED-GAN is more suitable for personalized federated learning scenarios where client tasks are more personalized and data distributions differ more.

V-G FEMNIST Dataset Experiment

Unlike the CIFAR datasets, which are standard benchmarks for general machine learning, the FEMNIST dataset is customized for federated learning settings. We selected the 100 users with the largest numbers of samples from the 3,550 users of FEMNIST as clients. 40% of the samples of each client are used as training data, and the rest are test data. Each client needs to train a model to recognize the user's handwritten characters of 62 classes.

The training dataset construction in the IID and Non-IID settings is similar to the CIFAR experiments. The model architectures of the clients are the same as those used in the CIFAR experiments, and the local training parameters are also tuned in a similar way. In this experiment, we set the hyper-parameter $\alpha$ to 5, and the results are shown in Fig. 4. We find that PerFED-GAN improves the test accuracies of almost all clients, by an average of 17% (range 9%-31%) in the IID setting and an average of 43% (range 21%-80%) in the Non-IID setting.

(a) IID setting
(b) Non-IID setting
Fig. 4: Results of FEMNIST experiments.

V-H Hyper-parameter

According to the theoretical analysis, PerFED-GAN needs enough generated samples for each client, and the number of samples required increases with the number of local private training samples. The hyperparameter $\alpha$ is the ratio of the number of generated samples provided to a client to the number of its local training samples. A value of $\alpha$ that is too small may weaken the training effect of PerFED-GAN, while a value that is too large may bring excessive communication costs. In this part, we test the effect of different settings of $\alpha$ on the PerFED-GAN method.

We repeat the CIFAR100 experiments with different values of $\alpha$ and record the changes in the performance of PerFED-GAN. The results are shown in Fig. 5.

Fig. 5: $\alpha$ vs. MRTA in the CIFAR100 experiments.

The results show that as $\alpha$ increases, PerFED-GAN indeed achieves higher performance, but the effect is not proportional: the benefit diminishes as $\alpha$ grows, while the number of generated samples required, and hence the communication overhead, increases in proportion to $\alpha$. For example, an $\alpha$ of 32 means that each client needs generated samples amounting to 32 times its local dataset, which greatly increases the communication cost; the improvement is only about 2% over the result with $\alpha$ equal to 8, while the communication overhead is increased by 300%.

In practical applications, communication conditions and costs should be considered, and an appropriate value of $\alpha$ should be selected to trade off the model performance gain brought by PerFED-GAN against the communication cost.

V-I Round of GAN Training

Fig. 6: Number of GAN training rounds vs. MRTA in the CIFAR100 experiments.

From the theoretical analysis, it can be seen that the more similar the samples generated by a GAN are to the training samples, the more accurately the generated samples express the information of the training samples. As a result, other client models that use these generated samples can learn this information more accurately, which brings more significant performance gains to the federated learning models. However, high-quality generated samples may also cause leakage of the training samples, and avoiding data privacy leakage is a basic premise of federated learning. Therefore, we try to control the quality of the generated samples by limiting the number of GAN training rounds. When the number of training rounds is small, the quality of the generated samples is lower, which better protects the privacy of the clients' local data. We investigate whether using these low-quality generated samples with PerFED-GAN can still yield sufficient federated learning performance gains.

We repeated the experimental process on CIFAR100 with different numbers of GAN training rounds to measure their impact on the final performance of PerFED-GAN, using MRTA for performance evaluation. The results are shown in Fig. 6. They show that the higher-quality generated samples obtained with more rounds of GAN training can indeed improve PerFED-GAN performance, but even when GANs trained for fewer rounds are used to generate samples, so that the sample quality is very low and the risk of privacy leakage is low, PerFED-GAN still achieves good results, not much worse than with more GAN training rounds.

Therefore, in practical applications, PerFED-GAN only needs to use lower-quality GANs to generate samples, which protects local private data and reduces the computational overhead and time cost in local training of GANs.

V-J DP-CGAN Experiments

In addition to directly controlling the number of GAN training rounds to reduce the risk of the generated data leaking training data privacy, we also conduct experiments in which DP-CGAN is used for GAN training. According to the study in [32], DP-CGAN can effectively reduce the risk of exposing training data privacy while maintaining high generation quality.

(a) IID setting
(b) Non-IID setting
Fig. 7: DP-CGAN Results of CIFAR10 experiments.

We repeated the experimental process of CIFAR10 and CIFAR100, but replaced the GAN training part of the original algorithm with DP-CGAN, to examine the impact of this replacement on the final performance of PerFED-GAN, using MRTA for performance evaluation. The experimental results in Fig. 7 and Fig. 8 show that, after applying DP-CGAN with its stronger privacy protection ability, the average accuracy of the personalized federated learning of PerFED-GAN decreases, but only by a small margin, indicating that the stronger data privacy protection comes at only a small performance cost.

(a) IID setting
(b) Non-IID setting
Fig. 8: DP-CGAN Results of CIFAR100 experiments.

V-K Comparison with Other Federated Learning Methods

PerFED-GAN is a personalized federated learning method that is compatible with heterogeneous models having different network architectures, so there are few federated learning methods that can be compared with it directly.

We use the personalized FedAvg method [5] for comparison, which fine-tunes the global federated model on each client to perform its personalized task. At the same time, we also apply the same idea to the FedProx method [18], which is considered to perform better in the case of Non-IID data distributions. For methods compatible with different network structures, we choose the FedMD method [17] for comparison. This method uses model distillation and a proxy dataset, aligning different client models on the proxy dataset to achieve personalized federated learning.

We use the data configuration of CIFAR100 experiments in this subsection. Considering that the personalized FedAvg and FedProx methods cannot support models with different architectures, we use 100 clients having neural network models with the same architecture and different parameter values for comparison. In the experiment for comparing with FedMD, we adopt the same experiment setup as our CIFAR100 experiments, that is, we selected 100 clients with different architectures of neural networks.

We tried different hyperparameter settings for better performance in personalized FedAvg and FedProx experiments. In FedMD experiments, we use the training settings suggested by [17] since it is similar to our experimental conditions, i.e., using the training set of CIFAR10 as the public dataset and using 5000 samples in each round for model alignment.

(a) IID setting
(b) Non-IID setting
Fig. 9: Method comparisons for the same model architecture. The FedAvg-based and FedProx-based methods are only compatible with federated learning scenarios where all clients have the same model architecture.
(a) IID setting
(b) Non-IID setting
Fig. 10: Method comparisons for different model architectures. FedMD and PerFED-GAN are compatible with federated learning scenarios where clients have different model architectures.

The results of comparing FedAvg and FedProx with PerFED-GAN are shown in Fig. 9. In the IID setting, FedAvg with fine-tuning achieves the best performance. PerFED-GAN shows significantly better performance than the comparison methods in the early communication rounds. Although PerFED-GAN fails to achieve the best performance after 100 rounds of communication, this is expected, because PerFED-GAN is mainly designed for personalized federated learning with large differences in data distribution. In the Non-IID setting, FedAvg and FedProx both need more communication rounds to stabilize at good performance. Here, PerFED-GAN shows great advantages over the other methods in Fig. 9(b), leading the best comparison method by 12%. Moreover, it achieves its best results with fewer communication rounds. In terms of time consumption, the PerFED-GAN method needs to train the GAN and train on the aggregated dataset, and thus incurs more in-client training time than parameter aggregation methods such as FedAvg and FedProx. However, supporting personalized federated training of heterogeneous model architectures and achieving better performance in Non-IID scenarios are the most important advantages of the PerFED-GAN algorithm, and they are also the main motivation for this paper.

The results of comparing FedMD with PerFED-GAN are shown in Fig. 10. In both the IID and Non-IID settings, FedMD does not reach the performance of PerFED-GAN: PerFED-GAN leads by 21% in the IID setting and 42% in the Non-IID setting, and during the iterative process FedMD even experiences performance degradation. The federated distillation algorithm FedMD requires an additional public dataset, and in each round each client needs to train on the public dataset to align the output probability vectors of the different client models. The additional time consumption therefore mainly depends on the size of the public dataset used for federated distillation; in this set of FedMD experiments, the public dataset is 10 times the size of the local private dataset. In comparison, the training of the GAN part in the PerFED-GAN experiments in this paper only needs to be performed on the local dataset, and the subsequent training uses generated samples amounting to 5 times the number of local samples, which is smaller than the public dataset in FedMD. From the experimental results, the quality of the personalized models of PerFED-GAN is significantly better than that of the federated distillation method FedMD. Furthermore, it should be pointed out that PerFED-GAN does not need to rely on additional datasets. In federated distillation, a huge public dataset is necessary, and it needs to be highly related to the federated learning task; the availability of such a public dataset limits the scope of application of the federated distillation method.

In summary, compared with other methods, PerFED-GAN is not only compatible with different network structure designs, but also suitable for personalized federated learning scenarios with large differences in client data distributions. Our method outperforms the comparison methods especially in Non-IID data settings, and it requires fewer communication rounds, so training can be completed more quickly.

VI Conclusion

In this paper, we propose the PerFED-GAN federated learning method, which is compatible with heterogeneous model architectures. PerFED-GAN is more suitable for personalized federated learning settings with higher heterogeneity and personalized needs than existing methods. Furthermore, PerFED-GAN protects not only the private data of all clients in federated learning settings, but also their private models and training strategies. PerFED-GAN enables clients to share multi-party knowledge to improve the performance of their local models. It produces promising results on Non-IID datasets for heterogeneous models with different architectures, which are more practical but typically difficult to handle in existing federated learning methods.

References

  • [1] M. G. Arivazhagan, V. Aggarwal, A. K. Singh, and S. Choudhary (2019) Federated learning with personalization layers. arXiv preprint arXiv:1912.00818. Cited by: §I, §II-A.
  • [2] A. Blum and T. Mitchell (1998) Combining labeled and unlabeled data with co-training. In The Eleventh Annual Conference on Computational Learning Theory, pp. 92–100. Cited by: §II-B.
  • [3] S. Caldas, S. M. K. Duddu, P. Wu, T. Li, J. Konečnỳ, H. B. McMahan, V. Smith, and A. Talwalkar (2018) Leaf: a benchmark for federated settings. arXiv preprint arXiv:1812.01097. Cited by: §V-A.
  • [4] Y. Chen, F. Luo, T. Li, T. Xiang, Z. Liu, and J. Li (2020) A training-integrity privacy-preserving federated learning scheme with trusted execution environment. Information Sciences 522, pp. 69–79. Cited by: §II-A.
  • [5] Y. Deng, M. M. Kamani, and M. Mahdavi (2020) Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461. Cited by: §II-A, §V-K.
  • [6] D. Gao, Y. Liu, A. Huang, C. Ju, H. Yu, and Q. Yang (2019) Privacy-preserving heterogeneous federated transfer learning. In 2019 IEEE International Conference on Big Data (Big Data), pp. 2552–2559. External Links: Document Cited by: §II-A.
  • [7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. Advances in neural information processing systems 27. Cited by: §II-C.
  • [8] N. Guha, A. Talwalkar, and V. Smith (2019) One-shot federated learning. arXiv preprint arXiv:1902.11175. Cited by: §II-A.
  • [9] F. Hanzely and P. Richtárik (2020) Federated learning of a mixture of global and local models. arXiv preprint arXiv:2002.05516. Cited by: §II-A.
  • [10] R. Hu, Y. Guo, H. Li, Q. Pei, and Y. Gong (2020) Personalized federated learning with differential privacy. IEEE Internet of Things Journal 7 (10), pp. 9530–9539. Cited by: §II-A.
  • [11] Y. Huang, L. Chu, Z. Zhou, L. Wang, J. Liu, J. Pei, and Y. Zhang (2021) Personalized cross-silo federated learning on non-iid data. In AAAI Conference on Artificial Intelligence, Vol. 35, pp. 7865–7873. Cited by: §II-A.
  • [12] Y. Jiang, J. Konečnỳ, K. Rush, and S. Kannan (2019) Improving federated learning personalization via model agnostic meta learning. arXiv preprint arXiv:1909.12488. Cited by: §I, §II-A.
  • [13] J. Kang, Z. Xiong, D. Niyato, Y. Zou, Y. Zhang, and M. Guizani (2020) Reliable federated learning for mobile networks. IEEE Wireless Communications 27 (2), pp. 72–80. Cited by: §I.
  • [14] L. U. Khan, I. Yaqoob, N. H. Tran, S. M. A. Kazmi, T. N. Dang, and C. S. Hong (2020) Edge-computing-enabled smart cities: a comprehensive survey. IEEE Internet of Things Journal 7 (10), pp. 10200–10232. External Links: Document Cited by: §I.
  • [15] J. Konečnỳ, H. B. McMahan, D. Ramage, and P. Richtárik (2016) Federated optimization: distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527. Cited by: §I.
  • [16] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. External Links: Link Cited by: §V-A.
  • [17] D. Li and J. Wang (2019) FedMD: heterogenous federated learning via model distillation. arXiv preprint arXiv:1910.03581. Cited by: §II-A, §V-K, §V-K.
  • [18] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith (2018) Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127. Cited by: §II-A, §V-K.
  • [19] Y. Liu, Y. Kang, C. Xing, T. Chen, and Q. Yang (2020) A secure federated transfer learning framework. IEEE Intelligent Systems 35 (4), pp. 70–82. Cited by: §II-A.
  • [20] Y. Liu, J. James, J. Kang, D. Niyato, and S. Zhang (2020) Privacy-preserving traffic flow prediction: a federated learning approach. IEEE Internet of Things Journal 7 (8), pp. 7751–7763. Cited by: §I, §I.
  • [21] Y. Liu, X. Yuan, Z. Xiong, J. Kang, X. Wang, and D. Niyato (2020) Federated learning for 6g communications: challenges, methods, and future directions. China Communications 17 (9), pp. 105–118. Cited by: §I.
  • [22] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017) Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282. Cited by: §I, §II-A.
  • [23] F. Mo and H. Haddadi (2019) Efficient and private federated learning using tee. In Proc. EuroSys Conf., Dresden, Germany, Cited by: §II-A.
  • [24] V. Mothukuri, R. M. Parizi, S. Pouriyeh, Y. Huang, A. Dehghantanha, and G. Srivastava (2021) A survey on security and privacy of federated learning. Future Generation Computer Systems 115, pp. 619–640. Cited by: §I.
  • [25] P. Kairouz, H. B. McMahan, B. Avent, et al. (2019) Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977. Cited by: §I.
  • [26] K. Qi and C. Yang (2020) Popularity prediction with federated learning for proactive caching at wireless edge. In 2020 IEEE Wireless Communications and Networking Conference (WCNC), Vol. , pp. 1–6. External Links: Document Cited by: §I.
  • [27] H. Ren, J. Deng, and X. Xie (2021) GRNN: generative regression neural network–a data leakage attack for federated learning. arXiv preprint arXiv:2105.00529. Cited by: §I.
  • [28] R. Sánchez-Corcuera, A. Nuñez-Marcos, J. Sesma-Solance, A. Bilbao-Jayo, R. Mulero, U. Zulaika, G. Azkune, and A. Almeida (2019) Smart cities survey: technologies, application domains and challenges for the cities of the future. International Journal of Distributed Sensor Networks 15 (6), pp. 1550147719853984. Cited by: §I.
  • [29] V. Smith, C. Chiang, M. Sanjabi, and A. S. Talwalkar (2017) Federated multi-task learning. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30, pp. . Cited by: §II-A.
  • [30] M. N. Soe (2020) Homomorphic encryption (he) enabled federated learning. External Links: Link Cited by: §II-A.
  • [31] (2017) The internet of things: a movement, not a market. Technical report IHS-Markit. External Links: Link Cited by: §I.
  • [32] R. Torkzadehmahani, P. Kairouz, and B. Paten (2019-06) DP-CGAN: differentially private synthetic data and label generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §IV-E.
  • [33] K. Wang, R. Mathews, C. Kiddon, H. Eichner, F. Beaufays, and D. Ramage (2019) Federated evaluation of on-device personalization. arXiv preprint arXiv:1910.10252. Cited by: §II-A.
  • [34] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan (2019) Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications 37 (6), pp. 1205–1221. Cited by: §II-A.
  • [35] W. Wang and Z. Zhou (2007) Analyzing co-training style algorithms. In European conference on machine learning, pp. 454–465. Cited by: §II-B, §IV-A.
  • [36] W. Wang and Z. Zhou (2013) Co-training with insufficient views. In Asian Conference on Machine Learning, pp. 467–482. Cited by: §II-B.
  • [37] K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, S. Jin, T. Q. Quek, and H. V. Poor (2020) Federated learning with differential privacy: algorithms and performance analysis. IEEE Transactions on Information Forensics and Security 15, pp. 3454–3469. Cited by: §II-A.
  • [38] B. Xin, W. Yang, Y. Geng, S. Chen, S. Wang, and L. Huang (2020) Private fl-gan: differential privacy synthetic data generation based on federated learning. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. , pp. 2927–2931. External Links: Document Cited by: §II-A.
  • [39] B. Yin, H. Yin, Y. Wu, and Z. Jiang (2020) FDC: a secure federated deep learning mechanism for data collaborations in the internet of things. IEEE Internet of Things Journal 7 (7), pp. 6348–6359. Cited by: §II-A.
  • [40] Z. Zhou and M. Li (2005) Tri-training: exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering 17 (11), pp. 1529–1541. Cited by: §II-B.
  • [41] H. Zhu, Z. Li, M. Cheah, and R. S. M. Goh (2020) Privacy-preserving weighted federated learning within oracle-aided mpc framework. arXiv preprint arXiv:2003.07630. Cited by: §II-A.