User-Centric Federated Learning

10/19/2021
by Mohamad Mestoukirdi, et al.

Data heterogeneity across participating devices poses one of the main challenges in federated learning as it has been shown to greatly hamper its convergence time and generalization capabilities. In this work, we address this limitation by enabling personalization using multiple user-centric aggregation rules at the parameter server. Our approach potentially produces a personalized model for each user at the cost of some extra downlink communication overhead. To strike a trade-off between personalization and communication efficiency, we propose a broadcast protocol that limits the number of personalized streams while retaining the essential advantages of our learning scheme. Through simulation results, our approach is shown to enjoy higher personalization capabilities, faster convergence, and better communication efficiency compared to other competing baseline solutions.


I Introduction

Federated learning [17] has seen great success, being able to solve distributed learning problems in a communication-efficient and privacy-preserving manner. Specifically, federated learning gives clients (e.g., smartphones, IoT devices, and organizations) the possibility of collaboratively training a model under the orchestration of a parameter server (PS) by iteratively aggregating locally optimized models, without off-loading local data [12]. The original aggregation policy, implemented by Federated Averaging (FedAvg) [17], was devised under the assumption that clients' local datasets are statistically identical, an assumption that is hardly met in practice. In fact, clients typically store datasets that are statistically heterogeneous and different in size [20], and are mainly interested in learning models that generalize well over their local data distribution through collaboration. Generally speaking, FedAvg exhibits slow convergence and poor generalization capabilities in such non-IID settings [15]. To address these limitations, a large body of literature deals with personalization as a technique to reduce the detrimental effect of non-IID data. A straightforward solution consists in producing adapted models at the device scale by local fine-tuning procedures. Borrowing ideas from Model-Agnostic Meta-Learning (MAML) [9], federated learning can be exploited to find a launch model that is later personalized at each device using a few gradient iterations [8, 11]. Alternatively, local adaptation can be obtained by tuning only the last layer of a globally trained model [2] or by interpolating between a global model and locally trained ones [7, 10]. However, these methods can fail at producing models with acceptable generalization performance even for synthetic datasets [5]. Adaptation can also be obtained by leveraging user data similarity to personalize the training procedure. For instance, a Mixture of Experts formulation has been considered to learn a personalized mixing of the outputs of a commonly trained set of models [19]. Similarly, [1] proposed a distributed Expectation-Maximization (EM) algorithm that concurrently converges to a set of shared hypotheses and a personalized linear combination of them at each device. Furthermore, [22] proposed a personalized aggregation rule at the user side, based on the validation accuracy of the locally trained models at the different devices. To be applicable, these techniques need to strike a good balance between communication overhead and the amount of personalization in the system: while the expressiveness of the mixture is proportional to the number of mixed components, the communication load is also linear in this quantity. Clustered Federated Learning (CFL) measures the similarity among the model updates during the optimization process in order to group users into homogeneous clusters. For example, [4, 20] proposed a hierarchical strategy in which the original set of users is gradually divided into smaller groups and, for each group, the federated learning algorithm is branched into a new, decoupled optimization problem.

In this work, we propose a different approach to achieve personalization by allowing multiple user-centric aggregation strategies at the PS. The mixing strategies account for the existence of heterogeneous clients in the system and exploit estimates of the statistical similarity among clients that are obtained at the beginning of the federated learning procedure. Furthermore, the number of distinct aggregation rules — also termed personalized streams — can be fixed in order to strike a good trade-off between communication and learning efficiency.

In particular, the contributions of this work are:

  1. We propose a user-centric aggregation rule to personalize users' local models. This rule exploits a novel similarity score that quantifies the discrepancy between individual user data distributions. Unlike previous algorithms based on user clustering [20, 4], our approach enables collaboration across all the nodes during training and, as a result, outperforms the above techniques, especially when clear clusters of users do not exist. In contrast to [1], personalization is performed at the PS, thus avoiding the additional cost of transmitting multiple models to each client.

  2. Leveraging results from domain adaptation theory, we provide an upper bound on the risk w.r.t. the local data distribution of the personalized models obtained by our aggregation strategy. The result is used to obtain insights on how to determine the degree of collaboration among the devices.

  3. We propose a heuristic strategy to compute the mixing coefficients for the personalized aggregation without accounting for the communication overhead. Then, in order to limit the communication burden introduced by the personalized aggregation, we propose to limit the number of personalized streams using the centroids obtained by clustering the mixing coefficient vectors.

  4. We provide simulation results for different scenarios and demonstrate that our approach exhibits faster convergence, higher personalization capabilities, and communication efficiency compared to other popular baseline algorithms.

II Learning with heterogeneous data sources

In this section, we provide theoretical guarantees for learners that combine data from heterogeneous data distributions. The set-up mirrors the one of personalized federated learning and the results are instrumental to derive our user-centric aggregation rule. In the following, we limit our analysis to the discrepancy distance (4) but it can be readily extended to other divergences [18].

In the federated learning setting, the weighted combination of the empirical loss terms of the collaborating devices represents the customary training objective. Namely, in a distributed system with $m$ nodes, each endowed with a dataset $D_i$ of $n_i$ IID samples from a local distribution $P_i$, the goal is to find a predictor $h$ from a hypothesis class $\mathcal{H}$ that minimizes

$$\hat{\mathcal{L}}_{w}(h) = \sum_{i=1}^{m} w_i\, \hat{\mathcal{L}}_{P_i}(h), \qquad \hat{\mathcal{L}}_{P_i}(h) = \frac{1}{n_i}\sum_{z \in D_i} \ell(h, z), \tag{1}$$

where $\ell$ is a loss function and $w = (w_1, \dots, w_m)$ is a weighting scheme. In case of identically distributed local datasets, the typical weighting vector is $w_i = n_i / \sum_j n_j$, the relative fraction of data points stored at each device. This particular choice minimizes the variance of the aggregated empirical risk, which is also an unbiased estimate of the local risk at each node in this scenario. However, in the case of heterogeneous local distributions, the minimizer of the $w$-weighted risk may transfer poorly to certain devices whose target distribution $P_i$ differs from the mixture $\bar{P}_w = \sum_i w_i P_i$. Furthermore, there may not exist a single weighting strategy that yields a universal predictor with satisfactory performance for all participating devices. To address this limitation of a universal model, personalized federated learning allows adapting the learned solution at each device. In order to better understand the potential benefits and drawbacks coming from the collaboration with statistically similar but not identical devices, let us consider the point of view of a generic node $i$ that has the freedom of choosing the degree of collaboration with the other devices in the distributed system. Namely, identifying the degree of collaboration between node $i$ and the rest of the users by the weighting vector $w_i = (w_{i,1}, \dots, w_{i,m})$ (where $w_{i,j}$ defines how much node $i$ relies on data from user $j$), we define the personalized objective for user $i$

$$\hat{\mathcal{L}}_{w_i}(h) = \sum_{j=1}^{m} w_{i,j}\, \hat{\mathcal{L}}_{P_j}(h) \tag{2}$$

and the resulting personalized model

$$\hat{h}_{w_i} = \arg\min_{h \in \mathcal{H}} \hat{\mathcal{L}}_{w_i}(h). \tag{3}$$

We now seek an answer to the question: "What is the proper choice of $w_i$ in order to obtain a personalized model that performs well on the target distribution $P_i$?". This question is deeply tied to the problem of domain adaptation, in which the goal is to successfully aggregate multiple data sources in order to produce a model that transfers positively to a different and possibly unknown target domain. In our context, the dataset $D_i$ provides data points drawn from the target distribution $P_i$, while the other devices' datasets provide samples from the sources $\{P_j\}_{j \neq i}$. Leveraging results from domain adaptation theory [3], we provide learning guarantees on the performance of the personalized model to gauge the effect of collaboration, which we later use to devise the weights for the user-centric aggregation rules.
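To make the personalized objective concrete, the following minimal sketch (illustrative only, not the authors' implementation; it assumes PyTorch classifiers and in-memory tensor datasets, and all helper names are hypothetical) evaluates the weighted empirical risks of (1)-(2) for a given weighting vector.

```python
import torch
import torch.nn.functional as F

def empirical_loss(model, dataset):
    """Empirical cross-entropy loss of `model` over one client's local dataset."""
    inputs, labels = dataset              # dataset = (input tensor, label tensor)
    with torch.no_grad():
        logits = model(inputs)
    return F.cross_entropy(logits, labels).item()

def personalized_risk(model, client_datasets, w_i):
    """w_i-weighted empirical risk of eq. (2): sum_j w_{i,j} * L_j(model)."""
    return sum(w * empirical_loss(model, d) for w, d in zip(w_i, client_datasets))
```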

In order to avoid negative transfer, it is crucial to upper bound the performance of the predictor w.r.t. the target task. The discrepancy distance introduced in [16] provides a measure of similarity between learning tasks that can be used to this end. For a functional class $\mathcal{H}$ and two distributions $P, Q$ on $\mathcal{X}$, the discrepancy distance is defined as

$$d_{\mathcal{H}}(P, Q) = \sup_{h, h' \in \mathcal{H}} \left| \mathcal{L}_{P}(h, h') - \mathcal{L}_{Q}(h, h') \right|, \tag{4}$$

where we streamlined notation denoting $\mathcal{L}_{P}(h, h') = \mathbb{E}_{x \sim P}\left[\ell\left(h(x), h'(x)\right)\right]$. For bounded and symmetric loss functions that satisfy the triangle inequality, this quantity allows bounding the risk of a predictor under the target distribution $P_i$ by its risk under the mixture $\bar{P}_{w_i} = \sum_j w_{i,j} P_j$ plus the weighted discrepancy $\sum_j w_{i,j}\, d_{\mathcal{H}}(P_i, P_j)$. We can exploit this inequality to obtain the following risk guarantee for $\hat{h}_{w_i}$ w.r.t. the true minimizer of the risk for the distribution $P_i$.

Theorem 1.

For a symmetric and bounded loss function that satisfies the triangle inequality, with probability at least $1-\delta$ the predictor $\hat{h}_{w_i}$ defined in (3) satisfies a bound that combines the weighted discrepancy terms $\sum_j w_{i,j}\, d_{\mathcal{H}}(P_i, P_j)$ with an estimation-error term that shrinks as the effective number of samples grows, where the complexity of the problem is captured by the VC-dimension of the function space resulting from the composition of $\ell$ and $\mathcal{H}$.

Sketch of Proof.

The proof works by bounding the population risk of the personalized model (3) w.r.t. the local measure $P_i$ and, subsequently, bounding the estimation error of the weighted empirical risk minimizer. Full details are provided in [18]. ∎

The theorem highlights that a fruitful collaboration should strike a balance between the bias terms, due to the dissimilarity between the local distributions, and the reduction in risk-estimation error provided by the data points of other nodes. Analytically minimizing the upper bound seems an appealing solution; however, the divergence terms are difficult to compute, especially under the privacy constraints that federated learning imposes. For this reason, in the following we consider a heuristic method, based on the similarity of the readily available users' model updates, to estimate the collaboration coefficients.

III User-centric aggregation

Fig. 1: Personalized federated learning with user-centric aggregates at round $t$.

For a suitable hypothesis class $\mathcal{H} = \{h_\theta\}$ parametrized by $\theta$, federated learning approaches use an iterative procedure to minimize the aggregate loss (1) with $w_i = n_i / \sum_j n_j$. At each round $t$, the PS broadcasts the parameter vector $\theta^t$ and then combines the models locally optimized by the clients according to the following aggregation rule

$$\theta^{t+1} = \sum_{i=1}^{m} \frac{n_i}{\sum_j n_j}\, \theta_i^{t+\frac{1}{2}}.$$

As mentioned in Sec. II, this aggregation rule has two shortcomings: it does not take into account the data heterogeneity across users, and it is bound to produce a single solution. For this reason, we propose a user-centric model aggregation scheme that takes into account the data heterogeneity across the different nodes participating in training and aims at neutralizing the bias induced by a universal model. Our proposal generalizes the naïve aggregation of FedAvg by assigning a unique set of mixing coefficients $w_i = (w_{i,1}, \dots, w_{i,m})$ to each user $i$, and consequently a user-specific model aggregation at the PS side. Namely, at the PS side, the following set of user-centric aggregation steps is performed:

$$\theta_i^{t+1} = \sum_{j=1}^{m} w_{i,j}\, \theta_j^{t+\frac{1}{2}}, \qquad i = 1, \dots, m, \tag{5}$$

where $\theta_j^{t+\frac{1}{2}}$ is now the locally optimized model at node $j$ starting from $\theta_j^{t}$, and $\theta_i^{t+1}$ is the user-centric aggregated model for user $i$ at communication round $t+1$.

As we elaborate next, the mixing coefficients are heuristically defined based on a distribution similarity metric and the dataset size ratios, and they are calculated before the start of federated training. The similarity score we propose is designed to favor collaboration among similar users and takes into account the relative dataset sizes, as more intelligence can be harvested from clients with larger data availability. Using these user-centric aggregation rules, each node ends up with its own personalized model that yields better generalization for the local data distribution. It is worth noting that the user-centric aggregation rule does not produce a minimizer of the user-centric aggregate loss given by (2): at each round, the PS aggregates model updates computed starting from different sets of parameters. Nonetheless, we find it to be a good approximation of the true update, since the personalized models of similar data sources tend to remain within a close neighborhood of one another. The aggregation in [22] capitalizes on the same intuition.

III-A Computing the collaboration coefficients

Computing the discrepancy distance (4) can be challenging in high dimension, especially under the communication and privacy constraints imposed by federated learning. For this reason, we propose to compute the mixing coefficients based on the relative dataset sizes and on a distribution similarity metric computed from the users' full gradients at a common model, where the quality of the approximation depends on the number of samples at the two nodes. The mixing coefficients for user $i$ are then set to a normalized exponential (softmax) function of this similarity metric (6).

The mixing coefficients are calculated at the PS during a special round prior to federated training. During this round, the PS broadcasts a common model $\theta_0$ to the users, which compute the full gradient of their loss on their local datasets. At the same time, each node locally estimates its gradient variance (7) by partitioning the local data into batches and computing the deviation of the batch gradients from the full local gradient; this quantity estimates the gradient fluctuations one would observe across local datasets sampled from the same target distribution. Once all the necessary quantities are computed, they are uploaded to the PS, which calculates the mixing coefficients and initiates federated training using the user-centric aggregation scheme (5). Note that the proposed heuristic embodies the intuition provided by Th. 1: in the case of homogeneous users it falls back to the standard FedAvg aggregation rule, while in the case in which a node has an infinite amount of data it degenerates to the local learning rule, which is optimal in that case.
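The exact scaling of the similarity metric and of the normalized exponential is not reproduced above; purely as an illustration, the sketch below implements one plausible instantiation (a variance-debiased squared distance between full gradients fed into a dataset-size-weighted softmax). The functional form, the `temperature` parameter, and all names are assumptions, not the authors' exact rule.

```python
import numpy as np

def mixing_coefficients(grads, sigma2, n, temperature=1.0):
    """
    Heuristic sketch of the coefficient computation of Sec. III-A (assumed form).

    grads:  (m, d) array of full local gradients evaluated at the common model.
    sigma2: (m,) array of per-user gradient-variance estimates, cf. (7).
    n:      (m,) array of local dataset sizes.
    Returns an (m, m) row-stochastic matrix W of mixing coefficients.
    """
    m = len(n)
    W = np.zeros((m, m))
    for i in range(m):
        # Squared gradient distance, debiased by the two variance estimates so
        # that statistically identical users get a (near-)zero dissimilarity.
        d = np.maximum(
            np.sum((grads - grads[i]) ** 2, axis=1) - sigma2 - sigma2[i], 0.0
        )
        # Normalized exponential weighted by dataset sizes: with d = 0 everywhere
        # this falls back to the FedAvg weights n_j / sum_k n_k.
        scores = n * np.exp(-temperature * d)
        W[i] = scores / scores.sum()
    return W
```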

III-B Reducing the communication load

A full-fledged personalization by means of the user-centric aggregation rule (5) would introduce an $m$-fold increase in communication load during the downlink phase, as the original broadcast transmission is replaced by $m$ unicast ones. Although from a learning perspective the user-centric scheme is beneficial, it is also possible to consider the overall system performance from a learning-communication trade-off point of view. The intuition is that, for small discrepancies between the user data distributions, the same model transfers positively to statistically similar devices. In order to strike a suitable trade-off between learning accuracy and communication overhead, we hereby propose to adaptively limit the number of personalized downlink streams. In particular, for a given number of personalized models, we run a $k$-means clustering scheme over the set of collaboration vectors $\{w_i\}_{i=1}^{m}$, with $k$ equal to the number of streams, and we select the resulting centroids to implement the personalized streams. We then replace the unicast transmissions with group broadcast ones, in which all users belonging to the same cluster receive the same personalized model. Choosing the right number of personalized streams is critical in order to save communication bandwidth while still obtaining satisfactory personalization capabilities. It can be experimentally shown that clustering quality indicators, such as the Silhouette score computed over the user-centric weights, can be used to guide the search for a suitable number of streams [18].
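A sketch of this stream-reduction step follows (illustrative, using scikit-learn's KMeans; the renormalization of the centroids is a design choice of this sketch, not necessarily the authors'): the per-user mixing vectors are clustered, and each centroid defines one shared aggregation rule broadcast to the users assigned to it.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_streams(W, num_streams, seed=0):
    """
    Cluster the per-user mixing vectors (rows of W) into `num_streams` groups
    and use the centroids as the shared user-centric aggregation rules.
    Returns the (num_streams, m) centroid matrix and the user-to-stream labels.
    """
    km = KMeans(n_clusters=num_streams, n_init=10, random_state=seed).fit(W)
    # Renormalize so each centroid is again a valid convex combination.
    centroids = km.cluster_centers_ / km.cluster_centers_.sum(axis=1, keepdims=True)
    return centroids, km.labels_
```

The Silhouette score mentioned above is available as sklearn.metrics.silhouette_score(W, labels) and can be swept over candidate values of num_streams to pick the number of personalized streams.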

IV Experiments

Fig. 2: Evolution of the average validation accuracy in the three simulation scenarios: (a) EMNIST + label shift, (b) EMNIST + label and covariate shift, (c) CIFAR-10 + concept shift.

We now provide a series of experiments to showcase the personalization capabilities and communication efficiency of the proposed algorithm.

IV-A Set-up

In our simulations we consider a handwritten character/digit recognition task using the EMNIST dataset [6] and an image classification task using the CIFAR-10 dataset [13]. Data heterogeneity is induced by splitting and transforming the datasets in a different fashion across the group of devices. In particular, we analyze three different scenarios:

  • Character/digit recognition with user-dependent label shift, in which 10k EMNIST data points are split across 20 users according to their labels. The label distribution follows a Dirichlet distribution with parameter 0.4, as in [1, 21] (a partitioning sketch is given after this list).

  • Character/digit recognition with user-dependent label shift and covariate shift, in which 100k samples from the EMNIST dataset are partitioned across 100 users, each with a different label distribution, as in the previous scenario. Additionally, users are clustered into 4 groups and, within each group, the images are rotated by a group-specific angle.

  • Image classification with user-dependent concept shift, in which the CIFAR-10 dataset is distributed across 20 users grouped into 4 clusters; for each group we apply a different random label permutation.
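As referenced in the first scenario above, a minimal sketch of a Dirichlet label-shift partition follows (illustrative only; function and argument names are hypothetical, and the exact splitting procedure used in the experiments may differ).

```python
import numpy as np

def dirichlet_label_split(labels, num_users=20, alpha=0.4, seed=0):
    """
    Split sample indices across users with label-skewed proportions: for each
    class, per-user shares are drawn from a Dirichlet(alpha) distribution.
    Returns a list of index lists, one per user.
    """
    rng = np.random.default_rng(seed)
    user_indices = [[] for _ in range(num_users)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(num_users))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for user, chunk in enumerate(np.split(idx, cuts)):
            user_indices[user].extend(chunk.tolist())
    return user_indices
```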

For each scenario, we aim at solving the task at hand by leveraging the distributed and heterogeneous datasets. We compare our algorithm against four different baselines: FedAvg, local learning, CFL [20], and FedFOMO [22]. In all scenarios and for all algorithms, we train a LeNet-5 convolutional neural network [14] using a stochastic gradient descent optimizer with a fixed learning rate and momentum.

Scenario                            Local   FedAvg   Oracle   CFL [20]   FedFOMO [22]   Proposed
EMNIST label shift                   58.8     68.9        -       70.3           70.0       73.2
EMNIST covariate and label shift     56.0     67.5     77.4       76.1           73.6       76.4
CIFAR concept shift                  35.7     19.6     49.1       48.6           45.5       49.1
TABLE I: Worst user performance averaged over 5 experiments.

IV-B Personalization performance

We now report the average accuracy over 5 trials attained by the different approaches. We also study the personalization performance of our algorithm when we restrict the overall number of personalized streams, namely the number of personalized models that are concurrently learned. In Fig. 2(a) we report the average validation accuracy in the EMNIST label shift scenario. We first notice that, in the case of label shift, harvesting intelligence from the datasets of other users brings a large performance gain compared to the localized learning strategy. This indicates that data heterogeneity is moderate and collaboration is fruitful. Nonetheless, personalization can still provide gains compared to FedAvg. Our solution yields a validation accuracy that increases with the number of personalized streams. Allowing maximum personalization, namely a different model at each user, we obtain a 3% gain in average accuracy compared to FedAvg. CFL is not able to transfer intelligence among different groups of users and attains performance similar to FedAvg. This behavior showcases the importance of soft clustering compared to hard clustering for the task at hand. We find that FedFOMO, despite excelling in the case of strong statistical heterogeneity, fails to harvest intelligence in the label shift scenario.

In Fig. 2(b) we report the personalization performance for the second scenario. In this case, we also consider an oracle baseline, which corresponds to running 4 different FedAvg instances, one for each cluster of users, as if the 4 groups of users were known beforehand. Different from the previous scenario, the additional shift in the covariate space renders personalization necessary in order to attain satisfactory performance. In fact, the oracle training largely outperforms FedAvg. Furthermore, as expected, our algorithm matches the oracle final performance when the number of personalized streams is 4 or more. CFL and FedFOMO are also able to correctly identify the 4 clusters; however, the former exhibits slower convergence due to the hierarchical clustering over time, while the latter plateaus at a lower average accuracy level.

We turn now to the more challenging CIFAR-10 image classification task. In Fig. 2(c) we report the average accuracy of the proposed solution for a varying number of personalized streams, the baselines, and the oracle solution. As expected, the label permutation renders collaboration across clusters extremely detrimental, as the different learning tasks are conflicting. As a result, local learning provides better accuracy than FedAvg. On the other hand, personalization can still leverage data within clusters and provide gains also in this case. Our algorithm matches the oracle performance for a suitable number of personalized streams. This scenario is particularly suitable for hard clustering, which isolates conflicting data distributions; as a result, CFL matches the proposed solution. FedFOMO promptly detects the clusters and therefore converges quickly, but it attains lower average accuracy compared to the proposed solution.

The performance reported so far is averaged over users and therefore fails to capture the existence of outliers performing worse than average. In order to assess the fairness of the training procedure, in Table I we report the worst user performance in the federated system. The proposed approach produces models with the highest worst-case accuracy in all three scenarios.

Fig. 3: Evolution of the average validation accuracy against normalized time for the three different systems.

IV-C Communication Efficiency

Personalization comes at the cost of an increased communication load in the downlink transmission from the PS to the federated users. In order to compare the convergence time of the algorithms, we parametrize the distributed system using two parameters. The first is the ratio between the model transmission time in uplink (UL) and downlink (DL); typical values of this ratio in wireless communication systems are larger than one because of the larger transmit power of the base station compared to the edge devices. Furthermore, to account for unreliable computing devices, we model the random computing time at each user by a shifted exponential random variable with cumulative distribution function

$$F(t) = 1 - e^{-(t - t_{\min})/\mu}, \qquad t \ge t_{\min},$$

where $t_{\min}$ represents the minimum possible computing time and $\mu$ is the average additional delay due to random computation impairments. Therefore, for a population of $m$ devices, the expected computing time of the slowest worker is

$$\mathbb{E}\left[\max_{i=1,\dots,m} T_i\right] = t_{\min} + \mu H_m,$$

where $H_m$ is the $m$-th harmonic number. To study the communication efficiency, we consider the simulation scenario with the EMNIST dataset with label and covariate shift. In Fig. 3 we report the time evolution of the validation accuracy in 3 different systems: a wireless system with slow UL and unreliable nodes, a wireless system with fast UL and reliable nodes, and a wired system (symmetric UL and DL) with reliable nodes. The increased DL cost is negligible for wireless systems with strongly asymmetric UL/DL rates, and in these cases the proposed approach largely outperforms the baselines. In the case of more balanced UL and DL transmission times and reliable nodes, it becomes instead necessary to properly choose the number of personalized streams in order to render the solution practical. Nonetheless, the proposed approach remains the best also in this case, provided that the number of streams is suitably limited. Note that FedFOMO incurs a large communication cost, as personalized aggregation is performed at the client side.
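The two ingredients of this timing model can be sketched as follows (illustrative helpers with hypothetical names; the exact way computation, uplink, and downlink times are combined per round may differ from the authors' model).

```python
def expected_straggler_time(m, t_min, mu):
    """Expected computing time of the slowest of m workers whose times are
    i.i.d. shifted exponentials: t_min + mu * H_m, H_m being the m-th harmonic number."""
    h_m = sum(1.0 / k for k in range(1, m + 1))
    return t_min + mu * h_m

def downlink_time(num_streams, t_dl):
    """Downlink cost of one round when `num_streams` personalized models are sent,
    each taking t_dl to broadcast: the cost grows linearly with the streams."""
    return num_streams * t_dl
```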

V Conclusion

In this work, we presented a novel federated learning algorithm that exploits multiple user-centric aggregation rules to produce personalized models. The aggregation rules are based on user-specific mixing coefficients that can be computed during one communication round prior to federated training. Additionally, in order to limit the communication burden of personalization, we proposed a simple strategy to effectively limit the number of personalized streams. We experimentally studied the performance of the proposed solution across different tasks. Overall, our solution yields personalized models with higher testing accuracy while at the same time being more communication-efficient than the competing baselines.

References

  • [1] O. Marfoq, G. Neglia, A. Bellet, L. Kameni, and R. Vidal (2021) Federated multi-task learning under a mixture of distributions. International Workshop on Federated Learning for User Privacy and Data Confidentiality in conjunction with ICML 2021 (FL-ICML'21). Cited by: item 1, §I, 1st item.
  • [2] M. G. Arivazhagan, V. Aggarwal, A. K. Singh, and S. Choudhary (2019) Federated learning with personalization layers. arXiv preprint arXiv:1912.00818. Cited by: §I.
  • [3] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan (2010) A theory of learning from different domains. Machine learning 79 (1), pp. 151–175. Cited by: §II.
  • [4] C. Briggs, Z. Fan, and P. Andras (2020) Federated learning with hierarchical clustering of local updates to improve training on non-iid data. In 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–9. Cited by: item 1, §I.
  • [5] S. Caldas, S. M. K. Duddu, P. Wu, T. Li, J. Konečnỳ, H. B. McMahan, V. Smith, and A. Talwalkar (2018) Leaf: a benchmark for federated settings. arXiv preprint arXiv:1812.01097. Cited by: §I.
  • [6] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik (2017) EMNIST: extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2921–2926. Cited by: §IV-A.
  • [7] Y. Deng, M. M. Kamani, and M. Mahdavi (2020) Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461. Cited by: §I.
  • [8] A. Fallah, A. Mokhtari, and A. Ozdaglar (2020) Personalized federated learning with theoretical guarantees: a model-agnostic meta-learning approach. Advances in Neural Information Processing Systems 33, pp. 3557–3568. Cited by: §I.
  • [9] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126–1135. Cited by: §I.
  • [10] F. Hanzely and P. Richtárik (2020) Federated learning of a mixture of global and local models. arXiv preprint arXiv:2002.05516. Cited by: §I.
  • [11] Y. Jiang, J. Konečnỳ, K. Rush, and S. Kannan (2019) Improving federated learning personalization via model agnostic meta learning. arXiv preprint arXiv:1909.12488. Cited by: §I.
  • [12] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al. (2019) Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977. Cited by: §I.
  • [13] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: §IV-A.
  • [14] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §IV-A.
  • [15] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith (2018) Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127. Cited by: §I.
  • [16] Y. Mansour, M. Mohri, and A. Rostamizadeh (2009) Domain adaptation: learning bounds and algorithms. arXiv preprint arXiv:0902.3430. Cited by: §II.
  • [17] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017) Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282. Cited by: §I.
  • [18] M. Mestoukirdi, M. Zecchin, D. Gesbert, Q. Li, and N. Gresset (2021) User-centric federated learning: trading off wireless resources for personalization. To be submitted to: IEEE Transactions on Wireless Communications. Cited by: §II, §II, §III-B.
  • [19] M. Reisser, C. Louizos, E. Gavves, and M. Welling (2021) Federated mixture of experts. arXiv preprint arXiv:2107.06724. Cited by: §I.
  • [20] F. Sattler, K. Müller, and W. Samek (2020) Clustered federated learning: model-agnostic distributed multitask optimization under privacy constraints. IEEE Transactions on Neural Networks and Learning Systems. Cited by: item 1, §I, §IV-A, TABLE I.
  • [21] J. Wang, Q. Liu, H. Liang, G. Joshi, and H. V. Poor (2020) Tackling the objective inconsistency problem in heterogeneous federated optimization. arXiv preprint arXiv:2007.07481. Cited by: 1st item.
  • [22] M. Zhang, K. Sapra, S. Fidler, S. Yeung, and J. M. Alvarez (2020) Personalized federated learning with first order model optimization. arXiv preprint arXiv:2012.08565. Cited by: §I, §III, §IV-A, TABLE I.