GRP-FED: Addressing Client Imbalance in Federated Learning via Global-Regularized Personalization

by   Yen-Hsiu Chou, et al.

Since real-world data is typically long-tailed, it is challenging for Federated Learning (FL) to train across decentralized clients in practical applications. We present Global-Regularized Personalization (GRP-FED) to tackle the data imbalance issue by maintaining a single global model and a local model for each client. With adaptive aggregation, the global model treats clients fairly and mitigates the global long-tailed issue. Each local model is learned from the local data and aligns with its distribution for customization. To prevent the local model from merely overfitting, GRP-FED applies an adversarial discriminator to regularize between the learned global and local features. Extensive results show that GRP-FED improves under both global and local scenarios on the real-world MIT-BIH and synthetic CIFAR-10 datasets, achieving comparable performance and addressing client imbalance.





1 Introduction

Federated Learning (FL) is a distributed learning algorithm that trains models across multiple decentralized clients while keeping data private (McMahan et al., 2017; Konečnỳ et al., 2016). One key issue in FL is distribution shift: the decentralized data is diverse because each client has different properties (Kairouz et al., 2019; Liang et al., 2020; Deng et al., 2020; Li et al., 2019).

Figure 1: The client imbalanced issue in the real-world MIT-BIH dataset. (a) visualizes the client distribution as each client point via t-SNE; (b) plots the class distribution of the global and selected clients; (c) presents the performance (Macro-F1) of FedAvg on the entire test (global) or specific clients (local).

In the real world, data is inevitably long-tailed (Yang and Xu, 2020). Fig. 1 illustrates the data distribution of MIT-BIH (Goldberger et al., 2000), a real-world electrocardiography (ECG) dataset for medical diagnosis. Each patient, who may have different arrhythmia issues over several ECGs, is viewed as a client under the FL setting. Fig. 1(a) visualizes the client distribution, one point per client, using t-SNE (Maaten and Hinton, 2008). It shows that the global distribution over clients is non-uniform: some clients are scattered far from the rest. Fig. 1(b) plots the class distribution of data from selected clients, which have distinct distributions and provide different class data.

FedAvg (McMahan et al., 2017) is a classic FL algorithm in which a single model fits all clients by averaging the parameters from local training. Fig. 1(c) presents the Macro-F1 score of FedAvg for global and client-based (local) testing. Since it treats all clients equally, FedAvg ignores the varying data distributions between clients, leading to poor performance on the global test. Furthermore, FedAvg is easily dominated by major clients at the expense of the remaining ones, whose F1 scores drop drastically. Fig. 1 shows this imbalance issue, which makes applying FL to practical applications challenging.

Even when encouraging worse-performing clients for global fairness (Mohri et al., 2019; Li et al., 2019), the performance gap between global and local tests remains significant (Jiang et al., 2019), which indicates that personalization is crucial in FL. Local training (Fallah et al., 2020; Khodak et al., 2019; Liang et al., 2020; Dinh et al., 2020) adopts personalization by training part of the model only on client data as customization. However, local models that directly minimize the local error suffer from overfitting and lose discrimination on the minor local classes.

In this paper, we introduce Global-Regularized Personalization (GRP-FED) to address client imbalance in FL. As shown in Fig. 2, GRP-FED contains a single global model and a local model for each client, considering both global fairness and local personalization. Since each client provides different amounts and aspects of class data, GRP-FED presents adaptive aggregation to adaptively adjust the weight of each client when aggregating a fairer global model. For personalization, local models are trained only on each client's own data. To customize without overfitting, we present the global-regularized discriminator (D), which distinguishes whether an extracted feature comes from the global or the local model. By jointly optimizing to fool D, each local model learns the client-specific distribution while retaining the general global features, avoiding overfitting.

We conduct the evaluation on the real-world MIT-BIH (Goldberger et al., 2000) and synthetic CIFAR-10 (Krizhevsky and Hinton, 2009) datasets under the FL setting. The experimental results show that GRP-FED improves both global and local tests. Furthermore, the proposed global-regularized discriminator addresses local overfitting effectively. In summary, our contributions are three-fold:


  • We present GRP-FED to simultaneously consider global fairness and local personalization for Federated Learning;

  • The proposed adaptively-aggregated global model and customized local models gain improvement under both global and local scenarios;

  • Extensive ablation studies on both real-world MIT-BIH and synthetic CIFAR-10 show that GRP-FED achieves better performance and deals with client imbalance.

Figure 2: An overview of Global-Regularized Personalization (GRP-FED) for federated learning (FL). For better global fairness, we adopt adaptive aggregation to weigh the different aspects and proportions of clients. Each local model is optimized only on its own client's data to support personalization. In addition, the proposed global-regularized discriminator helps prevent overfitting.

2 Related Work

Federated Learning (FL)  Federated Learning (FL) (McMahan et al., 2017; Konečnỳ et al., 2016), where models are trained across multiple decentralized clients, aims to preserve user privacy and lower communication costs (Li et al., 2020; Basu et al., 2019). Facing imbalanced data distributions (Hanzely and Richtárik, 2020; Khodak et al., 2019; Wang et al., 2021; Duan et al., 2020), we investigate both a global model used for all (and new) data and local models that are customized to support personalization for local clients.

Global Model for FL  In FL, the global model is trained from all clients to fit the overall global distribution. FedAvg (McMahan et al., 2017) was the first to apply local SGD and build a single global model from a subset of clients. Later works improve global fairness by adapting the global model to each client (Fallah et al., 2020; Khodak et al., 2019) or by treating clients with different importance weights (Mohri et al., 2019). Inspired by q-FFL (Li et al., 2019), which uses a constant power of the loss to tune the amount of fairness, our GRP-FED adaptively adjusts this power to maintain fairness dynamically during global training.

Local Model for FL  The performance gap between global and local tests indicates that personalization is crucial in FL (Jiang et al., 2019). Local fine-tuning (Smith et al., 2017; Chen et al., 2020; Khodak et al., 2019; Fallah et al., 2020; Liang et al., 2020) supports personalization by training each local model only on client data. pFedMe (Dinh et al., 2020) argues that directly minimizing the local error is prone to overfitting and adopts Moreau envelopes to decouple personalization. In contrast, our GRP-FED introduces the global-regularized discriminator to regularize the local feature distribution and mitigate the local overfitting issue.

3 Approach

3.1 Overview

Task Definition  Federated Learning (FL) learns from $N$ independent clients, where client $k$ holds a local dataset $D_k = \{(x_i, y_i)\}_{i=1}^{n_k}$; $(x_i, y_i)$ is the $i$-th data point with its label, and $n_k$ is the number of data points in $D_k$. We consider FL as a classification task, where $y_i$ is the class label of $x_i$. Intuitively, each client captures a different view of the global data distribution $D$. However, since each client provides $D_k$ with a distinct distribution in the real world, FL easily suffers from client imbalance in practical applications.

GRP-FED  To address the client imbalance issue, we present Global-Regularized Personalization (GRP-FED) for FL. An overview of GRP-FED is illustrated in Fig. 2 ($E_g$: global feature extractor, $E^k_l$: local feature extractor, $C$: classifier, and $D^k$: discriminator in client $k$). For a data point $x$, a feature extractor $E$ extracts the lower-dimensional representation $f = E(x)$, and the classifier $C$ performs the output prediction $\hat{y} = C(f)$. GRP-FED consists of a fair global model, which applies adaptive aggregation to consider the different aspects of clients, and local models that customize to each client. The proposed adaptive aggregation adjusts the aggregation proportions to ensure fairness over distinct clients. The local models perform personalization, where a global-regularized discriminator prevents them from overfitting when optimizing on the client data.

3.2 Global Fairness

Global Training  Global fairness aims at building a global model that can fairly cope with the global distribution over distinct clients. The global model includes the global feature extractor $E_g$ parameterized by $\theta_g$ and the classifier $C$ parameterized by $\theta_c$. Global training trains the global model in each client $k$ and then aggregates into a single global model:

$$f_g = E_g(x), \quad \hat{y} = C(f_g), \quad \theta^{t+1}_{g,k} \leftarrow \theta^{t}_{g} - \eta \nabla_{\theta}\, \ell(\hat{y}, y),$$

where $f_g$ is the extracted global feature, $\ell$ is the loss function at time step $t$, and $\theta^{t+1}_{g,k}$ is the global model updated in client $k$. In conventional FL, FedAvg (McMahan et al., 2017) builds the global model by averaging all trained global models from clients. However, under the real-world FL setting, client data is collected from different environments, scenarios, or applications. The data distribution of each client differs widely, which results in poor generalization if all clients are treated equally. To deal with the client imbalance issue, we propose adaptive aggregation for a fairer aggregation proportion.
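For reference, the plain FedAvg aggregation that GRP-FED departs from can be sketched in a few lines (a minimal NumPy sketch; the function name and array shapes are ours, not from the paper):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAvg: a data-weighted average of client parameter vectors,
    theta = sum_k (n_k / n) * theta_k."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    stacked = np.stack(client_params)          # shape (K, dim)
    return (weights[:, None] * stacked).sum(axis=0)

# Two clients: the one with 3x the data dominates the average.
theta = fedavg_aggregate([np.array([0.0, 0.0]), np.array([1.0, 1.0])], [3, 1])
```

Because the weights depend only on dataset sizes, a large client pulls the aggregate toward its own optimum, which is exactly the imbalance the adaptive aggregation below tries to correct.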

1:  Server:
2:  Initialize $\theta_g$, $\theta_c$, $\{\theta^k_l\}$, $\{\theta^k_D\}$, $q$
3:  for each round $t = 1, 2, \dots$ do
4:      $S_t \leftarrow$ randomly select $m$ clients
5:      for each client $k \in S_t$ do
6:          $\theta^k_g$, $\theta^k_l$, $L_k$ $\leftarrow$ Client($k$, $\theta_g$, $\theta_c$)
7:      end for
8:      $\theta_g \leftarrow$ adaptive aggregation of $\{\theta^k_g\}$ weighted by $\{L_k\}$ and the adaptive $q$
9:  end for
10: Client($k$, $\theta_g$, $\theta_c$):
11: for each local epoch do
12:     $B \leftarrow$ batches of $D_k$
13:     for each batch $(x, y) \in B$ do
14:         Run $(x, y)$ with the global model ($E_g$, $C$) and update $\theta^k_g$ with $\ell$
15:         Run $(x, y)$ with the local model ($E^k_l$, $C$) and update $\theta^k_l$ with $\ell$, also with the adversarial loss from $D^k$
16:         Update $\theta^k_D$ with $f_g$ as true and $f_l$ as false
17:     end for
18: end for
19: return $\theta^k_g$, $\theta^k_l$, $L_k$ to the server
Algorithm 1 GRP-FED ($\eta$: learning rate, $\ell$: loss function)

Adaptive Aggregation  q-FFL (Li et al., 2019) adopts a constant power $q$ to tune the amount of fairness. However, the training process in FL is dynamic over distinct clients, and a fixed power of the loss can hardly satisfy the expected fairness in all situations. To overcome this, we present adaptive aggregation, which considers a dynamic $q^t$ and adjusts it adaptively for better fairness. We measure (un)fairness as the standard deviation $\sigma^t$ of the global training losses $\{L_k\}$ over all clients. If $\sigma^t$ is high, the global training losses differ widely and the global model may suffer from client imbalance; we therefore increase the power of the loss:

$$q^{t+1} = q^{t} + \alpha, \quad \text{if } \sigma^{t} > \sigma^{t-1}. \tag{2}$$

Otherwise, if training becomes relatively fairer, we decrease $q$ by $\alpha$ for more robust training. Finally, we acquire the new global model by aggregating all trained global models, weighted by the global training loss raised to the adaptive power $q$:

$$\theta^{t+1}_{g} = \sum_{k} \frac{L_k^{\,q}}{\sum_{j} L_j^{\,q}} \, \theta^{t+1}_{g,k}.$$

In this way, $q$ adapts to the dynamic fairness during global training by tracking the standard deviation of the global training loss across clients.
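The adaptive-aggregation step can be sketched as follows. This is our reading of the description above: the threshold test on the loss spread and the fixed `alpha` step are assumptions, since the paper's exact update rule for $q$ is not reproduced here:

```python
import numpy as np

def update_power(q, losses, prev_std, alpha=0.5, q_min=0.0):
    """Adjust the fairness power q by the change in client-loss spread:
    raise q when the std of losses grows (training got less fair),
    lower it (down to q_min) otherwise."""
    std = float(np.std(losses))
    q = q + alpha if std > prev_std else max(q - alpha, q_min)
    return q, std

def aggregate(client_params, losses, q):
    """Aggregate client models weighted by their training loss to the power q,
    so harder (higher-loss) clients gain influence as q grows."""
    w = np.asarray(losses, dtype=float) ** q
    w = w / w.sum()
    return (w[:, None] * np.stack(client_params)).sum(axis=0)
```

With `q = 0` the aggregation degenerates to a uniform average; as `q` grows, the aggregate is pulled toward the clients with the highest training loss, which is the fairness mechanism inherited from q-FFL.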

3.3 Local Personalization

Local Training  Apart from the single global model, since each client's data is collected from different sources and usages, local models that support personalization are also crucial. Local training trains each local model only on the data of client $k$:

$$f_l = E^k_l(x), \quad \hat{y} = C(f_l), \quad \theta^{t+1}_{l,k} \leftarrow \theta^{t}_{l,k} - \eta \nabla_{\theta}\, \ell(\hat{y}, y),$$

where $f_l$ is the extracted personalized feature. Similar to the global training, $\theta_{l,k}$ is updated by $\ell$ on $D_k$; thereby, we personalize the local feature to the specific client. Note that we fix the classifier $C$ with $\theta_c$ during local training to learn a personalized local feature distribution.

Global-Regularized Discriminator ($D$)  After local training, we have the personalized feature $f_l$. However, since under FL the client data distribution $D_k$ is far from the global $D$, the learned $f_l$ may simply overfit that client and generalize poorly to the global scenario. To mitigate this overfitting issue, we introduce the global-regularized discriminator ($D$). Each client maintains its own $D^k$ parameterized by $\theta^k_D$, a binary classifier that distinguishes whether an extracted feature comes from the global or the local feature extractor. We take the global feature $f_g$ (by $E_g$) as the true case and the local feature $f_l$ (by $E^k_l$) as the false case, and train $D^k$ as follows:

$$\mathcal{L}_{D} = -\log D^k(f_g) - \log\big(1 - D^k(f_l)\big),$$

where $E_g$ and $E^k_l$ are fixed and only $\theta^k_D$ is updated during this step. With the help of $D^k$, the local feature extractor can be regularized to prevent overfitting:

$$\mathcal{L}_{adv} = -\log D^k(f_l).$$

This time $\theta^k_D$ is frozen, and $E^k_l$ is optimized to fool the discriminator $D^k$. By combining the local training with the global-regularized discriminator, the local feature extractor learns to personalize while imitating the global feature distribution, which avoids client overfitting.

Method        MIT-BIH: Global / Local / Pers. / Gen.                 CIFAR-10: Global / Local / Pers. / Gen.
Local         0.075±0.009  0.140±0.001  0.933±0.005  0.076±0.001    0.214±0.022  0.290±0.005  0.396±0.009  0.235±0.003
FedAvg        0.334±0.046  0.407±0.044  0.684±0.011  0.334±0.046    0.462±0.006  0.482±0.004  0.518±0.001  0.462±0.006
AFL           0.506±0.018  0.503±0.023  0.606±0.042  0.506±0.018    0.495±0.004  0.496±0.006  0.510±0.009  0.495±0.004
q-FFL         0.551±0.034  0.534±0.006  0.602±0.033  0.551±0.001    0.563±0.003  0.530±0.007  0.510±0.009  0.563±0.003
per-FedAvg    0.378±0.030  0.424±0.022  0.799±0.004  0.310±0.024    0.525±0.021  0.490±0.014  0.550±0.012  0.453±0.017
pFedMe        0.290±0.011  0.288±0.012  0.850±0.006  0.178±0.001    0.406±0.010  0.414±0.007  0.503±0.014  0.356±0.010
LG-FedAvg     0.343±0.034  0.286±0.010  0.964±0.007  0.169±0.006    0.503±0.025  0.499±0.015  0.676±0.017  0.403±0.014
GRP-FED       0.569±0.004  0.553±0.011  0.864±0.022  0.424±0.007    0.578±0.001  0.552±0.010  0.611±0.010  0.516±0.007
Table 1: Quantitative results (mean±std) of our GRP-FED and the baselines in the global test and the local test, including the personalization and generalization tests, on both the real-world MIT-BIH and synthetic CIFAR-10 datasets.

3.4 Learning of GRP-FED and Inference

The learning process of GRP-FED is presented in Algo. 1. In each round $t$, to make federated training stable, we follow Smith et al. (2017) and randomly select $m$ clients as $S_t$ for training. First, the server copies the global feature extractor $E_g$ and classifier $C$ to the selected clients for independent federated training. Both $\theta_g$ and $\theta_c$ are trained on data from all selected clients during global training. For local training, the local feature extractor $E^k_l$ is trained only on the client data $D_k$ and updated by $\ell$ to personalize to client $k$; it is also jointly trained with the adversarial loss from $D^k$ to prevent overfitting, where $\lambda$ weights the local classification loss against the adversarial loss. The global-regularized discriminator $D^k$ is then updated by discriminating whether a feature was extracted by the local or the global model.

After all clients return their trained $\theta^k_g$, $\theta^k_l$, and global training losses $L_k$, we aggregate the trained global models into the new global model ($E_g$, $C$), with aggregation weights derived from the adaptive power $q$ and the global training losses, which forces the aggregation to weigh different proportions of clients. In total, the training objective of GRP-FED combines the global classification loss, the local classification loss with its adversarial regularizer, and the discriminator loss.

Inference  During inference, given an example $x$, we consider two testing types, covering the local and global scenarios:

  • Local test: if $x$ belongs to client $k$, we apply the local model ($E^k_l$, $C$) for the best personalization;

  • Global test: otherwise, $x$ is fed to the global model ($E_g$, $C$) as an unknown example from the global distribution.

We conduct both types of testing in our experiments to evaluate the global fairness (global test) and local personalization (local test) of the proposed GRP-FED.
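This two-way routing reduces to a small dispatch, sketched here with hypothetical model callables (the names are ours for illustration):

```python
def predict(x, client_id, local_models, global_model):
    """Route an example to its client's personalized model when the client
    is known; otherwise fall back to the fair global model."""
    model = local_models.get(client_id, global_model)
    return model(x)

# Toy usage: client "a" has a personalized model, client "b" does not.
local_models = {"a": (lambda x: "local prediction")}
global_model = lambda x: "global prediction"
```

Known clients get their customized extractor, while unseen data still benefits from the adaptively aggregated global model.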

4 Experiments

4.1 Experimental Setting

Dataset  We evaluate GRP-FED on two federated classification datasets: the real-world MIT-BIH (Goldberger et al., 2000) and a synthetic split of CIFAR-10 (Krizhevsky and Hinton, 2009). MIT-BIH is an electrocardiography (ECG) dataset for medical diagnosis, where each fragment belongs to one of 12 arrhythmia classes. There are 46 patients in MIT-BIH, containing different numbers of ECG fragments and presenting various class distributions. We treat each patient as a single client, which supports both personalized evaluation for a specific patient and global evaluation over all clients.

We distribute the entire CIFAR-10 dataset to 50 clients as the FL setting. To imitate different client distributions, each client contains a different amount of data, decreasing over clients, and the class distribution of each client is randomly sampled with decreasing proportions over classes, so that the overall setting follows the imbalanced distribution of MIT-BIH.
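The decaying client sizes described above can be sketched as follows; the `decay` rate is an assumed knob, since the exact sampling parameters of the split are not given here:

```python
import numpy as np

def longtail_client_sizes(n_total, n_clients, decay=0.9):
    """Assign each client an exponentially decaying share of the data,
    imitating the imbalanced client sizes of the synthetic CIFAR-10 split.
    The decay rate is an illustrative assumption, not the paper's value."""
    raw = decay ** np.arange(n_clients)
    sizes = np.floor(n_total * raw / raw.sum()).astype(int)
    sizes[0] += n_total - sizes.sum()  # give the rounding remainder to client 0
    return sizes
```

The same idea, applied to per-class proportions inside each client, yields a long-tailed class distribution as well.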

Evaluation Metrics  Since the class distribution of real-world data is non-uniform, plain accuracy (%) cannot reflect prediction quality and may ignore minor classes with fewer examples. We adopt macro-F1, the mean F1-score over classes, to treat all classes with the same importance. This evaluation is more suitable under data imbalance; for instance, in MIT-BIH we care more about the examples with the various arrhythmia issues.
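Macro-F1, as used here, averages the per-class F1 scores so that each class counts equally (a standard definition, sketched from scratch):

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Mean per-class F1: every class contributes equally, so minority
    classes are not drowned out by the majority (unlike plain accuracy)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))
```

A classifier that always predicts the majority class can score high accuracy yet near-zero F1 on every minority class, which macro-F1 exposes directly.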

Testing Scenario  We evaluate global usage and local personalization under two testing scenarios:

  • Global test: the global model predicts the entire testing set to evaluate fairness over the global distribution;

  • Local test: we consider both local personalization ($P$) and local generalization ($G$) to account for overfitting. $P$ is the mean performance of the local models on their own clients; $G$ is the mean macro-F1 score of the local models on the global test. Weighing personalization against overfitting, the overall local-test performance ($LT$) is computed as the harmonic mean of the two:

$$LT = \frac{2 \, P \, G}{P + G}.$$
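Reading the combination of $P$ and $G$ as a harmonic mean (an assumption on our part, consistent with several rows of Table 1), the overall local-test score can be sketched as:

```python
def local_test_score(p, g):
    """Combine personalization P and generalization G into one local-test
    score via their harmonic mean, which punishes a large gap between them."""
    return 2 * p * g / (p + g) if (p + g) else 0.0
```

A model that personalizes perfectly but generalizes poorly (or vice versa) is capped by its weaker side, so overfit local models cannot score well overall.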
Baselines  We compare against various FL methods:


  • Global-only: FedAvg (McMahan et al., 2017), q-FFL (Li et al., 2019), and AFL (Mohri et al., 2019);

  • Local-only: Local and LG-FedAvg (Liang et al., 2020);

  • Global-Local: pFedMe (Dinh et al., 2020) and per-FedAvg (Fallah et al., 2020).

For global-only methods, the global model evaluates under each client to perform the local test. Following LG-FedAvg (Liang et al., 2020), we ensemble results from all local models as the global output for local-only algorithms. With GRP-FED or global-local frameworks, we apply the global model for the global test and local models for the local test.

Implementation Detail  For the classification task, we apply the cross-entropy loss as the loss function $\ell$. As the feature extractor $E$, we adopt a 5-layer 1D ResNet (He et al., 2016) to process ECG on MIT-BIH and ResNet-30 on CIFAR-10. The classifier $C$ is a 2-layer fully-connected (FC) network that projects the feature into the class prediction. The global-regularized discriminator $D$ is also a 2-layer FC network but projects to a binary indication for the true/false discrimination. We set the local epoch as in Algo. 1 and the batch size to 64. SGD optimizes all parameters with learning rate $\eta$ = 5e-3, adjusting rate $\alpha$ (in Eq. 2) = 0.5, and momentum 0.9. The initial loss power $q$ is 10, the same as in q-FFL.

4.2 Quantitative Results

Global Test  Table 1 shows the results of GRP-FED and the baselines on both the real-world MIT-BIH and synthetic CIFAR-10 datasets. The global test evaluates the fairness of the global model over the entire testing set. GRP-FED achieves the highest macro-F1 score on both MIT-BIH (0.569) and CIFAR-10 (0.578). Since the proposed adaptive aggregation adjusts the power of the loss according to the dynamic fairness, it gains a significant improvement in the global test and achieves better global fairness.

Local Test  Local personalization is essential when serving a specific client under the FL setting. On MIT-BIH, LG-FedAvg (Liang et al., 2020) performs best in the local personalization test (96.4%). However, since its local features are learned only from each client, they easily overfit, yielding poor results in the local generalization test (16.9%). q-FFL has the highest generalization (55.1%) but a lower personalization (60.2%) without personalized models. With the global-regularized discriminator, the local models in GRP-FED extract personalized features while avoiding overfitting: we surpass all baselines in the overall local test (55.3%) with a comparable personalization of 86.4% and generalization of 42.4%. A similar trend holds on CIFAR-10, where GRP-FED achieves the highest overall local test (55.2%) and strikes the best balance between personalization (61.1%) and generalization (51.6%).

4.3 Ablation Study

Is the Global Model Actually Fair?  To verify that the global model is actually fair, we plot the learning curves of the mean and max global training loss in Fig. 3. All methods reach a relatively low mean training loss during global training, so we examine global fairness through the max global training loss. FedAvg treats each client equally and sacrifices minor classes, ending with a high max global training loss. The adversarial aggregation in AFL is not fair enough and also keeps the max training loss high. q-FFL adopts a constant loss power to tune the amount of fairness, but a fixed power cannot satisfy all fairness situations and even increases the max training loss at the end. Our adaptive aggregation considers a dynamic $q$ that adjusts adaptively, which keeps decreasing the max training loss and yields a fairer global model.

Figure 3: Learning curve of the mean/max global training loss.

How does $\lambda$ affect the local models?  We adopt $\lambda$ to control the weight between personalization (the local training loss) and generalization (the global-regularized discriminator loss). Fig. 4 illustrates the effect of $\lambda$ on MIT-BIH during local personalization. There is a trade-off between local personalization ($P$) and local generalization ($G$): as $\lambda$ grows, personalization is treated as more important, improving $P$ but hurting $G$; conversely, $G$ increases as $\lambda$ decreases, since the local feature is then forced to be more general. An intermediate $\lambda$ leads to the best overall local test in our GRP-FED.

Figure 4: The trade-off between personalization ($P$) and generalization ($G$) in the local test under different loss weights $\lambda$.

Case Study  Fig. 5 visualizes the performance on two clients of MIT-BIH. Since the upper client is similar to the global class distribution, all methods perform well in both the global test and the local personalization test, and the trained local model on this client also performs well in the overall local test. However, on the lower client, which has a distinct class distribution, LG-FedAvg overfits entirely, with high personalization but a terrible overall local test. By contrast, our GRP-FED still outperforms the remaining baselines in the local test and achieves the highest score with the help of the global-regularized discriminator. Moreover, GRP-FED accounts for the dynamic fairness across client distributions, leading to the best global test as a fairer global model.

Upper client (similar to the global distribution):
Method       Global Test   Local Test   Personalization
FedAvg       0.996         0.498        0.996
q-FFL        0.718         0.464        0.578
per-FedAvg   0.990         0.519        0.999
pFedMe       0.992         0.379        0.998
LG-FedAvg    0.999         0.380        1.000
GRP-FED      0.997         0.657        0.998

Lower client (far from the global distribution):
Method       Global Test   Local Test   Personalization
FedAvg       0.219         0.262        0.219
q-FFL        0.328         0.325        0.328
per-FedAvg   0.291         0.351        0.435
pFedMe       0.152         0.274        0.492
LG-FedAvg    0.269         0.178        0.975
GRP-FED      0.454         0.491        0.493
Figure 5: The performance of global test and local test in two clients (upper: similar to, lower: far from the global distribution).

5 Conclusion

In this paper, we introduce Global-Regularized Personalization (GRP-FED) to address the client imbalance issue under federated learning (FL). GRP-FED consists of a global model and a local model for each client. The global model accounts for the dynamic fairness and weighs different proportions of clients through adaptive aggregation. The local models perform personalization through local training, and the proposed global-regularized discriminator prevents overfitting. Extensive results show that GRP-FED outperforms the baselines under both global and local scenarios on the real-world MIT-BIH and synthetic CIFAR-10 datasets.


  • D. Basu, D. Data, C. Karakus, and S. N. Diggavi (2019) Qsparse-local-sgd: distributed SGD with quantization, sparsification and local computations. In Advances in Neural Information Processing Systems, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. B. Fox, and R. Garnett (Eds.), Cited by: §2.
  • Y. Chen, Y. Ning, Z. Chai, and H. Rangwala (2020) Federated multi-task learning with hierarchical attention for sensor data analytics. In 2020 International Joint Conference on Neural Networks (IJCNN). Cited by: §2.
  • Y. Deng, M. M. Kamani, and M. Mahdavi (2020) Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461. Cited by: §1.
  • C. T. Dinh, N. H. Tran, and T. D. Nguyen (2020) Personalized federated learning with moreau envelopes. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Cited by: §1, §2, 3rd item.
  • M. Duan, D. Liu, X. Chen, R. Liu, Y. Tan, and L. Liang (2020) Self-balancing federated learning with global imbalanced data in mobile systems. IEEE Transactions on Parallel and Distributed Systems 32 (1), pp. 59–71. Cited by: §2.
  • A. Fallah, A. Mokhtari, and A. Ozdaglar (2020) Personalized federated learning with theoretical guarantees: a model-agnostic meta-learning approach. Advances in Neural Information Processing Systems. Cited by: §1, §2, §2, 3rd item.
  • A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. Peng, and H. E. Stanley (2000) PhysioBank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. Circulation 101 (23). Cited by: §1, §1, §4.1.
  • F. Hanzely and P. Richtárik (2020) Federated learning of a mixture of global and local models. CoRR abs/2002.05516. Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. CVPR. Cited by: §4.1.
  • Y. Jiang, J. Konecný, K. Rush, and S. Kannan (2019) Improving federated learning personalization via model agnostic meta learning. CoRR abs/1909.12488. Cited by: §1, §2.
  • P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al. (2019) Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977. Cited by: §1.
  • M. Khodak, M. Balcan, and A. S. Talwalkar (2019) Adaptive gradient-based meta-learning methods. In Advances in Neural Information Processing Systems, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. B. Fox, and R. Garnett (Eds.), Cited by: §1, §2, §2, §2.
  • J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon (2016) Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492. Cited by: §1, §2.
  • A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto. Cited by: §1, §4.1.
  • T. Li, M. Sanjabi, A. Beirami, and V. Smith (2019) Fair resource allocation in federated learning. arXiv preprint arXiv:1905.10497. Cited by: §1, §1, §2, §3.2, 1st item.
  • Z. Li, D. Kovalev, X. Qian, and P. Richtárik (2020) Acceleration for compressed gradient descent in distributed and federated optimization. In Proceedings of the 37th International Conference on Machine Learning, ICML. Cited by: §2.
  • P. P. Liang, T. Liu, L. Ziyin, R. Salakhutdinov, and L. Morency (2020) Think locally, act globally: federated learning with local and global representations. arXiv preprint arXiv:2001.01523. Cited by: §1, §1, §2, 2nd item, §4.1, §4.2.
  • L. van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov). Cited by: §1.
  • B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017) Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS, A. Singh and X. Zhu (Eds.). Cited by: §1, §1, §2, §2.
  • M. Mohri, G. Sivek, and A. T. Suresh (2019) Agnostic federated learning. In Proceedings of the 36th International Conference on Machine Learning, ICML, K. Chaudhuri and R. Salakhutdinov (Eds.), Cited by: §1, §2, 1st item.
  • V. Smith, C. Chiang, M. Sanjabi, and A. S. Talwalkar (2017) Federated multi-task learning. In Advances in neural information processing systems, Cited by: §2, §3.2, §3.4, 1st item.
  • L. Wang, S. Xu, X. Wang, and Q. Zhu (2021) Addressing class imbalance in federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 10165–10173. Cited by: §2.
  • Y. Yang and Z. Xu (2020) Rethinking the value of labels for improving class-imbalanced learning. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Cited by: §1.