Federated Learning (FL), popularized by Google [8, 17], is gaining momentum. FL distributes the training process of a machine learning (ML) task across individual clients such that each client trains a local ML model for the task on its private dataset. A central aggregator combines these local models, through heuristics, to derive a generalizable global model. Distributing the training process in this way has several advantages, including: (i) minimizing data collection, (ii) reducing overall training time and power consumption, and (iii) accommodating devices with limited computational capability. These advantages enable FL to power various smartphone features such as automatic text completion, voice recognition, and face recognition. Recently, Apple Inc. has also improved its popular ‘Face ID’ feature through FL. Motivated by these applications, this paper focuses on classification tasks based on attractiveness, gender, and race in an FL setting with face images as the input data.
It is natural to assume that FL will soon be the popular choice for automation in various sectors. Currently, ML models find use in job recruitment , recidivism prediction , recommender systems , among others. Unfortunately, these models suffer from biased predictions. E.g., the authors in  highlight that an ML model tasked with ranking job applications tends to rank less qualified males higher than more qualified females. These biased, or unfair, predictions are undesirable and may even be catastrophic in some applications. The unfairness of an ML model is often correlated to the inherent bias in the training data. This bias corresponds to the lack of data samples belonging to particular demographic groups such as gender, ethnicity, age, etc. We refer to such a group as the sensitive attribute. What is more, training the models for competitive accuracies often aggravates the unfairness.
To quantitatively assert that an ML model is fair, researchers introduce notions of group fairness. These include Equalized Odds (EO), Equality of Opportunity (EOpp) , and Accuracy Parity (AP) . In the context of classifying the ‘attractiveness’ (or gender) of a face image with gender (or race) as the sensitive attribute, EO states that the probability with which the model misclassifies a face image — predicting an attractive face as unattractive, or an unattractive one as attractive — must be independent of gender. EOpp ensures only that the probability of predicting an attractive face as unattractive is the same across genders. Lastly, AP states that the overall classification error must be equal across genders.
In the literature, several approaches satisfy these notions in the non-FL (centralized) ML setting [15, 23, 25], and [30, 11, 13, 4, 19] do so in the FL setting. These methods work towards reducing the fairness loss over iterations for a particular notion. A critical assumption for these approaches is that the training data must contain samples for all categories of the sensitive attribute; if not, the training loss is unbounded.
While the assumption is reasonable for classical ML models, the client’s data is often limited in an FL setting. It is highly likely that a client only has data samples belonging to a particular demographic group. E.g., a mobile application used in a particular geographical area will receive data representing that area. Such heterogeneous data may further amplify bias since it may consist of samples for a particular race (sensitive attribute). We believe that such data heterogeneity w.r.t. the sensitive attribute is a significant challenge when striving towards fairness-aware FL – as such, achieving fair classification in an FL setting with heterogeneous data forms the basis of this work.
Our Approach. We remark that existing approaches cannot ensure fairness with heterogeneous data in an FL setting. This paper argues that, for such an extreme case, fairness may be ensured through an appropriate aggregation heuristic. More concretely, we propose four novel heuristics to control the fairness-accuracy trade-off. Our heuristics require that the aggregator has access to a small fraction of data, referred to as the validation set. This requirement is reasonable, as the aggregator typically has limited starter data [5, 28]. With this, we propose the following heuristics, for which we derive the accuracy values and the fairness loss for a given fairness notion over the validation set.
FairBest. The aggregator sends the model which provides the least fairness loss for a given fairness notion.
FairAvg. The aggregator sends the weighted average of the first  models, sorted in increasing order of fairness loss.
FairAccRatio. The aggregator sends the model with the highest ratio of accuracy to fairness loss.
FairAccDiff. The aggregator sends the model with the highest value of the weighted difference between accuracy and fairness loss.
Figure 1 provides an overview of our approach. In summary, the following are our contributions.
To the best of our knowledge, we are the first to observe that adopting existing approaches for fairness-aware FL is infeasible under data heterogeneity, because the client-specific fairness losses become unbounded (Section 4).
We argue that altering the aggregation heuristic may ensure fairness while simultaneously maximizing the accuracy of the model in the FL setting. We provide four novel heuristics for this (Section 5).
We empirically validate the performance of our proposed heuristics in terms of accuracy and the fairness notions EO, EOpp, and AP (Section 6). More concretely, we consider the following classification tasks:
CelebA : We consider the task of predicting whether the input face image is attractive or not. We let gender be the sensitive attribute.
2 Related Work
Federated Learning. Federated Learning has gained much attention in recent times. Google first introduced it for smartphone applications like automatic word prediction . Many works in ML have since applied FL to address issues such as computational efficiency, data collection, and data privacy [30, 4, 10, 33]. FL has a wide range of applications in domains such as telecommunications , mobile applications , automotives , IoT , etc. In general, FedAvg  has become the state of the art for most FL settings. However, FedAvg does not tackle the statistical heterogeneity inherent in FL and has been shown to diverge empirically. In , the authors propose an FL optimization framework to handle statistical data heterogeneity. However, fairness guarantees in the FL setting under data heterogeneity remain unexplored.
Fairness in ML. There are two primary directions toward achieving fairness-aware learning. The first includes (i) pre-processing the training data to remove information about the sensitive attribute, or (ii) post-processing the classifier to achieve fair predictions. For instance, the work in  studies the accuracy disparity of predictions on a sensitive class by constructing a novel face dataset that removes racial bias. The authors in  apply generative adversarial networks to generate fair data from the original training data and use the generated data to train the model. The second direction incorporates fairness constraints into the classification model during optimization [25, 20, 29]. Popular among these is the Lagrangian Multiplier Method (LMM) [15, 25]. However, in this work, we observe that LMM cannot be adopted in FL settings with data heterogeneity. Towards this, we propose different aggregation heuristics to tackle fairness issues in FL under data heterogeneity.
We consider a binary classification problem, with a universal instance space of non-sensitive information, an output label space , and a sensitive attribute for some finite . The sensitive attribute represents demographic groups like ethnicity, gender, or age. Each attribute has finitely many categories; e.g., gender may comprise male and female, while ethnicity can have multiple categories like White, Black, Asian, etc. We aim to learn a neural-network-based classifier (or model) , where  represents the model parameters. The parameters are trained to learn an accurate and fair model. In a non-FL setting, there is a global training set sampled from , and the network is trained on this set for a well-defined loss till convergence. To avoid collecting and training over a large number of samples, researchers propose FL for distributed data.
3.1 Federated Learning (FL) Setting
Unlike classical ML, where a centralized aggregator collects all the local data to train, in FL the data is distributed across multiple clients. The data with these clients is private and inaccessible by others including the aggregator. More formally, FL includes the following two major parties:
Set of clients , wherein each client owns a finite and limited number of samples . Let be the number of samples. Each client trains an individual model on its private data .
To ensure good performance on the universal data, FL proposes the following iterative process where aggregator-client interactions are repeated for multiple rounds. It involves the following three stages:
Initialization: The aggregator communicates the initial (often random) model parameters to the clients at epoch .
Local training: Each client initializes its local model with , and trains it on . At the end of each epoch , the client obtains .
Aggregation: All the clients communicate their locally updated model parameters to the aggregator after every fixed number of epochs. The aggregator combines the models to obtain a global model using a certain heuristic . The most common heuristic is the weighted average, defined as follows,
After aggregation, the global model is broadcast back to the clients. The clients initialize their local models with these parameters and train further. This back-and-forth process is repeated till convergence.
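The weighted-average aggregation step can be sketched as follows. This is a minimal numpy sketch; `fed_avg` and the flattened-parameter representation are our own illustrative choices, not the paper's code:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """FedAvg-style weighted average of client model parameters.

    client_params: one flattened parameter vector (1-D array) per
    client; client_sizes: the number of local samples per client.
    Each client is weighted by its share of the total samples.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()          # n_k / sum_j n_j
    stacked = np.stack(client_params)      # shape (num_clients, dim)
    return (weights[:, None] * stacked).sum(axis=0)
```

For example, averaging two clients holding 1 and 3 samples weights the second client's model three times as heavily.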
3.2 Fairness Notions
In this section, we formally define the fairness notions, which are usually stated in terms of the False Positive Rate (FPR) and False Negative Rate (FNR) of a classifier,
Equality of Opportunity (EOpp) : A classifier satisfies EOpp for a distribution over if,
Informally, the FNR is the same across all categories of the sensitive attribute.
Equalized Odds (EO) : A classifier satisfies EO for a distribution over if,
EO ensures that both the FPR and the FNR are equal across all categories.
Accuracy Parity (AP) : A classifier satisfies AP for a distribution over if,
AP ensures that the overall error rate, i.e., FPR + FNR, is equal across all categories.
In general, ensuring equal error rates across categories is impossible . Hence, we aim to minimize the difference in these rates while maintaining the highest possible accuracy. For this, we represent the violation in EO by classifier  on dataset  as the maximum of the disparity in FPR and FNR across categories, i.e., . Here,  represents the FPR only within category , determined over the entire data; likewise for  and . The violations in EOpp and AP are straightforward, i.e., one can similarly derive  and , respectively.
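These violation metrics follow directly from per-category error rates. Below is a minimal sketch (function names are illustrative); note how a category with no positive or negative samples yields an undefined rate, which is exactly the situation a client faces under data heterogeneity:

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-category FPR and FNR of a binary classifier.

    Returns {category: (fpr, fnr)}. A rate is NaN when the category
    has no negative (for FPR) or positive (for FNR) samples.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        neg, pos = y_true[m] == 0, y_true[m] == 1
        fpr = float((y_pred[m][neg] == 1).mean()) if neg.any() else float("nan")
        fnr = float((y_pred[m][pos] == 0).mean()) if pos.any() else float("nan")
        rates[g] = (fpr, fnr)
    return rates

def eo_violation(y_true, y_pred, groups):
    """EO violation: the larger of the FPR and FNR disparities across
    categories. EOpp uses only the FNR disparity; AP uses the
    disparity of FPR + FNR."""
    rates = group_rates(y_true, y_pred, groups).values()
    fprs, fnrs = zip(*rates)
    return max(max(fprs) - min(fprs), max(fnrs) - min(fnrs))
```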
3.3 Lagrangian Multiplier Method
The method was proposed for the non-FL setting. LMM proposes a loss function which combines both fairness and accuracy: it minimizes the cross-entropy loss (Equation 6) along with the violation in the fairness constraint . Formally, the loss for a classifier  is given by,
Here,  is the Lagrangian multiplier, and  is a soft version of , as defined in . During training, the parameters of  and  are learnt.
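As a sketch of how LMM couples the two terms: the scalar objective below is minimized over the model parameters and maximized over the multiplier, which is updated by projected gradient ascent. The tolerance `eps` and step size `lr` are illustrative assumptions, not values from the paper:

```python
def lmm_objective(ce_loss, fair_violation, lam, eps=0.05):
    """Lagrangian of the constrained problem: cross-entropy is
    minimized subject to (fairness violation <= eps). Minimized over
    the model parameters, maximized over the multiplier lam >= 0."""
    return ce_loss + lam * (fair_violation - eps)

def dual_ascent_step(lam, fair_violation, eps=0.05, lr=0.01):
    """Projected gradient-ascent update on the multiplier: lam grows
    while the constraint is violated, shrinks otherwise, and is
    clipped at zero."""
    return max(0.0, lam + lr * (fair_violation - eps))
```

Alternating these two updates is what drives the model toward the fairness constraint while retaining accuracy.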
4 Problem Framework: FL
The overall goal is to train a classifier that minimizes the violation of fairness while maximizing accuracy. Maximizing accuracy is equivalent to minimizing the cross-entropy loss, formally given by,
The overall optimization is given by the following, for a small  which denotes the permissible violation in fairness.
Informally, we maximize accuracy while minimizing the fairness loss over the total samples available, i.e., the union of the datasets held by the individual clients. The distribution of data among the clients affects the performance of any approach used to solve the above optimization. We discuss the two extreme cases: well-balanced data and heterogeneous data.
4.1 Balanced Data (DB)
In most FL settings, it is common to assume that each client has data representing all the categories of the sensitive attribute. We distinguish this scenario by referring to it as DB: a scenario in which all the clients have approximately equal numbers of samples for White, Black, and Asian individuals when race is the sensitive attribute.
Towards solving our overall optimization with DB, we can train each client’s model with LMM and then use weighted aggregation. With DB, it is possible to compute the loss given by Equation 5, where the second term requires the computation of . The loss function used to train model by client is given by,
The overall parameters are obtained after aggregation using Equation 1, i.e., the weighted average. We refer to the final classifier obtained after multiple client-aggregator interactions as FedAvg-. Note that this approach works only with DB.
4.2 Data Heterogeneity (DH)
We believe the assumption of DB is not practical; in reality, each client may only possess samples of a particular category (for instance, depending on its geographical location). We refer to this scenario as DH, where each client  owns samples of only a single category, i.e., , where  may represent the White, Black, or Asian race.
With DH, for the client , the  cannot be computed, since the error rates for the categories not present in the data will be . Hence, the LMM loss cannot be computed at the client level. We propose the following pipeline to overcome this issue:
Client Loss. Each client trains the model only for maximizing accuracy,
Aggregation. Training only for accuracy compromises fairness . Hence, a simple weighted average does not ensure a reduction in fairness violations. Towards this, we propose certain aggregation heuristics using the validation set  owned by the aggregator. We assume that  is too small to be used for training but is balanced.
We discuss the aggregation heuristics in the next section.
5 Heuristics for Fair FL
Sophisticated techniques which require the computation of a fairness loss at the client level do not work for FL with data heterogeneity. We therefore construct different aggregation heuristics for achieving fairness.
The aggregator has access to a small validation set . Firstly, the aggregator evaluates the accuracy and fairness of individual client models over . Then, based on the accuracy and fairness values, it uses certain heuristics to derive the global model. The different types of heuristics for the aggregation are defined next.
FairBest. Here, the aggregator selects the specific model, from the set of local models, which provides the best fairness on . That is, the global aggregation parameter at an epoch , , is defined as,
Here, is the fairness loss for client ’s model on at an epoch
FairAvg. For this approach, the aggregator sorts the local models in increasing order of fairness loss and then takes the weighted average of the first  local models. Let  denote the set comprising these models. We have,
FairAccRatio. The aggregator picks the parameters of the local model which gives the best ratio of accuracy to fairness loss on . That is,
Here, is the accuracy observed in client ’s model over at an epoch
FairAccDiff. Here, the aggregator picks the parameters of the local model which gives the best weighted difference, for a weight , between the accuracy and the fairness loss on . That is,
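Putting the four heuristics together, the aggregator's selection step can be sketched as below. This is a simplified sketch: parameters are flattened vectors, `FairAvg` here uses a plain mean of the k fairest models rather than a weighted average, and the names `k` and `w` are illustrative stand-ins for the paper's hyperparameters:

```python
import numpy as np

def select_global(models, acc, fair, heuristic="FairBest", k=3, w=0.5):
    """Aggregator-side choice of the global model from client models
    scored on the validation set: `models` holds flattened parameter
    vectors; acc[i] and fair[i] are client i's validation accuracy
    and fairness loss.
    """
    acc, fair = np.asarray(acc, float), np.asarray(fair, float)
    if heuristic == "FairBest":            # least fairness loss
        return models[int(np.argmin(fair))]
    if heuristic == "FairAvg":             # mean of the k fairest models
        idx = np.argsort(fair)[:k]
        return np.mean([models[i] for i in idx], axis=0)
    if heuristic == "FairAccRatio":        # best accuracy / fairness-loss
        return models[int(np.argmax(acc / (fair + 1e-12)))]
    if heuristic == "FairAccDiff":         # best w*acc - (1-w)*fairness
        return models[int(np.argmax(w * acc - (1 - w) * fair))]
    raise ValueError(f"unknown heuristic: {heuristic}")
```

Each round, the selected (or averaged) parameters are broadcast back to the clients as in FedAvg; only the aggregation rule changes.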
In this section, we empirically validate our proposed heuristics. Firstly, for an appropriate comparison, we define four relevant baselines. We then explain our implementation in terms of our network architecture and dataset details. Further, we describe our training approach for the FL settings, followed by our results. We conclude the section with a comprehensive discussion of the results obtained as compared to the baselines. (We provide our complete code-base as supplementary material along with this submission.)
To validate our proposed heuristics, we evaluate their performance with the standard settings. The evaluation is in terms of accuracy and violation of fairness notions mentioned in Section 3.2. We create the following baselines.
NonFed. This is the classical ML setting. The central aggregator has access to all the training data. The aggregator trains the model for maximizing accuracy using the loss function . Note that the training is independent of any fairness constraint.
FedAvg . This corresponds to the state-of-the-art FL setting wherein the aggregator uses a weighted average to combine all local models. Critically, in this baseline, the training data is balanced, i.e., each client has data samples of all demographic groups (in a similar ratio). For the training, we again use the loss function . Finally, global model parameters are aggregated as (Eq. 1). Similar to NonFed, this baseline is also independent of any fairness constraint.
FedAvg-. In contrast to FedAvg, in this baseline, we train the local models for both (maximizing) accuracy and (minimizing) fairness. To incorporate the fairness constraint, we adopt the state-of-the-art LMM approach. More formally, we use the loss function (Eq. 5), for training . Again, is used for global model aggregation.
FedAvg-DH. This corresponds to our novel FL setting with data heterogeneity. That is, each client has data associated with a single demographic group. As sophisticated approaches which require the computation of fairness losses (e.g., LMM) will not work in this setting, we train the individual s for only accuracy using . We then aggregate them using .
6.2 Implementation and Setup Details
We now provide the datasets and architecture details.
In CelebA, there are 180K train samples where we use face images of input size to predict ‘attraction’ attribute, while ‘gender’ is the sensitive attribute. Here, gender is either male or female, with the dataset containing 42% of male and 58% of female samples.
In UTK, there are 20K samples. Here, we use face images of input size to predict the attribute ‘gender’ while ‘ethnicity’ is the sensitive attribute. Ethnicity comprises the following five classes: namely White (42%), Black (19%), Asian (15%), Indian (17%), and others (7%).
In FairFace, there are 85k train samples where face images of input size are used to predict ‘gender’ while the attribute ‘race’ is sensitive. FairFace has seven different races: East Asian (42%), White (19%), Latino Hispanic (15%), Southeast Asian (17%), Black (7%), Indian (17%), and Middle Eastern (7%).
6.3 Training Details
For the FL setting, for each baseline and heuristic, we consider 50 clients. We randomly distribute the training data such that each client has an equal number of data samples. Each client's local model is trained for 5-10 epochs on its private data before every aggregation. The global model aggregation is performed periodically until training stabilizes. The training details specific to each dataset follow next.
CelebA. For CelebA, we consider ‘gender’ as the sensitive attribute with ‘attractive’ as the label to be predicted. To ensure data heterogeneity, we randomly distribute the training data such that among the 50 clients, 21 have access only to ‘male’ data samples. In contrast, the remaining 29 have access only to ‘female’ data samples. This distribution among the clients aims to mimic the original 42%-58% gender distribution.
Each client has 3K training samples. We run the training of each local model for 75 epochs, and the aggregation is performed 15 times. Figure 2 shows the validation loss, which saturates after 70 epochs for all the models.
UTK. For UTK, we use ‘ethnicity’ as the sensitive attribute, with ‘gender’ as the predicting label. We consider five different groups for ethnicity. We then distribute the data such that 42% of clients have access only to White, 18% to Black, 14% to Asian, 16% to Indian, and 10% to others. This configuration ensures data heterogeneity and also mimics the original distribution. Each client has 500 training samples. We train the local models for five epochs before every aggregation and perform aggregation 20 times. So, overall, each client trains its local model for 100 epochs. From Figure 3, the loss becomes stable around 80 epochs.
FairFace. In FairFace, similar to UTK, we consider ‘ethnicity’ as the sensitive attribute and ‘gender’ as the predicting label. We also have seven different groups for ethnicity. Likewise, we distribute the data among the clients such that each client has access to only a single group. The training data for each client comprises 7% of a single group, i.e., each client has 1K training samples.
We train the local models for 120 epochs, such that the aggregation is performed after every eight epochs. From Figure 4, we see that the loss becomes stable around 100 epochs.
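The heterogeneous partitions described above — each client holding a single sensitive-attribute group, with group-to-client proportions mimicking the original distribution (e.g., 21 male-only and 29 female-only clients for CelebA) — can be sketched as follows. This is an illustrative helper, not the paper's code:

```python
import random

def heterogeneous_split(samples, group_of, client_groups, seed=0):
    """Partition samples so each client receives only its own
    sensitive-attribute group (the DH setting).

    samples: list of sample ids; group_of: id -> group;
    client_groups: list giving client i's single group.
    Returns one sample list per client, of near-equal size
    within each group.
    """
    rng = random.Random(seed)
    shards = {g: [] for g in set(client_groups)}
    for s in samples:
        shards[group_of[s]].append(s)
    parts = [[] for _ in client_groups]
    for g, ids in shards.items():
        rng.shuffle(ids)
        owners = [i for i, cg in enumerate(client_groups) if cg == g]
        for j, s in enumerate(ids):        # round-robin within the group
            parts[owners[j % len(owners)]].append(s)
    return parts
```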
6.3.1 Validation Loss over Epochs
Our overall optimization, as discussed in Section 4, is to ensure that the final aggregated model has the least classification loss and fairness violation. We therefore track the classification loss and fairness violation of the aggregated models across epochs, and observe in Figures 2, 3, and 4 that both reduce as training progresses.
Baselines. For each of the baselines except NonFed, we plot their losses calculated on the validation set after every aggregation. The loss for FedAvg and FedAvg-DH is , for FedAvg- it is . Since the baseline NonFed is for non-federated setting, we simply plot the validation loss of the single model.
Note that, since the loss for FedAvg- includes both classification and fairness terms, its values are higher than the rest, as observed in Figures 2, 3, and 4 (left) for CelebA, UTK, and FairFace, respectively. We also observe that NonFed and FedAvg show clear convergence as training proceeds, and the final loss obtained is significantly lower, since the data is balanced. For FedAvg-DH, with heterogeneous data, although the validation loss is low in the initial epochs, it does not reduce much with training. These results comply with the final accuracies given in Tables 1, 2, and 3.
Heuristics. For validating each of the heuristics, we plot the losses as calculated on the validation set . We show the results for loss . The results for fairness losses, , and will be provided in the supplement.
We observe that the loss for all the heuristics reduces with training, as shown in Figures 2, 3, and 4 (right) for CelebA, UTK, and FairFace, respectively. Considering only accuracy, we find no dominant heuristic across the three datasets: for CelebA and FairFace, FairAvg performs best, while for UTK, FairBest has the best accuracy.
6.4 Results and High-level Trends
The baselines NonFed, FedAvg guarantee the best accuracy but, in turn, suffer from greater fairness loss.
For all the three datasets, the baseline FedAvg- provides better fairness guarantees than other baseline models.
Our novel heuristic FairBest provides good fairness guarantees, but at the cost of accuracy.
The heuristics FairAvg and FairAccDiff show a much more desirable trade-off between accuracy and fairness. One can further optimize the hyperparameters  and  to achieve a desired trade-off between fairness and accuracy.
We observe that FairAccRatio ensures better or, at worst, similar fairness compared to FedAvg-, which was trained on balanced data for fairness using the state-of-the-art LMM approach.
More significantly, FairAccRatio also has accuracies comparable to FedAvg-DH, which is the state-of-the-art approach for FL but trained on heterogeneous data for an appropriate comparison.
6.5 Fairness Improvements
CelebA. Compared to baseline models FedAvg and FedAvg-DH, our novel heuristics FairBest, FairAvg, FairAccRatio and FairAccDiff, quantitatively improve the fairness guarantees. Tables 4-5 give the improvements observed in different fairness notions (i.e., Fairness loss in Baseline model - Fairness loss in the respective heuristic).
UTK. With UTK, our heuristics provide significantly better fairness even when compared to FedAvg- which is trained for both accuracy and fairness using LMM. Tables 6-8 give the improvements observed in fairness notions compared to baseline models FedAvg-, FedAvg and FedAvg-DH.
Remark. From the above, one can observe that our proposed aggregation heuristics indeed remarkably improve the fairness of models in FL with data heterogeneity, when compared to FedAvg, FedAvg-DH. This highlights the significance of our novel heuristics. The comparable improvement, in percentage, over state-of-the-art approaches such as LMM, provides further validation.
6.6 Results: Discussion
In this subsection, with Figure 5, we provide an overview of the performances of our novel heuristics, simultaneously over all the three datasets. We also provide a detailed comparison of the heuristics across datasets.
Comparison: FairBest and FairAvg. While FairBest achieves good fairness consistently across datasets, it suffers from poor accuracy when compared to FedAvg-DH. In contrast, FairAvg (with ) provides accuracies similar to FedAvg-DH, but with a slightly smaller improvement in the fairness guarantee. One may further choose an appropriate  for a desirable trade-off between accuracy and fairness.
Comparison: FairAvg and FairAccDiff. Similar to FairAvg, FairAccDiff (with ) also achieves decent accuracies, but with a marginal improvement in fairness guarantees. However, tweaking the parameter  may further allow a better trade-off of fairness with accuracy.
Comparison: FairAvg and FairAccRatio. From Figure 5, we observe that FairAvg (with ) and FairAccRatio have similar accuracies for all three datasets. What is more, their accuracies are comparable to FedAvg-DH, the state-of-the-art model for FL under DH. We also know that FedAvg- is the baseline model for fairness in the standard FL setting with balanced data. However, as stated above, FairAccRatio outperforms FedAvg-, guaranteeing the best fairness on the same dataset. Thus, we believe that our heuristics are significant; a user may choose an appropriate heuristic for their desired accuracy-fairness trade-off.
In this paper, we focused on the fair classification problem in Federated Learning under data heterogeneity. First, we observed that existing approaches for fairness in FL are infeasible under data heterogeneity. Towards this, we proposed alternative aggregation heuristics that ensure fairness while simultaneously maximizing the model's accuracy. We proposed four aggregation heuristics based on the fairness and accuracy of the local client models. Further, we showed that these heuristics outperform the standard baselines through empirical evaluation over the vision datasets CelebA, UTK, and FairFace.
- (2017) Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5 (2), pp. 153–163.
- (2018) Talent search and recommendation systems at LinkedIn: practical challenges and lessons learned. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 1353–1354.
- FairFed: cross-device fair federated learning. 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pp. 1–7.
- (2018) Federated learning for mobile keyboard prediction.
- Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems 29, pp. 3315–3323.
- (2021) FairFace: face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1548–1558.
- (2016) Federated optimization: distributed machine learning for on-device intelligence.
- (2019) iFair: learning individually fair data representations for algorithmic decision making. 2019 IEEE 35th International Conference on Data Engineering (ICDE), pp. 1334–1345.
- (2020) A review of applications in federated learning. Computers & Industrial Engineering 149, pp. 106854.
- (2021) Ditto: fair and robust federated learning through personalization. In International Conference on Machine Learning, pp. 6357–6368.
- Federated optimization in heterogeneous networks.
- (2020) Fair resource allocation in federated learning. In International Conference on Learning Representations.
- (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738.
- (2020) FairALM: augmented Lagrangian method for training fair models with little regret. In European Conference on Computer Vision, pp. 365–381.
- (2017) Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282.
- (2020) Classification of criminal recidivism using machine learning techniques. pp. 5110–5122.
- (2021) Federated learning meets fairness and differential privacy. arXiv e-prints.
- (2020) FNNC: achieving fairness through neural networks. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20).
- (2021) Federated evaluation and tuning for on-device personalization: system design & applications. arXiv preprint arXiv:2102.08503.
- (2018) Federated learning for ultra-reliable low-latency V2V communications. In 2018 IEEE Global Communications Conference (GLOBECOM), pp. 1–7.
- Fairness GAN: generating datasets with fairness properties using a generative adversarial network. IBM Journal of Research and Development 63 (4/5), pp. 3–1.
- (2017) Two decades of recommender systems at Amazon.com. IEEE Internet Computing 21 (3), pp. 12–18.
- (2020) Differentially private and fair deep learning: a Lagrangian dual approach. arXiv preprint arXiv:2009.12562.
- (2021) Federated machine learning: survey, multi-level classification, desirable criteria and future directions in communication and networking systems. IEEE Communications Surveys & Tutorials 23.
- (2019) In-edge AI: intelligentizing mobile edge computing, caching and communication by federated learning. IEEE Network 33 (5), pp. 156–165.
- (2018) Applied federated learning: improving Google keyboard query suggestions. arXiv preprint arXiv:1812.02903.
- (2017) Fairness constraints: mechanisms for fair classification. In Artificial Intelligence and Statistics, pp. 962–970.
- (2020) FairFL: a fair federated learning approach to reducing demographic bias in privacy-sensitive classification models. In 2020 IEEE International Conference on Big Data (Big Data), pp. 1051–1060.
- Age progression/regression by conditional adversarial autoencoder. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5810–5818.
- (2019) Inherent tradeoffs in learning fair representations. Advances in Neural Information Processing Systems 32, pp. 15675–15685.
- (2021) Applications of federated learning in smart cities: recent advances, taxonomy, and open challenges. Connection Science, pp. 1–28.
- (2018) Real-time data processing architecture for multi-robots based on differential federated learning. In 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pp. 462–471.
Appendix A Training Models: Validation Loss and Fairness Violations
A.1 Fairness over Epochs
Baselines. For each baseline, we plot validation loss, , , and calculated on the validation set after every aggregation.
Note that only FedAvg- is trained using , which minimizes fairness loss. Thus, the fairness loss values are lower compared to the rest of the baseline models, as observed in Figure 6 (rows 2, 3, & 4). We also observe that the fairness loss values in FedAvg- show a clear convergence as training proceeds, and the final loss obtained is significantly less. Such a trend is not apparent in other models.
Heuristics. We plot the validation and fairness losses as calculated on the validation set to validate each of the proposed heuristics. We show the results for the losses , , and (rows 2, 3, & 4).
From Figure 7, we observe that the fairness loss values for all the heuristics reduce with training; the reduction is gradual and smooth for FairBest and FairAccRatio. For CelebA and UTK, FairBest performs best, while for UTK, FairAvg assures better fairness. For all three datasets, we can see that FairAccRatio has the better accuracy-fairness trade-off.