Federating for Learning Group Fair Models

10/05/2021
by   Afroditi Papadaki, et al.
Duke University
UCL

Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models. In this work, we study minmax group fairness in federated learning scenarios where different participating entities may only have access to a subset of the population groups during the training phase. We formally analyze how this fairness objective differs from existing federated learning fairness criteria that impose similar performance across participants instead of demographic groups. We provide an optimization algorithm – FedMinMax – for solving the proposed problem that provably enjoys the performance guarantees of centralized learning algorithms. We experimentally compare the proposed approach against other methods in terms of group fairness in various federated learning setups.



1 Introduction

Machine learning models are being increasingly adopted to make decisions in a range of domains, such as finance, insurance, medical diagnosis, recruitment, and many more [10.1145/3376898]. Therefore, we are often confronted with the need – sometimes imposed by regulatory bodies – to ensure that such machine learning models do not lead to decisions that discriminate against individuals from a certain demographic group.

The development of machine learning models that are fair across different (demographic) groups has been well studied in traditional learning setups where there is a single entity responsible for learning a model based on a local dataset holding data from individuals of the various groups. However, there are various settings where the data representing different demographic groups is spread across multiple entities rather than concentrated on a single entity/server. For example, consider a scenario where various hospitals wish to learn a diagnostic machine learning model that is fair (or performs reasonably well) across different demographic groups but each hospital may only contain training data from certain groups because – in view of its geo-location – it serves predominantly individuals of a given demographic [DBLP:journals/corr/abs-2108-08435]. This new setup along with the conventional centralized one are depicted in Figure 1.

These emerging scenarios however bring about various challenges. The first challenge relates to the fact that each individual entity may not be able to learn a fair machine learning model locally by itself, because it may hold little or no data from certain demographic groups. The second relates to the fact that each individual entity may also not be able to directly share its own data with other entities due to legal or regulatory constraints such as GDPR [EUdataregulations2018]. Therefore, the conventional machine learning fairness ansatz – relying on the fact that the learner has access to the overall data – does not generalize from the centralized data setup to the new distributed one.

It is possible to partially address these challenges by adopting federated learning (FL) approaches. These learning approaches enable multiple entities (or clients, i.e. different user devices, organisations, or even geo-distributed datacenters of a single company [advancesFL]; in this manuscript we use the terms participants, clients and entities interchangeably), coordinated by a central server, to iteratively learn in a decentralized manner a single global model to carry out some task [DBLP:journals/corr/KonecnyMRR16, DBLP:journals/corr/KonecnyMYRSB16]. The clients do not share data with one another or with the server; instead, the clients only share focused updates with the server, the server then updates a global model and distributes the updated model to the clients, with the process carried out over multiple rounds or iterations. This learning approach enables different clients with limited local training data to learn better machine learning models.

However, with the exception of [DBLP:journals/corr/abs-2108-08435], which we discuss later, federated learning is not typically used to learn models that exhibit performance guarantees for the different demographic groups served by a client (i.e. group fairness guarantees); instead, it is primarily used to learn models that exhibit specific performance guarantees for each client involved in the federation (i.e. client fairness guarantees). Importantly, since a machine learning model that is client fair is not necessarily group fair (as we later demonstrate formally in this work), it becomes crucial to understand how to develop new federated learning techniques leading to models that are also fair across different demographic groups.

This work develops a new federated learning algorithm that can be adopted by multiple entities coordinated by a single server to learn a global (minimax) group fair model. We show that our algorithm leads to the same (minimax) group fairness performance guarantees of centralized approaches such as [diana2020convergent, martinez2020minimax], which are exclusively applicable to settings where the data is concentrated in a single client. Interestingly, this also applies to scenarios where certain clients do not hold any data from some of the groups.

The rest of the paper is organized as follows: Section 2 overviews related work. Section 3 formulates our proposed distributed group fairness problem. Section 4 formally demonstrates that traditional federated learning approaches such as [DBLP:journals/corr/abs-2108-08435, DRFA, Li2020Fair, AFL] may not always yield group-fair models. In Section 5 we propose a new federated learning algorithm to collaboratively learn models that are minimax group fair. Section 6 illustrates the performance of our approach in relation to other baselines. Finally, Section 7 draws various conclusions.

2 Related Work

The development of fair machine learning models in the standard centralized learning setting – where the learner has access to all the data – is underpinned by fairness criteria. One popular criterion is individual fairness [dwork2011fairness], which dictates that the model is fair provided that people with similar characteristics/attributes are subject to similar model predictions/decisions.

Another family of criteria – known as group fairness – requires the model to perform similarly on different demographic groups. Popular group fairness criteria include equality of odds, equality of opportunity [hardt2016equality], and demographic parity [louizos2017variational], which are usually imposed as constraints within the learning problem. More recently, [martinez2020minimax] introduced minimax group fairness; this criterion requires the model to optimize the prediction performance of the worst demographic group without unnecessarily impairing the performance of other demographic groups (also known as no-harm fairness) [diana2020convergent, martinez2020minimax].

Figure 1: Centralized Learning vs. Federated Learning group fairness. Left: a single entity holds the entire dataset on a single server that is responsible for learning a model parameterized by θ. Right: multiple entities hold different local datasets, sharing restricted information with a server that is responsible for learning a model parameterized by θ. See also Section 3.

The development of fair machine learning models in federated learning settings has been building upon the group fairness literature. However, the majority of these works have concentrated predominantly on the development of algorithms leading to models that exhibit similar performance across different clients rather than models that exhibit similar performance across different demographic groups [Li2020Fair].

One such approach is agnostic federated learning (AFL) [AFL], whose aim is to learn a model that optimizes the performance of the worst performing client. Another FL approach, proposed in [Li2020Fair], extends AFL by adding an extra fairness constraint to flexibly control performance disparities across clients. Similarly, tilted empirical risk minimization [li2021tilted] uses a hyperparameter called tilt to enable fairness or robustness by magnifying or suppressing the impact of individual client losses, respectively. FedMGDA [fedmgda+] is an algorithm that combines minimax optimization coupled with Pareto efficiency [micro_theory] and gradient normalization to ensure fairness across users and robustness against malicious clients. See also other related works in [ditto].

A very recent federated learning work, namely FCFL [DBLP:journals/corr/abs-2108-08435], focuses on improving the worst performing client while ensuring a certain level of local group fairness by employing gradient-based constrained multi-objective optimization in order to address the proposed challenge.

Our work concentrates on developing a federated learning algorithm that guarantees fairness across all demographic groups included in the clients' datasets. It therefore naturally departs from existing federated learning approaches such as AFL [AFL], FedMGDA [fedmgda+] and q-FFL [Li2020Fair] that focus on across-client fairness since, as we prove in Section 4, a model that guarantees client fairness only guarantees group fairness under some special conditions.

It also departs from FCFL [DBLP:journals/corr/abs-2108-08435], which considers group fairness to be a per-client objective associated only with the locally available groups. Our primary goal is to learn a model solving (demographic) group fairness across all groups included in the clients' distributions, independently of each group's representation in any particular client.

3 Problem Formulation

3.1 Group Fairness in Centralized Machine Learning

We first describe the standard minimax group fairness problem in a centralized machine learning setting [diana2020convergent, martinez2020minimax], where there is a single entity/server holding all relevant data and responsible for learning a group fair model (see Figure 1). We concentrate on classification tasks, though our approach also applies to other learning tasks such as regression. Let the triplet of random variables (X, Y, A) ∈ 𝒳 × 𝒴 × 𝒜 represent the input features, target, and demographic group, respectively. Let also P_{X,Y,A} = P_A · P_{X,Y|A} represent the joint distribution of these random variables, where P_A represents the prior distribution of the different demographic groups and P_{X,Y|A} their data conditional distribution.

Let ℓ : Δ^{|𝒴|} × 𝒴 → ℝ_+ be a loss function, where Δ^{|𝒴|} represents the probability simplex over the targets. We now consider that the entity will learn a hypothesis h, drawn from a hypothesis class ℋ, that solves the optimization problem given by

    min_{h∈ℋ} max_{a∈𝒜} r_a(h), with r_a(h) = 𝔼_{P_{X,Y|A=a}}[ℓ(h(X), Y)].    (1)

Note that this problem involves the minimization of the expected risk of the worst performing demographic group.

Importantly, under the assumption that the loss is a convex function w.r.t. the hypothesis (this holds for the most common loss functions in machine learning settings, such as the Brier score and cross entropy) and that the hypothesis class is a convex set, solving the minimax objective in Eq. 1 is equivalent to solving

    min_{h∈ℋ} max_{μ∈Δ^{|𝒜|}} Σ_{a∈𝒜} μ_a · r_a(h),    (2)

where Δ^{|𝒜|} represents the vectors in the probability simplex over the groups. Restricting the weights μ to have strictly positive components leaves the optimal value of Eq. 2 unchanged; allowing zero value coefficients, however, may lead to models that are weakly, but not strictly, Pareto optimal [geoffrion1968proper, miettinen2012nonlinear].

The minimax objective over the linear combination can be optimized by alternating between projected gradient ascent (or multiplicative weight updates) to optimize the weights μ, and stochastic gradient descent to optimize the model [chen2018, diana2020convergent, martinez2020minimax].
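As a concrete illustration of this alternating scheme, the following minimal sketch (ours, not the authors' reference implementation) trains a PyTorch model with SGD on the μ-weighted group risks and updates μ by projected gradient ascent on the observed group risks; the helper names (brier_score, project_to_simplex, group_loaders) and the hyperparameter values are illustrative assumptions.

import torch

def brier_score(logits, targets):
    # multi-class Brier score: squared distance between predicted probabilities and the one-hot target
    probs = torch.softmax(logits, dim=1)
    onehot = torch.nn.functional.one_hot(targets, probs.shape[1]).float()
    return ((probs - onehot) ** 2).sum(dim=1).mean()

def project_to_simplex(v):
    # Euclidean projection onto the probability simplex (Duchi et al., 2008)
    u, _ = torch.sort(v, descending=True)
    css = torch.cumsum(u, dim=0)
    ks = torch.arange(1, v.numel() + 1, dtype=v.dtype)
    rho = int((u - (css - 1) / ks > 0).nonzero()[-1]) + 1
    tau = (css[rho - 1] - 1) / rho
    return torch.clamp(v - tau, min=0)

def centralized_minimax(model, group_loaders, rounds, lr_model=0.1, lr_adv=0.1):
    # group_loaders[a] yields (x, y) batches drawn from demographic group a
    n_groups = len(group_loaders)
    mu = torch.full((n_groups,), 1.0 / n_groups)      # group weights, start uniform
    opt = torch.optim.SGD(model.parameters(), lr=lr_model)
    iters = [iter(dl) for dl in group_loaders]
    for _ in range(rounds):
        risks = torch.zeros(n_groups)
        opt.zero_grad()
        for a in range(n_groups):                     # learner: minimize the mu-weighted group risks
            try:
                x, y = next(iters[a])
            except StopIteration:
                iters[a] = iter(group_loaders[a])
                x, y = next(iters[a])
            r_a = brier_score(model(x), y)
            (mu[a] * r_a).backward()
            risks[a] = r_a.detach()
        opt.step()
        mu = project_to_simplex(mu + lr_adv * risks)  # adversary: projected gradient ascent on mu
    return model, mu

With a single group the adversary step is vacuous and the procedure reduces to standard empirical risk minimization.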

3.2 Group Fairness in Federated Learning

We now describe our proposed group fairness federated learning problem; this problem differs from the previous one because the data is now distributed across multiple clients, and neither a client nor the server has direct access to the data held by other clients. See also Figure 1.

In this setting, we incorporate a categorical variable K ∈ 𝒦 into our data tuple to indicate the clients participating in the federation. The joint distribution of these variables is P_{X,Y,A,K} = P_K · P_{A|K} · P_{X,Y|A,K}, where P_K represents a prior distribution over clients – which in practice is the fraction of samples that are acquired by a client relative to the total number of data samples – and P_{A|K} and P_{X,Y|A,K} represent the distribution of the groups and the distribution of the input and target variables conditioned on a client, respectively. We assume that the group-conditional distribution is the same across clients, meaning P_{X,Y|A,K} = P_{X,Y|A}. Note that our model explicitly allows the distribution of the demographic groups to depend on the client, accommodating the fact that certain clients may have a higher (or lower) representation of certain demographic groups over others.

We now aim to learn a model that solves the minimax fairness problem as presented in Eq. 1, but considering that the group loss estimates are split into estimators associated with each client. We therefore re-express the linear weighted formulation of Eq. 2 using importance weights, allowing us to incorporate the role of the different clients, as follows:

    min_{h∈ℋ} max_{μ∈Δ^{|𝒜|}} Σ_{k∈𝒦} P_K(k) · 𝔼_{P_{X,Y,A|K=k}}[ w_A · ℓ(h(X), Y) ],    (3)

where 𝔼_{P_{X,Y,A|K=k}}[ w_A · ℓ(h(X), Y) ] is the expected client risk and w_a = μ_a / P_A(a) denotes the importance weight for a particular group a.

However, there is an immediate non-trivial challenge that arises within this proposed federated learning setting in relation to the centralized one described earlier: we need to devise an algorithm that solves the objective in Eq. 3 under the constraint that the different clients cannot share their local data with the server or with one another, but – in line with conventional federated learning settings [DRFA, Li2020Fair, DBLP:journals/corr/McMahanMRA16, AFL] – only local client updates of a global model (or other quantities such as local risks) are shared with the server.

4 Client Fairness vs. Group Fairness in Federated Learning

Prior to proposing a federated learning algorithm to solve our proposed group fairness problem, we first reflect on whether a model that solves the more widely used client fairness objective in federated learning settings, given by [AFL]:

    min_{h∈ℋ} max_{λ∈Δ^{|𝒦|}} Σ_{k∈𝒦} λ_k · r_k(h),    (4)

where the weighting vector λ defines a joint data distribution over the clients and r_k(h) denotes the expected risk of client k, also solves our proposed minimax group fairness objective given by:

    min_{h∈ℋ} max_{μ∈Δ^{|𝒜|}} Σ_{a∈𝒜} μ_a · r_a(h),    (5)

where the weighting vector μ defines a joint data distribution over the sensitive groups.

The following lemma shows that a model that is minimax fair with respect to the clients is equivalent to a model that solves a constrained (relaxed) minimax fairness problem with respect to the (demographic) groups.

Lemma 1.

Let P ∈ [0, 1]^{|𝒜|×|𝒦|} denote a matrix whose entry in row a and column k is P_{A|K}(a|k) (i.e. the prior of group a in client k). Then, given a solution h* to the minimax problem across clients

    min_{h∈ℋ} max_{λ∈Δ^{|𝒦|}} Σ_{k∈𝒦} λ_k · r_k(h),    (6)

h* is also a solution to the following constrained minimax problem across sensitive groups

    min_{h∈ℋ} max_{μ∈Λ} Σ_{a∈𝒜} μ_a · r_a(h),    (7)

where the weighting vector μ is constrained to belong to the simplex subset Λ = {Pλ : λ ∈ Δ^{|𝒦|}} ⊆ Δ^{|𝒜|}. In particular, if Λ contains a group minimax weighting vector (i.e. a maximizer of Eq. 5 at the group minimax model), then the minimax fairness solution across clients is also a minimax fairness solution across demographic groups.

Lemma 1 shows that being minimax with respect to the clients is equivalent to finding the group minimax model while constraining the weighting vectors to lie inside the simplex subset Λ. Therefore, if this set already contains a group minimax weighting vector, then the group minimax model is equivalent to the client minimax model. Another way to interpret this result is that being minimax with respect to the clients is the same as being minimax for any group assignment such that linear combinations of the group distributions are able to generate all client distributions, and there is a group minimax weighting vector in Λ.

Being minimax at both the client and group level relies on Λ containing a group minimax weighting vector. In particular, if for each sensitive group there is a client comprised entirely of this group (P contains an |𝒜| × |𝒜| identity block), then Λ = Δ^{|𝒜|} and group and client level fairness are guaranteed to be fully compatible. Another trivial example is when at least one of the clients' group priors is equal to a group minimax weighting vector. This result also suggests that client level fairness may differ from group level fairness. This motivates us to develop a new federated learning algorithm to guarantee group fairness that – where the conditions of the lemma hold – also results in client fairness. We experimentally validate the insights derived from Lemma 1 in Section 6.
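To make the condition in Lemma 1 concrete, the small check below (an illustrative sketch; the group priors and the candidate minimax weighting vector are hypothetical numbers) tests whether a given group weighting μ* can be written as Pλ for some client mixture λ, i.e. whether it lies in Λ.

import numpy as np
from scipy.optimize import linprog

def reachable(P, mu_star):
    # Is mu_star in Lambda = {P @ lam : lam in the client simplex}?
    # Feasibility LP: find lam >= 0 with sum(lam) = 1 and P @ lam = mu_star.
    n_clients = P.shape[1]
    A_eq = np.vstack([P, np.ones((1, n_clients))])
    b_eq = np.concatenate([mu_star, [1.0]])
    res = linprog(c=np.zeros(n_clients), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * n_clients, method="highs")
    return res.status == 0

mu_star = np.array([0.7, 0.3])                 # hypothetical group minimax weights

# SSG-like setting: each client holds a single group, so P contains an identity block
# and Lambda is the full simplex; client fairness then implies group fairness.
print(reachable(np.array([[1.0, 0.0], [0.0, 1.0]]), mu_star))   # True

# ESG-like setting: all clients share the same group priors (0.9, 0.1), so Lambda
# collapses to that single point and mu_star is not reachable.
print(reachable(np.array([[0.9, 0.9], [0.1, 0.1]]), mu_star))   # False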

5 MinMax Group Fairness Federated Learning Algorithm

We now propose an algorithm – Federated Minimax (FedMinMax) – to solve the group fairness problem in Eq. 3.

Clearly, the clients are not able to calculate the statistical averages appearing in Eq. 3 because the underlying data distributions are unknown. Therefore, we let each client k ∈ 𝒦 have access to a dataset S_k containing n_k data points drawn i.i.d. according to P_{X,Y,A|K=k}. We also define three additional sets: (a) S_{a,k} ⊆ S_k is the set containing all data examples associated with group a in client k; (b) S_a = ∪_{k∈𝒦} S_{a,k} is the set containing all data examples associated with group a across the various clients; and (c) S = ∪_{a∈𝒜} S_a is the set containing all data examples across groups and across clients.

Note again that – in view of our modelling assumptions – it is possible that S_{a,k} is empty for some group a and some client k, implying that such a client does not have data realizations for that group.

We will also let the model be parameterized via a vector of parameters θ ∈ Θ, i.e. h(·) = h(·; θ) (this vector of parameters could correspond to the set of weights / biases in a neural network).

Then, one can approximate the relevant statistical risks using empirical risks as follows:

    r̂_a(θ) = Σ_{k∈𝒦} (n_{a,k} / n_a) · r̂_{a,k}(θ), with r̂_{a,k}(θ) = (1 / n_{a,k}) Σ_{(x,y)∈S_{a,k}} ℓ(h(x; θ), y),    (8)

where n_{a,k} = |S_{a,k}|, n_a = |S_a|, n_k = |S_k|, and n = |S|. Note that n_{a,k} / n_a is an estimate of P_{K|A}(k|a), r̂_{a,k}(θ) is an estimate of the client-conditional group risk, and r̂_a(θ) is an estimate of the group risk r_a.

We consider the importance weighted empirical risk since the clients do not have access to the data distribution but instead to a dataset with finite samples.
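As a small illustration of how the weighted estimate in Eq. 8 can be assembled from quantities the clients are able to report (per-group empirical risks and sample counts), consider the sketch below; the function and variable names are ours and the numbers in the toy example are arbitrary.

import numpy as np

def aggregate_group_risks(client_risks, client_counts):
    """Estimate the global empirical group risks from per-client statistics.

    client_risks[k][a]  : empirical risk of group a measured on client k's data
                          (np.nan when the client holds no examples of that group)
    client_counts[k][a] : number of examples of group a held by client k (n_{a,k})

    Returns r_hat[a] = sum_k (n_{a,k} / n_a) * r_hat_{a,k}, as in Eq. 8.
    Assumes every group appears on at least one client.
    """
    risks = np.asarray(client_risks, dtype=float)    # shape (K, A)
    counts = np.asarray(client_counts, dtype=float)  # shape (K, A)
    n_a = counts.sum(axis=0)                         # n_a = |S_a|
    weights = counts / n_a                           # n_{a,k} / n_a
    return np.nansum(weights * risks, axis=0)

# toy example: 2 clients, 2 groups; client 1 holds no samples of group 0
risks = [[0.40, 0.20], [np.nan, 0.30]]
counts = [[100, 50], [0, 150]]
print(aggregate_group_risks(risks, counts))  # -> group risks [0.4, 0.275]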

Therefore, the clients in coordination with the central server attempt to solve the optimization problem given by:

    min_{θ∈Θ} max_{μ∈Δ^{|𝒜|}} Σ_{a∈𝒜} μ_a · r̂_a(θ) = min_{θ∈Θ} max_{μ∈Δ^{|𝒜|}} Σ_{a∈𝒜} μ_a Σ_{k∈𝒦} (n_{a,k} / n_a) · r̂_{a,k}(θ).    (9)

Input: 𝒦: set of clients, T: total number of communication rounds, η_θ: model learning rate, η_μ: global adversary learning rate, S_{a,k}: set of examples for group a in client k, ∀a ∈ 𝒜 and ∀k ∈ 𝒦.

1:  Server initializes the model parameters θ^0 and the weighting coefficients μ^0 randomly.
2:  for t = 1 to T do
3:      Server computes the per-group weights from μ^{t-1}
4:      Server broadcasts θ^{t-1} and the weights to all clients
5:      for each client k ∈ 𝒦 in parallel do
6:          Client k performs one gradient descent step (with learning rate η_θ) on its weighted local empirical risk
7:          Client k obtains θ_k^t and sends θ_k^t and its local group risks {r̂_{a,k}(θ^{t-1})}_{a∈𝒜} to the server
8:      end for
9:      Server computes the new global model θ^t as a weighted average of the client models θ_k^t and aggregates the group risks r̂_a(θ^{t-1}) as in Eq. 8
10:     Server updates μ^t via a projected gradient ascent step (with learning rate η_μ) on the aggregated group risks
11:  end for

Outputs: final global model θ^T

Algorithm 1 Federated MiniMax (FedMinMax)

The objective in Eq. 9 can be interpreted as a zero-sum game between two players: the learner aims to minimize the objective by optimizing the model parameters θ, and the adversary seeks to maximize the objective by optimizing the weighting coefficients μ.

We use a non-stochastic variant of the stochastic-AFL algorithm introduced in [AFL]. Our version, provided in Algorithm 1, assumes that all clients are available to participate in each communication round t. In particular, in each round t, the clients receive the latest model parameters θ^{t-1}, perform one gradient descent step using all their available data, and then share the updated model parameters along with certain empirical risks with the server. The server (learner) then performs a weighted average of the client model parameters to obtain θ^t.

The server also updates the weighting coefficients μ using a projected gradient ascent step in order to guarantee that the weighting coefficient updates are consistent with the simplex constraint. We use the Euclidean projection algorithm proposed in [10.1145/1390156.1390191] to implement the projection operation onto the simplex.
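For concreteness, the following simulation sketch mirrors one such communication round; it reuses the brier_score and project_to_simplex helpers from the sketch in Section 3.1, and the per-client data layout, the n_k/n aggregation weights and the single full-batch local step are our assumptions rather than the authors' exact update rules.

import copy
import torch

def fedminmax_round(global_model, mu, clients, lr_model=0.1, lr_adv=0.1):
    # clients: list of dicts {"x": {a: X_a}, "y": {a: Y_a}, "n": n_k} with per-group tensors
    # reuses brier_score and project_to_simplex from the earlier centralized sketch
    n_total = sum(c["n"] for c in clients)
    n_groups = mu.numel()
    group_risks = torch.zeros(n_groups)   # accumulates n_{a,k} * r_{a,k}
    group_counts = torch.zeros(n_groups)  # accumulates n_{a,k}
    client_models = []

    for c in clients:
        local = copy.deepcopy(global_model)            # client receives theta^{t-1}
        opt = torch.optim.SGD(local.parameters(), lr=lr_model)
        opt.zero_grad()
        for a, x in c["x"].items():                    # one full-batch step over the local groups
            r_ak = brier_score(local(x), c["y"][a])    # local empirical group risk
            (mu[a] * r_ak).backward()
            group_risks[a] += len(x) * r_ak.detach()
            group_counts[a] += len(x)
        opt.step()
        client_models.append((c["n"], local))

    # server: aggregate the client models (n_k / n weights are an assumption)
    with torch.no_grad():
        for name, p in global_model.named_parameters():
            p.copy_(sum(n * dict(m.named_parameters())[name] for n, m in client_models) / n_total)

    # server: projected gradient ascent on mu using the aggregated group risks (Eq. 8)
    r_hat = group_risks / group_counts.clamp(min=1.0)
    mu = project_to_simplex(mu + lr_adv * r_hat)
    return global_model, mu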

We can also show that our algorithm exhibits convergence guarantees.

Lemma 2.

Consider our federated learning setting (Figure 1, right), where each entity k has access to a local dataset S_k, and a centralized machine learning setting (Figure 1, left), where there is a single entity that has access to the single dataset S = ∪_{k∈𝒦} S_k (i.e. this single entity in the centralized setting has access to the data of the various clients in the distributed setting).

Then, Algorithm 1 and Algorithm 2 (in supplementary material, Appendix B) lead to the same global model provided that the learning rates and model initialization are identical.

This lemma shows that our federated learning algorithm inherits any convergence guarantees of existing centralized machine learning algorithms. In particular, assuming that one can model the single gradient descent step using an α-approximate Bayesian oracle [chen2018], we can show that the centralized algorithm converges and hence our FedMinMax algorithm converges too (under mild conditions on the loss function, hypothesis class, and learning rates). See Theorem 7 in [chen2018].

6 Experimental Results

Setting Method Worst Group Risk Best Group Risk
ESG AFL 0.485±0.0 0.216±0.001
FedAvg 0.487±0.0 0.214±0.002
q-FedAvg (q=0.2) 0.479±0.002 0.22±0.002
q-FedAvg (q=5.0) 0.478±0.002 0.223±0.004
FedMinMax (ours) 0.451±0.0 0.31±0.001
SSG AFL 0.451±0.0 0.31±0.001
FedAvg 0.483±0.002 0.219±0.001
q-FedAvg (q=0.2) 0.476±0.001 0.221±0.002
q-FedAvg (q=5.0) 0.468±0.005 0.274±0.004
FedMinMax (ours) 0.451±0.0 0.309±0.003
Centralized Minmax Baseline 0.451±0.0 0.308±0.001
Table 1: Testing Brier score risks for FedAvg, AFL, q-FedAvg and FedMinMax across different federated learning scenarios on the synthetic dataset for binary classification involving two sensitive groups. The PSG scenario is not included because for |𝒜| = 2 it is equivalent to SSG.

To validate the benefits of the proposed FedMinMax approach, we consider three federated learning scenarios: (1) Equal access to Sensitive Groups (ESG), where every client has access to all sensitive groups but does not have enough data to train a model individually, to examine a case where group and client fairness are not equivalent; (2) Partial access to Sensitive Groups (PSG), where each client has access to a subset of the available group memberships, to compare the performances when there is low or no local representation of particular groups; and (3) access to a Single Sensitive Group (SSG), where each client holds data from one sensitive group, showcasing the equivalence of the group and client fairness objectives derived from Lemma 1.

In all experiments we consider a federation consisting of 40 clients and a single server that orchestrates the training procedure per Algorithm 1. We benchmark our approach against AFL [AFL], q-FedAvg [Li2020Fair], and FedAvg [DBLP:journals/corr/McMahanMRA16]. Further, as a baseline, we also run FedMinMax with one client (akin to centralized ML) to confirm Lemma 2.

We generated a synthetic dataset for binary classification involving two sensitive groups (i.e. |𝒜| = 2); details are available in Appendix B. We provide the performance on the ESG and SSG scenarios in Table 1 (the PSG scenario is valid only for datasets where |𝒜| > 2, else it is equivalent to the SSG setting). FedMinMax performs similarly to the Centralized Minmax Baseline for both sensitive groups in all setups, as proved in Lemma 2. AFL yields the same solution as FedMinMax and the Centralized Minmax Baseline only in SSG, where group fairness is implied by client fairness. Both FedAvg and q-FedAvg fail to achieve minimax group fairness.

Setting Method T-shirt Pullover Shirt
ESG AFL 0.239±0.003 0.262±0.001 0.494±0.004
FedAvg 0.243±0.003 0.262±0.001 0.492±0.003
FedMinMax (ours) 0.261±0.006 0.256±0.027 0.307±0.01
SSG AFL 0.267±0.009 0.236±0.013 0.307±0.003
FedAvg 0.227±0.003 0.236±0.004 0.463±0.003
FedMinMax (ours) 0.269±0.012 0.238±0.017 0.309±0.011
PSG AFL 0.244±0.007 0.257±0.066 0.425±0.019
FedAvg 0.229±0.008 0.236±0.004 0.464±0.011
FedMinMax (ours) 0.263±0.013 0.228±0.011 0.31±0.008
Centralized Minmax Baseline 0.259±0.01 0.239±0.051 0.311±0.006
Table 2: Testing Brier score risks for the top-3 worst groups on the FashionMNIST dataset. The risks for all the available classes are provided in Table 4, in Appendix B.

For FashionMNIST we use all ten clothing target categories, which we assign both as targets and as sensitive groups (i.e. |𝒜| = 10). In Table 2 we examine the performance on the three worst categories: T-shirt, Pullover, and Shirt (the risks for all classes are available in Table 4, in Appendix B). In all settings, FedMinMax improves the risk of the worst group, Shirt, more than it degrades the performance on the T-shirt class, all while maintaining a risk on Pullover similar to FedAvg. AFL performs akin to FedMinMax in the SSG setup but not in the other settings, as expected from Lemma 1. Note that FedMinMax has the best worst-group performance in all settings, as expected. More details about the datasets, models, experiments and results are provided in Appendix B.

7 Conclusion

In this work, we formulate the (demographic) group fairness problem in federated learning setups where different participating entities may only have access to a subset of the population groups during the training phase (but not necessarily the testing phase), with the goal of obtaining minmax fairness performance guarantees akin to those in centralized machine learning settings.

We formally show how our fairness definition differs from those adopted in existing fair federated learning works, offering conditions under which conventional client-level fairness is equivalent to group-level fairness. We also provide an optimization algorithm, FedMinMax, to solve the minmax group fairness problem in federated setups, which exhibits minmax guarantees akin to those of minmax group fair centralized machine learning algorithms.

We empirically confirm that our method outperforms existing federated learning methods in terms of group fairness in various learning settings and validate the conditions under which the competing approaches yield the same solution as our objective.

References

Appendix A: Proofs

Lemma 1.

Let P ∈ [0, 1]^{|𝒜|×|𝒦|} denote a matrix whose entry in row a and column k is P_{A|K}(a|k) (i.e. the prior of group a in client k). Then, given a solution h* to the minimax problem across clients

    min_{h∈ℋ} max_{λ∈Δ^{|𝒦|}} Σ_{k∈𝒦} λ_k · r_k(h),    (10)

h* is also a solution to the following constrained minimax problem across sensitive groups

    min_{h∈ℋ} max_{μ∈Λ} Σ_{a∈𝒜} μ_a · r_a(h),    (11)

where the weighting vector μ is constrained to belong to the simplex subset Λ = {Pλ : λ ∈ Δ^{|𝒦|}} ⊆ Δ^{|𝒜|}. In particular, if Λ contains a group minimax weighting vector (i.e. a maximizer of Eq. 5 at the group minimax model), then the minimax fairness solution across clients is also a minimax fairness solution across demographic groups.

Proof.

The objective for optimizing the global model for the worst mixture of client distributions is:

    min_{h∈ℋ} max_{λ∈Δ^{|𝒦|}} Σ_{k∈𝒦} λ_k · r_k(h),    (12)

given that r_k(h) = Σ_{a∈𝒜} P_{A|K}(a|k) · r_a(h). Since P_{A|K}(a|k) = P_{a,k}, with P_{a,k} being the prior of group a for client k, and r_a(h) is the risk under the distribution conditioned on the sensitive group a, Eq. (12) can be re-written as:

    min_{h∈ℋ} max_{λ∈Δ^{|𝒦|}} Σ_{a∈𝒜} ( Σ_{k∈𝒦} λ_k · P_{a,k} ) · r_a(h) = min_{h∈ℋ} max_{μ∈Λ} Σ_{a∈𝒜} μ_a · r_a(h),    (13)

where we defined μ_a = Σ_{k∈𝒦} λ_k · P_{a,k}, ∀a ∈ 𝒜; this creates the vector μ = Pλ. It holds that the set of possible vectors Λ = {Pλ : λ ∈ Δ^{|𝒦|}} satisfies Λ ⊆ Δ^{|𝒜|}, since μ_a ≥ 0 and Σ_{a∈𝒜} μ_a = Σ_{k∈𝒦} λ_k Σ_{a∈𝒜} P_{a,k} = Σ_{k∈𝒦} λ_k = 1, with λ ∈ Δ^{|𝒦|} and P_{a,k} ∈ [0, 1].

Then, from the equivalence in Equation 13 we have that, given any solution (h*, λ*) to

    min_{h∈ℋ} max_{λ∈Δ^{|𝒦|}} Σ_{k∈𝒦} λ_k · r_k(h),    (14)

then (h*, μ* = Pλ*) is a solution to

    min_{h∈ℋ} max_{μ∈Λ} Σ_{a∈𝒜} μ_a · r_a(h),    (15)

and

    max_{λ∈Δ^{|𝒦|}} Σ_{k∈𝒦} λ_k · r_k(h*) = max_{μ∈Λ} Σ_{a∈𝒜} μ_a · r_a(h*).    (16)

In particular, if the space defined by Λ contains any group minimax fair weights, meaning that the set Λ ∩ {μ : μ is a group minimax weighting vector} is not empty, then it follows that any h* (solution to Equation 15) is already minimax fair with respect to the groups 𝒜, and therefore the client-level minimax solution is also a minimax solution across sensitive groups. ∎

Lemma 2.

Consider our federated learning setting (Figure 1, right), where each entity k has access to a local dataset S_k, and a centralized machine learning setting (Figure 1, left), where there is a single entity that has access to the single dataset S = ∪_{k∈𝒦} S_k (i.e. this single entity in the centralized setting has access to the data of the various clients in the distributed setting).

Then, Algorithm 1 and Algorithm 2 (in supplementary material, Appendix B) lead to the same global model provided that the learning rates and model initialization are identical.

Proof.

We will show that FedMinMax (Algorithm 1) is equivalent to the centralized algorithm (Algorithm 2) under the following conditions:

  1. the dataset on client k in FedMinMax is S_k and the dataset in centralized MinMax is S = ∪_{k∈𝒦} S_k, and

  2. the model initialization θ^0, the number of adversarial rounds T (in the federated Algorithm 1, we also refer to the adversarial rounds as communication rounds), the learning rate for the adversary η_μ, and the learning rate for the learner η_θ are identical for both algorithms.

This can then be immediately done by showing that the steps in lines 3-7 of Algorithm 1 are entirely equivalent to the step in line 3 of Algorithm 2. In particular, note that we can write:

    ∇_θ Σ_{a∈𝒜} μ_a · r̂_a(θ) = Σ_{k∈𝒦} ∇_θ Σ_{a∈𝒜} μ_a · (n_{a,k} / n_a) · r̂_{a,k}(θ),    (17)

because

    r̂_a(θ) = Σ_{k∈𝒦} (n_{a,k} / n_a) · r̂_{a,k}(θ).    (18)

Therefore, the following model update:

    θ^t = θ^{t-1} - η_θ Σ_{k∈𝒦} ∇_θ Σ_{a∈𝒜} μ_a^{t-1} · (n_{a,k} / n_a) · r̂_{a,k}(θ^{t-1}),    (19)

associated with the aggregation of the client updates obtained in line 7, at round t of Algorithm 1, is entirely equivalent to the model update

    θ^t = θ^{t-1} - η_θ ∇_θ Σ_{a∈𝒜} μ_a^{t-1} · r̂_a(θ^{t-1}),    (20)

associated with the step in line 3 at round t of Algorithm 2, provided that μ^{t-1} is the same for both algorithms.

It follows therefore by induction that, provided the initialization and learning rates are identical in both cases, the algorithms lead to the same model. Also, from Eq. 18, we have that the projected gradient ascent step in line 4 of Algorithm 2 is equivalent to the step in line 10 of Algorithm 1. ∎

Appendix B: Experiments

B.1 Experimental Details

Datasets.

For the experiments we use the following datasets:

  • Synthetic. Let 𝒩 and ℬ denote the normal and Bernoulli distributions, respectively. The data were generated assuming the group variable A ∼ ℬ(p), the input features variable X drawn from a group-conditional normal distribution, and the target variable Y generated according to the optimal hypothesis h*_a for group a. The parameters are selected such that, as illustrated in Figure 2, left side, the minimax optimal hypothesis is equal to the optimal model of one of the groups.

  • FashionMNIST. FashionMNIST is a grayscale image dataset which includes 60,000 training images and 10,000 testing images. The images consist of 28 × 28 pixels and are classified into 10 clothing categories. In our experiments we consider each of the target categories to be a sensitive group too.

Figure 2: Illustration of the optimal hypotheses and the group-conditional distributions for the generated synthetic dataset.

Experimental Setting and Model Architectures.

For all the datasets, we use three-fold cross validation to compute the means and standard deviations of the accuracies and risks, with different random initializations. Also note that each client's data is unique, meaning that there are no duplicated examples across clients. We assume that every client is available to participate at each communication round for every method. For q-FedAvg we use q ∈ {0.2, 5.0}. The learning rate of the classifier for all methods is 0.1, and for the adversary in AFL and FedMinMax we also use 0.1. The number of local iterations for FedAvg and q-FedAvg is 15. For AFL and FedMinMax the batch size is equal to the number of examples per client, while for FedAvg and q-FedAvg it is equal to 100. For the synthetic dataset, we use an MLP architecture consisting of four hidden layers of size 512, and in the experiments for FashionMNIST we use a CNN architecture with two 2D convolutional layers with kernel size 3, stride 1 and padding 1. Each convolutional layer is followed by a max-pooling layer with kernel size 2, stride 2, dilation 1 and padding 0. All models were trained using the Brier score loss function. A summary of the experimental setup is provided in Table 3.

Setting Method η Batch Size Loss Hypothesis Type Epochs η_adv
ESG,SSG AFL 0.1 n_k Brier Score MLP - 0.1
ESG,SSG FedAvg 0.1 100 Brier Score MLP 15 -
ESG,SSG q-FedAvg 0.1 100 Brier Score MLP 15 -
ESG,SSG FedMinMax (ours) 0.1 n_k Brier Score MLP - 0.1
ESG,SSG Centralized Minmax 0.1 n Brier Score MLP - 0.1
ESG,SSG,PSG AFL 0.1 n_k Brier Score CNN - 0.1
ESG,SSG,PSG FedAvg 0.1 100 Brier Score CNN 15 -
ESG,SSG,PSG FedMinMax (ours) 0.1 n_k Brier Score CNN - 0.1
ESG,SSG,PSG Centralized Minmax 0.1 n Brier Score CNN - 0.1
Table 3: Summary of parameters used in the training process for all experiments. Epochs refers to the local iterations performed at each client, n_k is the number of local data examples in client k, η is the model's learning rate, and η_adv is the adversary's learning rate.
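For reference, a minimal PyTorch sketch of models and loss consistent with the description above is given below; the activation function, the CNN channel widths and the linear classification head are not specified in the text and are therefore assumptions of ours.

import torch
import torch.nn as nn

class BrierScoreLoss(nn.Module):
    # mean squared difference between predicted class probabilities and the one-hot target
    def forward(self, logits, targets):
        probs = torch.softmax(logits, dim=1)
        onehot = nn.functional.one_hot(targets, probs.shape[1]).float()
        return ((probs - onehot) ** 2).sum(dim=1).mean()

def make_mlp(in_dim, n_classes, hidden=512):
    # four hidden layers of size 512, as described for the synthetic dataset
    layers, d = [], in_dim
    for _ in range(4):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    return nn.Sequential(*layers, nn.Linear(d, n_classes))

def make_cnn(n_classes=10):
    # two conv layers (kernel 3, stride 1, padding 1), each followed by a
    # max-pooling layer (kernel 2, stride 2), for 28x28 grayscale FashionMNIST images
    return nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, n_classes),
    )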

Software & Hardware.

The proposed algorithms and experiments are written in Python, leveraging PyTorch [NEURIPS2019_9015]. The experiments were realised using 1 NVIDIA Tesla V100 GPU.

B.2 Additional Results

Experiments on FashionMNIST.

In the Partial access to Sensitive Groups (PSG) setting, we distribute the data across 40 participants, 20 of which have access to groups T-shirt, Trouser, Pullover, Dress and Coat, and the other 20 have access to Sandal, Shirt, Sneaker, Bag and Ankle Boot. The data distribution is unbalanced across clients since the size of the local datasets differs among clients (i.e. the n_k are not all equal). In the Equal access to Sensitive Groups (ESG) setting, the 10 classes are equally distributed across the clients, creating a scenario where each client has access to the same amount of data examples and the same groups. Finally, in the Single access to Sensitive Groups (SSG) setting, every client owns only one sensitive group and each group is distributed to only 4 clients. Again, the local dataset sizes n_k differ, creating an unbalanced data distribution. A sketch of these partitions is given below.
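The following sketch illustrates one way the three client partitions just described could be generated for FashionMNIST; the roughly even per-client split within each setting is our simplifying assumption (the exact unbalanced sizes used in the PSG and SSG settings are not specified here), and the function name is ours.

import numpy as np

def partition_fashionmnist(labels, setting, n_clients=40, seed=0):
    # Assign example indices to clients for the ESG / PSG / SSG settings.
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    if setting == "ESG":
        # every class spread evenly over all clients
        for c in range(10):
            idx = rng.permutation(np.where(labels == c)[0])
            for k, chunk in enumerate(np.array_split(idx, n_clients)):
                clients[k].extend(chunk)
    elif setting == "PSG":
        # first 20 clients share classes 0-4 (T-shirt..Coat),
        # the other 20 share classes 5-9 (Sandal..Ankle boot)
        for c in range(10):
            idx = rng.permutation(np.where(labels == c)[0])
            ks = range(0, 20) if c < 5 else range(20, 40)
            for k, chunk in zip(ks, np.array_split(idx, 20)):
                clients[k].extend(chunk)
    elif setting == "SSG":
        # each client holds a single class; each class goes to 4 clients
        for c in range(10):
            idx = rng.permutation(np.where(labels == c)[0])
            for j, chunk in enumerate(np.array_split(idx, 4)):
                clients[4 * c + j].extend(chunk)
    return [np.array(ix) for ix in clients]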

Figure 3: Worst group, best group and average accuracies for AFL, FedAvg and FedMinMax across different federated learning scenarios on the FashionMNIST dataset.

We show a comparison of the worst group (Shirt), the best group (Trouser) and the average accuracies in Figure 3. FedMinMax enjoys a similar accuracy to the Centralized Minmax Baseline, as expected. AFL has similar performance to FedMinMax in SSG, where across-client fairness implies group fairness, in line with Lemma 1, while FedAvg has similar worst, best and average accuracy across the federated settings. An extended version of the group risks is shown in Table 4.

Setting Method T-shirt Trouser Pullover Dress Coat Sandal Shirt Sneaker Bag Ankle boot
ESG AFL 0.239±0.003 0.046±0.0 0.262±0.001 0.159±0.001 0.252±0.004 0.06±0.0 0.494±0.004 0.067±0.001 0.049±0.0 0.07±0.001
FedAvg 0.243±0.003 0.046±0.0 0.262±0.001 0.158±0.003 0.253±0.002 0.061±0.0 0.492±0.003 0.068±0.0 0.049±0.0 0.069±0.0
FedMinMax (ours) 0.261±0.006 0.191±0.016 0.256±0.027 0.217±0.013 0.223±0.031 0.207±0.027 0.307±0.01 0.172±0.016 0.193±0.021 0.156±0.011
SSG AFL 0.267±0.009 0.194±0.023 0.236±0.013 0.226±0.012 0.262±0.012 0.201±0.026 0.307±0.003 0.178±0.033 0.205±0.025 0.162±0.021
FedAvg 0.227±0.003 0.039±0.001 0.236±0.004 0.143±0.003 0.232±0.003 0.051±0.001 0.463±0.003 0.067±0.0 0.041±0.0 0.063±0.001
FedMinMax (ours) 0.269±0.012 0.2±0.026 0.238±0.017 0.231±0.013 0.252±0.034 0.2±0.024 0.309±0.011 0.177±0.03 0.205±0.032 0.169±0.013
PSG AFL 0.244±0.007 0.032±0.001 0.257±0.066 0.122±0.006 0.209±0.098 0.045±0.002 0.425±0.019 0.059±0.001 0.041±0.001 0.062±0.001
FedAvg 0.229±0.008 0.039±0.0 0.236±0.004 0.142±0.002 0.232±0.003 0.052±0.001 0.464±0.011 0.067±0.001 0.042±0.001 0.063±0.001
FedMinMax (ours) 0.263±0.013 0.177±0.026 0.228±0.011 0.21±0.019 0.238±0.025 0.182±0.03 0.31±0.008 0.16±0.027 0.184±0.031 0.154±0.018
Centralized Minmax Baseline 0.259±0.01 0.173±0.015 0.239±0.051 0.213±0.008 0.24±0.063 0.182±0.024 0.311±0.006 0.168±0.018 0.18±0.013 0.151±0.012
Table 4: Brier score risks for FedAvg, AFL and FedMinMax across different federated learning settings on the FashionMNIST dataset. Extension of Table 2.

B.3 Centralized MinMax Algorithm

We provide the centralized version of FedMinMax in Algorithm 2.

Input: T: total number of adversarial rounds, η_θ: model learning rate, η_μ: adversary learning rate, S_a: set of examples for group a, ∀a ∈ 𝒜.

1:  Server initializes the model parameters θ^0 and the weighting coefficients μ^0 randomly.
2:  for t = 1 to T do
3:     Server performs one gradient descent step (with learning rate η_θ) on the μ^{t-1}-weighted empirical group risks to obtain θ^t
4:     Server updates μ^t via a projected gradient ascent step (with learning rate η_μ) on the empirical group risks
5:  end for

Outputs: final model θ^T

Algorithm 2 Centralized MinMax Baseline