Incentivizing Data Contribution in Cross-Silo Federated Learning

03/08/2022
by Chao Huang, et al.

In cross-silo federated learning, clients (e.g., organizations) collectively train a global model using their local data. However, due to business competition and privacy concerns, the clients tend to free-ride (i.e., not contribute enough data points) during training. To address this issue, we propose a framework where the profit/benefit obtained from the global model can be properly allocated to clients to incentivize data contribution. More specifically, we study the game-theoretical interactions among the clients under three widely used profit allocation mechanisms, i.e., linearly proportional (LP), leave-one-out (LOO), and Shapley value (SV). We consider two types of equilibrium structures: symmetric and asymmetric equilibria. We show that the three mechanisms admit an identical symmetric equilibrium structure. However, at asymmetric equilibrium, LP outperforms SV and LOO in incentivizing the clients' average data contribution. We further discuss the impact of various parameters on the clients' free-riding behaviors.



I Introduction

Federated learning (FL) is a decentralized machine learning paradigm where multiple clients collaboratively train a global model under the orchestration of a central server [15]. Clients can keep their data local and only upload model updates (e.g., represented by parameters or gradients) trained on their data. There are two types of FL: cross-device FL and cross-silo FL [15]. In cross-device FL, clients are usually small distributed entities (e.g., smartphones, wearables, and edge devices), each with a relatively small amount of local data. Hence, for cross-device FL to succeed, many edge devices must participate in the training process. In cross-silo FL, however, clients are companies or organizations (e.g., hospitals and banks). The number of participants is small, and each client is expected to participate in the entire training process.

The focus of this paper is cross-silo FL. The practical industrial examples of cross-silo FL abound. In the medical and health care domain, Owkin collaborates with pharmaceutical companies to train models for drug discovery based on highly sensitive screening datasets [23]. FeatureCloud utilizes FL to perform biomedical data analysis (e.g., COVID) [6]. In the financial area, WeBank and Swiss Re cooperatively conduct data analysis and provide financial and insurance services [26].

The success of a global model requires the clients to contribute sufficient data for local model training. In cross-silo FL, however, the organizations can be not only collaborators but also business competitors. Clients may free-ride (i.e., not use enough data for training), which jeopardizes the performance of the global model. Further, since the training data in cross-silo FL can be highly sensitive (e.g., medical records and financial data), the clients may have substantial privacy concerns. Even if FL does not directly expose clients' local data, recent research has shown that FL can still be vulnerable to various attacks when communicating model parameters [24, 29]. Such privacy leakage also prevents clients from using enough data for training.

In fact, recent work has studied the free rider phenomenon in FL [7, 12]. For example, the work in [7] establishes an approach to inspect the clients’ distribution for the detection of free-rider attacks. Free riding is detrimental to the global model accuracy, leading to the failure of cross-silo FL. Further, the detected free riding behavior can damage the willingness of other participating clients to cooperate in the future. We aim to analyze and address the free-riding issue in this paper.

To this end, we propose a framework where the profit/benefit obtained from the global model can be properly allocated to clients to incentivize data contribution. More specifically, we consider that after the cross-silo FL terminates, the central server (or a trusted third party) can generate some profits/benefits from the global model (e.g., via selling it in a model trading market or via contracts among clients). The profit amount is positively correlated to the global model accuracy [14]. Then, the central server properly allocates the profits among clients using predefined mechanisms. It is important to note that the central server does not need to decide the profit allocation mechanism. In practice, the organizations themselves can negotiate a profit allocation mechanism and enforce its implementation via a contract [30].

A reasonable profit allocation mechanism should reward those who contribute more data in training to incentivize data contribution and eliminate free riders. In this paper, we consider three widely adopted profit allocation mechanisms, i.e., linearly proportional (LP) where each client’s contribution is measured proportionally to its data size [31]; leave-one-out (LOO) where the contribution is calculated by the difference of the global model accuracy between with and without the client’s participation [1]; and Shapley value (SV) where the contribution is measured by the average marginal influence on the global model accuracy [5].

This paper aims to answer the following key questions:

Key Question 1: Given a mechanism, how will the clients decide their data sizes used for training when they are business competitors and/or have privacy concerns?

Key Question 2: Which mechanism has the best performance in terms of incentivizing clients’ data contribution (i.e., addressing the free-rider issue) in cross-silo FL?

To answer Question 1, we formulate the interactions between the cross-silo FL clients as a game and analyze its equilibrium. To answer Question 2, we analyze and compare the equilibria under the three mechanisms. The clients' equilibrium data contribution under different mechanisms sheds light on which mechanism incentivizes the most data contribution and hence best addresses the free rider issue.

The key contributions of this paper are listed as follows:

  • Incentivizing Data Contribution via Profit Allocation in Cross-Silo FL: To the best of our knowledge, this is the first paper that systematically studies how to incentivize clients’ data contribution via appropriate profit allocation in cross-silo FL. To this end, we formulate the client interactions as a novel data contribution game and analyze its equilibrium.

  • Novel Equilibrium Analysis: We analyze both the symmetric and asymmetric equilibria of the game. We derive closed-form solutions to symmetric equilibrium. For the asymmetric case, the problem is a challenging non-convex program. We address this issue by proposing a distributed algorithm to compute the equilibrium.

  • Performance Comparison: We compare the performance of three widely used mechanisms, i.e., LP, LOO, and SV. We show that the three mechanisms achieve the same symmetric equilibrium structure. However, at asymmetric equilibrium, LP outperforms both SV and LOO in encouraging data contribution and eliminating free riders.

  • Practical Insights: We derive useful insights that are consistent across both the theoretical analysis and the numerical results. We show that the equilibrium data contribution decreases with the number of clients and with the privacy sensitivity (i.e., the privacy cost per data sample). Moreover, we show that the clients' data contribution increases with their data capacity at equilibrium.

The remainder of this paper is organized as follows. We review related work in Section II. We introduce the system model in Section III. We analyze the game and propose a distributed algorithm to compute the equilibrium in Section IV. We show numerical results in Section V. We discuss challenges and opportunities in Section VI and conclude in Section VII.

II Related Work

Cross-Silo FL While most existing studies focus on cross-device FL, some recent works analyze cross-silo FL. The work in [21] studies the topology design to maximize the number of communication rounds per unit time. The work in [9] combines additively homomorphic secure summation protocols with differential privacy to preserve strict privacy. The study in [13] proposes a heuristic approach to address the non-i.i.d. issue. The study in [17] devises an algorithm, FedKT, that enables few-shot cross-silo FL. However, none of these works analyzes how to address the free-rider issue.

Free Rider Issue There are some existing studies on identifying the free-rider issue in FL [20, 12, 7]. For example, the work in [7] establishes an approach to inspect the clients' distributions for detecting free-rider attacks. However, these papers focus on identifying the free rider phenomenon instead of analyzing its motivation or addressing the issue effectively. The work in [32] proposes a repeated game solution to address this issue. However, their solution requires the clients to participate in a game with infinitely many phases, which is unlikely in practice. Our work builds upon free rider detection and seeks to mitigate the issue within one phase.

Profit Allocation Profit allocation in FL pertains to compensating the clients for contributing their quality data for model training. To achieve this, the central server needs to effectively and fairly evaluate the clients’ contribution. Existing studies mainly focus on three types of contribution evaluation/profit allocation mechanisms: linear proportional [31], leave-one-out [30, 1], and Shapley-value-based methods [25, 5, 8]. However, these papers focus on deriving efficient and fair algorithms for contribution valuation/profit allocation, rather than analyzing their impact on the clients’ free rider behaviors.

In summary, this is the first paper to study how to address the free rider issue in cross-silo FL via proper profit allocation.

III System Model

We first introduce a typical cross-silo federated learning process in Section III-A. Next, we present the model for the clients’ decisions and payoff functions in Section III-B.

III-A A Cross-Silo Federated Learning Process

We consider a set of clients (e.g., hospitals) who aim to collectively train a global model. Each client owns a local data set which consists of data points. In this paper, we assume that the data are i.i.d. across all clients [32, 4]. This is reasonable in cases where different hospitals train a diagnosis model for the same diseases. Considering i.i.d. data also serves as a starting point and enables closed-form insights that may extend beyond the i.i.d. case. We leave the study of non-i.i.d. data for future work.

Each client can strategically choose a subset of local data for model training, and we use to denote the size of the chosen data set. Let denote the -th data sample in . In cross-silo federated learning, the clients use their chosen local data to train a global model represented by a weight vector . We define:

  • : the loss function for under .

  • : the expected local loss function of client averaged over for all .

  • : the global loss function with the following form:

    (1)

The clients seek to derive the optimal weights of the global model that minimize the global loss function in (1):

(2)

To derive , the cross-silo FL proceeds in multiple rounds. In this paper, we consider that the server adopts the widely used FedAvg algorithm to find [24].

In each round , the clients perform the following steps:

  1. The clients download the global model generated from the previous training round.

  2. The clients execute local training with over the chosen data sets via mini-batch stochastic gradient descent (SGD) [16].

  3. The clients derive the updated local models and send them to the server for global aggregation:

    (3)
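The aggregation step (3) weights each client's local model by its used data size. A minimal sketch (the vector dimension and client count are arbitrary; the model vectors here are placeholders):

```python
import numpy as np

def fedavg_aggregate(local_models, data_sizes):
    """Aggregate local models into a global model, weighting each client
    by its used data size, as in the FedAvg aggregation step (3)."""
    weights = np.asarray(data_sizes, dtype=float)
    weights /= weights.sum()                 # normalize: x_i / sum_j x_j
    return weights @ np.stack(local_models)  # weighted average of model vectors
```

For example, two clients with local models [1, 1] and [3, 3] and data sizes 1 and 3 yield the global model [2.5, 2.5].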

Assumptions We assume that the server knows each client’s used data size (i.e., the weight in the aggregation in Eq. (3)). This is a widely adopted assumption for FedAvg [3, 18, 24]. Further, we assume that clients will truthfully report their local models to the server, as untruthful reporting can be detected using the trusted execution environments proposed in [33]. Moreover, we assume that each client will use the same data size throughout the entire training. Note that the truthful reporting of model updates and the faithful training using the chosen data size can be specified and enforced by a contract signed by clients.

III-B Clients' Decision and Payoff

In this subsection, we define each client’s strategy and payoff function.

III-B1 Client Data Contribution Strategy

Each client chooses how many data samples to use in the local training, and the global model accuracy depends on all clients' data contributions. (Since data are i.i.d., it suffices to consider client 's chosen data size when evaluating its impact on the global model.) Let be a discrete variable denoting client 's data contribution level.

Let be the number of training rounds and be the global model parameters after rounds of training. Then, the global model accuracy loss can be represented by , where and are the global loss values under and , respectively. According to well-known convergence results in [16, 32], the expected global model accuracy loss is bounded by:

(4)

where is the total batch size that all the clients use for training, i.e., . Notice that the loss upper bound decreases in . As clients invest more data in the training, the global model achieves a higher accuracy.

We use to denote the global model accuracy loss, and it is approximated by the upper bound in (4) [32, 4]:

(5)

where .

III-B2 Allocated Profit for Data Contribution

As mentioned, the clients in cross-silo federated learning can be business competitors and have significant privacy concerns. Hence, the clients have a natural incentive to free-ride, i.e., the clients may be selfish and not contribute sufficient data to the training process. To alleviate this issue, we consider that the server in cross-silo FL can orchestrate the learning process and appropriately distribute the profit generated from the global model. Specifically, given the clients' data contribution, we assume that the server can generate a total profit (e.g., through selling the model at a model trading market) as follows [14]:

(6)

Here, is a constant capturing the base profit and is the penalty term. A larger model accuracy loss corresponds to a smaller amount of profit.
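Since the concrete functional forms in (4)-(6) are not reproduced here, the sketch below assumes an illustrative reciprocal accuracy loss (decreasing in the total contributed data, as the bound in (4) suggests) and the base-minus-penalty profit structure of (6); the constants `kappa`, `base`, and `penalty` are placeholders, not values from the paper:

```python
def accuracy_loss(sizes, kappa=100.0):
    """Approximate global accuracy loss: decreasing in the total data used,
    mirroring the loss upper bound in (4). The reciprocal form is an assumption."""
    return kappa / sum(sizes)

def total_profit(sizes, base=500.0, penalty=10.0):
    """Total profit (Eq. (6)): a base profit minus a penalty growing with the
    accuracy loss, so more contributed data yields a higher profit."""
    return base - penalty * accuracy_loss(sizes)
```

Doubling every client's contribution halves the modeled loss and raises the total profit, matching the monotonicity discussed above.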

Let the mapping denote a general profit allocation mechanism, where is the contribution index of client . Each client obtains a proportion of the total profit calculated by . The clients compete with each other in terms of obtaining the total profit.

III-B3 Privacy Cost

The clients in cross-silo federated learning incur privacy costs. For example, hospitals can be highly sensitive to their patients' personal data and hence are unwilling to contribute data for training. Notice that even if federated learning does not directly expose the detailed contents of the training data (only model updates are communicated), clients can still be vulnerable to various attacks such as the inference attack [10]. In such cases, the clients' privacy can be compromised, which leads to privacy costs.

In this paper, we consider that each client incurs a linear privacy cost [19]:

(7)

where represents the privacy sensitivity of the clients.

III-B4 Computation and Communication Costs

The local model training process consumes computational resources. The CPU energy consumption depends on various factors, such as the client's computing chip architecture, the number of CPU cycles required to perform the local training, and the CPU processing speed (in cycles per second) [28]. Since the clients in cross-silo federated learning (e.g., companies or organizations) usually have strong computational resources, we consider that the computational cost for each client is a constant .

In a cross-silo federated learning process, clients upload their local model updates to the server for aggregation, and then download the updated global model for the next training round [2]. Since the cross-silo FL clients are expected to have reliable transmission networks (e.g., high-speed wired networks), we consider that the communication cost for each client is a constant .

III-B5 Clients' Payoff Function

We define each client ’s payoff function as

(8)

The term captures the profit allocated to client . The terms , , and represent client ’s privacy cost, computation cost, and communication cost, respectively.
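The payoff structure of (8) can be sketched as the allocated profit share minus the three cost terms. The sketch below uses the LP share and illustrative constants; all parameter values and the profit form are assumptions, not the paper's:

```python
def total_profit(sizes, base=500.0, penalty=10.0, kappa=100.0):
    """Illustrative total profit: base minus a penalty on the accuracy loss."""
    return base - penalty * kappa / sum(sizes)

def payoff(i, sizes, privacy_sens=0.5, comp_cost=5.0, comm_cost=5.0):
    """Client i's payoff (Eq. (8)): profit share (here the LP share) minus
    the linear privacy cost (7) and constant computation/communication costs."""
    share = sizes[i] / sum(sizes)            # LP contribution share
    return (share * total_profit(sizes)
            - privacy_sens * sizes[i]        # linear privacy cost
            - comp_cost - comm_cost)         # constant costs
```

Under these assumed parameters, a client contributing more than its peers receives a larger share of the profit but also bears a larger privacy cost, which is exactly the tradeoff the game analysis below formalizes.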

IV Data Contribution Game Analysis

In this section, we first formulate the data contribution game among the clients in Section IV-A. We then introduce various profit allocation mechanisms and analyze their equilibria in Section IV-B. We finally present a distributed algorithm to compute the equilibrium strategy in Section IV-C.

IV-A Game Formulation

Given any profit allocation mechanism , each client in cross-silo federated learning strategically chooses its data contribution level to maximize its own payoff in (8). Since each client’s data contribution affects the global model accuracy and hence the total profit distributed to other clients, the clients interact in a game-theoretical fashion. We model the interactions between the clients as a novel data contribution game as follows:

Game 1.

(Data Contribution Game)

  • Players: the set of clients.

  • Strategies: each client decides the data contribution level for local training.

  • Payoffs: each client seeks to maximize its own payoff function defined in (8).

In Game 1, given others' decisions , each client solves the problem below, which determines its best response.

Problem 1.

(Client ’s Best Response Problem)

(9)
var.

We aim to derive the Nash equilibrium of Game 1, which is defined as follows.

Definition 1.

(Nash Equilibrium) A strategy profile constitutes a Nash equilibrium of Game 1, if for all and for all

(10)

At a Nash equilibrium, each client’s strategy is a best response to the strategies played by the other clients, i.e., the strategy profile is the fixed point of all clients’ best response choices. Notice that a client ’s strategy choice is the optimal solution of Problem 1, and it is a function of other clients’ strategies .

A client can be characterized by many different parameters. However, since the focus of this paper is to analyze the impact of various profit allocation mechanisms on clients’ data contribution strategy, we assume that clients are homogeneous in two dimensions. Specifically, we assume that (homogeneous data capacity) and (homogeneous privacy sensitivity). Even under this relatively simple setting, we will show that the equilibrium analysis is highly non-trivial due to the complex couplings between the clients’ decisions.

IV-B Equilibrium Analysis

In this subsection, we analyze the clients’ equilibrium strategies under three widely adopted profit allocation mechanisms in federated learning, i.e., linearly proportional (LP), leave-one-out (LOO), and the Shapley value (SV) mechanisms. Notice that different mechanisms would lead to different calculations of clients’ contribution indexes. We use to denote client ’s contribution index under mechanism .

Next, we introduce the three mechanisms as follows:

Linearly Proportional (LP) [31]: Each client’s contribution index is proportional to its data size:

(11)

Leave-One-Out (LOO) [1]: Each client’s contribution index is measured by the difference of the global model accuracy between with and without its participation. Hence, we have:

(12)

Shapley-Value (SV) [5]: The Shapley value, named in honor of Lloyd Shapley, is a solution concept in cooperative game theory. It assigns a unique distribution of the total profit via the average marginal contribution made by each client. More specifically, the clients' contribution indexes are calculated as follows:

(13)

where is given in (5).
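To make the three contribution indexes concrete, here is a hedged sketch. The `accuracy` function stands in for the global model accuracy as a function of the contributed data sizes; its saturating form (and the constant 100) are assumptions for illustration only:

```python
from itertools import combinations
from math import comb

def accuracy(sizes):
    """Assumed saturating accuracy curve: more total data -> higher accuracy."""
    total = sum(sizes)
    return total / (total + 100.0)

def lp_index(sizes, i):
    """LP (Eq. (11)): index proportional to client i's data size."""
    return sizes[i] / sum(sizes)

def loo_index(sizes, i):
    """LOO (Eq. (12)): accuracy difference with vs. without client i."""
    return accuracy(sizes) - accuracy(sizes[:i] + sizes[i + 1:])

def sv_index(sizes, i):
    """SV (Eq. (13)): client i's marginal contribution to accuracy,
    averaged over all coalitions of the other clients."""
    n = len(sizes)
    others = [j for j in range(n) if j != i]
    value = 0.0
    for k in range(n):                       # coalition sizes 0..n-1
        for coal in combinations(others, k):
            v_without = accuracy([sizes[j] for j in coal])
            v_with = accuracy([sizes[j] for j in coal] + [sizes[i]])
            value += (v_with - v_without) / (n * comb(n - 1, k))
    return value
```

At a symmetric profile all three rules give every client the same index, which is what drives the identical symmetric equilibrium structure shown in Theorem 1 below.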

For mathematical convenience, we start by analyzing the symmetric Nash equilibria of Game 1. A symmetric Nash equilibrium is a celebrated equilibrium concept in which all the players adopt the same strategy (e.g., in the prisoner's dilemma) [22]. We characterize the clients' symmetric equilibrium in Theorem 1.

Theorem 1.

(Proof in Appendix A) Assume that . Then, any symmetric Nash equilibrium under LP, LOO, and SV takes an identical form: where

(14)

Note that the equilibrium result in (14) is derived assuming that the data size takes a continuous value. One can round the result in (14) for practical applications.

We discuss the implications of Theorem 1 as follows:

  • Competition impact The equilibrium data contribution level decreases in the client number . As the cross-silo FL population becomes larger (i.e., competition intensifies), clients tend to use less data during training (i.e., free-riding). This is because the clients can still obtain a good amount of profit (due to more data contributed by the other, competing clients) while substantially reducing their privacy cost.

  • Privacy impact The equilibrium data contribution decreases in the privacy sensitivity . As clients are more sensitive to their data privacy for participating in cross-silo FL, they will contribute fewer data for model training to reduce the privacy costs.

  • The equilibrium data contribution (weakly) increases in the data capacity . Having more data corresponds to a greater flexibility for decision making. A client can choose to contribute more to earn more profits (when the privacy cost is not too large).

A somewhat counter-intuitive result from Theorem 1 is that all the mechanisms (i.e., LP, LOO, and SV) yield the same equilibrium structure. However, the conditions for the equilibrium to exist vary across the three mechanisms; see Appendix A for details. One may expect the three mechanisms to lead to different equilibrium behaviors, since they calculate the clients' contribution indexes differently (see Eqs. (11)-(13)). However, at a symmetric equilibrium where all the clients adopt the same strategy, the profit share under the different mechanisms is the same for each client (i.e., ). This gives the clients the same incentive and hence leads to the same equilibrium structure.

Following this observation, we are able to present a more general result for any fair mechanism used for profit allocation. We first define the fair mechanism as follows.

Definition 2.

A profit allocation mechanism is said to be fair if the same data contribution levels lead to the same profit share. That is, if , then we have

(15)
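Definition 2 can be spot-checked numerically: under a fair mechanism, clients with identical data contributions must receive identical profit shares. A minimal sketch (the LP index and the test profiles are illustrative assumptions):

```python
def lp_index(sizes, i):
    """LP contribution index: proportional to client i's data size."""
    return sizes[i] / sum(sizes)

def is_fair_at(index_fn, sizes, tol=1e-9):
    """Spot-check Definition 2 at one profile: any two clients with equal
    data contributions must receive equal profit shares."""
    idx = [index_fn(sizes, i) for i in range(len(sizes))]
    total = sum(idx)
    shares = [v / total for v in idx]
    equal_pairs = [(i, j)
                   for i in range(len(sizes))
                   for j in range(i + 1, len(sizes))
                   if sizes[i] == sizes[j]]
    return all(abs(shares[i] - shares[j]) < tol for i, j in equal_pairs)
```

A check at a single profile is of course only a necessary condition; the definition quantifies over all profiles.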

Based on the definition, we have the following result.

Theorem 2.

(Proof in Appendix B) At any symmetric Nash equilibrium admitted by a fair mechanism, the clients’ strategies must have the form as in (14).

In fact, a game may have multiple equilibria, and some of them may be asymmetric (i.e., different clients may choose different actions). Examples include the classical coordination game [22] and a more recent crowdsourcing game [11]. For completeness of our analysis, besides the symmetric equilibrium in Theorem 1, we further attempt to solve for the asymmetric equilibria of Game 1.

Unfortunately, when considering the asymmetric case, each client’s best response problem in Problem 1 becomes non-convex and hence is difficult to solve. This further complicates the derivation of an asymmetric equilibrium. To address this issue, we propose a distributed algorithm to compute the equilibrium in Section IV-C.

1:  Initialization Let the iteration index be . Each client starts with .
2:  repeat
3:     for each client  do
4:        Best response update: client updates its data size via solving:
5:     end for
6:     Update strategy profile: .
7:     Update iteration index: .
8:  until  converges.
Algorithm 1 Best Response Update

IV-C Distributed Algorithm Design

In this subsection, we design a distributed algorithm for the cross-silo FL clients to compute the equilibrium strategy of Game 1. Note that the algorithm is intended to find a Nash equilibrium that is potentially asymmetric. As will be shown in Section V, the algorithm converges to different asymmetric Nash equilibria under different mechanisms.

The proposed algorithm is given in Algorithm 1, where the clients iteratively update their data sizes until the algorithm converges. Let denote the iteration index. Each client starts with a minimum data size (Line 1). We do not allow the minimum data size to be zero for two reasons. First, the clients have to use a non-zero number of data points to train a local model. Second, a zero data size for all clients can lead to a singularity issue when computing the payoff functions. While the algorithm has not converged, each client updates its data size based on the best response dynamics in Line 4. Then, after all the clients update their data sizes, the algorithm proceeds to the next iteration . The algorithm terminates when the absolute difference between and for all is smaller than a predefined threshold.
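The iteration in Algorithm 1 can be sketched with a grid-search best response under the LP mechanism. The payoff form and every constant below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def payoff(i, sizes, base=500.0, penalty=10.0, kappa=100.0,
           privacy_sens=0.5, fixed_cost=10.0):
    """Illustrative payoff: LP profit share minus a linear privacy cost and
    a constant computation/communication cost."""
    total = sizes.sum()
    profit = base - penalty * kappa / total  # assumed total-profit form
    return sizes[i] / total * profit - privacy_sens * sizes[i] - fixed_cost

def best_response_dynamics(n=4, capacity=100.0, x_min=1.0,
                           step=1.0, tol=1e-6, max_iters=100):
    """Algorithm 1 sketch: starting from the minimum data size, each client
    in turn best-responds (grid search over its feasible data sizes) until
    the strategy profile stops changing."""
    grid = np.arange(x_min, capacity + step, step)  # feasible data sizes
    x = np.full(n, x_min)
    for _ in range(max_iters):
        x_prev = x.copy()
        for i in range(n):
            # Best response of client i, holding the others' sizes fixed.
            x[i] = max(grid, key=lambda v: payoff(
                i, np.concatenate([x[:i], [v], x[i + 1:]])))
        if np.abs(x - x_prev).max() < tol:
            break
    return x
```

With these assumed parameters the profit gain outweighs the privacy cost, so the dynamics converge to every client contributing its full capacity; raising `privacy_sens` instead pushes the profile toward free riding.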

V Experimental Results

In this section, we conduct numerical experiments to evaluate the performance of the three mechanisms, i.e., LP, LOO, and SV. More specifically, we compare the mechanisms through the lens of the clients' data contribution at equilibrium (calculated by Algorithm 1). As will be seen, the numerical results validate our theoretical analysis. Moreover, we show that LP outperforms both LOO and SV in encouraging data contribution at asymmetric equilibria. (Since we have fully characterized the symmetric equilibria under LP, LOO, and SV in Theorem 1, we refrain from presenting numerical results for symmetric equilibria due to space limitations.)

V-A Impact of Client Number

The client number crucially affects the clients' behaviors, as it determines the fierceness of the client competition. Since clients are companies or organizations, cross-silo FL usually operates at a small scale.

In this subsection, we study the impact of the client number on the clients' data contribution at equilibrium. As suggested by practical cross-silo FL experiments [27], we use . We choose from the set . Fig. 3(a) plots how the average data contribution (i.e., ) changes with . Fig. 3(b) plots the individual client contribution (represented by a bar) at under the three mechanisms.

In Fig. 3(a), we observe that the average data contribution decreases in the client number . As more clients participate in the cross-silo FL, some of them start to take advantage of other contributing clients by using only a few data points in training. More specifically, the contributing clients use enough data for training and achieve a satisfactory global model accuracy. The other clients, however, can incur small privacy costs by free-riding on the good model. Consequently, the average data contribution decreases in the client number . Note that this result is consistent with our theoretical analysis in Theorem 1, which we summarize as follows:

Observation 1.

The clients’ average data contribution at equilibrium decreases in the client number .

V-B Impact of Privacy Sensitivity

Privacy issue in cross-silo federated learning deserves special attention. For example, hospitals can be highly sensitive to their data (which can be patients’ medical records).

In this subsection, we study how the clients' average data contribution changes with the clients' privacy sensitivity . For the experimental setup, we use . We choose from the set where . Fig. 6(a) plots how the average data contribution depends on . Fig. 6(b) plots the individual client contribution at .

In Fig. 6(a), we observe that the average data contribution at equilibrium (weakly) decreases in the privacy sensitivity (e.g., SV). A larger privacy sensitivity corresponds to a larger privacy cost per unit of data. The clients tend to use a smaller amount of data to achieve a good tradeoff between the allocated profit and the privacy costs. This result is also consistent with our theoretical analysis shown in Theorem 1. We summarize the observation below:

Observation 2.

The clients’ average data contribution at equilibrium weakly decreases in the privacy sensitivity.

(a) Average contribution vs. .
(b) Individual contribution.
Fig. 3: Impact of client number on data contribution.
(a) Average contribution vs. .
(b) Individual contribution.
Fig. 6: Impact of privacy sensitivity on data contribution.
(a) Average contribution vs. .
(b) Individual contribution.
Fig. 9: Impact of data capacity on data contribution.

V-C Impact of Data Capacity

In this subsection, we study how data capacity affects the clients' behaviors in cross-silo federated learning. More specifically, we study how the clients' average data contribution at equilibrium depends on the data capacity . In the experiments, we set . We change from to with a step size of . Fig. 9(a) plots how the average contribution depends on . Fig. 9(b) plots the individual client contribution at .

In Fig. 9(a), we first observe that the average data contribution (weakly) increases in the data capacity (e.g., LP). As clients possess more data points, they have greater flexibility in determining how much data to use for local training. On the one hand, contributing more data leads to a higher global model accuracy and a larger profit share. On the other hand, using more data for training also yields a greater privacy cost. Hence, when the profit gain outweighs the privacy cost, the clients will contribute more data (e.g., LP and LOO). Otherwise, the clients do not increase the data size (e.g., SV). Note that this result also validates our analysis in Theorem 1, and is summarized as follows:

Observation 3.

The clients’ average data contribution at equilibrium weakly increases in the data capacity .

It should be noticed that the conclusions drawn from Observations 1 to 3 (w.r.t. asymmetric equilibrium) share consistent trends with our theoretical analysis in Theorem 1 (w.r.t. symmetric equilibrium). This implies that the insights are robust to different equilibrium cases and hence are likely to hold in practice.

V-D Comparison of LP, LOO, and SV

In this subsection, we compare the mechanism performance under LP, LOO, and SV.

From Figs. 3(a), 6(a), and 9(a), we consistently observe that the average data contribution under LP is larger than that under SV. Meanwhile, the average contribution under SV is larger than that under LOO. The main reason is that the marginal profit share per data contribution under LP is greater than that under SV and LOO. More specifically, we can see from Eq. (11) that a client's contribution index under LP is linearly proportional to its data size. For SV and LOO, however, the contribution index is sub-linear in the client's data size (e.g., see Eq. (12)). That is, given the same data contribution, LP generates the largest contribution index for a client. As a result, the clients are incentivized to contribute the largest number of data points under LP.

From Figs. (b), (b), and (b), we see that LP incentivizes most clients to contribute all of their data (e.g., the clients in Fig. (b)); SV encourages similar, moderate data contributions from all clients (e.g., in Fig. (b)); and LOO leads to the most free-riders, who contribute very few data points (i.e., the clients in Fig. (b)). As mentioned, LP incentivizes the most data contribution because it generates the largest contribution index. SV encourages similar data sizes since it averages over all clients’ marginal contributions. Interestingly, LOO admits a “winner-take-all” phenomenon, where one client contributes all of its data while the others tend to free-ride. Once a client contributes a large amount of data for training, it can harvest nearly all of the profit, which disincentivizes the other clients from investing data. These observations bear significant practical implications: to mitigate the free-rider issue in cross-silo FL, one can adopt the simple linearly proportional rule. We summarize the above key results as follows:
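The “winner-take-all” tendency of LOO can be illustrated with a toy concave accuracy model (again an assumption, not the form in Eq. (5)): when one client holds almost all the data, removing any small client barely moves the global accuracy, so the small clients' leave-one-out shares collapse toward zero.

```python
# Illustration of LOO's "winner-take-all" tendency under an assumed
# concave accuracy model a(m) = m / (m + 50) (hypothetical, not Eq. (5)).

def accuracy(m):
    return m / (m + 50.0)

# One dominant client and three token contributors.
sizes = [100, 1, 1, 1]
total = sum(sizes)

# Leave-one-out marginals: the accuracy drop when client i is removed.
marginals = [accuracy(total) - accuracy(total - x) for x in sizes]
shares = [m / sum(marginals) for m in marginals]

# The dominant client captures almost the entire profit, leaving the
# small clients little incentive to contribute more data.
```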

Observation 4.

LP outperforms SV and LOO in:

1) incentivizing the clients’ average data contribution;

2) reducing the number of free-riders.

VI Discussions and Future Work

Our paper takes the first step toward analyzing and addressing the free-rider issue in cross-silo FL from a game-theoretical perspective. However, many unaddressed challenges merit future study.

Non-i.i.d. data We assumed i.i.d. data in this paper, which is reasonable in certain cases. In other cases, however, one would expect the organizations to hold non-i.i.d. and diverse data. It would be important to characterize the impact of non-i.i.d. data on the free-rider issue in future work.

Optimal mechanism design As a first step, we studied three widely adopted profit allocation mechanisms and compared their performance in incentivizing clients’ data contribution. This motivates future work on optimal mechanism design, which can potentially better incentivize data contribution and eliminate free-riders in cross-silo FL.

Equilibrium selection We showed that the data contribution game has multiple equilibria. It is important to further improve the mechanism design (e.g., via signaling) so that clients follow the desirable equilibrium (e.g., the one where free-riding behavior is minimized).

Faithful FL We assumed that the clients faithfully execute local training using the decided data sizes, and that they truthfully report their model updates. In conjunction with studies on free-rider and data-poisoning detection, it is equally important to design game-theoretical strategy-proof mechanisms under which clients are best off faithfully training their local models and truthfully reporting the model parameters.

Various accuracy models We assumed the global model accuracy to take the form in (5). Nevertheless, we conjecture that our proposed algorithm and analysis can be extended to other accuracy models (e.g., [31]).

Implementation We assumed public knowledge, i.e., that the clients know the accuracy model in (5) and the profit function in (6). For practical implementation, the clients can learn an accuracy model using their own data, and can obtain information about the profit function by investigating the model trading market.
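As a sketch of this learning step, a client could fit a parametric accuracy curve to its own measurements. The functional form a(m) = m/(m + c), the observations, and the grid search below are all illustrative assumptions, not part of the paper:

```python
# Hypothetical example: fit the single parameter c of an assumed
# accuracy model a(m) = m / (m + c) to a client's own measurements
# of (training set size, validation accuracy).

observations = [(10, 0.17), (50, 0.50), (200, 0.80)]  # illustrative data

def fit_c(obs, candidates):
    # Least-squares grid search over candidate values of c.
    def sq_err(c):
        return sum((m / (m + c) - a) ** 2 for m, a in obs)
    return min(candidates, key=sq_err)

c_hat = fit_c(observations, [float(c) for c in range(1, 201)])
# The fitted curve can then be used to reason about how much extra
# accuracy additional training data would buy.
```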

VII Conclusion

In this paper, we study the data contribution game among clients in cross-silo FL. Due to competition and privacy concerns, the clients tend to contribute few data points for model training. To address this issue, we design an incentive mechanism in which the central server incentivizes clients’ data contribution by properly allocating the profit obtained from the global model. We analyze the game equilibria under three widely adopted mechanisms, i.e., LP, LOO, and SV. We show that the three mechanisms admit an identical symmetric equilibrium structure. However, at asymmetric equilibrium, LP outperforms both LOO and SV in encouraging clients’ data contribution and eliminating free-riders.

References

  • [1] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo (2019) Analyzing federated learning through an adversarial lens. In International Conference on Machine Learning, pp. 634–643.
  • [2] W. Chang and R. Tandon (2020) Communication efficient federated learning over multiple access channels. arXiv preprint arXiv:2001.08737.
  • [3] D. Conway-Jones, T. Tuor, S. Wang, and K. K. Leung (2019) Demonstration of federated learning in a resource-constrained networked environment. In Proc. of IEEE SMARTCOMP, pp. 484–486.
  • [4] N. Ding, Z. Fang, and J. Huang (2020) Optimal contract design for efficient federated learning with multi-dimensional private information. IEEE Journal on Selected Areas in Communications 39 (1), pp. 186–200.
  • [5] Z. Fan, H. Fang, Z. Zhou, J. Pei, M. P. Friedlander, C. Liu, and Y. Zhang (2021) Improving fairness for data valuation in federated learning. arXiv preprint arXiv:2109.09046.
  • [6] FeatureCloud.
  • [7] Y. Fraboni, R. Vidal, and M. Lorenzi (2021) Free-rider attacks on model aggregation in federated learning. In International Conference on Artificial Intelligence and Statistics, pp. 1846–1854.
  • [8] A. Ghorbani and J. Zou (2019) Data shapley: equitable valuation of data for machine learning. In International Conference on Machine Learning, pp. 2242–2251.
  • [9] M. A. Heikkilä, A. Koskela, K. Shimizu, S. Kaski, and A. Honkela (2020) Differentially private cross-silo federated learning. arXiv preprint arXiv:2007.05553.
  • [10] H. Hu, Z. Salcic, L. Sun, G. Dobbie, and X. Zhang (2021) Source inference attacks in federated learning. arXiv preprint arXiv:2109.05659.
  • [11] C. Huang, H. Yu, J. Huang, and R. Berry (2021) Strategic information revelation mechanism in crowdsourcing applications without verification. IEEE Transactions on Mobile Computing.
  • [12] J. Huang, R. Talbi, Z. Zhao, S. Boucchenak, L. Y. Chen, and S. Roos (2020) An exploratory analysis on users’ contributions in federated learning. In International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), pp. 20–29.
  • [13] Y. Huang, L. Chu, Z. Zhou, L. Wang, J. Liu, J. Pei, and Y. Zhang (2021) Personalized cross-silo federated learning on non-iid data. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 7865–7873.
  • [14] Y. Jiao, P. Wang, D. Niyato, B. Lin, and D. I. Kim (2020) Toward an automated auction framework for wireless federated learning services market. IEEE Transactions on Mobile Computing.
  • [15] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, R. G. L. D’Oliveira, H. Eichner, S. E. Rouayheb, D. Evans, J. Gardner, Z. Garrett, A. Gascón, B. Ghazi, P. B. Gibbons, M. Gruteser, Z. Harchaoui, C. He, L. He, Z. Huo, B. Hutchinson, J. Hsu, M. Jaggi, T. Javidi, G. Joshi, M. Khodak, J. Konecný, A. Korolova, F. Koushanfar, S. Koyejo, T. Lepoint, Y. Liu, P. Mittal, M. Mohri, R. Nock, A. Özgür, R. Pagh, H. Qi, D. Ramage, R. Raskar, M. Raykova, D. Song, W. Song, S. U. Stich, Z. Sun, A. T. Suresh, F. Tramèr, P. Vepakomma, J. Wang, L. Xiong, Z. Xu, Q. Yang, F. X. Yu, H. Yu, and S. Zhao (2021) Advances and open problems in federated learning. Foundations and Trends® in Machine Learning 14 (1–2), pp. 1–210.
  • [16] M. Li, T. Zhang, Y. Chen, and A. J. Smola (2014) Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 661–670.
  • [17] Q. Li, B. He, and D. Song (2020) Practical one-shot federated learning for cross-silo setting. arXiv preprint arXiv:2010.01017.
  • [18] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang (2019) On the convergence of FedAvg on non-iid data. arXiv preprint arXiv:1907.02189.
  • [19] G. Liao, X. Chen, and J. Huang (2020) Privacy policy in online social network with targeted advertising business. In Proc. of IEEE INFOCOM, pp. 934–943.
  • [20] J. Lin, M. Du, and J. Liu (2019) Free-riders in federated learning: attacks and defenses. arXiv preprint arXiv:1911.12560.
  • [21] O. Marfoq, C. Xu, G. Neglia, and R. Vidal (2020) Throughput-optimal topology design for cross-silo federated learning. arXiv preprint arXiv:2010.12229.
  • [22] M. J. Osborne and A. Rubinstein (1994) A Course in Game Theory. MIT Press.
  • [23] Owkin.
  • [24] K. Pillutla, S. M. Kakade, and Z. Harchaoui (2019) Robust aggregation for federated learning. arXiv preprint arXiv:1912.13445.
  • [25] T. Song, Y. Tong, and S. Wei (2019) Profit allocation for federated learning. In 2019 IEEE International Conference on Big Data (Big Data), pp. 2577–2586.
  • [26] SwissReWebank.
  • [27] M. Tang and V. W. Wong (2021) An incentive mechanism for cross-silo federated learning: a public goods perspective. In Proc. of IEEE INFOCOM, pp. 1–10.
  • [28] N. H. Tran, W. Bao, A. Zomaya, M. N. Nguyen, and C. S. Hong (2019) Federated learning over wireless networks: optimization model design and analysis. In Proc. of IEEE INFOCOM, pp. 1387–1395.
  • [29] S. Truex, N. Baracaldo, A. Anwar, T. Steinke, H. Ludwig, R. Zhang, and Y. Zhou (2019) A hybrid approach to privacy-preserving federated learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pp. 1–11.
  • [30] H. Yu, Z. Liu, Y. Liu, T. Chen, M. Cong, X. Weng, D. Niyato, and Q. Yang (2020) A fairness-aware incentive scheme for federated learning. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 393–399.
  • [31] Y. Zhan, P. Li, Z. Qu, D. Zeng, and S. Guo (2020) A learning-based incentive mechanism for federated learning. IEEE Internet of Things Journal 7 (7), pp. 6360–6368.
  • [32] N. Zhang, Q. Ma, and X. Chen (2021) Enabling long-term cooperation in cross-silo federated learning: a repeated game perspective. arXiv preprint arXiv:2106.11814.
  • [33] X. Zhang, F. Li, Z. Zhang, Q. Li, C. Wang, and J. Wu (2020) Enabling execution assurance of federated learning at untrusted participants. In Proc. of IEEE INFOCOM, pp. 1877–1886.