Due to the explosive growth of smart IoT devices at the edge of the Internet, massive data collection through embedded sensors on mobile devices, e.g., crowdsensing, has found a number of machine learning applications and has rapidly gained tremendous popularity. However, existing machine learning approaches rely on centralized storage of the training data. Consequently, they usually face a series of data security and privacy issues, e.g., data abuse and information leakage. A recent report from the Ponemon Institute suggests an average cost of over $200 per record of a data breach. Such a high economic loss has hindered data sharing among different entities, and machine learning that requires centralized data storage is facing great challenges.
To overcome the limitations of traditional machine learning in the protection of data privacy, a novel paradigm has been proposed. The federated learning system was introduced to address the issue of data privacy. Therein, the mobile devices perform the computation of model training locally on their training data according to the model released by the model owner. Such a design enables mobile devices to collaboratively learn a shared prediction model while keeping all the training data on the device. However, the independent and rational mobile devices need an incentive to participate in federated learning. In practice, asking the mobile devices to work as sacrificial volunteers is not an economically viable and sustainable option. Moreover, in the federated learning paradigm, direct communication between the model owner and the mobile devices is still required for transferring model updates, i.e., Fig. 1(a). In many scenarios, direct communication may be unavailable because of the limited transmission range and may be energy inefficient because of the high transmission power.
To address the incentive issues of the mobile devices, we adopt a service pricing scheme to motivate the mobile devices to participate in federated learning. Under the service pricing scheme, machine learning is provided by the mobile devices as a service to the model owner. Then, the learning service, i.e., model update generation and trading as well as data collection, is performed in a decentralized manner. Additionally, to overcome the energy inefficiency in the model update transfer for the mobile devices, we resort to relay networking to ensure that the model updates are transferred in a cooperative manner. The mobile devices cooperatively form a relay network by providing relay service to each other and directly or indirectly connect to the access point of the model owner for the model update transfer, e.g., Fig. 1(b). Note here that a mobile device that acts as a relay node uses the average operator to combine its own model update with its received model updates. Then, the file size of the model update is not affected, and hence providing relay service does not significantly affect the energy consumption of the mobile devices.
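As a concrete illustration of the averaging step above, the following sketch assumes a model update is a flat list of floats (a hypothetical representation); the relay node combines the updates it holds with an element-wise mean, so the combined update keeps the size of a single update:

```python
def combine_updates(updates):
    """Element-wise mean of a relay node's own update and the received
    ones. The output has the same length as a single update, so the
    relayed payload size is unchanged."""
    n = len(updates)
    return [sum(u[k] for u in updates) / n for k in range(len(updates[0]))]
```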
In this paper, we propose a novel framework of a cooperative federated learning system. Our designed federated learning system involves two parties, i.e., the massive-scale mobile devices working as learning service providers, and the model owner handling the learning task dispatching (model releasing) and model update collection. The mobile devices price their learning service by deciding on the price of one unit of their training data. In return, the model owner determines the size of training data for each mobile device. As such, under the service pricing scheme, the mobile devices optimize the prices of their data to motivate the model owner to determine a larger size of training data and hence maximize their profits. However, under the cooperative relay network design, a larger size of training data implies a lower probability of enjoying the relay service. As a result, the learning service pricing and cooperative relaying should be considered jointly. By using the pricing-based data rent and a self-organized relay network design shown in Fig. 1(b) for federated learning, the following key properties are guaranteed in our system:
The model update throughput of the model owner scales well such that the massive model update volume from the mobile devices is handled smoothly.
The rational and self-interested mobile devices noncooperatively decide on their own price of one unit of training data for individual profit optimization and cooperatively transfer their model updates.
The congestion in communication is significantly reduced for both the model owner and the mobile devices.
The rest of the paper is organized as follows. Section II describes the system model. Section III presents the formulation of a Stackelberg game. Section IV analyzes the equilibrium of the proposed Stackelberg game. Section V presents the numerical performance evaluation. Section VI concludes the paper.
II System Description
|Notation|Description|
| , |Node and the access point of the model owner.|
| |The set of mobile devices.|
| |The size of the model update.|
| |The transmission rate of mobile device .|
We consider a cooperative federated learning system as shown in Fig. 1(b). Specifically, a model owner employs a set of mobile devices, e.g., mobile phones, to train a high-quality centralized model. The set of mobile devices is denoted by , e.g., mobile devices , , and in Fig. 1(b). Each mobile device uses a part of its data and performs computation on its data locally to generate the model update for training the model of the model owner. The model owner negotiates with the mobile devices about the size of their training data, i.e., . In return, each mobile device will receive the revenue from the model owner, where is the price for one unit of mobile device ’s training data. Intuitively, the learning accuracy of the model depends on the total size of all the mobile devices’ training data. Specifically, the learning accuracy of the model becomes higher as the total size of all the mobile devices’ training data increases. In this case, we incorporate the results in  to describe the relationship between the learning accuracy of the model and the total size of all the mobile devices’ training data. As a result, the utility of the model owner is defined as follows:
where is the function describing the relationship between the learning accuracy of the model and the total size of all the mobile devices’ training data . Note here that is an increasing concave function, implying that the learning accuracy of the model keeps increasing as the size of training data increases while the marginal increase of the learning accuracy decreases.
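To make the structure of (1) concrete, the following sketch assumes a Weibull-style learning curve (increasing and concave in the total data size) and a utility equal to an accuracy reward minus the payments to the devices; the parameters `alpha`, `lam`, and `reward` are illustrative placeholders, not the paper's values:

```python
import math

# Hypothetical Weibull-style learning curve: accuracy rises with the
# total data size but with diminishing returns (increasing and concave).
def accuracy(total_data, alpha=0.95, lam=0.01):
    return alpha * (1.0 - math.exp(-lam * total_data))

# Model owner's utility (structure assumed): a reward proportional to
# accuracy minus the payment p_i * d_i made to each mobile device.
def owner_utility(data_sizes, prices, reward=100.0):
    total = sum(data_sizes)
    payment = sum(p * d for p, d in zip(prices, data_sizes))
    return reward * accuracy(total) - payment
```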
The mobile devices can cooperate with each other in transferring their model updates to the access point of the model owner. Let denote the transmission power used by mobile device for transferring its model update to mobile device , where we define the transmission power matrix . is the element of the indicator matrix , i.e., , and is defined as follows:
To provide the learning service, mobile device has the cost of due to the energy consumption incurred from the computation. Moreover, each mobile device has another cost incurred by the wireless transmission of its model update. Let be the cost for mobile device to use one unit of power for wireless transmission; the instantaneous cost for mobile device incurred by the transmission is then . Under the assumption that the sizes of the model updates transferred by the mobile devices are the same, the energy consumption from the wireless transmission depends on the transmission rate of each mobile device, i.e., . We denote the size of the model update by . The time for transferring the model update is accordingly , and hence the energy consumption of mobile device incurred by the wireless transmission is . Furthermore, with the relay service among the mobile devices, each mobile device will have a revenue of due to the relay service it provides and a cost of incurred by its use of the relay service. The cost comes from the fact that mobile device uses the relay service and hence pays the fee if it does not directly transfer the model update to the model owner, i.e., . Finally, with the revenue of providing the learning service to the model owner, i.e., , the corresponding profit of mobile device is
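The profit terms above can be sketched as follows, with all symbols replaced by hypothetical parameter names; the transmission energy cost is modeled as the unit power cost times the transmission power times the transfer time (update size divided by transmission rate), matching the decomposition in the text:

```python
def device_profit(price, data_size, comp_cost_per_unit,
                  power, power_cost, update_size, tx_rate,
                  relay_revenue=0.0, relay_fee=0.0):
    """Illustrative profit of one mobile device: learning revenue minus
    computation cost and transmission energy cost, plus the net income
    from providing/using relay service (all parameter names assumed)."""
    learning_revenue = price * data_size
    computation_cost = comp_cost_per_unit * data_size
    transfer_time = update_size / tx_rate          # seconds
    transmission_cost = power_cost * power * transfer_time
    return (learning_revenue - computation_cost
            - transmission_cost + relay_revenue - relay_fee)
```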
To form the wireless relay networks among the mobile devices and hence achieve the energy efficient communication for the model update transfer, we have two sets of constraints. The first set of the constraints is for the routing. The second set of the constraints is for the model update arrival time at the relay point.
For the routing, we first have the constraint to ensure that every mobile device can connect to and transfer its model update to only one of other mobile devices or directly to the access point of the model owner. The constraint is expressed as follows:
Secondly, we have the constraint that at least one mobile device connects to the access point of the model owner and acts as one of the last-hop nodes. Otherwise, none of the mobile devices can transfer the model update to the model owner. The corresponding constraint is expressed as follows:
We then have the constraint that the model update of each mobile device is guaranteed to eventually arrive at the model owner. That is, each mobile device can transfer its model update to the model owner through a limited number of mobile devices acting as its relay nodes, i.e.,
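The three routing constraints can be checked together on a next-hop map; the dictionary representation (device → its single next hop, with `0` standing for the access point) is an assumption for illustration:

```python
def valid_routing(next_hop, ap=0):
    """Check the routing constraints on an assumed next-hop map:
    next_hop[i] is the single node device i forwards to."""
    devices = list(next_hop)
    # (i) each device has exactly one next hop -- guaranteed by the dict form
    # (ii) at least one device connects directly to the access point
    if not any(h == ap for h in next_hop.values()):
        return False
    # (iii) every update reaches the AP after finitely many relays (no cycles)
    for i in devices:
        node, hops = i, 0
        while node != ap:
            node = next_hop[node]
            hops += 1
            if hops > len(devices):   # cycle detected
                return False
    return True
```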
Regarding the constraint for the model update arrival time at the relay node, we have the constraint in (8). This constraint ensures that the model update of mobile device arrives at mobile device before mobile device finishes computing its own model update if mobile device wants to transfer its model update by choosing mobile device as its relay node. The time used by mobile device for providing the learning service can be divided into three periods. The first time period, denoted by , is for performing the computation on the training data according to the model. The model update is generated at the end of this time period. Supposing that is the processing rate of mobile device , is accordingly defined as . The second time period is for mobile device to provide relay service to other mobile devices. For each received model update from another mobile device, mobile device needs to spend the time of for combining the received model update with its own model update by using the average operator. The length of the second time period is therefore linearly related to the number of model updates received by mobile device , i.e., the number of mobile devices that use mobile device ’s relay service , and is expressed as . The last time period is for mobile device to transfer its model update and is expressed as , where is the transmission rate of mobile device defined in (9). To ensure that mobile device can successfully use the relay service of other mobile devices, the sum of the three time periods at mobile device must be shorter than the first time period of its relay node, i.e.,
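The arrival-time constraint can be sketched as follows, with the three time periods computed from assumed per-device fields (`data_size`, `proc_rate`, `combine_time`, `num_relayed`) and compared against the relay node's own computation time:

```python
def can_use_relay(sender, relay, update_size):
    """Arrival-time constraint (sketch): the sender's computation time,
    combining time, and transmission time must together be shorter than
    the relay node's computation time, so the update arrives before the
    relay finishes computing. Field names are assumed."""
    t_compute = sender["data_size"] / sender["proc_rate"]
    t_combine = sender["combine_time"] * sender["num_relayed"]
    t_transmit = update_size / sender["tx_rate"]
    t_relay_compute = relay["data_size"] / relay["proc_rate"]
    return t_compute + t_combine + t_transmit <= t_relay_compute
```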
We assume that the mobile devices using the same relay node share the channel and hence generate mutual interference. Let denote the channel gain from mobile device to mobile device , be the distance between mobile devices and , and be the path loss coefficient of wireless communication. Then, we define a matrix , the element of which, i.e., , is . Accordingly, the propagation model for data transmission of mobile device , i.e., the transmission rate of mobile device , is given by
where is the noise, is the bandwidth of mobile device , , and . Note here that we use and to represent the -th row vector and the -th column vector of , respectively, and this notation is used similarly in the rest of this paper.
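A sketch of the rate in (9): each device's rate is a Shannon-style capacity in which devices sharing the same relay node act as mutual interferers, and the channel gain is modeled as distance raised to the negative path-loss exponent (all parameter names are illustrative):

```python
import math

def channel_gain(dist, path_loss=2.0):
    # assumed gain model: g = d^(-path_loss)
    return dist ** (-path_loss)

def tx_rate(i, powers, dists, next_hop, bandwidth, noise, path_loss=2.0):
    """Shannon-style rate of device i toward its relay; devices that
    forward to the same relay node contribute interference (sketch)."""
    j = next_hop[i]
    signal = powers[i] * channel_gain(dists[(i, j)], path_loss)
    interference = sum(
        powers[k] * channel_gain(dists[(k, j)], path_loss)
        for k in powers if k != i and next_hop[k] == j)
    return bandwidth * math.log2(1.0 + signal / (noise + interference))
```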
III Stackelberg Game Formulation
In this section, we model the interaction between the mobile devices and the model owner as a Stackelberg game. In the Stackelberg game, the model owner is the buyer as it uses the learning service provided by the mobile devices. Then, the mobile devices that are the service providers act as the sellers. The sellers typically make their decisions before the buyers. Accordingly, the model owner inherently acts as the single follower in the lower level of the Stackelberg game while the mobile devices are the corresponding leaders. In the lower level of the game, i.e., the lower-level subgame, the model owner determines the size of training data for the mobile devices. In the upper level, i.e., the upper-level subgame, the mobile devices decide on the price for one unit of their training data. Moreover, since the mobile devices cooperatively send their model updates to the model owner, each mobile device also needs to independently decide on its relay node as well as its transmission power. As a result, the Stackelberg game can be formally defined as follows:
Lower-level subgame: Given the fixed vector of the prices of one unit of training data , the lower-level subgame is defined by a three-tuple , where
is the vector of the sizes of training data;
is the domain of definition for and an M-polyhedron, where is the upper bound of ;
is the utility function of the model owner defined in (1);
Upper-level subgame: After the model owner’s demand of data size is determined in the lower-level subgame, the mobile devices form an upper-level subgame defined by a six-tuple , where
is the set of the mobile devices;
is the matrix of the power for wireless transmission;
is the domain of definition for , where is the upper bound of ;
is the vector of prices for one unit of training data;
is the domain of definition for and an M-polyhedron, where is the upper bound of ;
is the vector of the profits for the mobile devices, where is the profit of mobile device defined in (3).
Based on the game formulation, we consider a Stackelberg equilibrium to be the solution for the model owner and the mobile devices.
IV Equilibrium Analysis
By following backward induction, we first use the first-order optimality condition to obtain the optimal solution to the lower-level subgame . The existence of the optimal solution to the lower-level subgame is proven by showing the concavity of its utility function. This optimal solution is further proven to be unique by showing the negative definiteness of the Hessian matrix of the utility function of the lower-level subgame . Then, we substitute the optimal solution of the lower-level subgame into the upper-level subgame and investigate the solution to the upper-level subgame by capitalizing on the exterior point method.
IV-A Solution to Lower-level Subgame
To find an optimal solution for the lower-level subgame , we need to take the first derivative of the utility function of the model owner given in (1) with respect to as follows:
where . Without loss of generality and for tractable analysis, we adopt the Weibull model as suggested in , i.e., . Letting , , we have , i.e., the best response of the model owner, as follows:
Due to the strict concavity of and the linearity of with respect to , is concave with respect to , and this concavity indicates the existence of the solution to the lower-level subgame . Moreover, with negative elements on the main diagonal and zero elements off the diagonal, the Hessian matrix of is negative definite, and hence the solution to the lower-level subgame is unique.
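As a single-device illustration of the first-order condition, the sketch below assumes a Weibull-style accuracy curve g(x) = α(1 − e^{−λx}) and an owner utility γg(x) − px, so the condition γαλe^{−λx} = p yields a closed-form best response; the coefficients are illustrative placeholders, not the paper's values in (11):

```python
import math

def best_response_data_size(price, gamma=100.0, alpha=0.95, lam=0.01,
                            x_max=1000.0):
    """Maximizer of gamma*alpha*(1 - exp(-lam*x)) - price*x, obtained
    from the first-order condition gamma*alpha*lam*exp(-lam*x) = price
    and clipped to [0, x_max]. All coefficients are hypothetical."""
    if price >= gamma * alpha * lam:   # marginal value at x = 0 below price
        return 0.0
    x_star = math.log(gamma * alpha * lam / price) / lam
    return min(x_star, x_max)
```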
IV-B Solutions to Upper-level Subgame
After obtaining the optimal demand of the data size for the model owner, we investigate the upper-level subgame for the mobile devices. At the Nash equilibrium (NE), no player can increase its profit by choosing a different strategy provided that the other players’ strategies are unchanged . We first substitute the optimal demand of the data size for the model owner given in (11) into the profit functions of the mobile devices given in (3) and obtain the new profit functions for the mobile devices as follows:
with the constraints (4)-(8). As the constraints (4)-(8) are nonlinear, we adopt the exterior point method . Then, we can rewrite the constrained profit function of mobile device , i.e., (12), into an unconstrained objective function as follows:
where is the penalty coefficient with a huge positive value and
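A minimal sketch of the exterior-point penalty: the constraint violations max(0, g_k(x)) are squared, scaled by a large coefficient μ, and subtracted from the profit, so infeasible points are heavily penalized while feasible points leave the profit unchanged:

```python
def penalized_objective(profit, constraint_violations, mu=1e6):
    """Exterior-point penalty (sketch): constraint_violations holds the
    max(0, g_k(x)) terms; mu is a large penalty coefficient. Maximizing
    the penalized objective drives iterates back into the feasible set."""
    penalty = sum(v ** 2 for v in constraint_violations)
    return profit - mu * penalty
```

In practice the exterior point method solves a sequence of such unconstrained problems with an increasing μ, so the maximizers approach the feasible region from outside.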
V Performance Evaluation
In this section, we present numerical studies to evaluate the performance of the cooperative federated learning system. For ease of illustration, we consider mobile devices, i.e., , working as the learning entities. The bandwidth and channel gain are and , respectively, and the noise is . The size of the model update is assumed to be . The distances among the mobile devices, as well as those between the mobile devices and the access point of the model owner, i.e., , follow a uniform distribution over the plane of . The vector of the costs of consuming one unit of power for wireless communication is generated by using a Gaussian distribution and is shown as follows: . The following parameters are generated similarly by using a Gaussian distribution. The vector of the costs of processing one unit of data is . We set the fee of the relay service to be . The vector of processing rates is . The vector of the times of using the average operator for the mobile devices is . The coefficients of defined in (1) are and .
V-A Numerical Results
|Mobile device No.|Routing|
For convenience in observing the routing result for the cooperative model update transfer, we present it in Fig. 2 and Table II. As shown in Fig. 2, the mobile devices self-organize the wireless relay network for cooperative model update transfer. For example, mobile device uses mobile device as the relay node for transferring the model update. This can significantly reduce the energy consumption of wireless communication for mobile device . Moreover, both mobile devices and choose the same mobile device, i.e., mobile device , as their relay node. This incurs mutual interference between mobile devices and and hence results in energy inefficient communication. However, compared with choosing other mobile devices as the relay node or directly transferring the model update to the central cloud, it is more energy efficient for mobile devices and to choose mobile device as the relay node.
We next evaluate the prices of one unit of training data for the mobile devices. As shown in Fig. 3, we observe that the price of one unit of training data for mobile device is the highest among the mobile devices. The reason is that the model update from mobile device received by the model owner is the combination of the model updates from two mobile devices, i.e., mobile devices and , as shown in Fig. 2 and Table II. This means that the model update from mobile device contains more valuable information than one generated by using the model update from only a single mobile device, e.g., the model updates of mobile devices , , and . In contrast, although the model update from mobile device is the combination of the model updates of mobile devices , , and , the price of one unit of training data for mobile device is even less than that for mobile device . This is due to the data quality and the preference of the model owner.
We then investigate the sizes of the training data for the model owner. As shown in Fig. 4, the size of the training data from mobile device is the largest due to its lowest price as shown in Fig. 3. Along with the slow processing rate of mobile device , i.e., , such a large data size implies that mobile device will take much time to perform computation on its training data. Accordingly, it is likely that the model updates from other mobile devices can arrive at mobile device before the mobile device finishes its computation. Thus, the mobile device can serve as a relay for many other devices. As we observe in Fig. 4, the sizes of data of mobile devices and , i.e., the neighboring mobile devices of mobile device as shown in Fig. 2, are much smaller than that of mobile device . As a result, mobile devices and choose mobile device as their relay node. This helps mobile devices and avoid transferring their model updates directly to the access point, improving the energy efficiency.
VI Conclusion

In this paper, we have presented a Stackelberg game model to analyze the transmission strategy and training data pricing strategy of the self-organized mobile devices as well as the learning service subscription of the model owner in the cooperative federated learning system. We have focused on the interactions among the mobile devices and considered the impact of the interference cost on the mobile devices’ profits. Moreover, we have investigated the impact of the size of the training data on both the model owner’s utility and the mobile devices’ profits. Specifically, we have established a model describing the impact of the mobile devices’ transmission strategies on their transmission rates and relay node selection. The model also describes the impact of the model owner’s learning service subscription strategy on the model owner’s utility. We have studied the optimal strategy of the model owner by using the best response and the equilibrium strategies of the mobile devices by using the exterior point method. Our future work will extend this study to the long-run interaction between the model owner and the mobile devices.
-  S. Feng, W. Wang, D. Niyato, D. I. Kim and P. Wang, “Competitive data trading in wireless-powered internet of things (iot) crowdsensing systems with blockchain,” arXiv preprint arXiv:1808.10217, 2018.
-  D. W. Opderbeck, “Cybersecurity, data breaches, and the economic loss doctrine in the payment card industry,” Md. L. Rev., vol. 75, pp. 935, 2015.
-  Google, “Federated learning: Collaborative machine learning without centralized training data,” https://ai.googleblog.com/2017/04/federated-learning-collaborative.html, Accessed April 6, 2017.
-  TensorFlow, “Machine learning and mobile: Deploying models on the edge,” https://blog.algorithmia.com/machine-learning-and-mobile-deploying-models-on-the-edge/, Accessed June 21, 2018.
-  J. Konecnỳ, H. B. McMahan, D. Ramage and P. Richtárik, “Federated optimization: Distributed machine learning for on-device intelligence,” arXiv preprint arXiv:1610.02527, 2016.
-  B. Gu, F. Hu and H. Liu, “Modelling classification performance for large data sets,” in International Conference on Web-Age Information Management. Springer, 2001, pp. 317–328.
-  A. Gharaibeh, T. Reza, E. Santos-Neto, L. B. Costa, S. Sallinen and M. Ripeanu, “Efficient large-scale graph processing on hybrid cpu and gpu systems,” arXiv preprint arXiv:1312.3018, 2013.
-  M. J. Osborne et al., An Introduction to Game Theory, vol. 3, Oxford University Press, New York, 2004.
-  X. Yang, “An exterior point method for computing points that satisfy second-order necessary conditions for a c1, 1 optimization problem,” Journal of Mathematical Analysis and Applications, vol. 187, no. 1, pp. 118–133, 1994.