Energy Efficient Federated Learning Over Wireless Communication Networks

11/06/2019 ∙ by Zhaohui Yang, et al.

In this paper, the problem of energy efficient transmission and computation resource allocation for federated learning (FL) over wireless communication networks is investigated. In the considered model, each user exploits limited local computational resources to train a local FL model with its collected data and then sends the trained FL model parameters to a base station (BS), which aggregates the local FL models into a global FL model and broadcasts it back to all users. Since FL involves an exchange of a learning model between the users and the BS, both computation and communication latencies are determined by the required learning accuracy level. Meanwhile, due to the limited energy budget of the wireless users, both the local computation energy and the transmission energy must be considered during the FL process. This joint learning and communication problem is formulated as an optimization problem whose goal is to minimize a weighted sum of the completion time of FL, the local computation energy, and the transmission energy of all users, which captures the tradeoff between latency and energy consumption for FL. To solve this problem, an iterative algorithm is proposed in which, at every step, closed-form solutions for time allocation, bandwidth allocation, power control, computation frequency, and learning accuracy are derived. For the special case of minimizing only the completion time, a bisection-based algorithm is proposed to obtain the optimal solution. Numerical results show that the proposed algorithms can reduce the completion time by up to 25.6% and the energy consumption by up to 37.6% compared to conventional FL methods.


I Introduction

In future wireless systems, due to privacy constraints and limited communication resources for data transmission, it is impractical for all wireless devices to transmit all of their collected data to a data center that can use the collected data to implement centralized machine learning algorithms for data analysis and inference [2]. To this end, distributed learning frameworks are needed that enable the wireless devices to collaboratively build a shared learning model while training on their collected data locally [3, 4, 5, 6, 7, 8, 9]. One of the most promising distributed learning algorithms is the emerging federated learning (FL) framework, which is expected to be adopted in future Internet of Things (IoT) systems [10, 11, 12, 13, 14, 15, 16, 17, 18]. In FL, wireless devices can cooperatively execute a learning task by only uploading their local learning model parameters to the base station (BS) instead of sharing the entirety of their training data [19]. To implement FL over wireless networks, the wireless devices must transmit their local training results over wireless links [20], which can affect the performance of FL due to limited wireless resources (such as time and bandwidth). In addition, the limited energy of wireless devices is a key challenge for deploying FL. Indeed, because of these resource constraints, it is necessary to optimize the energy efficiency of the FL implementation.

Some of the challenges of FL over wireless networks have been studied in [21, 22, 23, 24, 25, 26, 27]. To minimize latency, a broadband analog aggregation multi-access scheme was designed in [21] for FL by exploiting the waveform-superposition property of a multi-access channel. An FL training minimization problem was investigated in [22] for cell-free massive multiple-input multiple-output (MIMO) systems. For FL with redundant data, an energy-aware user scheduling policy was proposed in [23] to maximize the average number of scheduled users. To improve the statistical learning performance of on-device distributed training, the authors in [24] developed a novel sparse and low-rank modeling approach. The work in [25] introduced an energy-efficient strategy for bandwidth allocation under learning performance constraints. However, the works in [21, 22, 23, 24, 25] focused on the delay/energy of wireless transmission without considering the delay/energy tradeoff between learning and transmission. Recently, the works in [26] and [27] considered both the local learning energy and the wireless transmission energy. In [26], we investigated the FL loss function minimization problem while taking into account packet errors over wireless links. However, this prior work ignored the computation delay of the local FL model. The authors in [27] considered the sum learning and transmission energy minimization problem for FL, for a case in which all users transmit their learning results to the BS. However, the solution in [27] requires all users to upload their learning models synchronously. Meanwhile, the work in [27] did not provide any convergence analysis for FL.

The main contribution of this paper is a novel energy efficient transmission and computation resource allocation scheme for FL over wireless communication networks. Our key contributions include:

  • We study the performance of the FL algorithm over wireless communication networks for a scenario in which each user locally computes its FL model parameters under a given learning accuracy and the BS broadcasts the aggregated FL model parameters to all users. For the considered FL algorithm, we first derive its convergence rate. Then, we build the corresponding delay and energy consumption models for FL.

  • Considering the tradeoff between the delay and the total energy for local computation and wireless transmission, we formulate a joint transmission and computation optimization problem that aims to minimize a weighted sum of the completion time and the total energy consumption. To solve this problem, a low-complexity iterative algorithm is proposed. At each step of this algorithm, we derive new closed-form solutions for the time allocation, bandwidth allocation, power control, computation frequency, and learning accuracy.

  • To minimize the FL completion time in delay-sensitive scenarios, we theoretically show that the completion time is a convex function of the learning accuracy. Based on this finding, we propose a bisection-based algorithm to obtain the optimal solution.

  • Simulation results show that the proposed scheme, which jointly optimizes transmission and computation, can achieve reductions of up to 25.6% in completion time and 37.6% in energy consumption compared to conventional FL methods.

The rest of this paper is organized as follows. The system model and problem formulation are described in Section II. Section III presents the resource allocation scheme for weighted completion time and energy minimization. The special case of minimizing only the completion time is treated in Section IV. Simulation results are analyzed in Section V. Conclusions are drawn in Section VI.

II System Model and Problem Formulation

Consider a cellular network that consists of one BS serving a set $\mathcal{K} = \{1, \ldots, K\}$ of $K$ users, as shown in Fig. 1. Each user $k$ has a local dataset $\mathcal{D}_k$ with $D_k$ data samples. For each dataset $\mathcal{D}_k = \{\boldsymbol{x}_{ki}, y_{ki}\}_{i=1}^{D_k}$, $\boldsymbol{x}_{ki}$ is an input vector of user $k$ and $y_{ki}$ is its corresponding output.¹

¹ For simplicity, we consider an FL algorithm with a single output. In future work, our approach will be extended to the case with multiple outputs.

Fig. 1: Illustration of the considered model for FL over wireless communication networks.

II-A FL Model

In this section, the considered FL algorithm that is implemented over wireless networks is introduced. Hereinafter, the FL model that is trained by each user’s dataset is called the local FL model, while the FL model that is generated by the BS using local FL model parameter inputs from all users is called the global FL model.

We define a vector $\boldsymbol{w}$ to capture the parameters of the global FL model. We introduce the loss function $f(\boldsymbol{w}, \boldsymbol{x}_{ki}, y_{ki})$, which captures the FL performance over the input vector $\boldsymbol{x}_{ki}$ and output $y_{ki}$. For different learning tasks, the loss function will be different. For example, $f(\boldsymbol{w}, \boldsymbol{x}_{ki}, y_{ki}) = \tfrac{1}{2}\left(\boldsymbol{x}_{ki}^{T}\boldsymbol{w} - y_{ki}\right)^{2}$ for linear regression and $f(\boldsymbol{w}, \boldsymbol{x}_{ki}, y_{ki}) = \log\!\left(1 + \exp\!\left(-y_{ki}\boldsymbol{x}_{ki}^{T}\boldsymbol{w}\right)\right)$ for logistic regression. Since the dataset of user $k$ is $\mathcal{D}_k$, the total loss function of user $k$ will be:

$$F_k(\boldsymbol{w}) \triangleq \frac{1}{D_k}\sum_{i=1}^{D_k} f(\boldsymbol{w}, \boldsymbol{x}_{ki}, y_{ki}). \qquad (1)$$

Note that $f(\boldsymbol{w}, \boldsymbol{x}_{ki}, y_{ki})$ is the loss of user $k$ for one data sample, while $F_k(\boldsymbol{w})$ is the total loss function of user $k$ over its whole local dataset. In the following, this total loss is simply denoted by $F_k(\boldsymbol{w})$.

In order to deploy an FL algorithm, it is necessary to train the underlying model. Training is done so as to generate a unified FL model for all users without sharing any datasets. The FL training problem can be formulated as [19, 2, 13]:

$$\min_{\boldsymbol{w}} \; F(\boldsymbol{w}) \triangleq \sum_{k=1}^{K} \frac{D_k}{D} F_k(\boldsymbol{w}), \qquad (2)$$

where $D = \sum_{k=1}^{K} D_k$ is the total number of data samples of all users.

To solve problem (2), we adopt the FL algorithm of [19], which is summarized in Algorithm 1.

1:  Initialize the global regression vector $\boldsymbol{w}^{(0)}$ and the iteration number $n = 0$.
2:  repeat
3:     Each user $k$ computes its local gradient $\nabla F_k(\boldsymbol{w}^{(n)})$ and sends it to the BS.
4:     The BS computes the aggregated gradient
$$\nabla F(\boldsymbol{w}^{(n)}) = \frac{1}{D}\sum_{k=1}^{K} D_k \nabla F_k(\boldsymbol{w}^{(n)}), \qquad (3)$$
which is broadcast to all users.
5:     for each user $k \in \mathcal{K}$ in parallel
6:      Solve the local FL problem (5) with a given learning accuracy and denote the solution by $\boldsymbol{h}_k^{(n)}$.
7:      Each user $k$ sends $\boldsymbol{h}_k^{(n)}$ to the BS.
8:     end for
9:     The BS computes
$$\boldsymbol{w}^{(n+1)} = \boldsymbol{w}^{(n)} + \frac{1}{D}\sum_{k=1}^{K} D_k \boldsymbol{h}_k^{(n)}, \qquad (4)$$
and broadcasts the value to all users.
10:     Set $n = n + 1$.
11:  until the target accuracy of problem (2) is reached.
Algorithm 1 FL Algorithm

In Algorithm 1, we can see that, at every FL iteration, each user downloads the global FL model parameters from the BS for local computing, while the BS periodically gathers the local FL model parameters from all users and sends the updated global FL model parameters back to all users.
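To make the information flow of Algorithm 1 concrete, the following Python sketch mimics one possible implementation of the global loop; the object interface (num_samples, local_gradient, solve_local_problem) and the variable names are our own illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def run_federated_training(users, w0, eta, max_global_iters, tol):
    """Hedged sketch of the global loop of Algorithm 1.

    Each element of `users` is assumed to expose:
      - num_samples: number of local data samples D_k
      - local_gradient(w): gradient of the local loss F_k at w
      - solve_local_problem(w, global_grad, eta): approximate local update h_k
        computed up to local accuracy eta
    """
    w = np.asarray(w0, dtype=float)
    total_samples = sum(u.num_samples for u in users)

    for _ in range(max_global_iters):
        # Steps 3-4: users report local gradients; the BS aggregates and broadcasts.
        global_grad = sum(
            (u.num_samples / total_samples) * u.local_gradient(w) for u in users
        )
        if np.linalg.norm(global_grad) <= tol:  # crude stopping proxy for the target accuracy
            break

        # Steps 5-8: each user solves its local problem up to accuracy eta (in parallel in practice).
        updates = [u.solve_local_problem(w, global_grad, eta) for u in users]

        # Step 9: the BS aggregates the local updates and broadcasts the new global model.
        w = w + sum(
            (u.num_samples / total_samples) * h for u, h in zip(users, updates)
        )
    return w
```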

We define $\boldsymbol{w}^{(n)}$ as the global FL model parameter at a given iteration $n$. In practice, each user solves the local FL problem:

(5)

by using the gradient method with a given accuracy. In problem (5), is a constant value. The solution $\boldsymbol{h}_k$ of problem (5) represents the difference between the global FL model parameter and the local FL model parameter of user $k$, i.e., $\boldsymbol{w}^{(n)} + \boldsymbol{h}_k$ is the local FL model parameter of user $k$ at iteration $n$. Since it is hard to obtain the optimal solution of problem (5) exactly using numerical methods, we instead obtain a feasible solution of problem (5) with some desired target accuracy. The solution of problem (5) at iteration $n$ under a target local accuracy $\eta$ is a point $\boldsymbol{h}_k^{(n)}$ such that:

(6)

where $\boldsymbol{h}_k^{(n)*}$ is the optimal solution of problem (5).

In Algorithm 1, the iterative method involves a number of global iterations (i.e., the value of $n$ in Algorithm 1) to achieve a global accuracy $\epsilon_0$ for the global FL model. In other words, the solution $\boldsymbol{w}^{(n)}$ of problem (2) with accuracy $\epsilon_0$ is a point such that

(7)

where $\boldsymbol{w}^{*}$ is the actual optimal solution of problem (2).
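Since the explicit forms of (6) and (7) are not reproduced above, the following relative-optimality-gap formalization, written in notation we introduce here ($G_k$ denotes the objective of the local problem (5), $\eta$ the local accuracy, and $\epsilon_0$ the global accuracy), illustrates the two accuracy notions; it is our hedged reconstruction rather than a verbatim copy of (6)-(7):

$$G_k\big(\boldsymbol{w}^{(n)}, \boldsymbol{h}_k^{(n)}\big) - G_k\big(\boldsymbol{w}^{(n)}, \boldsymbol{h}_k^{(n)*}\big) \;\le\; \eta \Big( G_k\big(\boldsymbol{w}^{(n)}, \boldsymbol{0}\big) - G_k\big(\boldsymbol{w}^{(n)}, \boldsymbol{h}_k^{(n)*}\big) \Big),$$

$$F\big(\boldsymbol{w}^{(n)}\big) - F\big(\boldsymbol{w}^{*}\big) \;\le\; \epsilon_0 \Big( F\big(\boldsymbol{w}^{(0)}\big) - F\big(\boldsymbol{w}^{*}\big) \Big),$$

i.e., the approximate local solution closes at least a fraction $1-\eta$ of the initial optimality gap of (5), and the final global model closes at least a fraction $1-\epsilon_0$ of the initial gap of (2).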

To analyze the convergence rate of Algorithm 1, we make the following assumption on the loss function. Assume that each local loss function $F_k(\boldsymbol{w})$ is $L$-Lipschitz continuous and $\gamma$-strongly convex, i.e.,

(8)
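Although the exact inequality (8) is not reproduced above, one standard way to state such a smoothness and strong-convexity assumption, under the notation we use here, is

$$\gamma \boldsymbol{I} \;\preceq\; \nabla^{2} F_k(\boldsymbol{w}) \;\preceq\; L \boldsymbol{I}, \qquad \forall \boldsymbol{w},\ \forall k \in \mathcal{K},$$

i.e., the Hessian of every local loss is bounded between $\gamma$ and $L$; this is our hedged reconstruction of the role of (8), not necessarily its verbatim form.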

Under assumption (8), we provide the following theorem on the convergence rate of Algorithm 1, where each user solves its local FL problem with a given accuracy.

Theorem 1

If we run Algorithm 1 with a fixed local accuracy $\eta \in (0,1)$ for

(9)

global iterations, denoted by $I_0(\eta, \epsilon_0)$, then the global accuracy $\epsilon_0$ in (7) is achieved, i.e., $F(\boldsymbol{w}^{(I_0)}) - F(\boldsymbol{w}^{*}) \le \epsilon_0 \big( F(\boldsymbol{w}^{(0)}) - F(\boldsymbol{w}^{*}) \big)$.

Proof: See Appendix A.

From Theorem 1, we observe that the number of global iterations $I_0$ increases with the local accuracy $\eta$, i.e., less accurate local solutions require more global iterations. Theorem 1 can be used to derive the total time for performing the entire FL algorithm and the transmission energy of all users. From Theorem 1, we can also see that the FL performance depends on the loss-function parameters and on the accuracies $\eta$ and $\epsilon_0$. Note that the prior work in [28, Eq. (9)] only studied the number of iterations needed for FL convergence in a special case, whereas Theorem 1 provides a general convergence rate for FL with an arbitrary local accuracy $\eta$.
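For reference, convergence analyses of such two-level schemes (local accuracy $\eta$, global accuracy $\epsilon_0$) typically yield a global-iteration bound of the shape

$$I_0(\eta, \epsilon_0) \;=\; \frac{a \,\ln(1/\epsilon_0)}{1-\eta},$$

where $a$ is a constant that depends on the loss-function parameters (such as $L$ and $\gamma$) but not on $\eta$ or $\epsilon_0$; this is our hedged reading of the role of (9), not its verbatim expression. The qualitative property used in the rest of the paper is that $I_0$ grows like $1/(1-\eta)$ as the local subproblems are solved less accurately, and only logarithmically in $1/\epsilon_0$.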

II-B Computation and Transmission Model

Fig. 2: The FL procedure between users and the BS.

The FL procedure between the users and their serving BS is shown in Fig. 2. As shown in this figure, the FL procedure contains three steps at each iteration: local computation at each user (using several local iterations), local FL parameter transmission from each user, and result aggregation and broadcast at the BS. The local computation step is the phase during which each user calculates its local FL model parameters by using its local dataset and the received global FL model parameters.

II-B1 Local Computation

We solve the local learning problem (5) by using the gradient method. In particular, the update at the $i$-th local iteration is given by:

(10)

where $\delta$ is the step size, $\boldsymbol{h}_k^{(i)}$ is the value of the local variable at the $i$-th local iteration for the given global vector $\boldsymbol{w}^{(n)}$, and the last factor is the gradient of the local objective of problem (5) evaluated at that point. We set the initial solution $\boldsymbol{h}_k^{(0)} = \boldsymbol{0}$.

Next, in Lemma 1, we derive a lower bound on the number of local iterations needed to achieve a local accuracy in (6).

Lemma 1

If we set the step size $\delta$ appropriately and run the gradient method for

(11)

iterations (denoted by $I_k$) at each user, we can solve the local FL problem (5) with accuracy $\eta$.

Proof: See Appendix B.

The lower bound derived in (11) shows how the number of local iterations grows as the local accuracy $\eta$ becomes tighter (i.e., smaller). In the following, we use this lower bound, denoted by $I_k$, to approximate the number of iterations needed for the local computations at each user. Let $f_k$ be the computation capacity of user $k$, measured in CPU cycles per second. The computation time at user $k$ needed for data processing is:

$$\tau_k = \frac{I_k C_k D_k}{f_k}, \qquad (12)$$

where $C_k$ (cycles/bit) is the number of CPU cycles required for computing one sample of data at user $k$, and $I_k$ is the lower bound on the number of local iterations given by (11). The approximate energy consumption of user $k$ for computing the gradients of its local loss function is:

$$E_k^{\mathrm{C}} = \kappa\, I_k C_k D_k f_k^{2}, \qquad (13)$$

where $\kappa$ is the effective switched capacitance that depends on the chip architecture [29].
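The per-iteration computation model in (12)-(13) can be evaluated numerically as in the short sketch below, which assumes the standard CPU model of $\kappa f^2$ energy per cycle and a local-iteration count scaling as $v\log_2(1/\eta)$ in the spirit of (11); all variable names and example values are our own illustrative assumptions.

```python
import math

def local_computation_cost(eta, C_k, D_k, f_k, kappa, v=2.0):
    """Per-global-iteration local computation time and energy for one user.

    Assumptions (ours, for illustration):
      - local gradient iterations: v * log2(1/eta), mirroring the lower bound in (11);
      - C_k: CPU cycles per unit of local data, D_k: local data size,
        f_k: CPU frequency (cycles/s), kappa: effective switched capacitance.
    """
    local_iters = v * math.log2(1.0 / eta)        # I_k, cf. (11)
    cycles = local_iters * C_k * D_k              # total CPU cycles per global round
    comp_time = cycles / f_k                      # tau_k in seconds, cf. (12)
    comp_energy = kappa * cycles * f_k ** 2       # E_k^C in Joules, cf. (13)
    return comp_time, comp_energy

# Illustrative values only: eta=0.1, C_k=20 cycles/bit, D_k=5e5 bits, f_k=1 GHz, kappa=1e-28
print(local_computation_cost(0.1, 20, 5e5, 1e9, 1e-28))
```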

II-B2 Wireless Transmission

After local computation, all users upload their local FL model parameters to the BS via frequency division multiple access (FDMA). FDMA is preferred over time division multiple access (TDMA) here because TDMA requires synchronization among users, while FDMA can be implemented asynchronously.

The achievable rate of user $k$ can be given by:

$$r_k = b_k \log_2\!\left(1 + \frac{g_k p_k}{N_0 b_k}\right), \qquad (14)$$

where $b_k$ is the bandwidth allocated to user $k$, $p_k$ is the transmit power of user $k$, $g_k$ is the channel gain between user $k$ and the BS, and $N_0$ is the power spectral density of the Gaussian noise. Due to the limited bandwidth of the system, we have $\sum_{k=1}^{K} b_k \le B$, where $B$ is the total bandwidth.

In this step, user $k$ needs to upload its local FL model parameters to the BS. Since the dimensions of the parameter vector are fixed for all users, the data size that each user needs to upload is constant and can be denoted by $s$. To upload data of size $s$ within transmission time $t_k$, we must have $t_k r_k \ge s$. The wireless transmit energy of user $k$ for transmitting data of size $s$ within time $t_k$ is then $E_k^{\mathrm{T}} = p_k t_k$.
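A short numerical sketch of the uplink model, assuming the Shannon-rate form reconstructed in (14); the function name and example values are our own illustrative assumptions.

```python
import math

def uplink_cost(s, b_k, p_k, g_k, N0):
    """Minimum transmission time and corresponding transmit energy for one user.

    s: payload size in bits (local FL model parameters), b_k: allocated bandwidth (Hz),
    p_k: transmit power (W), g_k: channel gain, N0: noise power spectral density (W/Hz).
    """
    rate = b_k * math.log2(1.0 + g_k * p_k / (N0 * b_k))   # achievable rate r_k, cf. (14)
    t_k = s / rate                                          # smallest t_k satisfying t_k * r_k >= s
    energy = p_k * t_k                                      # transmit energy p_k * t_k
    return t_k, energy

# Illustrative values only: 28 kbits, 2 MHz, 10 dBm (0.01 W), g_k=1e-8, N0=1e-17 W/Hz
print(uplink_cost(28e3, 2e6, 0.01, 1e-8, 1e-17))
```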

II-B3 Information Broadcast

In this step, the BS aggregates the local FL model parameters to update the global FL model parameters and broadcasts them to all users in the downlink. Due to the high transmit power of the BS and the large bandwidth available for data broadcasting, the downlink time is neglected compared to the uplink data transmission time. Note that the BS never accesses the users' local data, which protects the privacy of the users, as required by FL.

According to the above FL model, the energy consumption of each user includes both the local computation energy $E_k^{\mathrm{C}}$ and the wireless transmission energy $E_k^{\mathrm{T}} = p_k t_k$. Given that the number of global iterations is $I_0$ in (9), the total energy consumption of all users that participate in FL is:

$$E = I_0 \sum_{k=1}^{K} \left( E_k^{\mathrm{C}} + p_k t_k \right). \qquad (15)$$
Fig. 3: Asynchronous implementation for the FL algorithm.

Hereinafter, the total time needed for completing the execution of the FL algorithm is called the completion time. The completion time of each user includes both the local computation time and the transmission time, as shown in Fig. 3. Based on (9) and (12), the completion time $T_k$ of user $k$ will be:

$$T_k = I_0 \left( \tau_k + t_k \right). \qquad (16)$$

Let $T$ be the completion time for training the entire FL algorithm, which must satisfy:

$$T \ge I_0 \left( \tau_k + t_k \right), \quad \forall k \in \mathcal{K}. \qquad (17)$$

According to (15) and (17), there is a tradeoff between the completion time $T$ and the total energy consumption $E$. To achieve a small completion time $T$, each user may need to use a high computation capacity $f_k$ and a high transmit power $p_k$, which leads to high total energy $E$.
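Putting the pieces together, the totals in (15)-(17) can be evaluated as in the sketch below, with $I_0$ denoting the number of global iterations from (9) and per-user, per-iteration costs taken from the sketches above; again, the names are our own.

```python
def fl_totals(I0, comp_costs, tx_costs):
    """comp_costs, tx_costs: per-user lists of (time, energy) for one global iteration."""
    per_user_time = [I0 * (tc + tt) for (tc, _), (tt, _) in zip(comp_costs, tx_costs)]
    completion_time = max(per_user_time)                 # T must cover the slowest user, cf. (17)
    total_energy = I0 * sum(
        ec + et for (_, ec), (_, et) in zip(comp_costs, tx_costs)
    )                                                    # cf. (15)
    # The weighted-sum objective in (18) then trades completion_time off against total_energy.
    return completion_time, total_energy
```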

II-C Problem Formulation

Our goal is to minimize the weighted sum of completion time and total energy consumption of all users. This energy efficient optimization problem can be posed as follows:

(18)
s.t. (18a)
(18b)
(18c)
(18d)
(18e)
(18f)
(18g)

where $f_k^{\max}$ and $p_k^{\max}$ are, respectively, the maximum local computation capacity and the maximum transmit power of user $k$, and the weight in the objective is a constant parameter that characterizes the tradeoff between the completion time $T$ and the total energy consumption $E$. Constraint (18a) indicates that the execution time of the local tasks and the transmission time of every user should not exceed the completion time of the whole FL algorithm. The data transmission constraint is given by (18b), while the bandwidth constraint is given by (18c). Constraints (18d) and (18e) respectively represent the maximum local computation capacity and maximum transmit power limits of all users. The local accuracy constraint is given by (18f).

III Resource Allocation for Weighted Time and Energy Minimization

For the general weighted time and energy minimization problem (18), it is challenging to obtain the globally optimal solution due to nonconvexity. To overcome this challenge, an iterative algorithm with low complexity is proposed in this section.

III-A Iterative Algorithm

The proposed iterative algorithm contains two main steps at each iteration. To solve problem (18), we first optimize the transmission time and the learning accuracy with the bandwidth, power, and computation frequency fixed; then, the bandwidth, power, and computation frequency are updated based on the transmission time and learning accuracy obtained in the previous step. The advantage of this iterative algorithm is that the optimal solution of each subproblem can be obtained at every step.

In the first step, given the bandwidth, power, and computation frequency, problem (18) becomes:

(19)
s.t. (19a)
(19b)
(19c)

where

(20)

The optimal solution of (19) can be derived using the following theorem.

Theorem 2

The optimal solution of problem (19) satisfies:

(21)

and is the optimal solution to:

(22)
s.t. (22a)

where

(23)

, , , and are defined in (C.2).

Proof: See Appendix C.

Theorem 2 shows that it is optimal for each user to transmit with the minimum time. Based on this finding, problem (19) is equivalent to problem (22) with only one variable. However, the objective function of (22) has a fractional form, which is generally hard to minimize directly. By using the parametric approach in [30], we consider the following problem,

(24)

It has been proved in [30] that solving (22) is equivalent to finding the root of an associated nonlinear function of the introduced parameter. Since (24) with a fixed parameter is convex, its optimal solution can be obtained by setting the first-order derivative to zero, which yields a closed-form solution. Thus, problem (22) can be solved by using the Dinkelbach method of [30] (shown as Algorithm 2).

1:  Initialize the Dinkelbach parameter $\lambda^{(0)}$, the iteration number $i = 0$, and the stopping accuracy.
2:  repeat
3:     Calculate the optimal solution of problem (24) for the current parameter $\lambda^{(i)}$.
4:     Update $\lambda^{(i+1)}$ as the objective ratio of (22) evaluated at this solution.
5:     Set $i = i + 1$.
6:  until the Dinkelbach function value falls below the stopping accuracy.
Algorithm 2 The Dinkelbach Method
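The Dinkelbach procedure above is a generic tool for fractional programs; the sketch below shows the standard iteration for minimizing a ratio N(x)/D(x) with D(x) > 0, where `argmin_subproblem` stands for the minimizer of N(x) - lam * D(x) (e.g., the closed-form solution obtained from (24)). All names are our own illustrative assumptions.

```python
def dinkelbach(argmin_subproblem, numerator, denominator,
               lam0=0.0, tol=1e-6, max_iters=100):
    """Generic Dinkelbach method for min_x numerator(x) / denominator(x), denominator > 0."""
    lam = lam0
    x = argmin_subproblem(lam)
    for _ in range(max_iters):
        gap = numerator(x) - lam * denominator(x)     # F(lam); zero at the optimal ratio
        if abs(gap) <= tol:
            break
        lam = numerator(x) / denominator(x)           # Dinkelbach update of the parameter
        x = argmin_subproblem(lam)                    # re-solve the parametric problem, cf. (24)
    return x, lam
```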

In the second step, given the transmission time and learning accuracy calculated in the first step, problem (18) can be simplified as:

(25)
s.t. (25a)
(25b)
(25c)
(25d)
(25e)

Since both the objective function and the constraints can be decoupled, problem (25) separates into two subproblems:

(26)
s.t. (26a)
(26b)

and

(27)
s.t. (27a)
(27b)
(27c)

We can solve (26) using the following theorem.

Theorem 3

The optimal solution of problem (26) satisfies:

(28)

and

(29)

where

(30)
(31)

and is the unique solution to

(32)

Proof: See Appendix D.

Theorem 3 provides the optimal solution of problem (26) in closed form, which greatly reduces the complexity of solving (26). We solve problem (27) using the following theorem.

Theorem 4

The optimal solution of problem (27) satisfies:

(33)

and

(34)

where

(35)

is the solution to

(36)

and satisfies

(37)

Proof: See Appendix E.

1:  Initialize a feasible solution of problem (18) and set the iteration number $i = 0$.
2:  repeat
3:     Given the bandwidth, power, and computation frequency, obtain the optimal transmission time and learning accuracy from problem (19).
4:     Given the transmission time and learning accuracy, obtain the optimal bandwidth, power, and computation frequency from problem (25).
5:     Set $i = i + 1$.
6:  until the objective value of problem (18) converges
Algorithm 3 : Iterative Algorithm

By iteratively solving problem (19) and problem (25), the algorithm that solves problem (18) is given in Algorithm 3. Since the optimal solution of problem (19) or (25) is obtained in each step, the objective value of problem (18) is nonincreasing over the iterations. Moreover, the objective value of problem (18) is lower bounded by zero. Thus, Algorithm 3 always converges to a locally optimal solution.
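The overall structure of Algorithm 3 is a two-block alternating minimization; the skeleton below makes the loop explicit, with `solve_time_accuracy` and `solve_bw_power_freq` standing in for the closed-form steps of Theorems 2-4. The function names, argument packing, and stopping rule are our own illustrative assumptions.

```python
def iterative_resource_allocation(objective, solve_time_accuracy, solve_bw_power_freq,
                                  bw_power_freq_init, tol=1e-4, max_iters=100):
    """Skeleton of the two-block iteration in Algorithm 3."""
    bw_power_freq = bw_power_freq_init
    prev = float("inf")
    for _ in range(max_iters):
        time_accuracy = solve_time_accuracy(bw_power_freq)   # step 3: solve problem (19)
        bw_power_freq = solve_bw_power_freq(time_accuracy)   # step 4: solve problem (25)
        value = objective(time_accuracy, bw_power_freq)
        if prev - value <= tol:  # the objective is nonincreasing and bounded below, so this terminates
            break
        prev = value
    return time_accuracy, bw_power_freq
```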

III-B Complexity Analysis

To solve the general energy-efficient resource allocation problem (18) by using Algorithm 3, the major complexity at each step lies in solving problems (19) and (25). To solve problem (19), the major complexity lies in obtaining the optimal solution according to Theorem 2, which requires running the Dinkelbach method up to a given accuracy. To solve problem (25), the two subproblems (26) and (27) need to be solved; their complexities are determined by the accuracies of the bisection searches used to solve (32), (35), and (36). As a result, the total complexity of the proposed Algorithm 3 scales linearly with the number of users and with the number of outer iterations needed for alternately optimizing the two blocks of variables, up to logarithmic factors in the required accuracies.

The conventional successive convex approximation (SCA) method can also be used to solve problem (18). The complexity of the SCA method follows from [31, Pages 487, 569] and depends on the total number of SCA iterations. Compared to SCA, the complexity of the proposed Algorithm 3 grows only linearly with the number of users $K$.

It should be noted that Algorithm 3 is executed at the BS before running the FL scheme in Algorithm 1. To implement Algorithm 3, the BS needs to gather system parameters such as the channel gains, data sizes, and computation parameters of all users, which can be uploaded by the users before the FL process. Due to their small size, the transmission delay of this information can be neglected. The BS then broadcasts the obtained solution to all users. Since the BS has high computation capacity, the latency of running Algorithm 3 at the BS does not affect the latency of the FL process.

IV Resource Allocation for FL Completion Time Minimization

In this section, we consider the special case of delay-sensitive scenarios, where the completion time of the FL algorithm is more important than the energy consumption. Although the resulting completion time minimization problem (i.e., problem (18) with only the completion time in the objective) is still nonconvex due to constraints (18a)-(18b), we show that the globally optimal solution can be obtained by using the bisection method.

IV-A Optimal Resource Allocation

We define $T^{*}$ as the optimal value of problem (18) when only the completion time is minimized.

Lemma 2

Problem (18) with only the completion time in the objective does not have a feasible solution for any completion time $T < T^{*}$ (i.e., it is infeasible), while it always has a feasible solution for any $T \ge T^{*}$ (i.e., it is feasible).

Proof: See Appendix F.

According to Lemma 2, we can use the bisection method to obtain the optimal completion time $T^{*}$ of problem (18) when only the completion time is minimized.

With a fixed completion time $T$, we still need to check whether there exists a feasible solution satisfying constraints (18a)-(18g). From constraints (18a) and (18c), we can see that it is always efficient to utilize the maximum computation capacity, i.e., $f_k = f_k^{\max}$. From (18b) and (18d), we can see that the completion time is minimized when $p_k = p_k^{\max}$. Substituting the maximum computation capacity and maximum transmit power into (18), the completion time minimization problem becomes:

(38)
s.t. (38a)
(38b)
(38c)
(38d)
(38e)

Next, we provide the necessary and sufficient condition for the feasibility of the constraint set (38a)-(38e).

Lemma 3

With a fixed completion time $T$, the set (38a)-(38e) is nonempty if and only if

(39)

where

(40)

and

(41)

Proof: See Appendix G.

To effectively solve (39) in Lemma 3, we provide the following lemma.

Lemma 4

The function defined in (40) is convex.

Proof: See Appendix H.

Lemma 4 implies that the optimization problem in (39) is a convex problem, which can be solved effectively. By finding the optimal solution of (39), the necessary and sufficient condition for the feasibility of the set (38a)-(38e) can be simplified as shown in the following theorem.

Theorem 5

Set (38a)-(38e) is nonempty if and only if

(42)

where $\eta^{*}$ is the unique solution to the first-order condition associated with the convex function in (40).

Theorem 5 directly follows from Lemmas 3 and 4. Due to the convexity of the function in (40), its derivative is an increasing function of the local accuracy $\eta$. As a result, the unique solution $\eta^{*}$ of the first-order condition can be found efficiently via the bisection method. Based on Theorem 5, the algorithm for obtaining the minimal completion time is summarized in Algorithm 4. Theorem 5 shows that the optimal FL accuracy level $\eta^{*}$ satisfies the first-order condition, i.e., the optimal $\eta$ should be neither too small nor too large. This is because, for small $\eta$, the local computation time (number of local iterations) becomes high, as shown in Lemma 1, while for large $\eta$, the transmission time is long because a large number of global iterations is required, as shown in Theorem 1.

1:  Initialize the lower bound $T_{\min}$, the upper bound $T_{\max}$, and the tolerance $\epsilon_T$.
2:  repeat
3:     Set $T = (T_{\min} + T_{\max})/2$.
4:     Check the feasibility condition (42).
5:     If set (38a)-(38e) has a feasible solution, set $T_{\max} = T$. Otherwise, set $T_{\min} = T$.
6:  until $T_{\max} - T_{\min} \le \epsilon_T$.
Algorithm 4 Completion Time Minimization
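A minimal sketch of the outer bisection in Algorithm 4, assuming a Boolean feasibility oracle `is_feasible(T)` that checks condition (42) for a candidate completion time (e.g., via a convex inner search over the local accuracy); the names and tolerance handling are our own illustrative assumptions.

```python
def minimize_completion_time(is_feasible, T_low, T_high, tol=1e-3):
    """Bisection on the completion time T (skeleton of Algorithm 4).

    Assumes is_feasible(T_low) is False and is_feasible(T_high) is True,
    mirroring Lemma 2.
    """
    while T_high - T_low > tol:
        T_mid = 0.5 * (T_low + T_high)
        if is_feasible(T_mid):        # condition (42) holds: tighten the upper bound
            T_high = T_mid
        else:                         # infeasible: raise the lower bound
            T_low = T_mid
    return T_high
```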

IV-B Complexity Analysis

The major complexity of Algorithm 4 in each iteration lies in checking the feasibility condition (42). To check the inequality in (42), the optimal local accuracy $\eta^{*}$ needs to be obtained by using the bisection method with a given accuracy. As a result, the total complexity of Algorithm 4 is determined by the accuracies of the inner bisection (over the local accuracy) and the outer bisection (over the completion time). The complexity of Algorithm 4 is low since it grows only linearly with the total number of users.

Similar to Algorithm 3 in Section III, Algorithm 4 is executed at the BS before running the FL scheme in Algorithm 1 and hence does not affect the latency of the FL process.

V Numerical Results

For our simulations, we deploy the users uniformly in a square area with the BS located at its center. The path loss model is a function of the user-BS distance (in km), and the shadow fading has a fixed standard deviation (in dB). In addition, the noise power spectral density is fixed (in dBm/Hz). The training data size of each user follows a uniform distribution (in Mbits), and the CPU-cycle parameter $C_k$ of each user is uniformly distributed (in cycles/bit). The effective switched capacitance $\kappa$ used in the local computation model is fixed, as are the parameters of Algorithm 1. Unless specified otherwise, we choose an equal maximum transmit power (in dBm) and an equal maximum computation capacity (in GHz) for all users, a fixed transmit data size $s$ (in kbits), a fixed weight parameter, and a fixed total bandwidth $B$ (in MHz). All statistical results are averaged over 1000 independent runs.

We compare the proposed FL scheme with the FL FDMA scheme with equal bandwidth allocation (labelled 'EB-FDMA'), the FL FDMA scheme with a fixed local accuracy (labelled 'FE-FDMA'), and the FL TDMA scheme in [27] (labelled 'TDMA').

Fig. 4: Completion time versus the maximum transmit power of each user.

V-A Completion Time Minimization

Fig. 4 shows how the completion time changes as the maximum transmit power of each user varies. We can see that the completion time of all schemes decreases with the maximum transmit power of each user. This is because a larger maximum transmit power decreases the transmission time between the users and the BS. We can clearly see that the proposed FL scheme achieves the best performance among all schemes. This is because the proposed approach jointly optimizes the bandwidth and the local accuracy, while the bandwidth is fixed in EB-FDMA and the local accuracy is not optimized in FE-FDMA. Compared to TDMA, the proposed approach can reduce the delay by up to 25.6% due to the following two reasons. First, in FDMA each user can transmit its result to the BS immediately after finishing its own local computation, while in TDMA the wireless transmissions can only start after the local computations of all users are finished, which needs a longer time than FDMA. Second, the noise power experienced by each user in FDMA is lower than in TDMA, since each user is allocated only part of the bandwidth in FDMA while it occupies the whole bandwidth in TDMA; this indicates that the transmission time in TDMA is longer than in FDMA.

Fig. 5: Completion time versus the transmit data size of each user.

The completion time versus the transmit data size of each user is depicted in Fig. 5. Clearly, the completion time monotonically increases with the transmit data size. This is because the transmission time increases with the transmit data size, which consequently increases the completion time. The completion time of TDMA also increases faster with the data size than that of FDMA, because FDMA is more spectrum efficient than TDMA.

Fig. 6: The total bandwidth in (39) versus the local accuracy.

Fig. 6 shows the value of the bandwidth expression in (39) versus the local accuracy $\eta$. From this figure, we observe that it is always a convex function of $\eta$, which verifies the theoretical finding in Lemma 4. It is also found that the optimal $\eta$ decreases as the transmit data size increases. This is because a small $\eta$ leads to a small number of global iterations, which decreases the total transmission time, especially for a large transmit data size.

V-B Weighted Completion Time and Total Energy Minimization

Fig. 7: Weighted completion time and total energy versus maximum transmit power of each user.

Fig. 7 shows the weighted completion time and total energy as a function of the maximum transmit power of each user. In this figure, the EXH-FDMA scheme is an exhaustive search method that can find a near-optimal solution of problem (18); it refers to the proposed iterative algorithm run from 1000 initial starting points, where the solution with the best objective value among the 1000 runs is treated as the near-optimal solution. From this figure, we can observe that the weighted completion time and total energy decrease with the maximum transmit power of each user. Fig. 7 also shows that the proposed FL scheme outperforms the EB-FDMA, FE-FDMA, and TDMA schemes. Moreover, the EXH-FDMA scheme achieves almost the same performance as the proposed FL scheme, which indicates that the proposed approach attains a near-optimal solution.

Fig. 8: Weighted completion time and total energy versus transmit data size of each user.

In Fig. 8, we show the weighted completion time and total energy versus the transmit data size of each user. We can clearly see that the weighted completion time and total energy increase with the data size for all schemes, since more data needs to be transmitted and more transmit energy must be used by the users for wireless transmission. Moreover, the weighted completion time and total energy of the proposed scheme increase more slowly with the data size than those of the TDMA scheme.

Fig. 9: Weighted completion time and total energy versus maximum computation capacity of each user.

In Fig. 9, we present the weighted completion time and total energy versus the maximum computation capacity of each user. Fig. 9 shows that the weighted completion time and total energy of all schemes decrease with the maximum computation capacity of each user. This is because a higher computation capacity decreases the local computation time, yielding a lower completion time. From Fig. 9, we can also see that the weighted completion time and total energy remain stable for a high maximum computation capacity. This is due to the fact that the local computation time decreases as the computation capacity increases, while the local computation energy increases with the computation capacity. As a result, to minimize the weighted completion time and total energy, it is optimal to choose a proper computation capacity for each user, which should be neither too low nor too high. Moreover, from Fig. 9, we can see that the proposed FL scheme outperforms the conventional TDMA scheme, because the users in FDMA can transmit data simultaneously.

Fig. 10: Total energy versus completion time.

Fig. 10 shows the tradeoff between the total energy consumption and the completion time. This figure is obtained by varying the value of the weight parameter. We can see that FDMA outperforms TDMA in terms of total energy consumption, especially for a low completion time. This is because FDMA enables all users to transmit to the BS simultaneously, so each user can spread its transmission over a longer time than in TDMA and thus use a lower transmit power, which results in energy savings compared to TDMA. In particular, for the same completion time, the proposed FL scheme can reduce the energy consumption by up to 33.3%, 50.2%, and 37.6% compared to EB-FDMA, FE-FDMA, and TDMA, respectively.

VI Conclusions

In this paper, we have investigated the problem of energy-efficient transmission and computation resource allocation for FL over wireless communication networks. We have derived the completion time and energy consumption models for FL based on its convergence rate. With these models, we have formulated a joint learning and communication problem so as to minimize a weighted sum of the completion time and the total energy. To solve this problem, we have proposed a low-complexity iterative algorithm for which, at each iteration, we have derived closed-form solutions for the transmission and computation resources. For the special case of minimizing only the completion time, we have obtained the globally optimal solution by using the bisection method. Numerical results have shown that the proposed scheme outperforms conventional schemes in terms of completion time and total energy consumption, especially for a small maximum transmit power and a large transmit data size.

Appendix A Proof of Theorem 1

Before proving Theorem 1, the following lemma is provided.

Lemma 5

Under the assumption on the loss function given in (8), the following conditions hold:

(A.1)

and

(A.2)

Proof: From the definition of the second-order derivative, there always exists a point such that

(A.3)

Combining (8) and (A.3) yields (5).

For the optimal solution of , we always have . Combining (2) and (5), we also have , which indicates that

(A.4)
(A.5)

As a result, we have: