I Introduction and Motivation
Nowadays, energy efficiency (EE) for wireless communications is becoming a major economic and societal challenge [1]. EE maximization is a fractional programming problem which is typically solved by Dinkelbach’s algorithm [2, 3].
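As a reminder of how Dinkelbach’s algorithm works on a single fraction, the following sketch applies it to an assumed toy objective $(x^2+1)/x$ over an assumed interval; the function, interval, and starting point are illustrative only and not taken from any cited work.

```python
# Dinkelbach's algorithm on a toy single-fraction problem:
#   minimize f(x)/g(x) = (x^2 + 1)/x over x in [0.1, 10]
# (illustrative values; not the system model of this paper)

def dinkelbach(f, g, argmin_sub, lam=3.0, tol=1e-9, max_iter=50):
    """Iterate: x_t = argmin_x f(x) - lam*g(x);  lam <- f(x_t)/g(x_t)."""
    for _ in range(max_iter):
        x = argmin_sub(lam)          # solve the parametric subproblem
        new_lam = f(x) / g(x)        # update the fractional parameter
        if abs(new_lam - lam) < tol:
            return x, new_lam
        lam = new_lam
    return x, lam

f = lambda x: x * x + 1.0
g = lambda x: x
# For this toy problem, argmin_x x^2 + 1 - lam*x over [0.1, 10] is x = lam/2 (clipped).
argmin_sub = lambda lam: min(max(lam / 2.0, 0.1), 10.0)

x_opt, val = dinkelbach(f, g, argmin_sub)
# the optimum of this toy problem is x = 1 with ratio value 2
```

The parameter update converges superlinearly here, which matches the fast convergence reported later in Fig. 2.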
However, most works on EE consider single-fraction problems. In the literature, solving a sum-fraction problem is far more difficult than a single-fraction one. For multiple-fraction problems, some specific forms (e.g., the max-min problem) were studied in [4]. A sum-fraction problem is shown to be NP-hard [5]. The methods for finding its global optimum are quite time-demanding (e.g., using branch-and-bound search [6, 7, 8]).
To find stationary-point solutions of the sum EE maximization problem, successive convex approximation methods were used in [9] and a Lagrangian update approach was used in [10]. In [11], the authors proposed a quadratic transform to reformulate the sum-fraction problem into a biconcave one. This method decouples the numerators and denominators by introducing only one additional variable vector. The resulting expression, with separate numerators and denominators, is more tractable for further analysis.
Inspired by the method in [11], in this paper we propose another transform to deal with the sum-fraction minimization (SFMin) problem. As in [12] and [13], one may aim to minimize the sum of inverse EE to obtain more tractable expressions or analysis. The problem considered in this paper is to minimize the sum of inverse EE for a multiple-base-station (BS) system.
In fact, sum-of-inverse minimization (SIMin) leads to more fairness than sum maximization (SMax). To better understand this, let us take the example of maximizing the sum of two positive variables $x_1$ and $x_2$ over a feasible set $\mathcal{X}$, as

$$\max_{(x_1,x_2)\in\mathcal{X}} \; x_1 + x_2, \tag{1}$$

and minimizing their sum-of-inverse as

$$\min_{(x_1,x_2)\in\mathcal{X}} \; \frac{1}{x_1} + \frac{1}{x_2}. \tag{2}$$
On the one hand, it is always true that

$$\frac{2}{\frac{1}{x_1}+\frac{1}{x_2}} \le \frac{x_1+x_2}{2},$$

which can be interpreted as the fact that the inverse of the mean of inverses (the harmonic mean) is a lower bound of the (arithmetic) mean. The bound is tight when $x_1 = x_2$.
On the other hand, denote the solutions of the two above problems as $(x_1^\star, x_2^\star)$ and $(\hat{x}_1, \hat{x}_2)$, respectively. We have $x_1^\star + x_2^\star \ge \hat{x}_1 + \hat{x}_2$ and $\frac{1}{\hat{x}_1} + \frac{1}{\hat{x}_2} \le \frac{1}{x_1^\star} + \frac{1}{x_2^\star}$. After some manipulations, and assuming without loss of generality that $x_1^\star \le x_2^\star$ and $\hat{x}_1 \le \hat{x}_2$, the ratio $\hat{x}_1/\hat{x}_2$ is closer to one than $x_1^\star/x_2^\star$, which implies that the minimization problem obtains more fairness. Note that this does not mean that $\hat{x}_1 = \hat{x}_2$, which would be full fairness. Thus, SIMin achieves a tradeoff between fairness and overall performance.
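The two-term comparison can also be checked numerically. The sketch below assumes a toy budget-allocation feasible set ($x_i = a_i p_i$ with $p_1 + p_2 = 1$), which is not part of this paper's model, and measures fairness with Jain's index:

```python
# Illustration of SMax vs. SIMin fairness for two terms.
# Assumed toy setup (not the paper's model): x_i = a_i * p_i with a
# unit budget p1 + p2 = 1 and gains a1 = 1, a2 = 4, solved by grid search.

a1, a2 = 1.0, 4.0
grid = [i / 1000.0 for i in range(1, 1000)]   # p1 in (0, 1)

def jain(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1 means full fairness."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

smax_p = max(grid, key=lambda p: a1 * p + a2 * (1 - p))               # maximize x1 + x2
simin_p = min(grid, key=lambda p: 1 / (a1 * p) + 1 / (a2 * (1 - p)))  # minimize 1/x1 + 1/x2

smax_x = [a1 * smax_p, a2 * (1 - smax_p)]
simin_x = [a1 * simin_p, a2 * (1 - simin_p)]
# SMax pours the budget into the stronger term; SIMin keeps both terms
# alive, so its Jain index is higher (fairer) at the cost of a smaller sum.
```

In this toy run, SIMin trades a smaller sum for a strictly larger minimum term and a larger Jain index, matching the tradeoff described above.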
For scenarios where the number of terms is larger than 2, it is no longer true that SIMin always achieves better fairness. However, when the number of terms is small, this remains true with very high probability. We will illustrate this later in the numerical results.
In [11], a quadratic transform is used to solve the sum-fraction maximization problem. Although the only difference between that work and ours is maximization versus minimization, the required transforms are quite different.
The contributions of this paper are as follows:

A sum energy-per-rate minimization problem is studied which, to the best of the authors’ knowledge, has not been investigated in the literature before. This problem achieves a better tradeoff between energy efficiency and fairness, which is a major difference from the sum rate-per-energy maximization problem. A particular advantage is that no user is left inactive under this formulation.

A novel method for solving the SFMin problem is proposed. The method decouples the numerators and the denominators, which makes it possible to optimize the numerator part and the denominator part separately using the alternating direction method of multipliers (ADMM). This is a general framework that can be applied to other practical problems.

A closed-form solution is found via the Karush–Kuhn–Tucker (KKT) conditions, which gives more insight into the solution. The closed-form solution is made possible by the proposed method, which decouples the numerators and the denominators.
II General models
In this section, we begin by introducing a framework with a general optimization problem, which will be used later in a particular system model in this paper.
Let us consider an SFMin problem expressed as:

$$\min_{\mathbf{x}\in\mathcal{X}} \; \sum_{n=1}^{N} \frac{f_n(\mathbf{x})}{g_n(\mathbf{x})}, \tag{3}$$

where $N$ is the number of terms and $\mathbf{x}$ is the variable vector whose domain is $\mathcal{X}$; $f_n(\mathbf{x})$ and $g_n(\mathbf{x})$ are functions of $\mathbf{x}$ that always take positive values.
This SFMin problem cannot be solved by conventional Dinkelbach’s algorithm, which is often used in fractional optimization. We propose a fraction transform to solve this problem; we name it ’fraction transform’ because of the fractional terms. As can be seen later in this paper, this method makes it possible to use the ADMM to implement the optimization in a distributed manner and to obtain a closed-form solution for each subproblem.
In the following theorem, we show that it has an equivalent problem, that is,

(4) 

where $\mathbf{y}$ is a newly introduced variable vector.
Theorem 1.
The solution of the minimization problem
(5) 
where $f_n(\mathbf{x})$ and $g_n(\mathbf{x})$ are positive, is the same as that of
(6) 
As an intuition, the vector $\mathbf{y}$ acts like the auxiliary variable in Dinkelbach’s algorithm (where maximization of a single fraction is assumed), changing the relative priorities of the numerators and denominators.
Even though the numerators and the denominators are decoupled, if the problem in (4) is not convex, it is still difficult to solve. In the following theorem, the convexity of the problem in (4) under certain conditions is proved.
Theorem 2.
If $g_n(\mathbf{x})$ is concave and $f_n(\mathbf{x})$ is convex, then the problem in (4) is convex for given $\mathbf{y}$.
Proof.
We prove convexity by showing that the Hessian matrix is positive semidefinite [14].
The Hessian matrix of the first term of the objective is

(9) 

Because the scalar factor is positive and the Hessian of the concave function $g_n(\mathbf{x})$ is negative semidefinite, the corresponding term in the overall Hessian is positive semidefinite.
The Hessian matrix of the second term of the objective is

(10) 

Because the scalar factor is positive and the Hessian of the convex function $f_n(\mathbf{x})$ is positive semidefinite, the corresponding term is positive semidefinite.
Therefore, the objective function in (4) for given $\mathbf{y}$ is convex because its Hessian matrix is positive semidefinite. ∎
From the analysis above, the problem can be solved in an alternating manner. For a given $\mathbf{y}$, the following convex problem is solved:

(11) 

and then $\mathbf{y}$ is updated as

(12) 
This proposed fraction transform enables the use of the ADMM method [16] for solving (11), for example, as
(13)  
(14) 
Since the ADMM formulation is problem-specific, we defer the detailed analysis to the system model considered in the following sections.
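To make the ADMM pattern of (13)–(14) concrete, the following minimal consensus-ADMM sketch in the style of [16] solves an assumed toy problem; all names and values are illustrative and are not the paper's subproblems.

```python
# Minimal consensus ADMM (scaled dual form, cf. Boyd et al. [16]):
#   minimize (x - a)^2 + (z - b)^2  subject to  x = z,
# whose solution is x = z = (a + b)/2. The toy values a, b, rho are assumed.

a, b, rho = 1.0, 5.0, 1.0
x = z = u = 0.0
for _ in range(200):
    # x-update: argmin_x (x - a)^2 + (rho/2)(x - z + u)^2
    x = (2 * a + rho * (z - u)) / (2 + rho)
    # z-update: argmin_z (z - b)^2 + (rho/2)(x - z + u)^2
    z = (2 * b + rho * (x + u)) / (2 + rho)
    u = u + (x - z)          # dual update driven by the primal residual x - z
primal_residual = abs(x - z)
```

The primal residual $|x - z|$ shrinks toward zero as the two blocks reach consensus, which is the quantity monitored later in Fig. 7.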
III System model
Assume a multi-cell downlink scenario where the network has $K$ BSs and $K$ users, as shown in Fig. 1. Each BS serves one user. All BSs share the same band, thereby introducing interference at the user side. The power gain from the $j$th BS to the $k$th user is denoted by $h_{k,j}$.
In this system, $R_k(\mathbf{p})$ is interpreted as the rate for user $k$, $P_k(\mathbf{p})$ is the power consumption of BS $k$, and $\mathbf{p}$ is the vector of transmit powers of all BSs. To avoid confusion, we will replace $\mathbf{x}$ by $\mathbf{p}$ in the following.
Denoting the $k$th element of $\mathbf{p}$ by $p_k$ and the noise power by $\sigma^2$, the rate for user $k$ is expressed as

$$R_k(\mathbf{p}) = \log_2\!\left(1 + \frac{h_{k,k}\, p_k}{\sum_{j\neq k} h_{k,j}\, p_j + \sigma^2}\right), \tag{15}$$

and the power consumption of BS $k$ is expressed as

$$P_k(\mathbf{p}) = \rho\, p_k + P_{c,k}, \tag{16}$$

where $\rho$ is the inverse of the amplifier efficiency and $P_{c,k}$ is the circuit power of BS $k$. From these expressions, we see that $P_k(\mathbf{p})$ is linear; however, $R_k(\mathbf{p})$ is not concave.
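The rate and power models above can be evaluated directly. The sketch below uses assumed two-BS channel gains and power parameters, not the simulation values of Section VI:

```python
import math

# Toy evaluation of the rate and power models (assumed 2-BS values,
# not the paper's simulation parameters).
H = [[1.0, 0.1],    # H[k][j]: power gain from BS j to user k
     [0.2, 0.8]]
noise = 0.01        # noise power
rho = 4.0           # inverse of the amplifier efficiency
Pc = [0.1, 0.1]     # circuit power of each BS

def rate(p, k):
    """R_k: rate of user k treating interference as noise, in bit/s/Hz."""
    interference = sum(H[k][j] * p[j] for j in range(len(p)) if j != k)
    return math.log2(1 + H[k][k] * p[k] / (interference + noise))

def power(p, k):
    """P_k: affine power consumption of BS k."""
    return rho * p[k] + Pc[k]

p = [0.5, 0.5]
siee = sum(power(p, k) / rate(p, k) for k in range(2))  # sum inverse EE objective
```

Note that each `power(p, k)` depends only on BS $k$'s own transmit power, whereas each `rate(p, k)` couples all transmit powers through the interference term; this is exactly the coupling the ADMM splitting exploits later.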
The sum inverse EE (SIEE) minimization problem can be formulated as

$$\min_{\mathbf{p}\in\mathcal{P}} \; \sum_{k=1}^{K} \frac{P_k(\mathbf{p})}{R_k(\mathbf{p})}, \tag{17}$$

where $\mathcal{P}$ is the domain of $\mathbf{p}$, defined by the transmit power constraints.
IV The solution of the minimization problem
In this section, the proposed solution method is developed in three steps. First, the problem is reformulated to deal with the non-concavity of the rate functions. Second, the reformulated problem is solved by the ADMM method, so that the optimization can be implemented in a distributed manner and the different parts of the problem can be solved individually. Third, closed-form solutions are obtained thanks to the convexity of the reformulated problem and the ADMM method.
IV-A Problem reformulation
From Theorem 1, we have the following equivalent problem:
(18) 
To tackle the non-concavity of $R_k(\mathbf{p})$, we introduce the following corollary, which is a direct result of Corollary 2 in [11]:
Corollary 1.
If is decreasing, then
(19) 
is equivalent to
(20) 
Similarly to the update of $\mathbf{y}$, the newly introduced auxiliary variable can be updated in closed form.
Therefore, minimizing the objective is equivalent to
(21) 
where
(22) 
which is biconcave w.r.t. $\mathbf{p}$ and the auxiliary variable. This means the overall objective is biconvex due to Theorem 2. Therefore, the following problem, given in (23), is a multi-convex problem:
IV-B ADMM-based algorithm
The updates of $\mathbf{y}$ and the other auxiliary variable are straightforward. Therefore, we focus on the update of $\mathbf{p}$ in the following.
For given auxiliary variables, the problem is
(24) 
Observing that each power term is only a function of the corresponding BS’s transmit power, and that each BS has its own power constraint, it is natural to decouple the power-related terms from the rate-related terms so as to optimize in a distributed manner. To this end, we use the ADMM method as stated in the following.
IV-C Closed-form solutions
The Lagrangian of (26) can be written as
(29) 
where $p_k^{\max}$ denotes the power constraint for BS $k$. The KKT condition is
(30) 
which gives a closed-form solution as
(31) 
Thanks to the ADMM method, the problem in (27) is now a constraint-free problem, as all the constraints bear on the variables of the subproblem in (26), not on those of (27). Because all the variables in (27) are coupled, this optimization can only be implemented in a centralized manner. This unconstrained convex minimization can be solved by finding the stationary point; the derivative of the objective in (27) is
(32) 
Newton’s method for systems of equations can solve these stationarity equations, where the $k$th equation sets the opposite of the left-hand side of (32) to zero [14]. The formula to update the solution is
(33) 
where the residual vector collects these equations and the Jacobian matrix contains their partial derivatives as entries. To calculate the Jacobian matrix, some further manipulations and calculations are needed. We have
(34) 
where .
By defining
(35) 
we have
(36) 
If , then
(37) 
Thus, by using Newton’s method, the solution of (27) is found, with a closed-form expression for each Newton iteration.
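The Newton update (33) can be sketched generically. The two-equation system below is an assumed toy, not the stationarity system (32), with its Jacobian supplied in closed form:

```python
# Newton's method for a system of equations, x <- x - J(x)^{-1} F(x),
# on an assumed toy system (not the system in (32)):
#   F1(x, y) = x^2 + y^2 - 4 = 0
#   F2(x, y) = x - y       = 0
# whose positive solution is x = y = sqrt(2).

def newton_2d(F, J, x, iters=30):
    for _ in range(iters):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)          # 2x2 Jacobian evaluated at x
        det = a * d - b * c
        # solve J * step = F by Cramer's rule, then take x <- x - step
        s1 = (f1 * d - b * f2) / det
        s2 = (a * f2 - f1 * c) / det
        x = (x[0] - s1, x[1] - s2)
    return x

F = lambda v: (v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1])
J = lambda v: ((2 * v[0], 2 * v[1]), (1.0, -1.0))

sol = newton_2d(F, J, (1.0, 0.5))
```

From a reasonable starting point, the quadratic convergence of Newton's method reaches machine precision in a handful of iterations, consistent with the behavior reported in Fig. 6.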
V Algorithms
In this section, based on the above analysis, we propose and summarize the alternate convex search for solving the problem in Algorithm 1. The algorithm is initialized with the two newly introduced variables of the reformulations, $\mathbf{y}$ and the auxiliary vector, and with the variables of the ADMM method. Convergence is guaranteed since the problem is a multi-convex problem [15].
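The alternate-convex-search pattern of Algorithm 1 can be sketched on an assumed biconvex toy function (unrelated to the actual objective), where each subproblem has a closed-form minimizer and the objective is monotonically non-increasing:

```python
# Alternate convex search on an assumed biconvex toy function
# f(u, v) = (u*v - 1)^2 + 0.1*(u^2 + v^2): for fixed v the problem is
# convex in u (and vice versa), and each subproblem is solved exactly.

def f(u, v):
    return (u * v - 1.0) ** 2 + 0.1 * (u * u + v * v)

u, v = 2.0, 2.0
values = [f(u, v)]
for _ in range(100):
    u = v / (v * v + 0.1)    # argmin_u f(u, v) for fixed v (closed form)
    v = u / (u * u + 0.1)    # argmin_v f(u, v) for fixed u (closed form)
    values.append(f(u, v))
# since each step is an exact minimization over one block, the sequence
# of objective values is monotonically non-increasing, hence convergent
```

The monotone non-increase of the objective is exactly the property that guarantees convergence of Algorithm 1 for multi-convex problems [15].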
VI Numerical and simulation results
In this section, we illustrate the theoretical results by means of numerical and simulation results.
The numerical results in Fig. 2 show that the proposed fraction transform has a convergence speed similar to that of Dinkelbach’s algorithm for fractional programming. In particular, we minimize the same fractional objective using Dinkelbach’s algorithm and the proposed algorithm, respectively. Both algorithms reach roughly the optimal solution within five iterations.
The numerical results in Fig. 3, Fig. 4, and Fig. 5 show the fairness comparison between SIMin and SMax. Three fairness criteria are considered: Jain’s fairness, G’s fairness, and Bossaer’s fairness. Each term in the sum maximization problem (like $x_1$ and $x_2$ in (1)) is drawn from a uniform random variable between a common minimum and a maximum value that differs among the three figures. The percentages of trials in which SIMin achieves better fairness are plotted versus the number of fractional terms, where “better fairness” means a larger value of the corresponding fairness criterion. It is observed that, for a small number of terms, SIMin outperforms SMax in terms of Jain’s fairness and G’s fairness with high probability; in terms of Bossaer’s fairness, it is better from 2 terms to 12 terms. As the dynamic range increases, the probability that SIMin yields better fairness decreases. However, when the number of terms is less than around 15, SIMin is better than SMax for all tested ranges. Note that a large dynamic range, which in the EE context means that one user’s EE is many times another’s, is already quite extreme; Fig. 9 and Fig. 10 illustrate, as examples, the ratio between the highest and the lowest EE actually attained. Therefore, for two terms, it is mathematically proven that SIMin is always fairer than SMax; for fewer than 15 terms, these numerical results show that the fairness of SIMin is most of the time better than that of SMax.

Next, we illustrate our simulation results based on the system model. The system parameters are set as follows: the bandwidth of each subcarrier, the circuit powers, and the transmit power constraints are fixed; the channel model parameters, including the gain factor at $d = 1$ m, follow [13]; and the noise power spectral density, the noise figure, and the inverse of the amplifier efficiency are set accordingly.
The most computationally complex procedure is Newton’s method for solving (27). Fig. 6 compares the convergence speed of Newton’s method for different numbers of BSs. It is observed that the algorithm converges within a few iterations, and the convergence for large problem sizes is also sufficiently fast.
The efficiency of ADMM method also needs to be validated.
The primal residual should converge to a small value. This is illustrated in Fig. 7. We observe that, for various numbers of BSs, the primal residual becomes much smaller than the tolerance, meaning the two variable copies are close enough in their respective subproblems.
The value of the original objective function should be sufficiently larger than the additional penalty term introduced to enforce consensus between the variable copies. This is validated in Fig. 8: for various numbers of BSs, the value of the original objective function is more than 100 times larger than the additional term.
In Fig. 9 and Fig. 10, we compare the proposed optimization with rate maximization for 2 and 3 users, respectively. The performance improvement in terms of SIEE from rate maximization to SIEE minimization is significant. It is assumed that user 1 has a weak channel while users 2 and 3 have stronger channels. As shown in the figures, this improvement comes mainly from the user with the worse channel (user 1). This confirms the fairness improvement brought by SIEE minimization, which reduces the spread of IEE values among users. The improvement is achieved by reducing the transmit power of users with better channels, which is an essential observation for interference channels: reducing the transmit power of a user with a good channel hardly affects its own EE, but improves the EE of users with weak channels.
VII Conclusion
In this paper, a framework for solving SFMin problems is proposed and an SIEE minimization problem is solved for multiple-BS systems. Two new vector variables are introduced to reformulate the original problem into a multi-convex problem. The ADMM is used to further simplify the problem and obtain closed-form solutions. Numerical results confirm the fairness improvement of SIMin compared with SMax. Simulation results show that the algorithm converges fast and that the ADMM method is efficient. The EE performance outperforms that of conventional rate maximization.
Acknowledgment
This work was supported by FNRS (Fonds National de la recherche scientifique) under EOS project Number 30452698. The authors would like to thank UCL for funding the ARC SWIPT project.
References
 [1] G. Li, Z. Xu, C. Xiong, C. Yang, S. Zhang, Y. Chen, and S. Xu, “Energy efficient wireless communications: tutorial, survey, and open issues,” IEEE Wireless Commun. Mag., vol. 18, no. 6, pp. 28–35, Dec. 2011.
 [2] A. Zappone and E. Jorswieck, “Energy efficiency in wireless networks via fractional programming theory,” Foundations Trends Commun. Inf. Theory, vol. 11, no. 3, pp. 185–396, Jun. 2015.
 [3] C. Isheden, Z. Chong, E. Jorswieck, and G. Fettweis, “Framework for link-level energy efficiency optimization with informed transmitter,” IEEE Trans. Wireless Commun., vol. 11, no. 8, pp. 2946–2957, Aug. 2012.
 [4] J.P. Crouzeix, “Algorithms for generalized fractional programming,” Mathematical Programming, vol. 52, no. 1, pp. 191–207, May 1991.
 [5] R. W. Freund and F. Jarre, “Solving the sum-of-ratios problem by an interior-point method,” J. Global Optimization, vol. 19, no. 1, pp. 83–102, 2001.
 [6] H. P. Benson, “Solving sum of ratios fractional programs via concave minimization,” J. Optimization Theory Applicat., vol. 135, no. 1, pp. 1–17, Jun. 2007.
 [7] T. Kuno, “A branch-and-bound algorithm for maximizing the sum of several linear ratios,” J. Global Optimization, vol. 22, pp. 155–174, 2002.
 [8] N. T. H. Phuong and H. Tuy, “A unified monotonic approach to generalized linear fractional programming,” J. Global Optimization, vol. 22, pp. 229–259, 2003.
 [9] O. Tervo, L. Tran, and M. Juntti, “Decentralized coordinated beamforming for weighted sum energy efficiency maximization in multi-cell MISO downlink,” 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, 2015, pp. 1387–1391.
 [10] S. He, Y. Huang, L. Yang, and B. Ottersten, “Coordinated multicell multiuser precoding for maximizing weighted sum energy efficiency,” IEEE Trans. Signal Process., vol. 62, no. 3, pp. 741–751, Feb. 2014.
 [11] K. Shen and W. Yu, “Fractional programming for communication systems—Part I: Power control and beamforming,” IEEE Trans. Signal Process., vol. 66, no. 10, pp. 2616–2630, May 2018.
 [12] T. Wang and L. Vandendorpe, “On the optimum energy efficiency for flat-fading channels with rate-dependent circuit power,” IEEE Trans. Commun., vol. 61, no. 12, pp. 4910–4921, Dec. 2013.
 [13] Z. Wang, I. Stupia, and L. Vandendorpe, “Energy efficient precoder design for MIMO-OFDM with rate-dependent circuit power,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 1897–1902.
 [14] S. Boyd and L. Vandenberghe, “Convex Optimization,” Cambridge University Press, 2004.
 [15] J. Gorski, F. Pfeuffer, and K. Klamroth, “Biconvex sets and optimization with biconvex functions: a survey and extensions,” Math. Meth. of OR, 66(3):373–407, 2007.

 [16] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.