Minimization of Sum Inverse Energy Efficiency for Multiple Base Station Systems

09/10/2019 ∙ by Zijian Wang, et al.

A sum inverse energy efficiency (SIEE) minimization problem is solved. Compared with conventional sum energy efficiency (EE) maximization, minimizing the SIEE achieves better fairness. The paper begins by proposing a framework for solving sum-fraction minimization (SFMin) problems, then uses a novel transform to solve the SIEE minimization problem in a multiple base station (BS) system. After the reformulation into a multi-convex problem, the alternating direction method of multipliers (ADMM) is used to further simplify the problem. Numerical results confirm the efficiency of the transform and the fairness improvement of the SIEE minimization. Simulation results show that the algorithm converges quickly and that the ADMM method is efficient.







I Introduction and Motivation

Nowadays, energy efficiency (EE) for wireless communications is becoming a major economic and societal challenge [1]. EE maximization is a fractional programming problem which is typically solved by Dinkelbach’s algorithm [2, 3].

However, most of the works on EE deal with single-fraction problems. In the literature, solving a sum-fraction problem is far more difficult than solving a single-fraction one. For multiple-fraction problems, some specific forms (e.g., the max-min problem) were studied in [4]. The sum-fraction problem is known to be NP-hard [5], and the methods for finding its global optimum are quite time-demanding (e.g., branch-and-bound search [6, 7, 8]).

To find stationary-point solutions of the sum EE maximization problem, successive convex approximation methods were used in [9] and a Lagrangian update approach was used in [10]. In [11], the authors proposed a quadratic transform to reformulate the sum-fraction problem into a bi-concave one. This method decouples the numerators and denominators by introducing only one variable vector. The resulting expression, with separated numerators and denominators, is always more tractable for further analysis.

Inspired by the method in [11], in this paper, we propose another transform to deal with the sum-fraction minimization (SFMin) problem. As in [12] and [13], one may aim to minimize the sum of inverse EE for more tractable expressions or analysis. Our considered problem in this paper is to minimize the sum of inverse EE for a multiple base station (BS) system.

In fact, the sum-of-inverse minimization (SIMin) leads to more fairness than the sum maximization (SMax). To better understand this, let us take the example of maximizing the sum of two positive variables, as

$\max_{\mathbf{x} \in \mathcal{X}} \; x_1 + x_2,$   (1)

and minimizing their sum-of-inverse as

$\min_{\mathbf{x} \in \mathcal{X}} \; \frac{1}{x_1} + \frac{1}{x_2}.$   (2)

On the one hand, it is always true that $\frac{2}{1/x_1 + 1/x_2} \le \frac{x_1 + x_2}{2}$, which can be interpreted as the fact that the inverse of the mean of inverses (the harmonic mean) is a lower bound of the (arithmetic) mean. The bound is tight when $x_1 = x_2$.

On the other hand, denote the solutions of the two above problems as $\mathbf{x}^\star$ (for SMax) and $\mathbf{x}^\circ$ (for SIMin), respectively. We have $x_1^\star + x_2^\star \ge x_1^\circ + x_2^\circ$ and $\frac{1}{x_1^\circ} + \frac{1}{x_2^\circ} \le \frac{1}{x_1^\star} + \frac{1}{x_2^\star}$. After some manipulations, multiplying the two inequalities and using $(x_1 + x_2)\left(\frac{1}{x_1} + \frac{1}{x_2}\right) = 2 + \frac{x_1}{x_2} + \frac{x_2}{x_1}$, and assuming without loss of generality $x_1^\star \le x_2^\star$ and $x_1^\circ \le x_2^\circ$, we have $\frac{x_2^\circ}{x_1^\circ} \le \frac{x_2^\star}{x_1^\star}$, which implies that the minimization problem obtains more fairness. Note that it does not mean that $x_1^\circ = x_2^\circ$, which would be full fairness. Thus, SIMin achieves a tradeoff between fairness and overall performance.

For scenarios where the number of terms is larger than 2, it is no longer true that SIMin always has better fairness. However, with a small number of terms, it does with very high probability. We will illustrate this later in the numerical results.
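The two-variable comparison above can be checked numerically. The sketch below uses a made-up budget-style feasible set ($x_1 + 4x_2 \le 4$, chosen only for illustration, not from the paper) and a brute-force grid search, with Jain's fairness index as the fairness measure:

```python
# Toy comparison of sum maximization (SMax) vs sum-of-inverse
# minimization (SIMin) on a 2-variable feasible set.
# Hypothetical feasible set: x1 + 4*x2 <= 4 with x1, x2 > 0.

def jain_fairness(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1 means full fairness."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

# Grid search over x2; spend the full budget, so x1 = 4 - 4*x2.
grid = [i / 10000 for i in range(1, 9999)]  # x2 in (0, 1)

smax = max(grid, key=lambda x2: (4 - 4 * x2) + x2)           # maximize x1 + x2
simin = min(grid, key=lambda x2: 1 / (4 - 4 * x2) + 1 / x2)  # minimize 1/x1 + 1/x2

smax_sol = (4 - 4 * smax, smax)
simin_sol = (4 - 4 * simin, simin)

# SMax pushes everything onto the "cheap" variable x1; SIMin balances.
print("SMax :", smax_sol, "Jain =", round(jain_fairness(smax_sol), 3))
print("SIMin:", simin_sol, "Jain =", round(jain_fairness(simin_sol), 3))
```

On this instance, SMax drives $x_2$ to its minimum (very unfair), while SIMin settles at the interior point $(4/3, 2/3)$ with a much higher Jain index.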

In [11], a quadratic transform is used to solve the sum-fraction maximization problem. Although that work and ours differ only in maximization versus minimization, the required transforms are quite different.

The contributions of this paper are as follows:

  • A sum energy-per-rate minimization problem is studied, which, to the best of the authors’ knowledge, has not been investigated in the literature. This formulation achieves a better tradeoff between energy efficiency and fairness, which is a major difference from the sum rate-per-energy maximization problem. A particular advantage is that no user is left inactive under this formulation.

  • A novel method for solving the SFMin problem is proposed. The method decouples the numerators and the denominators, which makes it possible to optimize the numerator part and the denominator part separately using the alternating direction method of multipliers (ADMM). This is a general framework that can be applied to other practical problems.

  • A closed-form solution is found via the Karush–Kuhn–Tucker (KKT) conditions, which gives more insight into the solution. The closed-form solution is made possible by the proposed method, which decouples the numerators and the denominators.

II General models

In this section, we begin by introducing a framework with a general optimization problem, which will be used later in a particular system model in this paper.

Let us consider an SFMin problem expressed as:

$\min_{\mathbf{x} \in \mathcal{X}} \;\; \sum_{n=1}^{N} \frac{A_n(\mathbf{x})}{B_n(\mathbf{x})},$   (3)

where $N$ is the number of terms and $\mathbf{x}$ is the variable vector whose domain is $\mathcal{X}$. $A_n(\mathbf{x})$ and $B_n(\mathbf{x})$ are functions of $\mathbf{x}$, always with positive values.

This SFMin problem cannot be solved by the conventional Dinkelbach’s algorithm, which is often used in fractional optimization. We propose a fraction transform to solve this problem; we name it ’fraction transform’ because the objective contains fractional terms. As can be seen later in this paper, this method makes it possible to use the ADMM to implement the optimization in a distributed manner and to obtain a closed-form solution for each subproblem.

In the following theorem, we show that it has an equivalent problem, that is,


where $\mathbf{y} = [y_1, \ldots, y_N]^{\mathrm{T}}$ is a newly introduced auxiliary vector.

Theorem 1.

The solution of the minimization problem


where $A_n(\mathbf{x})$ and $B_n(\mathbf{x})$ are positive, is the same as that of


The following equation is always true:


Obviously, the optimal $\mathbf{x}$ that minimizes the left-hand side of (7) always minimizes its right-hand side. This also minimizes the objective of (6), because each $y_n$ can always be adapted to


to force the square term in (7) to be zero.

Therefore, the solution of (5) is always part of the solution set of (6). ∎

As an intuition, the vector $\mathbf{y}$ acts like the auxiliary variable in Dinkelbach’s algorithm (where maximization is assumed), changing the relative priorities of the numerators and the denominators.
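For readers unfamiliar with Dinkelbach's algorithm mentioned here, the following minimal sketch applies it to a toy single-ratio minimization, $\min_{x>0} (x^2+1)/x$ (optimum $x=1$, ratio $2$). The functions and the closed-form inner minimizer are illustrative choices, not the paper's:

```python
# Hedged sketch of Dinkelbach's algorithm for single-ratio minimization
# min_x A(x)/B(x): iterate between updating the ratio parameter and
# minimizing A(x) - lam*B(x). The instance below is a made-up example.

def dinkelbach(x0, iters=50):
    """Minimize A(x)/B(x) = (x^2 + 1)/x over x > 0 (optimum: x = 1, ratio 2)."""
    A = lambda x: x * x + 1
    B = lambda x: x
    x = x0
    for _ in range(iters):
        lam = A(x) / B(x)   # current ratio (Dinkelbach parameter)
        x = lam / 2         # argmin_x A(x) - lam*B(x) = x^2 + 1 - lam*x over x > 0
    return x, A(x) / B(x)

x_opt, ratio = dinkelbach(3.0)
print(x_opt, ratio)  # approaches x = 1, ratio = 2
```

Note that this classical scheme handles a single fraction; for a sum of fractions, a single ratio parameter no longer suffices, which is exactly where the auxiliary vector above comes in.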

Even though the numerators and the denominators are decoupled, if the problem in (4) is not convex, it is still difficult to solve. In the following theorem, the convexity of the problem in (4) under some condition is proved.

Theorem 2.

If each $B_n(\mathbf{x})$ is concave and each $A_n(\mathbf{x})$ is convex, then the problem in (4) is convex for given $\mathbf{y}$.


We will prove the convexity by showing that the Hessian matrix of each term of the objective is positive semidefinite [14].

The Hessian matrix of the first term is


Because the scalar coefficient is positive and the Hessian of the concave function $B_n$ is negative semidefinite, this term’s Hessian is positive semidefinite.

The Hessian matrix of the second term is


Because the scalar coefficient is positive and the Hessian of the convex function $A_n$ is positive semidefinite, this term’s Hessian is positive semidefinite.

Therefore, the objective function in (4) for given $\mathbf{y}$ is convex because its Hessian matrix is positive semidefinite. ∎

From the analysis above, the problem can be solved in an alternating manner. The following convex problem is solved for a given $\mathbf{y}$:


and then


is updated.

This proposed fraction transform enables the use of the ADMM method for solving (11), for example, as


Since the ADMM method is problem-specific, we leave the detailed analysis for the considered system model in the following sections.

III System model

Assume a multicell downlink scenario where the network has $M$ BSs and $M$ users, as shown in Fig. 1. Each BS serves one user. All BSs share the same band, therefore introducing interference at the user side. The power gain from the $m$-th BS to the $k$-th user is denoted as $g_{m,k}$.

In this system, the denominator $B_m$ is interpreted as the rate of user $m$, the numerator $A_m$ is the power consumption of BS $m$, and $\mathbf{x}$ is the vector of transmit powers of all BSs. To avoid confusion, we will replace $\mathbf{x}$ by $\mathbf{p}$ in the following.

Fig. 1: System model.

Denoting the $m$-th element in $\mathbf{p}$ as $p_m$ and the noise power as $\sigma^2$, the rate for user $m$ is expressed as

$R_m(\mathbf{p}) = W \log_2\!\left(1 + \frac{g_{m,m}\, p_m}{\sum_{k \neq m} g_{k,m}\, p_k + \sigma^2}\right),$
and the power consumption for BS $m$ is expressed as

$P_m(p_m) = \alpha\, p_m + P_{c,m},$

where $W$ is the bandwidth, $\alpha$ is the inverse of the amplifier efficiency, and $P_{c,m}$ is the circuit power of BS $m$. From these expressions, we know that $P_m(p_m)$ is linear; however, $R_m(\mathbf{p})$ is not concave.

The sum inverse EE (SIEE) minimization problem can be formulated as

$\min_{\mathbf{p} \in \mathcal{P}} \;\; \sum_{m=1}^{M} \frac{P_m(p_m)}{R_m(\mathbf{p})},$

where $\mathcal{P}$ is the domain of $\mathbf{p}$, given by the transmit power constraints.
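A minimal sketch of evaluating this SIEE objective, assuming the standard interference-limited rate and linear power model described above; all parameter values (gains, noise, circuit powers) are made up for illustration:

```python
import math

# Toy evaluation of the sum inverse EE objective for a 2-BS network,
# assuming the interference-limited rate and linear power model above.
# All numerical values are illustrative, not from the paper.

W = 1.0           # bandwidth (normalized)
alpha = 4.0       # inverse amplifier efficiency
p_c = [0.1, 0.1]  # circuit power per BS
noise = 0.01
# g[m][k]: power gain from BS m to user k (hypothetical values)
g = [[1.0, 0.1],
     [0.2, 0.8]]

def rate(p, m):
    """Rate of user m under interference from all other BSs."""
    interf = sum(g[k][m] * p[k] for k in range(len(p)) if k != m)
    return W * math.log2(1 + g[m][m] * p[m] / (interf + noise))

def power(p, m):
    """Consumed power of BS m: amplifier term plus circuit power."""
    return alpha * p[m] + p_c[m]

def siee(p):
    """Sum inverse energy efficiency: energy spent per bit, summed over BSs."""
    return sum(power(p, m) / rate(p, m) for m in range(len(p)))

print(siee([0.5, 0.5]))
```

Each summand has a coupled, non-concave rate in the denominator, which is what the reformulation in the next section addresses.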

IV The solution of the minimization problem

In this section, the solution based on the proposed method is studied in three steps. First, the problem is reformulated to deal with the non-concavity of the rate functions. Second, the reformulated problem is solved by the ADMM method; the optimization can then be implemented in a distributed manner, with different parts of the problem solved individually. Third, closed-form solutions are obtained thanks to the convexity of the reformulated problem and the ADMM method.

IV-A Problem reformulation

From Theorem 1, we have the following equivalent problem:


To tackle the non-concavity of $R_m(\mathbf{p})$, we introduce the following corollary, which is a direct result of Corollary 2 in [11]:

Corollary 1.

If the outer function is decreasing, then


is equivalent to


Similarly to the update of $\mathbf{y}$, the newly introduced auxiliary variable can be updated in closed form.

Therefore, the minimization is equivalent to




which is biconcave w.r.t. the two variable blocks. This means the overall objective is biconvex due to Theorem 2. Therefore, the following problem is a multi-convex problem:


which is equivalent to problems (17) and (18). A partial minimum can be efficiently found by alternate convex search, which optimizes one variable block while fixing the others [15].
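The alternate convex search idea can be illustrated on a simple biconvex toy function, where each coordinate update is a convex problem with a closed-form solution. The function below is an illustrative stand-in, not the paper's objective:

```python
# Sketch of alternate convex search on a biconvex toy function:
# f(a, b) = (a*b - 6)^2 + 2*(a^2 + b^2).
# Fixing either variable makes f a convex quadratic in the other,
# so each coordinate update has a closed form.

def f(a, b):
    return (a * b - 6) ** 2 + 2 * (a * a + b * b)

a, b = 1.0, 1.0
for _ in range(100):
    a = 6 * b / (b * b + 2)   # argmin over a with b fixed (set df/da = 0)
    b = 6 * a / (a * a + 2)   # argmin over b with a fixed (set df/db = 0)

print(a, b, f(a, b))  # a partial minimum at a = b = 2
```

Each inner step never increases the objective, so the method converges to a partial minimum, exactly the guarantee invoked above.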

IV-B ADMM-based algorithm

The updates of the auxiliary variables are straightforward. Therefore, we focus on the update of $\mathbf{p}$ in the following.

For given auxiliary variables, the problem is


Observing that $P_m$ is only a function of $p_m$ and that each BS has its own power constraint, this suggests decoupling the power terms from the rate terms so that the optimization can be carried out in a distributed manner. To this end, we use the ADMM method as stated in the following.

The augmented Lagrangian of (24) can be expressed as [16]


where the indicator function takes value 0 if the power constraint is satisfied and $+\infty$ otherwise.

So the scaled form of ADMM is


The update in (28) is straightforward. Therefore, we will study how to solve (26) and (27) in the following.
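The structure of the scaled-form ADMM above can be illustrated on a toy problem that mirrors the same split: a smooth term on one variable and the constraint placed entirely on its copy. The instance below (minimize $(x-3)^2$ subject to $x \in [0,1]$) is illustrative only:

```python
# Minimal scaled-form ADMM sketch: minimize f(x) + I_C(z) s.t. x = z,
# with f(x) = (x - 3)^2 and C = [0, 1]. The constraint sits entirely on
# the copy variable z, as in the split discussed above.

rho = 1.0
x, z, u = 0.0, 0.0, 0.0   # primal variables and scaled dual variable

for _ in range(100):
    # x-update: argmin (x-3)^2 + (rho/2)*(x - z + u)^2, in closed form
    x = (2 * 3 + rho * (z - u)) / (2 + rho)
    # z-update: projection of x + u onto the constraint set [0, 1]
    z = min(1.0, max(0.0, x + u))
    # dual update on the scaled multiplier (primal residual x - z)
    u = u + (x - z)

print(x, z)  # both approach the constrained optimum 1.0
```

The x-update is an unconstrained smooth minimization and the z-update a projection, which is the same division of labor as between (26) and (27).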

IV-C Closed-form solutions

The Lagrangian of (26) can be written as


where $P_m^{\max}$ is the power constraint for BS $m$. The KKT conditions are


which gives a closed-form solution as


Thanks to the ADMM method, the problem in (27) is now constraint-free, as all the constraints are handled in subproblem (26), not in (27). Because all the variables in (27) are coupled, this step can only be implemented in a centralized manner. This unconstrained convex minimization can be solved by finding the stationary point, where the derivative of the objective in (27) is


Newton’s method for systems of equations can solve these equations, where the $m$-th equation is the opposite of the left-hand side of (32) [14]. Note that the formula to update the solution is


where the step is computed using the Jacobian matrix, whose $(m,k)$-th entry is the partial derivative of the $m$-th equation with respect to the $k$-th variable. To calculate the Jacobian matrix, some further manipulations are needed. We have


where .

By defining


we have


If , then


Thus, by using Newton’s method, the solution of (27) is found, and a closed-form expression is available for each Newton iteration.
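The Newton iteration used here can be sketched on a small made-up 2×2 system with an explicit Jacobian; this is not the paper's system of equations, just the generic update $\mathbf{p} \leftarrow \mathbf{p} - \mathbf{J}^{-1}\mathbf{F}(\mathbf{p})$:

```python
# Hedged sketch of Newton's method for a system of nonlinear equations
# with an explicit analytic Jacobian. The 2x2 system is illustrative only.

def F(p):
    """Residuals of the system F(p) = 0; one root is p = (1, 2)."""
    p1, p2 = p
    return [p1 * p1 + p2 - 3, p1 + p2 * p2 - 5]

def jacobian(p):
    """Analytic Jacobian of F."""
    p1, p2 = p
    return [[2 * p1, 1.0], [1.0, 2 * p2]]

def newton(p, iters=20):
    """Newton iteration p <- p - J(p)^{-1} F(p), 2x2 solve in closed form."""
    for _ in range(iters):
        f1, f2 = F(p)
        (a, b), (c, d) = jacobian(p)
        det = a * d - b * c
        # Solve J * delta = F by Cramer's rule, then step p <- p - delta
        d1 = (d * f1 - b * f2) / det
        d2 = (a * f2 - c * f1) / det
        p = [p[0] - d1, p[1] - d2]
    return p

sol = newton([1.5, 1.5])
print(sol)  # close to [1.0, 2.0]
```

With an exact Jacobian and a good starting point, convergence is quadratic, consistent with the fast convergence reported for Fig. 6.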

V Algorithms

In this section, based on the analysis above, we propose and summarize the alternate convex search used to solve the problem in Algorithm 1. The algorithm begins by initializing the variables newly introduced for the reformulations and those used by the ADMM method. Convergence is guaranteed since the problem is a multi-convex problem.

  Initialize the reformulation variables and the ADMM variables; set the iteration counters
  while true do
     while true do
        Update p using (26)
        Update the copy variable using (27)
        Update the per-BS auxiliary variables for each BS
        Update the scaled dual variable using (28)
        if the ADMM stopping criterion is met then break
        end if
     end while
     Update the reformulation variables for each term
     if the outer stopping criterion is met then break
     end if
  end while
Algorithm 1 Alternate convex search
Fig. 2: Comparison between Dinkelbach and the proposed algorithm.
Fig. 3: Fairness comparison: small range (5)
Fig. 4: Fairness comparison: medium range (10).
Fig. 5: Fairness comparison: large range (50).

VI Numerical and simulation results

In this section, we will illustrate the theoretical results by means of numerical results and simulation results.

The numerical results in Fig. 2 show that the proposed fraction transform has a convergence speed similar to that of Dinkelbach’s algorithm for fractional programming. In particular, we minimize a single-ratio objective using Dinkelbach’s algorithm and the proposed algorithm, respectively. Both algorithms reach roughly the optimal solution within five iterations.

Fig. 6: Convergence speed of Newton’s method for different .
Fig. 7: The convergence of the primal residual of the ADMM in the proposed algorithm.
Fig. 8: The ratio between the value of objective function and the value of the additional term.

The numerical results in Fig. 3, Fig. 4, and Fig. 5 show the fairness comparison between SIMin and SMax. Three criteria of fairness are considered: Jain’s fairness, G’s fairness, and Bossaer’s fairness. Each term in the sum maximization problem (like $x_1$ and $x_2$ in (1)) is drawn from a uniform random variable whose maximum value equals 5, 10, and 50, respectively, in the three figures. The percentages of cases in which SIMin has better fairness are plotted versus the number of fractional terms; ’better fairness’ means a larger value of the corresponding fairness criterion. It is observed that, for a small dynamic range, SIMin outperforms SMax under Jain’s fairness and G’s fairness with high probability, and under Bossaer’s fairness from 2 to 12 terms. As the dynamic range increases, the probability that SIMin has better fairness decreases. However, when the number of terms is less than around 15, SIMin is better than SMax for all considered ranges. Please note that a dynamic range of 50, which, in the context of EE, means one user’s EE can be 50 times another’s, is already quite large; one can refer to Fig. 9 and Fig. 10 as examples of the EE spread observed in practice. Therefore, for two terms it is mathematically proven that SIMin is fairer than SMax, and for fewer than about 15 terms these numerical results show that the fairness of SIMin is most of the time better than that of SMax.

Next, we illustrate our simulation results based on the system model. The subcarrier bandwidth is set in kHz and the circuit power in mW; the per-BS power constraints are also given in mW. The channel model uses a distance-dependent path loss with a gain factor at d = 1 m [13]. The noise power spectral density is given in dBm/Hz, together with a noise figure in dB/Hz, and a fixed value is chosen for the inverse amplifier efficiency.

The most complex procedure is Newton’s method for solving (27). Fig. 6 compares the convergence speed of Newton’s method for different numbers of BSs. It is observed that the algorithm converges within a few iterations, and the convergence remains sufficiently fast for large problem sizes.

The efficiency of ADMM method also needs to be validated.

The primal residual should converge to a small value. This is illustrated in Fig. 7. We observe that, for various numbers of BSs, the primal residual becomes much smaller than the scale of the variables, meaning that the two variable copies are close enough in their respective subproblems.

The value of the original objective function should be sufficiently larger than the additional term introduced to make the variable copies converge. This is validated in Fig. 8. We observe that, for various numbers of BSs, the value of the original objective function is more than 100 times larger than the additional term.

Fig. 9: Comparison of transmit power, individual IEE, and sum IEE for 2 users.
Fig. 10: Comparison of transmit power, individual IEE, and sum IEE for 3 users.

In Fig. 9 and Fig. 10, we compare the proposed optimization with rate maximization for 2 and 3 users, respectively. The performance improvement in terms of SIEE from rate maximization to SIEE minimization is significant. It is assumed that user 1 has a weak channel while users 2 and 3 have stronger channels. As shown in the figures, the improvement comes mainly from the user with the worse channel (user 1). This confirms the fairness improvement brought by the SIEE minimization, which reduces the spread of IEE values among users. The improvement is achieved by reducing the transmit power of users with better channels, which is an essential observation for interference channels: reducing the transmit power of a user with a good channel does not much affect its own EE, but improves the EE of users with weak channels.

VII Conclusion

In this paper, a framework for solving SFMin problems is proposed and an SIEE minimization problem is solved for multiple BS systems. Two new vector variables are introduced to reformulate the original problem into a multi-convex problem. The ADMM is used to further simplify the problem and obtain closed-form solutions. Numerical results confirm the fairness improvement of SIMin compared with SMax. Simulation results show that the algorithm converges quickly and that the ADMM method is efficient. The EE performance outperforms that of conventional rate maximization.


This work was supported by FNRS (Fonds National de la recherche scientifique) under EOS project Number 30452698. The authors would like to thank UCL for funding the ARC SWIPT project.


  • [1] G. Li, Z. Xu, C. Xiong, C. Yang, S. Zhang, Y. Chen, and S. Xu, “Energy efficient wireless communications: tutorial, survey, and open issues,” IEEE Wireless Commun. Mag., vol. 18, no. 6, pp. 28-35, Dec. 2011.
  • [2] A. Zappone and E. Jorswieck, “Energy efficiency in wireless networks via fractional programming theory,” Foundations Trends Commun. Inf. Theory, vol. 11, no. 3, pp. 185–396, Jun. 2015.
  • [3] C. Isheden, Z. Chong, E. Jorswieck, and G. Fettweis, “Framework for link-level energy efficiency optimization with informed transmitter,” IEEE Trans. Wireless Commun., vol. 11, no. 8, pp. 2946–2957, Aug. 2012.
  • [4] J.-P. Crouzeix, “Algorithms for generalized fractional programming,” Mathematical Programming, vol. 52, no. 1, pp. 191–207, May 1991.
  • [5] R. W. Freund and F. Jarre, “Solving the sum-of-ratios problem by an interior-point method,” J. Global Optimization, vol. 19, no. 1, pp. 83– 102, 2001.
  • [6] H. P. Benson, “Solving sum of ratios fractional programs via concave minimization,” J. Optimization Theory Applicat., vol. 135, no. 1, pp. 1–17, Jun. 2007.
  • [7] T. Kuno, “A branch-and-bound algorithm for maximizing the sum of several linear ratios,” J. Global Optimization, vol. 22, pp. 155–174, 2002.
  • [8] N. T. H. Phuong and H. Tuy, “A unified monotonic approach to generalized linear fractional programming,” J. Global Optimization, vol. 22, pp. 229–259, 2003.
  • [9] O. Tervo, L. Tran and M. Juntti, “Decentralized coordinated beamforming for weighted sum energy efficiency maximization in multi-cell MISO downlink,” 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, 2015, pp. 1387-1391.
  • [10] S. He, Y. Huang, L. Yang and B. Ottersten, “Coordinated Multicell Multiuser Precoding for Maximizing Weighted Sum Energy Efficiency,” IEEE Trans. Signal Process., vol. 62, no. 3, pp. 741-751, Feb.1, 2014.
  • [11] K. Shen and W. Yu, “Fractional Programming for Communication Systems—Part I: Power Control and Beamforming,” IEEE Trans. Signal Process., vol. 66, no. 10, pp. 2616-2630, May 15, 2018.
  • [12] T. Wang and L. Vandendorpe, “On the Optimum Energy Efficiency for Flat-Fading Channels with Rate-dependent Circuit Power,” IEEE Trans. Commun., vol. 61, no. 12, pp. 4910-4921, December 2013.
  • [13] Z. Wang, I. Stupia and L. Vandendorpe, “Energy efficient precoder design for MIMO-OFDM with rate-dependent circuit power,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 1897-1902.
  • [14] S. Boyd and L. Vandenberghe, “Convex Optimization,” Cambridge University Press, 2004.
  • [15] J. Gorski, F. Pfeuffer, and K. Klamroth, “Biconvex sets and optimization with biconvex functions: a survey and extensions,” Math. Meth. of OR, 66(3):373–407, 2007.
  • [16] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1-122, 2010.