Wireless energy and information transfer in networks with hybrid ARQ

02/12/2019 · by Mehdi Salehi Heydar Abad, et al.

In this paper, we consider a class of wireless powered communication devices that use a hybrid automatic repeat request (HARQ) protocol to ensure reliable communications. In particular, we analyze the trade-off between accumulating mutual information and harvesting RF energy at the receiver of a point-to-point link over a time-varying independent and identically distributed (i.i.d.) channel. The transmitter is assumed to have a constant energy source, while the receiver relies solely on the RF energy harvested from the received signal. At each time slot, the incoming RF signal is split between information accumulation and energy accumulation with the objective of minimizing the expected number of re-transmissions. A major finding of this work is that the optimal policy minimizing the expected number of re-transmissions utilizes the incoming RF signal either exclusively to harvest energy or exclusively to accumulate mutual information. This finding enables achieving an optimal solution in feasible time by converting a two-dimensional uncountable-state Markov decision process (MDP) with a continuous action space into a countable-state MDP with a binary decision space.







I Introduction

In simultaneous wireless information and power transfer (SWIPT), the incoming RF signal is used both for energy harvesting and for decoding information bits. The concept was first introduced by Varshney in [1], characterizing the rates at which energy and reliable information can be transferred over a single point-to-point noisy link. It was later extended to frequency-selective channels with additive white Gaussian noise (AWGN) in [2]. In [3], the authors examined separated and co-located information and energy receivers in a multiple-input multiple-output (MIMO) wireless broadcast system. Specifically, for the co-located receiver case, two practical designs were investigated, namely time switching (TS) and power splitting (PS). In TS policies, the incoming RF signal is entirely utilized for either energy or information purposes, whereas in PS policies the incoming signal is divided into two streams, one utilized for energy and the other for information.

In [4], the optimal PS policy at the receiver was characterized to balance various trade-offs between the maximum ergodic capacity and the maximum average harvested energy in a single-input-single-output system. In addition, the optimal TS policy at the receiver is characterized for a point-to-point link over a narrow band flat-fading channel in [5].

In inherently error-prone wireless communication systems, re-transmissions, triggered by decoding errors, have a major impact on the energy consumption of wireless devices. Hybrid automatic repeat request (HARQ) schemes are frequently used to reduce the impact of re-transmissions by controlling them using various channel coding techniques. Nevertheless, this reduction comes at the expense of extra processing energy associated with the enhanced error-correction decoders. A receiver employing HARQ encounters two major energy-consuming operations: (1) sampling, or analog-to-digital conversion (ADC), which includes all RF front-end processing, and (2) decoding. The energy consumption attributed to sampling, quantization, and decoding plays a critical role in energy-constrained networks, which makes its study a non-trivial problem. The work in [6] investigated the performance of HARQ over an RF-energy harvesting point-to-point link, where power transfer occurs over the downlink and information transfer over the uplink. The authors studied the use of TS when two HARQ mechanisms are used for information transfer: simple HARQ (SH) and HARQ with Chase combining (CC) [7]. More recently, [8] studied the performance of HARQ in RF energy harvesting receivers. In particular, the receiver employs a specific time switching policy to either harvest energy or accumulate mutual information in order to minimize the number of re-transmissions. However, that work does not consider an accurate model for the energy consumption of the receiver components.

In this work, we consider a point-to-point link where a transmitter employs HARQ to deliver a message reliably to the receiver. The receiver has no energy source, and thus relies on harvesting RF energy from the same signal bearing the information. The channel is time-varying, so the amounts of energy harvested and information collected vary with the quality of the channel. The receiver aims to split the incoming RF signal between energy harvesting and information decoding so that the expected number of re-transmissions is minimized. We develop a novel Markovian framework to prove that the optimal policy is a TS policy. As a consequence of this finding, we convert a two-dimensional uncountable-state Markov decision process (MDP) with a continuous action space into a countable-state MDP with a binary decision space, thus enabling us to use the value iteration algorithm (VIA) to obtain the minimum expected number of re-transmissions in feasible time. Through numerical results, we show that the optimal policy is not unique, and we propose three heuristic policies that achieve the same performance as the VIA.

II System Model

II-A Channel Model and Receiver Architecture

Consider a point-to-point, time-varying wireless link between a transmitter-receiver pair. The wireless channel is modeled according to an i.i.d. two-state block-fading model where the states are GOOD and BAD. Note that the two-state channel process is an approximation of a more general multi-state time-varying channel, in which each state supports a maximum transmission rate; here, we employ the two-state process due to its analytical tractability. Let $s_t$ be the state of the channel at time slot $t$, where a BAD state is denoted by $s_t = B$ and a GOOD state by $s_t = G$. We let the probability that the channel is in a GOOD state be $p$, i.e., $\Pr(s_t = G) = p$. Let $h_t$ be the instantaneous complex channel gain corresponding to state $s_t$. We assume that the channel state information (CSI) is available neither at the transmitter nor at the receiver, due to the high computational and energy costs of transmitting and receiving the pilot signal necessary for measuring the CSI.

We consider a communication scheme where the transmitter is connected to a power source with an unlimited energy supply. The receiver is equipped with a separate rectifier circuit for EH and a transceiver for information decoding (ID), both connected to the same antenna. We consider a co-located EH and ID architecture in which the EH and ID circuits share the same antenna to enable a compact structure. The incoming RF signal is fed to the EH and ID circuits according to the time switching (TS) and power splitting (PS) architectures.

Time is slotted and each slot has a length of $N$ channel uses. We assume that $N$ is sufficiently large so that we can apply information-theoretic arguments. The instantaneous achievable rate of the receiver is the maximum achievable mutual information between the output symbols of the transmitter and the input symbols at the receiver. Let us denote the achievable rate of the receiver at time $t$ by $C_t$. As $N \to \infty$, $C_t$ approaches the Shannon rate, and it can be computed as:

$C_t = \log_2\left(1 + |h_t|^2 P\right),$

where $h_t$ is the channel gain at time $t$ and $P$ is the noise-normalized transmit power of the transmitter. Let $C_G$ and $C_B$ be the achievable rates corresponding to channel states GOOD and BAD, respectively. In particular,

$C_G = \log_2\left(1 + |h_G|^2 P\right), \qquad C_B = \log_2\left(1 + |h_B|^2 P\right).$
Since the instantaneous channel states are not known prior to transmission, for reliability, we employ an HARQ scheme based on mutual information, namely HARQ with incremental redundancy (IR) [9]. Let us denote a message of the transmitter by $m$, where $R$ denotes the rate of the information. Every incoming transport-layer message at the transmitter is encoded using a mother code of length $KN$ channel uses. The encoded message is divided into $K$ blocks, each of length $N$ channel uses, with variable redundancy, and it is represented by $m_1, \ldots, m_K$. Let us assume that $m_1$ is transmitted at $t = 1$. If $m_1$ is successfully decoded, then the receiver sends a 1-bit, error-free, zero-delay acknowledgement (ACK) message; otherwise, the transmitter times out after waiting a certain time period. In case no ACK is received, the transmitter transmits $m_2$ at time slot $t = 2$, and the receiver combines the previous block with $m_2$. This procedure is repeated until the receiver accumulates $R$ bits of mutual information or the maximum number of blocks, $K$, has been sent. We assume that $K$ is chosen sufficiently large so that the probability of decoding failure, due to exceeding the maximum number of re-transmissions, is approximately equal to zero. With the HARQ-IR scheme, after $k$ re-transmissions, the amount of accumulated mutual information at the receiver is $\sum_{t=1}^{k} C_t$. The receiver, given that it has sufficient energy, can perform a successful decoding attempt after $k$ re-transmissions if the amount of accumulated mutual information exceeds the information rate of the transmitted message, i.e., $\sum_{t=1}^{k} C_t \geq R$. We assume that each message is encoded at rate $R = C_G$, so that a transmission in a GOOD channel state carries all the information needed for decoding.
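To make the HARQ-IR accumulation rule above concrete, the following sketch simulates the number of (re-)transmissions until the accumulated mutual information reaches the message rate, ignoring the receiver's energy constraint. The values $C_G = 1$, $C_B = 0.5$, $R = C_G$, and $p = 0.5$ are illustrative assumptions, not taken from the paper.

```python
import random

def harq_ir_retransmissions(p_good, c_good=1.0, c_bad=0.5, rate=1.0):
    """Count transmitted blocks until the mutual information
    accumulated over i.i.d. GOOD/BAD slots reaches `rate`
    (the receiver's energy constraint is ignored in this sketch)."""
    acc, k = 0.0, 0
    while acc < rate:
        k += 1
        acc += c_good if random.random() < p_good else c_bad
    return k

random.seed(1)
trials = [harq_ir_retransmissions(0.5) for _ in range(20000)]
mean_k = sum(trials) / len(trials)
print(mean_k)  # analytically 0.5*1 + 0.5*2 = 1.5 for these values
```

With these parameters, a GOOD first slot decodes immediately and a BAD first slot always finishes on the second block, so the sample mean should be close to 1.5.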

II-B Energy Harvesting and Consumption Model

In the following, we assume that the receiver has a sufficiently large battery and memory, so that there is no energy or information overflow. The receiver utilizes a power splitting policy, where $\rho_t \in [0, 1]$ denotes the power splitting parameter at the beginning of time slot $t$. Note that $\rho_t = 0$ indicates that the received signal is used solely for mutual information accumulation, and $\rho_t = 1$ indicates that the received signal is used solely for harvesting energy. Any $\rho_t \in (0, 1)$ refers to the case where the received signal is used for both harvesting energy and mutual information accumulation. Note that TS can be considered a special case of PS with $\rho_t \in \{0, 1\}$.

We incorporate a simplified EH model, which facilitates the formulation of a tractable optimization problem. In this model, the receiver harvests a maximum of $E_h$ energy units in the GOOD state and zero units during the BAD state (the maximum energy is harvested if the received signal is completely directed to the EH circuit, i.e., $\rho_t = 1$). The reason that no energy can be harvested during a BAD state is that a typical EH device has two stages: a rectifier stage that converts the incoming alternating current (AC) radio signals into direct current (DC), and a DC-DC converter that boosts the converted DC signal to a higher DC voltage value. The main limitation in an EH device is that every DC-DC converter has a minimum input voltage threshold below which it cannot operate. Hence, when the channel is in a BAD state, the input voltage is below the threshold of the DC-DC converter, so no energy can be harvested. Although the receiver cannot harvest any RF energy in a BAD state, it can still accumulate mutual information, since the ID circuit operates at a much lower power sensitivity than the EH circuit [10].

The energy consumption of HARQ was recently investigated in [11]. Energy is consumed at the start-up of the receiver, during decoding, for operating the passband receiver elements (low-noise amplifiers, mixers, filters, etc.), and for providing feedback to the transmitter. In order to develop a tractable analytical model, we combine the individual costs of energy into two parameters only: the receiver consumes $E_d$ energy units for a decoding attempt, and one energy unit for each mutual information accumulation event per time slot, i.e., for operating the passband receiver elements (one energy unit is normalized to the energy cost of operating the RF transceiver circuit during one time slot).
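The per-slot bookkeeping implied by this model can be sketched as follows. The value of e_h (maximum harvest per GOOD slot) and the convention that only the harvesting fraction rho of the received power charges the battery are assumptions of this sketch, with an illustrative parameter value.

```python
def battery_update(b, channel_good, rho, e_h=4.0):
    """One-slot battery evolution under power splitting (a sketch).

    b:            current battery level (energy units)
    channel_good: True if the slot is GOOD (harvesting possible)
    rho:          fraction of received power sent to the EH circuit
                  (rho = 1: harvest only; rho = 0: information only)
    e_h:          assumed maximum harvest per GOOD slot

    Listening to the signal (rho < 1) costs 1 unit to run the
    transceiver; a pure-harvesting slot (rho = 1) costs nothing.
    """
    harvested = rho * e_h if channel_good else 0.0
    spent = 1.0 if rho < 1.0 else 0.0
    return b + harvested - spent

# A GOOD harvesting-only slot adds e_h; a BAD information slot only drains.
print(battery_update(5.0, True, 1.0),   # 9.0
      battery_update(5.0, False, 0.0),  # 4.0
      battery_update(5.0, True, 0.5))   # 5 + 2 - 1 = 6.0
```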

III Expected Number of Re-transmissions

The receiver requires at least $E_d$ units of energy and $R$ bits of accumulated information before it can successfully decode the transmitted packet. The objective is to optimally determine the power splitting ratio between EH and ID so that the transmission is successfully decoded with minimum delay at the receiver. Note that $\rho_t$ depends on the current battery level, $B_t$, and the amount of information accumulated, $I_t$.


A scheduling policy $\pi$ is a sequence of decision rules such that the $t$th element of $\pi$ determines the power splitting ratio at the $t$th time slot based on the system state observed at the beginning of that time slot, for $t = 1, 2, \ldots$. Similarly, a tail scheduling policy $\pi_t$ is a sequence of decision rules that determines the power splitting ratios for the time slots from $t$ to $\infty$.

The problem can be mathematically modeled as a two-dimensional Markov chain. Let the states of the Markov chain be $(B_t, I_t)$, where $B_t$ is the total residual battery level and $I_t$ is the total accumulated mutual information normalized by $C_B$. For clarity of presentation, in the rest of the paper, we assume that $C_G$ is an integer multiple of $C_B$, so that $I_t$ takes values on a countable set.

III-A Dynamic Programming Formulation

Let $D_t^{\pi}$ be an indicator function taking a value of $1$ if the message can be decoded at the end of slot $t$ under policy $\pi$, and a value of $0$ otherwise. Then, the optimization problem we aim to solve is given as,

$\min_{\pi} \; \mathbb{E}^{\pi}\left[\sum_{t=1}^{\infty} \left(1 - D_t^{\pi}\right)\right].$
Let $V_{\beta}^{\pi}(s)$ be the expected discounted reward with initial state $s$ under policy $\pi$ with discount factor $\beta \in (0, 1)$. The expected discounted reward has the following expression

$V_{\beta}^{\pi}(s) = \mathbb{E}^{\pi}\left[\sum_{t=0}^{\infty} \beta^{t}\, r(s_t) \;\middle|\; s_0 = s\right],$

where $\mathbb{E}^{\pi}$ is the expectation with respect to the policy $\pi$, $t$ is the time index, $\rho_t$ is the action chosen at time $t$, and $r(s_t)$ is the instantaneous reward acquired when the current state is $s_t$.

In the rest of the paper, we use $s_t$ and $(B_t, I_t)$ interchangeably by assuming that at time slot $t$ the system is at state $(B_t, I_t)$. The battery is recharged by the incoming RF signal depending on the value of the power splitting ratio $\rho_t$. Meanwhile, one unit of energy is consumed in order to accumulate a non-zero number of bits of mutual information. Hence, the evolution of the battery state is characterized as follows:

$B_{t+1} = B_t + \rho_t E_h \,\mathbb{1}\{s_t = G\} - e_t,$

where $e_t = 1$ if $\rho_t < 1$, and $e_t = 0$ otherwise (when $\rho_t < 1$, the receiver consumes one unit of energy to operate its transceiver).

According to (2) and (3), the noise-normalized transmit power is equal to $P$. At the power splitter, a $(1 - \rho_t)$ portion of the received power is directed into the ID circuit, so the maximum achievable mutual information accumulation is:

$I_t^{acc}(\rho_t) = \log_2\left(1 + (1 - \rho_t)\,|h_t|^2 P\right).$

Inserting the value of $h_t$ in (7) for the GOOD and BAD channel states gives the mutual information accumulated in these states, respectively, for a given power splitting ratio as

$I^{G}(\rho_t) = \log_2\left(1 + (1 - \rho_t)\,|h_G|^2 P\right), \qquad I^{B}(\rho_t) = \log_2\left(1 + (1 - \rho_t)\,|h_B|^2 P\right).$
Thus, the accumulated mutual information, $I_t$, evolves as:

$I_{t+1} = I_t + I^{s_t}(\rho_t),$

noting that $I^{s_t}(1) = 0$, i.e., nothing is accumulated in a pure harvesting slot.
The instantaneous reward is zero if the message can be correctly decoded, and it is minus one otherwise. Note that the decoding operation is successful if and only if the accumulated mutual information is above a certain threshold and the battery level is sufficient to decode the message. Hence, the instantaneous reward is given as follows:

$r(B_t, I_t) = \begin{cases} 0, & \text{if } B_t \geq E_d \text{ and } I_t \geq R, \\ -1, & \text{otherwise.} \end{cases}$
Define the value function as

$V(s) = \max_{\pi} V_{\beta}^{\pi}(s).$

The value function satisfies the Bellman equation

$V(s) = \max_{\rho \in [0, 1]} c(s, \rho),$

where $c(s, \rho)$ is the cost incurred by taking action $\rho$ when the state is $s$ and is given by

$c(s, \rho) = r(s) + \beta\, \mathbb{E}_{s'}\left[V(s')\right],$
where $s'$ is the next visited state and the expectation is over the distribution of the next state. The use of the expected discounted reward allows us to obtain a tractable solution, and one can gain insight into the optimal policy when $\beta$ is close to $1$. One can then apply the VIA to obtain the optimal discounted reward. However, this problem suffers from the curse of dimensionality, as it is a two-dimensional uncountable-state Markov decision process (MDP) with continuous actions at every state. Also, letting $\beta \to 1$ to approximate the average reward slows the algorithm to the point of infeasibility [12]. Hence, in the following, we propose a novel approach to gain insight into the structure of the optimal policy.

III-B Absorbing Markov Chain Analysis

The Markov chain describing the operation of our system is an absorbing Markov chain, in which all states except those with $B_t \geq E_d$ and $I_t \geq R$ are transient. The absorbing states are those where the receiver has accumulated both sufficient energy and sufficient information to decode correctly. In an absorbing Markov chain, the expected number of steps taken before being absorbed into an absorbing state characterizes the mean time to absorption. Hence, the mean time to absorption starting from a given transient state provides the number of re-transmissions until successful decoding from that state. It should be noted that the receiver is blind to the CSI before choosing the power splitting ratio. However, after it decides to sample the incoming RF signal for mutual information accumulation, the amount of information in the sampled portion of the RF signal is revealed to the receiver based on the received power.

In a finite absorbing chain, starting from a transient state, the chain makes a finite number of visits to transient states before its eventual absorption into one of the absorbing states. Hence, the mean time to absorption of the chain, starting from a given transient state, is the sum of the expected numbers of visits made to the transient states. In the following, we perform a first-step analysis, conditioning on the first step the chain makes after leaving a given initial state, to obtain the mean time to absorption. Let $T(B, I)$ be the expected number of transitions needed to hit an absorbing state when the Markov chain starts from state $(B, I)$.

Let us first consider the trivial case when the battery has less than one unit of energy, i.e., $B < 1$, in which case the receiver must harvest the incoming RF signal. In this case, the mean time to absorption starting from an initial state $(B, I)$ is

$T(B, I) = 1 + p\, T(B + E_h, I) + (1 - p)\, T(B, I). \qquad (15)$

Note that in (15), one slot is needed to harvest energy, and depending on the channel state in that slot, the battery state either transitions to $B + E_h$ or remains the same. Similarly, if the amount of accumulated mutual information is $I \geq R$, there is no point in further accumulating mutual information, since the receiver already has sufficient mutual information to decode the incoming packet. Hence,

$T(B, I) = 1 + p\, T(B + E_h, I) + (1 - p)\, T(B, I), \qquad \text{for } I \geq R,\; B < E_d.$
The following lemma plays an important role in establishing the structure of the optimal policy.

Lemma 1.

For any $(B, I)$ such that $B < E_d$, given that $I \geq R$, the mean time to absorption is given by $T(B, I) = \frac{1}{p} \left\lceil \frac{E_d - B}{E_h} \right\rceil$.


The proof is by induction. For the base case, consider $B$ such that $E_d - E_h \leq B < E_d$, i.e., $\lceil (E_d - B)/E_h \rceil = 1$. Note that since $I \geq R$, the optimal decision is to use the incoming RF signal only for harvesting energy, i.e., $\rho_t = 1$. Thus,

$T(B, I) = 1 + p\, T(B + E_h, I) + (1 - p)\, T(B, I).$

For $E_d - E_h \leq B < E_d$, if the channel is GOOD then the Markov chain transitions into the absorbing state $(B + E_h, I)$, so $T(B + E_h, I) = 0$. Hence, $T(B, I) = 1/p$, and thus the lemma holds for the base case. In the induction step, assume that the lemma is true for some $k \geq 1$, i.e., $T(B, I) = k/p$ for $\lceil (E_d - B)/E_h \rceil = k$. The mean time to absorption for the case $\lceil (E_d - B)/E_h \rceil = k + 1$ is:

$T(B, I) = 1 + p \cdot \frac{k}{p} + (1 - p)\, T(B, I),$

which reduces to $T(B, I) = (k + 1)/p$. Thus, the lemma holds by induction. ∎
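The closed form in Lemma 1 can be checked numerically: under pure harvesting, the number of slots needed to collect the required number of GOOD slots is negative-binomially distributed, with mean equal to that number divided by $p$. The sketch below verifies this by Monte Carlo; the parameter values are illustrative.

```python
import random

def time_to_charge(b, e_d, e_h, p, rng):
    """Slots until the battery reaches e_d by pure harvesting:
    each slot is GOOD with probability p and adds e_h units."""
    t = 0
    while b < e_d:
        t += 1
        if rng.random() < p:
            b += e_h
    return t

rng = random.Random(0)
p, e_d, e_h = 0.4, 6, 4                 # illustrative values, battery b = 0
need = -(-e_d // e_h)                   # ceil(e_d / e_h) GOOD slots needed
est = sum(time_to_charge(0, e_d, e_h, p, rng) for _ in range(40000)) / 40000
print(est, need / p)                    # sample mean vs. Lemma 1's ceil(.)/p
```

For these values, two GOOD slots are needed, so Lemma 1 predicts a mean of $2/0.4 = 5$ slots, which the sample mean reproduces.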

We will use Lemma 1 to show that the optimal policy minimizing the mean time to absorption does not split the incoming RF signal. To show this, let us define two tail policies, $\pi_1$ and $\pi_2$, taking different actions in the current slot but following the same set of actions afterwards (here, $(\rho, \pi)$ denotes the tail policy obtained by concatenating action $\rho$ in the current slot with tail policy $\pi$). Let policy $\pi_1$ be a tail policy that always splits the incoming RF energy, i.e., $\rho_t \in (0, 1)$, except when $B_t < 1$ or $I_t \geq R$, in which case it only harvests energy. Assume that the state of the system is $(B, I)$ at time slot $t$. Then, the mean time to absorption for tail policy $\pi_1$ is:

$T^{\pi_1}(B, I) = 1 + p\, T^{\pi}(B + \rho E_h - 1,\; I + I^{G}(\rho)) + (1 - p)\, T^{\pi}(B - 1,\; I + I^{B}(\rho)),$

where $T^{\pi}(\cdot)$ is the mean time to absorption of policy $\pi$ beginning at the indicated state. Note that with probability $p$ the channel is in the GOOD state, and hence $\rho E_h$ units of energy are harvested (we assume that the energy harvesting circuit generates energy linearly proportional to the energy of the incoming RF signal); however, one unit of energy is spent operating the transceiver to accumulate $I^{G}(\rho)$ bits of mutual information. Meanwhile, with probability $1 - p$ the channel is in the BAD state and no energy is harvested, but the transceiver still consumes one unit of energy while accumulating $I^{B}(\rho)$ bits of mutual information. Under tail policy $\pi_2$, the RF signal is never split at time slot $t$; rather, it is completely used for mutual information accumulation, except when $B_t < 1$ or $I_t \geq R$, in which case it harvests energy only. One can calculate $T^{\pi_2}(B, I)$ as follows:

$T^{\pi_2}(B, I) = 1 + p\, T^{\pi}(B - 1,\; I + C_G) + (1 - p)\, T^{\pi}(B - 1,\; I + C_B).$

Theorem 1.

There exists an optimal time switching (TS) policy, minimizing the number of re-transmissions until successful decoding, that either exclusively harvests energy or exclusively accumulates information in any given time slot.


Assume that at time slot $t$ the system is at state $(B, I)$. Consider policy $\pi_2$, which always chooses $\rho_t = 0$. Hence, it follows that a GOOD slot accumulates $C_G$ bits and, from (10), a BAD slot accumulates $C_B$ bits. Also, it is easy to verify that for any $\rho \in (0, 1)$, we have $I^{G}(\rho) < C_G$ and $I^{B}(\rho) < C_B$. Thus, a lower bound on $T^{\pi_1}(B, I)$ in (19) can be established as,

$T^{\pi_1}(B, I) \geq 1 + p\, T^{\pi}(B + \rho E_h - 1,\; I + C_G) + (1 - p)\, T^{\pi}(B - 1,\; I + C_B).$

Furthermore, since $\rho E_h < E_h$, from Lemma 1, we know that the fractionally harvested energy does not reduce the number of GOOD slots required for the remaining recharging. Hence, the lower bound in (21) is exactly the expression given in (20), i.e., $T^{\pi_2}(B, I) \leq T^{\pi_1}(B, I)$. ∎

Theorem 1 proves that a time switching (TS) policy can achieve the minimum mean time to absorption. As a result, the state space of the discrete Markov chain associated with the optimal TS policy is countable (in reality, the capacity of the battery is limited, resulting in a finite total number of states). Thus, we have converted an uncountable-state MDP with continuous actions (i.e., $\rho_t \in [0, 1]$) into a countable-state MDP with binary decisions (i.e., $\rho_t \in \{0, 1\}$). Hence, the curse of dimensionality is lifted from the problem, and we can use the VIA to obtain the optimal TS decision at each state of the reduced problem. Also, since the number of states is reduced dramatically, we can choose $\beta$ close to $1$ to approximate the average reward instead of the discounted reward. Note that applying the VIA to the original problem is not possible in feasible time due to the extreme complexity originating from the uncountable state space.
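A value iteration sketch for the reduced binary-decision MDP is given below. All parameter values (p, the harvest amount, the decoding cost, the information target, the battery cap, and integer information gains of 2 and 1 units per GOOD and BAD slot) are illustrative assumptions, not the paper's; V[b][i] approximates the minimum expected number of slots until the message is decodable from state (b, i).

```python
import itertools

def solve_ts_mdp(p=0.5, e_h=4, e_d=4, r=4, b_max=20, g_good=2, g_bad=1,
                 iters=2000):
    """Value iteration over the (truncated) countable state space of
    the reduced TS MDP.  Binary actions: harvest only (H) or
    accumulate information only (I); all parameters are illustrative."""
    V = [[0.0] * (r + 1) for _ in range(b_max + 1)]
    for _ in range(iters):
        W = [row[:] for row in V]
        for b, i in itertools.product(range(b_max + 1), range(r + 1)):
            if b >= e_d and i >= r:          # absorbing: decodable
                W[b][i] = 0.0
                continue
            # H: harvest only, no transceiver cost
            vh = 1 + p * V[min(b + e_h, b_max)][i] + (1 - p) * V[b][i]
            if b >= 1 and i < r:
                # I: accumulate information, costs 1 unit of energy
                vi = (1 + p * V[b - 1][min(i + g_good, r)]
                        + (1 - p) * V[b - 1][min(i + g_bad, r)])
                W[b][i] = min(vh, vi)
            else:
                W[b][i] = vh
        V = W
    return V

V = solve_ts_mdp()
# With information complete and an empty battery, one GOOD slot
# (e_h >= e_d here) finishes, so the mean is 1/p = 2, matching Lemma 1.
print(V[0][4], V[0][0])
```

The check on state $(0, R)$ reproduces Lemma 1's closed form $1/p$ for these parameters.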

The TS structure of the optimal policy also encourages us to propose simple heuristic policies that are suitable for EH devices lacking the necessary computational power. Hence, we propose three simple-to-implement heuristic policies utilizing the TS structure. These policies are as follows:

  • Battery First (BF): the receiver harvests energy first, until it acquires a sufficient number of energy units, and then starts accumulating mutual information.

  • Information First (IF): the receiver always accumulates mutual information unless $B_t < 1$ or $I_t \geq R$.

  • Coin Toss (CT): the receiver harvests energy when $B_t < 1$ or $I_t \geq R$, and it accumulates mutual information when the battery already holds sufficient energy. Otherwise, it tosses a fair coin to choose between EH and ID.
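The BF and IF heuristics can be simulated directly. In this sketch, all parameter values, including the BF charging threshold b_target, are illustrative assumptions; the two sample means coming out essentially equal is consistent with the observation below that the optimal policy is not unique.

```python
import random

def slots_to_decode(policy, rng, p=0.5, e_h=4, e_d=4, r=4,
                    g_good=2, g_bad=1, b_target=8):
    """Slots until (b >= e_d and i >= r) under a TS heuristic.
    'IF' accumulates information whenever it can; 'BF' first charges
    the battery to b_target (an assumed threshold covering the
    decoding and listening costs) before accumulating."""
    b, i, t = 0, 0, 0
    while not (b >= e_d and i >= r):
        t += 1
        good = rng.random() < p
        charge = (policy == "BF" and i == 0 and b < b_target) or b < 1 or i >= r
        if charge:                       # harvest only
            if good:
                b += e_h
        else:                            # accumulate information
            b -= 1
            i = min(i + (g_good if good else g_bad), r)
    return t

rng = random.Random(1)
mean_bf = sum(slots_to_decode("BF", rng) for _ in range(20000)) / 20000
mean_if = sum(slots_to_decode("IF", rng) for _ in range(20000)) / 20000
print(mean_bf, mean_if)  # both close to 7.06 for these parameters
```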

In the following, we evaluate the performance of the optimal policy obtained by solving (14) and compare the results with those of the heuristic policies.

IV Numerical Results

In this section, we evaluate the minimum expected number of re-transmissions by maximizing the value function defined in (14) via the VIA, and we compare the resulting values with those obtained by the BF, IF, and CT policies. To approximate the mean value by the VIA, we choose $\beta$ close to $1$. We calculate the expected number of re-transmissions by Monte Carlo (MC) simulations, running a large number of iterations and evaluating the sample mean.

Table I summarizes the expected number of re-transmissions associated with the different policies as one of the system parameters is varied while the others are held fixed. It can be seen from Table I that all policies achieve, to a very close approximation, the same expected number of re-transmissions for common system parameters.

The effect of the channel quality on the expected number of re-transmissions, with respect to $p$, is summarized in Table II. As expected, the expected number of re-transmissions decreases as the channel quality improves. This is because, as the channel quality improves, the probability of harvesting energy and of accumulating more bits of mutual information also increases. Again, it can be seen that all policies deliver the same performance independent of the value of $p$.

The results presented in Tables I and II show that the optimal policy is not unique. To investigate this, we optimize the $\rho$ values by the VIA at each state and present the optimal values in Figures 1 and 2. Note that Figures 1 and 2, both obtained by the VIA, happen to coincide exactly with the BF and IF policies, respectively, where black markers represent absorbing states, blue squares represent harvesting decisions, and red diamonds represent information accumulation decisions. The optimality of both Figure 1 and Figure 2 means that the mean time to absorption is the same under either assignment. Comparing Figures 1 and 2, it can be seen that the optimal policy should harvest energy whenever $B_t < 1$ or $I_t \geq R$, and it should accumulate mutual information whenever the battery holds sufficient energy. Also, choosing between EH and ID does not alter the minimum expected number of re-transmissions in the remaining states. Consequently, the BF, IF, CT, and optimal policies all achieve the same minimum expected number of re-transmissions.

VIA 15.9910 15.8103 15.6235 15.2490 14.4992
BF  15.9938 15.8116 15.6259 15.2504 14.4999
IF  15.9941 15.8143 15.6245 15.2508 14.4987
CT  15.9966 15.8140 15.6266 15.2491 14.5020
Table I: Expected number of re-transmissions under the different policies.
VIA 40.8904 20.7979 14.0320 10.5985 8.4989
BF  40.8920 20.7962 14.0337 10.6002 8.4995
IF  40.8978 20.7960 14.0331 10.5991 8.5002
CT  40.8961 20.8006 14.0333 10.5973 8.4986
Table II: Expected number of re-transmissions vs. the channel quality $p$.
Figure 1: Optimal $\rho$ values obtained by the VIA, coinciding with the BF policy.
Figure 2: Optimal $\rho$ values obtained by the VIA, coinciding with the IF policy.

V Conclusion

We analyzed a point-to-point wireless link employing HARQ for reliable transmission, where the receiver can only power itself via the transmitter's RF signal. We modeled the problem of optimal power splitting in a Markovian framework and proved that the optimal policy is a TS policy; as a consequence, we converted a two-dimensional uncountable-state Markov chain into a two-dimensional countable-state Markov chain. We then used the VIA to minimize the expected number of re-transmissions, and through numerical results we showed that the optimal policy is not unique. In the future, we aim to analytically characterize the structure of the optimal policy and to develop a low-complexity algorithm achieving the corresponding optimal performance. Additionally, we will extend the problem to the case of time-correlated channels.


  • [1] L. R. Varshney, “Transporting information and energy simultaneously,” in 2008 IEEE International Symposium on Information Theory. IEEE, 2008, pp. 1612–1616.
  • [2] P. Grover and A. Sahai, “Shannon meets Tesla: Wireless information and power transfer,” in ISIT, 2010, pp. 2363–2367.
  • [3] R. Zhang and C. K. Ho, “MIMO broadcasting for simultaneous wireless information and power transfer,” IEEE Transactions on Wireless Communications, vol. 12, no. 5, pp. 1989–2001, 2013.
  • [4] L. Liu, R. Zhang, and K.-C. Chua, “Wireless information and power transfer: A dynamic power splitting approach,” IEEE Transactions on Communications, vol. 61, no. 9, pp. 3990–4001, Sept. 2013.
  • [5] L. Liu, R. Zhang, and K.-C. Chua, “Wireless information transfer with opportunistic energy harvesting,” IEEE Transactions on Wireless Communications, vol. 12, no. 1, pp. 288–300, 2013.
  • [6] F. A. de Witt, R. D. Souza, and G. Brante, “On the performance of hybrid ARQ schemes for uplink information transmission with wireless power transfer in the downlink,” in IFIP Wireless Days. IEEE, 2014, pp. 1–6.
  • [7] H. Chen, R. G. Maunder, and L. Hanzo, “A survey and tutorial on low-complexity turbo coding techniques and a holistic hybrid ARQ design example,” IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1546–1566, 2013.
  • [8] M. Zohdy, T. ElBatt, M. Nafie, and O. Ercetin, “RF energy harvesting in wireless networks with HARQ,” in IEEE Globecom Workshops, Dec. 2016, pp. 1–6.
  • [9] S. B. Wicker, Error Control Systems for Digital Communication and Storage. Prentice Hall, Englewood Cliffs, 1995, vol. 1.
  • [10] X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless networks with RF energy harvesting: A contemporary survey,” IEEE Communications Surveys & Tutorials, vol. 17, no. 2, pp. 757–789, 2015.
  • [11] F. Rosas, R. D. Souza, M. E. Pellenz, C. Oberli, G. Brante, M. Verhelst, and S. Pollin, “Optimizing the code rate of energy-constrained wireless communications with HARQ,” IEEE Transactions on Wireless Communications, vol. 15, no. 1, pp. 191–205, 2016.
  • [12] M. S. H. Abad, O. Ercetin, and D. Gündüz, “Channel sensing and communication over a time-correlated channel with an energy harvesting transmitter,” IEEE Transactions on Green Communications and Networking, vol. PP, no. 99, pp. 1–1, 2017.