# Stabilization of Linear Systems Across a Time-Varying AWGN Fading Channel

This technical note investigates the minimum average transmit power required for mean-square stabilization of a discrete-time linear process across a time-varying additive white Gaussian noise (AWGN) fading channel that lies between the sensor and the controller. We assume channel state information at both the transmitter and the receiver, and allow the transmit power to vary with the channel state so as to obtain the minimum required average transmit power via optimal power adaptation. We consider both independent and identically distributed fading and fading governed by a Markov chain. Based on the proposed necessary and sufficient conditions for mean-square stabilization, we show that the minimum average transmit power to ensure stabilizability can be obtained by solving a geometric program.


## I Introduction

The interaction between control and communication plays a crucial role in networked control systems, which have received considerable attention over the past decade. In particular, the problem of feedback stabilization over communication channels has been widely studied in the literature (see, e.g., [1], which deals with the packet loss channel, [2, 3], which are concerned with an additive white Gaussian noise (AWGN) channel, [4, 5], which consider a fading channel, and [6, 7], wherein a digital communication link with time-varying rate is considered).

Motivated by the fact that the channel gain typically varies on a much longer time scale than the symbol transmission time [10], we consider a block-fading channel model in this work. Specifically, we investigate the minimum average transmit power required for mean-square stabilization of a discrete-time linear process across a time-varying AWGN block-fading channel that lies between the sensor and the controller. We assume channel state information at both the transmitter and the receiver, and allow the transmit power to vary with the channel state. Both the case when the channel gain varies in an i.i.d. fashion among blocks and the case when it varies according to a Markov chain are considered. In either case, we provide a tight characterization of the minimum average transmit power required for stabilization. It is worth mentioning that generalizing results from the fast-fading channel to the block-fading channel is nontrivial from a proof-technique perspective. The main contribution of this note is showing that, by allowing power adaptation, the minimum transmit power to ensure stabilizability can be obtained by solving a geometric program. This reveals an interesting difference from the water-filling interpretation of distributing transmit power in time for achieving channel capacity [13], which arises from the fact that the objective here is stabilization rather than achieving capacity. To the best of our knowledge, this has never been proposed in the literature.

Notation:

For a continuous random variable X, the differential entropy of X is denoted by h(X). The mutual information between two continuous random variables X and Y is denoted as I(X;Y). The expectation operator is denoted by E[⋅], and the random variable over which the expectation is taken is usually clear from the context. The notation ρ(A) denotes the spectral radius of the matrix A. The notation log denotes the natural logarithm.

## Ii Problem Formulation

Consider the closed-loop system depicted in Figure 1. The linear time-invariant process is represented as

 Z(k+1)=AZ(k)+BU(k), (1)

where Z(k) is the state and U(k) is the control input. For simplicity, we assume that the matrix A is in Jordan form with only unstable modes, and that the pair (A, B) is controllable. Each component of the initial condition Z(0) is randomly distributed with known mean and variance σ²_{Z(0)}.

As shown in Figure 1, the input and the output of the channel from encoder to decoder at time k are denoted as X(k) and Y(k), with Y(k) = g(k)X(k) + W(k), where g(k) represents the attenuation gain due to the fading and W(k) is a zero-mean Gaussian white noise process with variance N. The gain g(k) is allowed to be time-varying across time blocks of length n. In other words, g(k) remains constant in the blocks of time [0, n−1], [n, 2n−1], …, and varies among these blocks. Assume that g(k) takes values in a finite set {g_1, …, g_m} with m elements. During the j-th block, jn ≤ k ≤ (j+1)n−1, we denote g(k) as g_{σ(j)}, where σ(j) is a switching signal taking values in the set {1, …, m}, and we say that the channel state is s if σ(j) = s. It is assumed that σ(j) switches either according to an i.i.d. process or as governed by a Markov chain across blocks. We make the additional simplifying assumption that g_s > 0 for all s.

At every time step k, an encoder at the sensor calculates the channel input X(k) as a function of the information available to it. We assume that the encoder has access to the one-step delayed control input U(k−1). This does not necessarily require the controller to send its output directly to the encoder, since U(k−1) can be calculated via equation (1) from the observations of Z(k) and Z(k−1), i.e., BU(k−1) = Z(k) − AZ(k−1). It is also assumed that the encoder has access to the one-step delayed decoder output. This can be realized by a perfect feedback channel from decoder to encoder, or by exploiting a smart controller that can send additional extractable information to the actuator. The constraint imposed on the encoder is through the average transmit power. Since the encoder has channel state information, we allow its transmit power to be adapted according to the channel state. Denote the transmit power used in channel state s as P_s; the average power constraint then becomes E[P_{σ(j)}] ≤ P. The decoder is collocated with the controller and calculates the control input U(k) as a function of the received channel outputs. Note that the encoding and decoding functions are allowed to be any causal functions of all the information available up to time step k.

Recall that the process (1) is said to be mean-square stabilizable if there exist an encoder-decoder pair and a controller such that lim_{k→∞} E[Z(k)Z^T(k)] = 0 for any initial condition. The question we raise is: what is the minimum average transmit power required to stabilize the process (1) in the mean-square sense under the problem formulation stated above, over all designs of the encoding and decoding functions?

We will use the following result that converts the control problem to an estimation problem.

###### Lemma 1

[2] Let Ẑ_0(k) be the estimate of the initial state Z(0) at the k-th time step calculated by some decoder using the information it has access to. Denote the estimate error as ϵ(k) = Z(0) − Ẑ_0(k). If

 E[ϵ(k)]=0, (2)
 lim_{k→∞} A^k E[ϵ(k)ϵ^T(k)] (A^T)^k = 0, (3)

then the process (1) can be stabilized in the mean square sense by the controller U(k) = KẐ(k), with K chosen such that A + BK is Schur stable and Ẑ(k) denoting the state estimate propagated from Ẑ_0(k) through the dynamics (1).

Note that the condition (3) is merely an asymptotic condition. Thus, considering the subsequence k = jn − 1, the condition (3) can be equivalently written as

 lim_{j→∞} A^{jn−1} E[ϵ(jn−1)ϵ^T(jn−1)] (A^T)^{jn−1} = 0. (4)

For brevity, we use the notation σ̄^j_0 to represent the sequence of channel states from the initial block to the j-th block, i.e., σ̄^j_0 = {σ(0), σ(1), …, σ(j)}, and Y^{jn−1}_0 to represent the collection of the observations of the decoder from time step 0 to jn−1, i.e., Y^{jn−1}_0 = {Y(0), Y(1), …, Y(jn−1)}; we use σ̄ and y to denote their realizations.

## Iii The i.i.d case

In this section, we consider the case where the channel state switches according to an i.i.d. process across the blocks. We start with the case of scalar systems and then consider the more general vector case.

### Iii-a Scalar systems

Consider the case when Z(k) is scalar. The process (1) with A = λ, |λ| ≥ 1, is described as

 Z(k+1)=λZ(k)+U(k). (5)

Let C_s = (1/2) log(1 + g_s² P_s/N) denote the capacity of the AWGN channel in state s.

###### Theorem 1

Given n, the process (5) is mean-square stabilizable across the time-varying AWGN fading channel if and only if

 λ^{2n} E_{σ(j)}[(N/(g²_{σ(j)} P_{σ(j)} + N))^n] < 1. (6)

Proof. “⇐” We prove the sufficiency by applying the Elias scheme in the context of feedback control, which generates an initial state estimate with error satisfying (2) and (4). Let us denote the variance of the estimate error ϵ at the end of the j-th block as α(j).

During the initial block, 0 ≤ k ≤ n−1, the channel state is σ(0). At time k = 0, the encoder transmits X(0) = √(P_{σ(0)}/σ²_{Z(0)}) Z(0), the decoder computes the unbiased estimate Ẑ_0(0) = Y(0)/(g_{σ(0)} √(P_{σ(0)}/σ²_{Z(0)})), and therefore the variance of the estimate error is σ²_{Z(0)} N/(g²_{σ(0)} P_{σ(0)}). At each time 1 ≤ k ≤ n−1, the encoder transmits a scaled version of the estimation error ϵ(k−1) with power P_{σ(0)}, and the decoder computes the new unbiased estimate based on Ẑ_0(k−1) and Y(k) by linear minimum mean square error (MMSE) estimation, i.e.,

 Ẑ_0(k) = Ẑ_0(k−1) − (E[Y(k)ϵ(k−1)]/E[Y²(k)]) Y(k). (7)

Accordingly, the variance of the estimate error at the end of the initial block is given by

 α(0) = E_{σ(0)}[σ²_{Z(0)} (N/(g²_{σ(0)} P_{σ(0)})) (N/(g²_{σ(0)} P_{σ(0)} + N))^{n−1}].

During the j-th block, j ≥ 1, the encoder transmits a scaled version of ϵ(k−1) with power P_{σ(j)} at each time jn ≤ k ≤ (j+1)n−1, and the decoder computes the new estimate based on Ẑ_0(k−1) and Y(k) by the linear MMSE estimation (7). In this way, the recursion of α(j) is given by

 α(j) = α(j−1) E_{σ(j)}[(N/(g²_{σ(j)} P_{σ(j)} + N))^n].

It follows from (6) that the condition (4) is satisfied. Besides, since the initial estimate is unbiased and the linear MMSE estimator preserves unbiasedness, the condition (2) is satisfied. Therefore, based on Lemma 1, the process (5) is mean-square stabilizable.
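As a numerical sanity check (not part of the note), the block recursion above can be simulated directly. The sketch below runs a Monte Carlo version of the Elias-type scheme for a single block with a fixed gain, and compares the empirical error variance against the analytic value σ²_{Z(0)} (N/(g²P)) (N/(g²P+N))^{n−1}; all numeric values (g, P, noise variance, σ²_{Z(0)}, n) are hypothetical illustrations.

```python
import numpy as np

# Hypothetical single-block parameters (illustrative only, not from the note).
g, P, Nv, sigma2_Z, n = 1.0, 1.0, 0.5, 1.0, 4
rng = np.random.default_rng(0)
M = 200_000                      # number of independent Monte Carlo trials

Z0 = rng.normal(0.0, np.sqrt(sigma2_Z), M)

# k = 0: scale Z(0) to power P, receive Y, form the unbiased estimate.
X = np.sqrt(P / sigma2_Z) * Z0
Y = g * X + rng.normal(0.0, np.sqrt(Nv), M)
Zhat = Y / (g * np.sqrt(P / sigma2_Z))
err = Z0 - Zhat
alpha = sigma2_Z * Nv / (g**2 * P)   # analytic error variance after k = 0

# k = 1, ..., n-1: retransmit the scaled error, update by linear MMSE.
# Sign convention here: err = Z0 - Zhat, so the correction is added.
for _ in range(n - 1):
    X = np.sqrt(P / alpha) * err
    Y = g * X + rng.normal(0.0, np.sqrt(Nv), M)
    # E[Y err] = g*sqrt(P*alpha), E[Y^2] = g^2 P + Nv
    Zhat = Zhat + g * np.sqrt(P * alpha) / (g**2 * P + Nv) * Y
    err = Z0 - Zhat
    alpha = alpha * Nv / (g**2 * P + Nv)

theory = sigma2_Z * (Nv / (g**2 * P)) * (Nv / (g**2 * P + Nv))**(n - 1)
print(alpha, err.var())          # analytic vs empirical error variance
```

The empirical variance matches the analytic per-block contraction factor, which is exactly the quantity that enters condition (6).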

“⇒” The necessity is obtained via information-theoretic arguments similar to those in [6, 4, 3], with the differences caused by the block analog channel. First, let us define the conditional entropy power of Z(jn), conditional on the events Y^{jn−1}_0 = y^{jn−1}_0 and σ̄^{j−1}_0 = σ̄^{j−1}_0, averaged over Y^{jn−1}_0 and σ̄^{j−1}_0, as N(j) = (1/(2πe)) E_{Y^{jn−1}_0, σ̄^{j−1}_0}[e^{2h(Z(jn)|y^{jn−1}_0, σ̄^{j−1}_0)}]. It follows from the maximum entropy theorem that N(j) ≤ E[Z²(jn)]. Thus, a necessary condition for the mean-square stability of the system is lim_{j→∞} N(j) = 0. Next, we have

 N(j+1) = (1/(2πe)) E_{Y^{(j+1)n−1}_0, σ̄^j_0}[e^{2h(Z((j+1)n)|y^{(j+1)n−1}_0, σ̄^j_0)}]
 (a)= (λ^{2n}/(2πe)) E_{Y^{(j+1)n−1}_0, σ̄^j_0}[e^{2h(Z(jn)|y^{(j+1)n−1}_0, σ̄^j_0)}]
 (b)≥ (λ^{2n}/(2πe)) E_{Y^{jn−1}_0, σ̄^j_0}[e^{2 E_{Y^{(j+1)n−1}_0, σ̄^j_0 | Y^{jn−1}_0, σ̄^j_0}[h(Z(jn)|y^{(j+1)n−1}_0, σ̄^j_0)]}]

where (a) follows from the facts that Z((j+1)n) = λ^n Z(jn) + Σ_{k=jn}^{(j+1)n−1} λ^{(j+1)n−1−k} U(k) and U(k) is a function of Y^k_0 and σ̄^j_0, and (b) is based on the law of total expectation and Jensen’s inequality.

It can be observed that

 E_{Y^{(j+1)n−1}_0, σ̄^j_0 | Y^{jn−1}_0, σ̄^j_0}[h(Z(jn)|y^{(j+1)n−1}_0, σ̄^j_0)] = h(Z(jn)|Y^{(j+1)n−1}_{jn}, y^{jn−1}_0, σ̄^j_0) = h(Z(jn)|y^{jn−1}_0, σ̄^j_0) − I(Z(jn); Y^{(j+1)n−1}_{jn} | y^{jn−1}_0, σ̄^j_0)

where the first equality follows from the definition of conditional entropy and the second from the definition of conditional mutual information. By the chain rule for mutual information, we have

 I(Z(jn); Y^{(j+1)n−1}_{jn} | y^{jn−1}_0, σ̄^j_0) = Σ_{k=jn}^{(j+1)n−1} I(Z(jn); Y(k) | Y^{k−1}_{jn}, y^{jn−1}_0, σ̄^j_0).

Moreover, for each time step k in the j-th block, the random variable Z(k) is a function of Z(jn) given Y^{k−1}_{jn} and σ̄^j_0, and vice versa. This leads to

 Σ_{k=jn}^{(j+1)n−1} I(Z(jn); Y(k)|Y^{k−1}_{jn}, y^{jn−1}_0, σ̄^j_0) = Σ_{k=jn}^{(j+1)n−1} I(Z(k); Y(k)|Y^{k−1}_{jn}, y^{jn−1}_0, σ̄^j_0) (c)≤ Σ_{k=jn}^{(j+1)n−1} I(X(k); Y(k)|Y^{k−1}_{jn}, y^{jn−1}_0, σ̄^j_0) (d)≤ nC_{σ(j)},

where C_{σ(j)} is the channel capacity of the AWGN fading channel with gain g_{σ(j)}, (c) is obtained since Z(k) → X(k) → Y(k) form a Markov chain, and (d) follows from the definition of Gaussian channel capacity. In view of the independence between σ(j) and (Y^{jn−1}_0, σ̄^{j−1}_0), the expectation over σ̄^j_0 factorizes. Thus, it can be obtained that

 E_{Y^{(j+1)n−1}_0, σ̄^j_0 | Y^{jn−1}_0, σ̄^j_0}[h(Z(jn)|y^{(j+1)n−1}_0, σ̄^j_0)] ≥ h(Z(jn)|y^{jn−1}_0, σ̄^{j−1}_0) − nC_{σ(j)}.

Consequently, it follows that

 N(j+1) ≥ (λ^{2n}/(2πe)) E_{Y^{jn−1}_0, σ̄^j_0}[e^{2h(Z(jn)|y^{jn−1}_0, σ̄^{j−1}_0) − 2nC_{σ(j)}}] = (λ^{2n}/(2πe)) E_{Y^{jn−1}_0, σ̄^{j−1}_0}[e^{2h(Z(jn)|y^{jn−1}_0, σ̄^{j−1}_0)}] E_{σ(j)}[e^{−2nC_{σ(j)}}] = N(j) λ^{2n} E_{σ(j)}[(N/(g²_{σ(j)} P_{σ(j)} + N))^n].

Hence, the necessity of condition (6) follows by contradiction: if condition (6) is violated, then N(j) does not converge to zero.

###### Example 1

Consider the scalar process (5). The variance of the additive Gaussian noise and the length of the block are given, and the fading gain, subject to an i.i.d. process, has two states. According to Theorem 1, the maximum λ satisfying the condition in (6) and the corresponding average transmit power can then be computed. It will be seen later in Example 2 that this power allocation is not optimal for minimizing the average transmit power.
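The stabilizability test (6) is straightforward to evaluate numerically. The following sketch implements it for a two-state i.i.d. channel; all parameter values are hypothetical illustrations, not the (elided) numbers of the example above.

```python
import numpy as np

def stabilizable_iid(lam, n, Nv, gains, probs, powers):
    """Check condition (6): lam^(2n) * E[(Nv/(g^2 P + Nv))^n] < 1."""
    gains, probs, powers = map(np.asarray, (gains, probs, powers))
    contraction = np.sum(probs * (Nv / (gains**2 * powers + Nv))**n)
    return lam**(2 * n) * contraction < 1.0

# Hypothetical two-state channel: enough power stabilizes, too little does not.
print(stabilizable_iid(lam=1.2, n=2, Nv=1.0,
                       gains=[0.5, 1.5], probs=[0.5, 0.5],
                       powers=[4.0, 4.0]))   # True
```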

###### Remark 1

When n = 1 and the transmit power is held constant across channel states, the condition proposed in Theorem 1 coincides with the one provided in [4], which considers a fast-fading channel model. Theorem 1 is an important extension of [4] since, under the block-fading model, the transmit power can be adapted dynamically to the channel state. It will be shown in Section V that the minimum average transmit power can be significantly reduced when optimal power adaptation is adopted.

### Iii-B Vector systems

In this subsection, we consider the vector plant (1) with a specific time-division multiple access (TDMA) scheduling scheme to allocate channel resources among the subsystems.

Let λ_1, …, λ_l denote the eigenvalues of A, counting algebraic multiplicity. Since A has a Jordan form, each component of the initial state, denoted as Z_i(0), grows at a rate dominated by an eigenvalue λ_i. We now present a TDMA scheduling strategy for the vector plant (1). We divide every block into l equal-length time slots, and allocate each time slot of length n/l periodically to each subsystem. For blocks with channel state s we assign a different power, P_{s,i}, to the time slots allocated to the subsystem associated with λ_i. For each s, we restrict the set {P_{s,1}, …, P_{s,l}} to satisfy

 |λ_1|² (N/(g_s² P_{s,1} + N))^{1/l} = ⋯ = |λ_l|² (N/(g_s² P_{s,l} + N))^{1/l}. (8)
###### Theorem 2

Given n and under the TDMA scheduling scheme with the constraint (8), the process (1) is mean-square stabilizable if and only if

 Σ_{i=1}^l log|λ_i| < −(l/(2n)) log E_{σ(j)}[Π_{i=1}^l (N/(g²_{σ(j)} P_{σ(j),i} + N))^{n/l²}]. (9)

Proof. “⇐” We prove the sufficiency by showing that if the condition (9) holds under the described TDMA scheduling scheme with constraint (8), then the conditions (2) and (4) can be satisfied by adopting an encoder-decoder pair similar to the one used in the scalar case. Specifically, during the i-th time slot of blocks with channel state s, the encoder is scheduled to transmit, with power P_{s,i}, suitable information about the i-th component of the initial state or of the error, and the decoder is designed to update the i-th component of Ẑ_0 while keeping the other components unchanged. The controller is chosen according to Lemma 1. First, combining (9) with (8) yields |λ_i|^{2n} E_{σ(j)}[(N/(g²_{σ(j)} P_{σ(j),i} + N))^{n/l}] < 1 for all i. Since the number of channel uses for each state component is n/l in every block, it can be inferred from the proof of Theorem 1 that the error variance of each component satisfies the scalar counterpart of (4). Second, let us denote the matrix H(j) = A^{jn−1} E[ϵ(jn−1)ϵ^T(jn−1)](A^T)^{jn−1}. It follows that the condition (4) holds if all the diagonal entries of H(j) converge to 0 as j → ∞. For the case where A has a diagonal structure, this holds since the error components are decoupled. For the general case where A has a general Jordan form, all diagonal entries of H(j) can be proved to converge to 0 using arguments similar to [2]. Besides, since the initial estimates are unbiased and the linear MMSE estimator preserves unbiasedness, the condition (2) is satisfied. Therefore, we can conclude based on Lemma 1 that the process (1) can be stabilized in the mean square sense.

“⇒” The necessity can be validated by information-theoretic reasoning similar to the proof of Theorem 1, with the following modifications: 1) the conditional entropy power for the vector state Z(jn) is defined with the factor 2/l in the exponent, i.e., N(j) = (1/(2πe)) E[e^{(2/l) h(Z(jn)|y^{jn−1}_0, σ̄^{j−1}_0)}]; see [3] for details; 2) the term λ^{2n} in (a) and the subsequent bounds should be replaced by Π_{i=1}^l |λ_i|^{2n/l}; 3) the capacity term in (d) is the total capacity accumulated over the l time slots of the block, with power P_{σ(j),i} in the i-th slot.

###### Remark 2

When l = 1, the condition (9) reduces to (6). It should be mentioned that such a scheduling scheme is not optimal in minimizing the average power required to stabilize the process (1). The reason lies in the fact that the channel capacity over a block with channel state s using a constant power P_s is larger than or equal to that obtained with a different power P_{s,i} in each time slot, subject to (1/l) Σ_{i=1}^l P_{s,i} = P_s, which can be proved by the AM-GM inequality. Hence, it can be inferred from (8) that when the distribution of the |λ_i| is more concentrated, the proposed TDMA scheduling scheme becomes less conservative, and it is nonconservative when |λ_1| = ⋯ = |λ_l|.
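Constraint (8) pins down all slot powers once one of them is fixed: rearranging (8) (my algebra, not spelled out in the note) gives g_s² P_{s,i} + N = (g_s² P_{s,1} + N)(|λ_i|/|λ_1|)^{2l}. A small sketch with hypothetical values:

```python
import numpy as np

def tdma_powers(lams, g, Nv, P1):
    """Given the power P1 for the slot of lambda_1, return powers P_i for all
    slots so that constraint (8) holds, i.e. the quantities
    |lam_i|^2 * (Nv/(g^2 P_i + Nv))^(1/l) coincide for all i."""
    lams = np.asarray(lams, dtype=float)
    l = len(lams)
    # from (8): g^2 P_i + Nv = (g^2 P_1 + Nv) * (|lam_i|/|lam_1|)^(2l)
    return ((g**2 * P1 + Nv) * (np.abs(lams) / np.abs(lams[0]))**(2 * l) - Nv) / g**2

# hypothetical eigenvalues, gain, noise variance, and first-slot power
lams = [1.5, 1.2]
P = tdma_powers(lams, g=1.0, Nv=1.0, P1=8.0)
vals = np.abs(lams)**2 * (1.0 / (1.0**2 * P + 1.0))**(1 / len(lams))
print(P, vals)   # the entries of vals coincide, so (8) is satisfied
```

The faster mode (larger |λ_i|) receives the larger slot power, consistent with the intuition behind (8).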

## Iv The Markov chain case

In this section, we generalize the results in Section III to the scenario where the channel state varies according to a Markov chain. It is assumed that the Markov chain has finitely many states, is irreducible with all its states positive recurrent, and has a stationary transition matrix Q = [q_{rs}], where q_{rs} denotes the transition probability from state r to state s. We start with the case of scalar systems.

###### Corollary 1

Given n, the process (5) is mean-square stabilizable across the AWGN fading channel subject to the Markov chain if and only if

 λ^{2n} ρ(Q^T D) < 1 (10)

where D = diag{(N/(g_1² P_1 + N))^n, …, (N/(g_m² P_m + N))^n}.

Proof. “⇐” We prove the sufficiency by adopting the same coding scheme described in Theorem 1. Since the estimate at each time step is unbiased, the condition in (2) is satisfied. Denote the variance of ϵ at the end of the j-th block, given the channel state realization, as α(j). It follows from the proof of Theorem 1 that α(j) contracts by the factor (N/(g²_{σ(j)} P_{σ(j)} + N))^n in the j-th block. Next, define a new variable e(j) with e²(j) = λ^{2jn} α(j); it follows that e(j+1) = λ^n (N/(g²_{σ(j+1)} P_{σ(j+1)} + N))^{n/2} e(j), where the sequence σ(j) is a Markov chain. It can be observed by following Lemma 1 that if this jump-linear system is mean-square stable, i.e., lim_{j→∞} E[e²(j)] = 0, where the expectation is taken over the Markov chain σ(j), then the process (5) is stabilizable in the mean square sense. Moreover, based on the results on stability of discrete-time Markov jump linear systems [14], the system e(j) is mean-square stable if and only if the condition (10) holds. Consequently, we can conclude that if the condition (10) holds, then the process (5) is stabilizable.

“⇒” Let us denote the conditional entropy power of Z(jn), conditional on the events Y^{jn−1}_0 = y^{jn−1}_0 and σ̄^{j−1}_0 = σ̄^{j−1}_0, averaged only over Y^{jn−1}_0, as N(j). It can be shown, following arguments similar to the necessity proof of Theorem 1, that

 N(j+1) ≥ λ^{2n} (N/(g²_{σ(j)} P_{σ(j)} + N))^n N(j).

Then, by an approach similar to that used in Theorem 1 of [5], the necessity of the condition (10) can be proved.
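Condition (10) is a spectral-radius test and is easy to evaluate numerically. A sketch for a two-state Markov fading channel, with every numeric value a hypothetical illustration:

```python
import numpy as np

# Condition (10): lam^(2n) * rho(Q^T D) < 1 with
# D = diag{(Nv/(g_s^2 P_s + Nv))^n}. All numbers below are hypothetical.
lam, n, Nv = 1.2, 2, 1.0
gains = np.array([0.5, 1.5])       # per-state channel gains
powers = np.array([4.0, 4.0])      # per-state transmit powers
Q = np.array([[0.9, 0.1],          # row s holds transition probabilities out of s
              [0.2, 0.8]])
D = np.diag((Nv / (gains**2 * powers + Nv))**n)
rho = max(abs(np.linalg.eigvals(Q.T @ D)))
stabilizable = lam**(2 * n) * rho < 1.0
print(rho, stabilizable)
```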
Next, by exploiting the same TDMA scheduling scheme with the constraint (8) described in Section III-B, the following result provides a necessary and sufficient condition for the stabilizability of the vector plant (1).

###### Corollary 2

Given n and under the TDMA scheduling scheme with the constraint (8), the process (1) is mean-square stabilizable across the AWGN fading channel if and only if

 Π_{i=1}^l |λ_i|^{2n/l} ρ(Q^T D) < 1 (11)

with D = diag{Π_{i=1}^l (N/(g_1² P_{1,i} + N))^{n/l²}, …, Π_{i=1}^l (N/(g_m² P_{m,i} + N))^{n/l²}}.

Proof. This result can be obtained by combining the proofs of Theorem 2 and Corollary 1.

By letting the transition matrix Q be composed of m identical row vectors, each representing the probability distribution of the i.i.d. process, the results in this section reduce exactly to the results in Section III, since the matrices Q^T D in (10) and (11) are rank-one matrices whose only nonzero eigenvalues equal E_{σ(j)}[(N/(g²_{σ(j)} P_{σ(j)} + N))^n] and E_{σ(j)}[Π_{i=1}^l (N/(g²_{σ(j)} P_{σ(j),i} + N))^{n/l²}], respectively.
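The rank-one reduction is easy to verify numerically: with identical rows equal to π, the only nonzero eigenvalue of Q^T D is Σ_s π_s d_s. A minimal sketch with hypothetical entries:

```python
import numpy as np

# When every row of Q equals pi (the i.i.d. case), Q^T D is rank one and its
# only nonzero eigenvalue is sum_s pi_s d_s = E[d_sigma]. Numbers hypothetical.
pi = np.array([0.3, 0.7])
d = np.array([0.25, 0.01])               # diagonal entries of D
Q = np.tile(pi, (2, 1))                  # identical rows
rho = max(abs(np.linalg.eigvals(Q.T @ np.diag(d))))
print(rho, pi @ d)                       # the two values coincide
```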

## V Minimum average transmit power

In this section, we show how to derive the minimum average power satisfying the conditions proposed in Section IV via convex optimization problems.

Since the Markov chain is irreducible with all its states positive recurrent, it has a unique stationary distribution, denoted by the vector π = [π_1, …, π_m]^T. Let T_s(k) denote the number of visits to channel state s up to time k, so that the average occupation time of state s up to time k is T_s(k)/k. Due to the fact that T_s(k)/k → π_s as k → ∞, the average power is given by P_avg = Σ_{s=1}^m π_s P_s.
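The stationary distribution π can be computed as the left eigenvector of Q associated with eigenvalue one. A minimal sketch for a hypothetical two-state chain:

```python
import numpy as np

# pi solves pi Q = pi with entries summing to one; the long-run occupation
# frequencies T_s(k)/k converge to pi_s. The matrix Q is hypothetical.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
w, V = np.linalg.eig(Q.T)               # left eigenvectors of Q
pi = np.real(V[:, np.argmin(abs(w - 1.0))])
pi = pi / pi.sum()                      # normalize to a probability vector
print(pi)
```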

Let us start with the case of scalar plants. Observe that minimizing P_avg subject to the constraint (10) is a difficult problem given the complex structure of the constraint (10). The following result provides an equivalent condition for stabilizability of the process (5).

###### Lemma 2

The following statements are equivalent:

1. The process (5) is mean-square stabilizable across the AWGN fading channel subject to the Markov chain .

2. λ^{2n} ρ(Q^T D) < 1, where D is defined in Corollary 1.

3. There exist V_s > 0, s = 1, …, m, such that

 V_s − Σ_{r=1}^m q_{rs} V_r λ^{2n} (N/(g_s² P_s + N))^n > 0, s = 1, …, m. (12)

Proof. 1 ⟺ 2 is obtained from Corollary 1. Let us prove 2 ⟺ 3. First, it can be inferred from the sufficiency proof of Corollary 1 that the condition λ^{2n} ρ(Q^T D) < 1 is necessary and sufficient for mean-square stability of the Markov jump linear scalar system e(j+1) = λ^n (N/(g²_{σ(j+1)} P_{σ(j+1)} + N))^{n/2} e(j), where the sequence σ(j) is a Markov chain. Then, according to Theorem 2 in [14], this scalar system is mean-square stable if and only if the condition in (12) is satisfied.
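The equivalence 2 ⟺ 3 can be illustrated numerically. Writing M = λ^{2n} D Q^T, the inequalities (12) read (I − M)V > 0 componentwise, so when ρ(M) < 1 the choice V = (I − M)^{−1}·1 is positive and satisfies (12) strictly. A sketch with hypothetical parameters:

```python
import numpy as np

# Hypothetical two-state data satisfying statement 2 of Lemma 2.
lam, n, Nv = 1.2, 2, 1.0
gains = np.array([0.5, 1.5])
powers = np.array([4.0, 4.0])
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
d = (Nv / (gains**2 * powers + Nv))**n
M = lam**(2 * n) * np.diag(d) @ Q.T    # (M V)_s = lam^(2n) d_s sum_r q_rs V_r
assert max(abs(np.linalg.eigvals(M))) < 1.0      # statement 2 holds
V = np.linalg.solve(np.eye(2) - M, np.ones(2))   # candidate certificate
print(V)                                          # positive, satisfies (12)
```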

The following result provides a solution to derive the minimum average power via a geometric program.

###### Theorem 3

Let {P̄*_s} be the optimal solution of the geometric program

 inf_{P̄_s>0, V_s>0, s=1,…,m} Σ_{s=1}^m π_s g_s^{−2} P̄_s    s.t. { λ^{2n} N^n P̄_s^{−n} V_s^{−1} (Σ_{r=1}^m q_{rs} V_r) < 1,  N P̄_s^{−1} ≤ 1,  s = 1, …, m. (13)

Then, the minimum average power required to stabilize the process (5) across the AWGN fading channel subject to the Markov chain is given by P* = Σ_{s=1}^m π_s (P̄*_s − N) g_s^{−2}.¹

¹ The actual average transmit power needed to stabilize the process should be P* + ε for some ε > 0, since the feasible set in (13) is not compact.

Proof. First, based on Lemma 2, the minimum average power required to stabilize the process (5) is given by

 P* = inf_{P_s, V_s, s=1,…,m} Σ_{s=1}^m π_s P_s  s.t. { V_s − Σ_{r=1}^m q_{rs} V_r λ^{2n} (N/(g_s² P_s + N))^n > 0,  P_s ≥ 0, V_s > 0,  s = 1, …, m.

Next, by the variable substitution P̄_s = g_s² P_s + N, the above optimization problem can be equivalently rewritten as

 inf_{P̄_s, V_s, s=1,…,m} Σ_{s=1}^m π_s (P̄_s − N) g_s^{−2}  s.t. { V_s − Σ_{r=1}^m q_{rs} V_r λ^{2n} (N/P̄_s)^n > 0,  P̄_s ≥ N, V_s > 0,  s = 1, …, m.

Since Σ_{s=1}^m π_s N g_s^{−2} is a constant, it can be removed from the objective function without affecting the optimal solution. Thus, the above optimization problem can be equivalently solved by the geometric program (13).
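For intuition, consider the i.i.d. special case, where the GP reduces to minimizing Σ_s π_s P̄_s/g_s² subject to λ^{2n} N^n Σ_s π_s P̄_s^{−n} ≤ 1. The KKT conditions of this reduced problem (my derivation, not from the note) give P̄_s ∝ g_s^{2/(n+1)}, with the constraint tight. The sketch below computes this candidate and checks, by sampling points on the constraint boundary, that none does better; all numbers are hypothetical.

```python
import numpy as np

lam, n, Nv = 1.2, 2, 1.0
g = np.array([0.5, 1.5])               # hypothetical per-state gains
pi = np.array([0.5, 0.5])              # i.i.d. state probabilities

# KKT candidate: Pbar_s = c * g_s^(2/(n+1)), with c making the constraint tight.
c = lam**2 * Nv * (pi @ g**(-2 * n / (n + 1)))**(1 / n)
Pbar = c * g**(2 / (n + 1))
assert np.all(Pbar >= Nv)              # i.e., P_s >= 0 in the original variables
obj = pi @ (Pbar / g**2)

# No randomly sampled point on the constraint boundary should do better,
# since the problem is convex in log variables.
rng = np.random.default_rng(1)
for _ in range(1000):
    P = rng.uniform(0.5, 10.0, size=2)
    P *= (lam**(2 * n) * Nv**n * (pi @ P**(-n)))**(1 / n)  # make constraint tight
    assert pi @ (P / g**2) >= obj - 1e-9
print(round(obj, 4))
```

Note the contrast with capacity-maximizing water-filling: here the better channel state still gets the larger scaled power, but through the exponent 2/(n+1) dictated by the stabilization constraint rather than by a water level.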

For the special case where the sequence σ(j) switches according to an i.i.d. process, we have q_{rs} = π_s for all r, s. It follows by the variable substitution