Stabilizing a linear system using phone calls

04/01/2018 ∙ by Mohammad Javad Khojasteh, et al.

We consider the problem of stabilizing an undisturbed, scalar, linear system over a "timing" channel, namely a channel where information is communicated through the timestamps of the transmitted symbols. Each transmitted symbol is received at the controller subject to a random delay. The sensor can encode messages in the holding times between successive transmissions, and the controller must decode them from the inter-reception times of successive symbols. This setup is analogous to a telephone system where a transmitter signals a phone call to the receiver through a "ring" and, after a random time required to establish the connection, is made aware of the "ring" being received. We show that for the state to converge to zero in probability, the timing capacity of the channel should be at least as large as the entropy rate of the system. In the case where the symbol delays are exponentially distributed, we show a tight sufficient condition using a random-coding strategy. Our results generalize previous event-triggering control approaches, revealing the limitations of using timing information for stabilization, independent of any transmission strategy.


1 Introduction

A networked control system with a feedback loop over a communication channel provides a first-order approximation of a cyber-physical system (CPS) [1, 2]. In this setting, data-rate theorems quantify the impact of the communication channel on the ability to stabilize the system. Roughly speaking, these theorems state that to achieve stabilization the communication rate available in the feedback loop should be at least as large as the intrinsic entropy rate of the system, expressed by the sum of its unstable modes [3, 4, 5, 6].
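As a numeric illustration of this entropy-rate condition, consider how fast an uncertainty interval on the state of a scalar unstable system expands in open loop; the pole value below is an assumed example, not taken from the paper.

```python
import math

a = 2.0   # assumed example value of the unstable pole of a scalar system
T = 1.0   # observation horizon

# In open loop, an uncertainty interval on the state expands by exp(a*T),
# so describing the state to fixed precision requires log(exp(a*T)) = a*T
# nats over the interval, i.e. the feedback channel must sustain a rate of
# a nats (a / ln 2 bits) per unit time.
growth = math.exp(a * T)
rate_nats = math.log(growth) / T
rate_bits = rate_nats / math.log(2)

assert abs(rate_nats - a) < 1e-12
```

This is the sense in which the "intrinsic entropy rate" of the system sets a floor on the communication rate.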

We consider a specific communication channel in the loop: a timing channel. Here, information is communicated through the timestamps of the symbols transmitted over the channel; the "time" carries the message. This formulation is motivated by recent work in event-triggered control showing that the timing of the triggering events carries information that can be used for stabilization [7, 8, 9, 10, 11, 12]. However, while these works do not explicitly quantify the timing information, our goal is to determine precisely, from an information-theoretic perspective, the value of a timestamp when it is used for control.

We consider stabilization of a scalar, undisturbed, continuous-time system over a timing channel and rely on work in information theory that defines the timing capacity of the channel, namely the amount of information that can be encoded using timestamps [13, 14, 15, 16]. In this setting, the sensor communicates with the controller by choosing the timestamps at which symbols from a unitary alphabet are transmitted. The controller receives each transmitted symbol after a random delay is added to the timestamp. We assume that encoding and decoding occur at a faster time scale than that of the controller, and we show that in order to achieve stabilization the timing capacity must be proportional to the entropy rate of the system, with a proportionality factor of at least one that accounts for the difference in time scales. In the case where the random delays are exponentially distributed, we show that a random-coding strategy can be used to achieve this bound. While our analysis is restricted to transmitting symbols from a unitary alphabet, it is natural to extend it and develop "mixed" strategies that use both timing information and data payload, as in event-triggered control. Finally, our results show that state-dependent triggering is only one of many possible strategies to encode information in time, thus opening several new avenues for future investigation.

1.1 Background

The books [3, 4, 17] and the surveys [5, 6] provide detailed discussions of data-rate theorems and related results. A portion of the literature studies stabilization over "bit pipe" channels, where a rate-limited, possibly time-varying, noiseless communication channel is present in the feedback loop [18, 19, 20, 21, 22]. In the case of noisy channels, Tatikonda and Mitter [23] showed that for almost sure (a.s.) stabilization of undisturbed linear systems the Shannon capacity of the channel should be larger than the entropy rate of the system. Matveev and Savkin [24] showed that this condition is also sufficient for discrete memoryless channels, but that a stronger condition is required in the presence of disturbances, namely that the zero-error capacity of the channel must be larger than the entropy rate of the system [25]. Nair [26] derived a similar information-theoretic result in a deterministic setting. Sahai and Mitter [27] considered the less stringent requirement of moment-stabilization over noisy channels in the presence of system disturbances, and provided a data-rate theorem in terms of the anytime capacity of the channel [28, 29, 30, 31, 32].

Another important aspect of CPS is event-triggered control [33, 34]. A primary focus of event-triggered control is minimizing the number of transmissions while simultaneously ensuring the control objective [35, 36]. In this context, the works in [7, 8, 9, 10, 11, 12] show that the timing of the state-dependent triggering events carries information that can be used for stabilization. The amount of timing information is sensitive to the delay in the communication channel. While for small delays stabilization can be achieved with a data rate arbitrarily close to zero, for large delays this is not the case and the data rate must be increased [8, 12]. In this context, our work explicitly quantifies the value of the timing information, independent of any transmission strategy, and also shows its dependence on the random delay, which plays the role of the channel noise in an information-theoretic setting.

In the remainder of the paper, Section 2 formulates the problem, Section 3 describes our results, and Section 4 discusses further related work. Section 5 provides proofs, while Section 6 concludes with some open problems.

1.2 Notation

Let $X^n = (X_1, \ldots, X_n)$ denote a vector of random variables and let $x^n = (x_1, \ldots, x_n)$ denote its realization. If the $X_i$ are independent and identically distributed (i.i.d.), then we refer to a generic $X_i$ by $X$ and skip the subscript. We use $\log$ and $\ln$ to denote the logarithms to base $2$ and base $e$ respectively. We use $H(X)$ to denote the Shannon entropy of a discrete random variable $X$ and $h(X)$ to denote the differential entropy of a continuous random variable $X$. Further, we use $I(X;Y)$ for the mutual information between random variables $X$ and $Y$. We will write $X_n \xrightarrow{P} X$ if $X_n$ converges in probability to $X$. Similarly, we will write $X_n \xrightarrow{a.s.} X$ if $X_n$ converges almost surely to $X$.

2 Problem formulation

We consider the networked control system depicted in Fig. 1.

Figure 1: The stabilization problem

The system dynamics are described by a scalar, continuous-time, noiseless, linear time-invariant (LTI) system

(1) $\dot{x}(t) = a x(t) + b u(t)$,

where $x(t) \in \mathbb{R}$ and $u(t) \in \mathbb{R}$ are the system state and control input respectively. The constants $a, b \in \mathbb{R}$ are such that $a > 0$ and $b \neq 0$. The initial state $x(0)$ is random and is drawn from a distribution with bounded support. Conditioned on the realization of $x(0)$, the system evolves deterministically. The controller has knowledge of the system dynamics in (1).

We assume the sensor/encoder can measure the state of the system with infinite precision, and the controller can apply the control input to the system with infinite precision and with zero delay. The sensor/encoder is connected to the controller through a timing channel (or the telephone signaling channel in [13]) as follows.

The encoder can choose the times at which a symbol from the unitary alphabet is transmitted to the controller. Each symbol is delivered to the controller after a random delay. The sensor/encoder receives an instantaneous but causal acknowledgement when the symbol is delivered, similar to [27, 37].

The encoder uses a "waiting time" to encode information, i.e., after the previous symbol has been received by the controller, the sensor waits a chosen number of seconds before transmitting the next symbol. We assume that the channel is initialized with a symbol received at time zero.

The analogy is with a telephone system where a transmitter signals a phone call to the receiver through a "ring" and, after a random time required to establish the connection, is made aware of the "ring" being received. Communication between transmitter and receiver can then occur without any vocal exchange, by encoding messages in the waiting times between consecutive calls.

Let $S_k$ be the inter-reception time between two consecutive symbols, i.e.,

(2) $S_k = W_k + N_k$,

where $W_k$ is the waiting time chosen by the encoder and the $N_k$ are random delays that are assumed to be i.i.d. Fig. 2 provides an example of the timing channel in action.

We assume the use of a random codebook, namely the holding times used to encode any given message are i.i.d. and also independent of the random delays . This assumption is made to simplify our stability analysis, and does not change the capacity of the communication channel.
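As an illustration, the channel just described (a chosen holding time followed by an i.i.d. random delay, with an acknowledgement at reception) is easy to simulate; the unit-rate exponential holding times and the mean-delay value below are assumed example choices.

```python
import random

random.seed(0)

def simulate_channel(holding_times, mean_delay):
    """After each acknowledgement the sensor waits the given holding time;
    the next symbol then arrives after an i.i.d. exponential delay.
    Returns the resulting inter-reception times."""
    inter_receptions = []
    for w in holding_times:
        delay = random.expovariate(1.0 / mean_delay)  # random channel delay
        inter_receptions.append(w + delay)            # holding time + delay
    return inter_receptions

# Holding times drawn i.i.d., as in a random codebook.
ws = [random.expovariate(1.0) for _ in range(1000)]
ss = simulate_channel(ws, mean_delay=0.5)
assert all(s > w for w, s in zip(ws, ss))  # delays are strictly positive
```

The decoder sees only the inter-reception times, from which it must infer the holding times chosen by the encoder.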

We assume the blocklength of a codeword is $n$, i.e. the decoder uses a set of $n$ timestamps to decode. The reception time of the last symbol of the codeword is denoted by $T_n$. We are interested in stabilizing the system in probability at a sequence of times such that

(3)

i.e., we want the state evaluated at these times to converge to zero in probability.

Figure 2: The timing channel. Subscripts and are used to denote sent and received symbols, respectively.
Figure 3: The estimation problem.

The following definitions are derived from [13], incorporating our random coding assumption.

Definition 1.

An iid-feedback-timing code for the telephone signaling channel consists of a codebook of codewords whose symbols are picked i.i.d. from a common distribution, together with a decoder which, upon observing the inter-reception times, selects the correct transmitted codeword with at least the prescribed probability. Moreover, the codebook is such that the expected arrival time of the last symbol is not larger than the prescribed deadline,

(4)
Definition 2.

The rate of an iid-feedback-timing code is

(5)
Definition 3.

The timing capacity of the telephone signaling channel is the supremum of the achievable rates, namely the largest rate such that there exists a sequence of iid-feedback-timing codes satisfying

(6)

with error probability vanishing as the blocklength tends to infinity.

The capacity definition in [13] is slightly more general and does not include a random coding assumption. However, the following result [13, Theorem 8] applies to our random coding setup, since the capacity in [13] is achieved by random codes.

Theorem 1 (Anantharam and Verdú).

The timing capacity of the telephone signaling channel is given by

(7) $C = \sup \dfrac{I(W; W+N)}{\mathbb{E}[W+N]}$,

where the supremum is over the distributions of the holding time $W$, and if $N$ is exponentially distributed then

(8) $C = \dfrac{1}{e\,\mathbb{E}[N]}$ nats per unit time.
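Taking the exponential-delay capacity $C = 1/(e\,\mathbb{E}[N])$ of Anantharam and Verdú at face value, the trade-off between average delay and capacity is immediate to tabulate; the mean-delay values below are assumed examples.

```python
import math

def timing_capacity_exponential(mean_delay):
    # Timing capacity of the telephone signaling channel with i.i.d.
    # exponential delays (Anantharam-Verdu): 1/(e * E[N]) nats per unit time.
    return 1.0 / (math.e * mean_delay)

# The capacity scales inversely with the mean delay:
# halving the average delay doubles the capacity.
assert timing_capacity_exponential(0.25) == 2 * timing_capacity_exponential(0.5)
```

In particular, a small average delay yields a large timing capacity, which is the regime where stabilization through timing alone becomes possible.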

3 Statement of the results

In order to derive necessary and sufficient conditions for the stabilization problem depicted in Fig. 1, we first consider the estimation problem depicted in Fig. 3. Consider the system

(9) $\dot{x}(t) = a x(t)$

We want to obtain an estimate $\hat{x}$ of the state $x$, given the reception of $n$ symbols over the telephone signaling channel, such that the estimation error tends to zero in probability. As before, we assume that the encoder has causal knowledge of the reception times via acknowledgements through the system, and estimation occurs at times that satisfy (3). Our first result states that the stabilization problem depicted in Fig. 1 is equivalent to the estimation problem depicted in Fig. 3.

Theorem 2.

Consider the stabilization of the system (1) and the estimation of the system (9) over a timing channel, as in (2), at a sequence of times satisfying (3). Then, a controller driving the state to zero in probability exists in the stabilization problem if and only if there exists an encoder-decoder pair in the estimation problem whose estimation error converges to zero in probability.

The next theorem provides a necessary rate for the state estimation problem.

Theorem 3.

Consider the estimation problem depicted in Fig. 3 with system dynamics (9). Consider transmitting symbols over the telephone signaling channel (2), and a sequence of estimation times satisfying (3). If , then

(10)

The entropy rate of the system is given by $a$ nats per unit time [38]. Setting the observation times equal to the reception times of the last codeword symbols, our result recovers a scenario that parallels the data-rate theorems, stating that the mutual information between an encoding symbol and its received noisy version should be larger than the average "information growth" of the state during the inter-reception interval, which is given by

(11)
Figure 4: Typical realization of a codeword transmission for large values of the blocklength.

In this case, using (7) we also obtain from (10) that

(12)

On the other hand, when there is a time lag between the reception time of the last symbol in the codeword and the observation time, our result shows that we must pay a corresponding penalty, see Fig. 4. Using (7), in this case we obtain from (10) that

(13)

The case in which the observation times grow much faster than the reception times requires transmission of a codeword carrying an infinite amount of information over a channel of infinite capacity, thus revealing the initial state of the system with infinite precision. Once this state is known to the controller, the system can be kept stable w.h.p., even at observation times that are arbitrarily far in the future. This case is equivalent to transmitting a single real number over a channel without error, or a single symbol from a unitary alphabet with zero delay.

The final theorem provides a sufficient condition for convergence of the state estimation error to zero in probability in the case of exponentially distributed delays.

Theorem 4.

Consider the estimation problem depicted in Fig. 3 with system dynamics (9). Consider transmitting symbols over the telephone signaling channel (2). Assume the delays are drawn i.i.d. from an exponential distribution. If the capacity of the timing channel is at least

(14)

then for any sequence of times that satisfies (3), we can compute an estimator such that:

(15)

4 Related work

We now discuss some related work in more detail. First, Tatikonda and Mitter in [23] considered the problem of stabilizing the discrete-time version of the system in (1) over an erasure channel. In their model, at each time step of the system's evolution the sensor transmits a packet of bits to the controller, and this is delivered with probability $1-p$ or dropped with probability $p$. It is shown that a necessary condition for stabilization is

(16)

Specializing Theorem 3 and using Theorem 2, we obtain the following necessary condition for stabilization in probability:

(17)

We now compare (16) and (17). The rate of expansion of the state space of the continuous system in open loop is $a$ nats per unit time, while for the discrete system it is measured in bits per time step. Accordingly, in the case of (17) the controller must receive the nats describing the initial state during a time interval whose average length is the expected delivery time. Similarly, in the case of (16) the controller must receive the bits describing the initial state over a time interval whose average length corresponds to the average number of trials before the first successful reception

(18)
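The average number of trials before the first successful reception over the erasure channel is geometric with mean $1/(1-p)$, which can be checked by simulation; the drop probability below is an assumed example value.

```python
import random

random.seed(1)
p = 0.3  # assumed example drop probability

# Number of transmissions until the first success is geometric
# with mean 1 / (1 - p).
trials = []
for _ in range(200_000):
    k = 1
    while random.random() < p:  # packet dropped, retransmit
        k += 1
    trials.append(k)

avg = sum(trials) / len(trials)
assert abs(avg - 1 / (1 - p)) < 0.02
```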

The works [7, 8, 9, 10, 11, 12] use event-triggered policies that exploit timing information for stabilization over a digital communication channel. However, most of these event-triggering policies encode information over time in a specific state-dependent fashion. Our framework generalizes this idea to provide a fundamental limit on the rate at which information can be encoded in time in Theorem 3. Theorem 4 achieves this limit in the case of exponentially distributed delays.

The authors of [7, 9] consider stabilization over a zero-delay digital communication channel. They show that in this case, using event triggering, it is possible to achieve stabilization with any positive transmission rate, thus implicitly using the information in the timing. For channels without delay, an alternative policy to the one in [7, 9] would be to transmit a single symbol at a time equal to a bijective mapping of the initial state onto the non-negative reals. The reception of the symbol would reveal the initial state exactly, and the system could be stabilized.
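A minimal sketch of such a zero-delay policy, using a hypothetical bijection (a simple shift of the bounded initial state onto the non-negative reals; the bound is an assumed example value):

```python
L = 10.0  # assumed bound on the magnitude of the initial state

def transmit_time(x0):
    """Encode the initial state as the (non-negative) transmission time."""
    assert abs(x0) <= L
    return x0 + L          # bijection from [-L, L] onto [0, 2L]

def decode(t):
    """With zero channel delay, the reception time reveals x0 exactly."""
    return t - L

x0 = -3.7
assert decode(transmit_time(x0)) == x0
```

With any positive delay this exact recovery breaks down, which is why the random delay plays the role of channel noise.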

The authors of [8] showed that when the delay is positive but sufficiently small, a triggering policy can still achieve stabilization with any positive transmission rate. However, as the delay increases past a critical threshold, the timing information becomes so outdated that the transmission rate must begin to increase. In our case, since the capacity of our timing channel depends on the distribution of the delay, we may also expect that a large value of the capacity, corresponding to a small average delay, would allow stabilization to occur using only timing information. Indeed, when the delays are exponentially distributed, from (13) and (14) it follows that, as long as the expected value of the delay satisfies

(19)

it is possible to stabilize the system by using only the implicit timing information. The system is not stabilizable using only implicit timing information if the expected value of the delay grows beyond this critical threshold.

5 Proofs

5.1 Proof of Theorem 2.

In this proof, we adapt the arguments of [18] and [23] to our setup with a timing channel, a continuous-time system, and convergence in probability. We start by considering the direct part of the proof. We first show that if the state converges to zero in probability in the stabilization problem, then we can ensure that the estimation error converges to zero in probability in the estimation problem.

From (1), we have

(20)
(21)

It follows that:

(22)

then we also have

(23)

Now, since the estimator can always simulate internally the state of the control problem, it can reproduce the control input, and thus the error for the estimation problem will also converge to zero in probability.

We now consider the converse part of the proof. If the estimation error converges to zero in probability in the estimation problem, then it is also possible to design a controller that drives the state to zero in probability.

Since the controller is aware of the control input, it can construct an estimate of the state as

(24)

where the term in (24) is the best available estimate of the initial condition up to the given time. Subtracting (24) from (20) and accounting for (21), we have

(25)

It follows that to drive the estimation error at the controller to zero it is sufficient to estimate the initial condition with an error that vanishes exponentially. This is equivalent to estimating the system in (9) with vanishing error.

What remains to be proven is that there exists a controller that drives the state to zero in probability. In the following, we assume without loss of generality that the input gain in (1) is one. We also assume the delays are not identically zero, since with zero delay the timing capacity is infinite, and both estimation and control problems can be solved by communicating the initial state with arbitrary precision over an arbitrarily small time interval. In this case, as the blocklength tends to infinity the reception time of the last symbol tends to infinity, and thus by (3) the observation times tend to infinity as well. With these assumptions, we choose a sufficiently large constant and define the control input in (1) accordingly. From (1) we have

(26)

By solving (26) and using the triangle inequality, we get

(27)

The argument proceeds by showing that, since the observation times diverge, the first term in (27) must go to zero, and, since the estimation error converges to zero, the second term must go to zero too.

For any , since and , for large enough, say , we have that .

Since , we also have that for any for large enough n, say , we have  . Therefore, for we have that

(28)

is true with probability at least . Since and for all time , we have that for all , . Thus, for from (28) we have

(29)

with probability at least . Hence, by rewriting (29) we have

(30)

with probability at least the prescribed confidence level. By letting the tolerances tend to zero, the result follows.

5.2 Proof of Theorem 3

We introduce a few definitions and useful lemmas before the beginning of the proof.

5.2.1 Rate-distortion function

For any and we define the rate-distortion function of the source at times as

(31)

The proof of the following lemma adapts an argument of [23] to our continuous-time setting.

Lemma 1.

We have

(32)
Proof.

Let

(33)

Using the chain rule, we have

(34)

Given and , there is no uncertainty in , hence we deduce

(35)

Since , , and , it then follows that

(36)

Since conditioning reduces the entropy, we have

(37)

By (33) and since the uniform distribution maximizes the differential entropy among all distributions with bounded support, we have

(38)

Since , we have

(39)

Combining (38) and (39) we obtain

Finally, noting that this inequality is independent of the choice of estimator, the result follows. ∎

5.2.2 A remark on the entropy rate

By letting in (32), we have

(40)

where

(41)

For sufficiently small we have that , and hence

(42)

It follows that for sufficiently small distortion the rate distortion per unit time of the source must be at least as large as the entropy rate of the system. Since the rate-distortion function represents the number of bits required to represent the state process up to a given fidelity, this provides an operational characterization of the entropy rate.

5.2.3 Mutual information

The proof of Lemma 2 follows the converse argument of [13] with minor modifications.

Lemma 2.

Under the same assumptions as Theorem 3, we have

(43)
Proof.

We denote the transmitted message by and the decoded message by . Then

(44)

is a Markov chain. Therefore, using the data-processing inequality 

[39] we have

(45)

By the chain rule for the mutual information, we have

(46)

Since is uniquely determined by the encoder from , using the chain rule we deduce

(47)

In addition, again using the chain rule we have

(48)

is conditionally independent of when given . Thus:

(49)

Combining (47), (48), and (49) it follows that

(50)

Since the waiting times and the random delays are i.i.d. and independent of each other, it follows that the sequence of inter-reception times is also i.i.d., and we have

(51)

By combining  (45), (46), (50) and (51) the result follows. ∎

5.2.4 The necessary condition

We are now ready to finish the proof of Theorem 3.

Proof.

By the assumption of the theorem, for any we have

(52)

Hence, for any and any there exist such that for

(53)

Using (53), (31), and Lemma 1 it follows that for

(54)

By (31), we have

(55)

and using Lemma 2 it follows that

(56)

Combining (54), (55), and (56) we deduce that for

(57)

We now let , so that , and we have

(58)

Since, from (3) it follows that

(59)

Since for all the measurement times satisfying (59), we let in (58) and the result follows. ∎

5.3 Proof of Theorem 4

Proof.

If the expected delay is zero, the timing capacity is infinite and the result is trivial. Hence, for the rest of the proof assume that

(60)

As the blocklength tends to infinity, the reception time of the last symbol tends to infinity, and hence by (3) the estimation times tend to infinity as well. We would like to construct a codebook and an estimator such that the estimation error goes to zero in probability.

Our achievability scheme relies on the capacity-achieving code construction from [13, Theorem 8].

We construct a sequence of random codebooks. The codeword symbols are generated using a random codebook, such that each holding time is drawn i.i.d. from a mixture of a delta function and an exponential. Following Theorem 3 from [13], it suffices to choose the mixture parameters appropriately.

First, we must bound the probability of the error event that the last symbol does not arrive before the estimation deadline. Since the deadline eventually exceeds the expected arrival time of the last symbol by a constant factor, for large enough blocklength we have that

(61)

Tail bounds on sums of exponential random variables are investigated in [40]. Since the delays are drawn i.i.d. from an exponential distribution and the holding times are drawn i.i.d. from a mixture of a delta function and an exponential, the arrival time of the last symbol is a sum of exponentially distributed random variables. Consequently, using the upper bound provided in [40] we have

(62)

where the constant in the exponent is positive. Hence we can use a union bound to bound the probability of interest as:

(63)
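The concentration of the arrival time behind this bound can be checked empirically; the sketch below verifies, for unit-mean exponential summands, that the probability of exceeding the mean by a fixed factor shrinks as the number of summands grows (sample sizes are assumed choices).

```python
import random

random.seed(2)

def tail_prob(n, delta, trials=20_000):
    """Empirical estimate of P(sum of n i.i.d. Exp(1) variables > (1+delta)*n)."""
    count = 0
    for _ in range(trials):
        s = sum(random.expovariate(1.0) for _ in range(n))
        if s > (1 + delta) * n:
            count += 1
    return count / trials

p_small, p_large = tail_prob(10, 0.5), tail_prob(80, 0.5)
assert p_large < p_small  # the tail decays as the number of summands grows
```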

Now we focus on the second term in (63). We define the estimation error in terms of the best estimate of the initial condition at the estimation time. Thus, it suffices to focus on estimating the initial condition accurately.

Our messages are the centroids of the quantization cells from a uniform quantization of the interval of possible initial conditions. Each codeword in the codebook maps to the centroid of a quantization cell. Our aim is to ensure that the quantization error decays faster than the open-loop growth of the state. The length of each bin is inversely proportional to the number of codewords, so the desired decay is achieved if we can construct codebooks whose size grows sufficiently fast with the blocklength.
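The uniform quantizer in this construction can be sketched as follows; the interval bound and the codebook size are assumed example values.

```python
L, M = 1.0, 2 ** 10          # assumed state bound and number of codewords
bin_len = 2 * L / M          # length of each quantization cell

def encode(x0):
    """Index of the quantization cell containing x0 in [-L, L]."""
    return min(int((x0 + L) / bin_len), M - 1)

def centroid(i):
    """Centroid of cell i, used as the transmitted message."""
    return -L + (i + 0.5) * bin_len

x0 = 0.123456
assert abs(centroid(encode(x0)) - x0) <= bin_len / 2  # quantization error bound
```

Growing $M$ exponentially with the blocklength makes the quantization error decay exponentially, which is what the rest of the argument requires.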

Now consider the ratio of interest:

(64)

where the inequality is true from the bound on the expected transmission time of the last symbol.

Since we know the timing capacity of the channel is lower-bounded by , we know that the codes are such that

(65)

Hence, taking limits in (64) and using (3), the ratio tends to zero, which concludes the proof. ∎

6 Conclusions

Recently, it has been shown that event-triggering policies, which encode information over time in a state-dependent fashion, can exploit timing information for stabilization over a digital communication channel. In a more general framework, this paper studied the fundamental limits of using timing information for stabilization, independent of any transmission strategy. We showed that for stabilization of an undisturbed scalar linear system over a channel with a unitary alphabet, the timing capacity should be at least as large as the entropy rate of the system. In addition, in the case of exponentially distributed delays, we provided a tight sufficient condition. Important open problems for future research include the effect of system disturbances, the combination of timing information and packets with payload, and extensions to vector systems.

Acknowledgements

This research was partially supported by NSF award CNS-1446891. A part of this work was done while M. J. Khojasteh was visiting Microsoft Research. The authors would like to thank Serdar Yuksel and Girish Nair for helpful discussions.

References

  • [1] K.-D. Kim and P. R. Kumar, “Cyber–physical systems: A perspective at the centennial,” Proceedings of the IEEE, vol. 100 (Special Centennial Issue), pp. 1287–1308, 2012.
  • [2] J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, “A survey of recent results in networked control systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 138–162, 2007.
  • [3] S. Yüksel and T. Başar, Stochastic Networked Control Systems: Stabilization and Optimization under Information Constraints.   Springer Science & Business Media, 2013.
  • [4] A. S. Matveev and A. V. Savkin, Estimation and control over communication networks.   Springer Science & Business Media, 2009.
  • [5] M. Franceschetti and P. Minero, “Elements of information theory for networked control systems,” in Information and Control in Networks.   Springer, 2014, pp. 3–37.
  • [6] G. N. Nair, F. Fagnani, S. Zampieri, and R. J. Evans, “Feedback control under data rate constraints: An overview,” Proceedings of the IEEE, vol. 95, no. 1, pp. 108–137, 2007.
  • [7] E. Kofman and J. H. Braslavsky, “Level crossing sampling in feedback stabilization under data-rate constraints,” in 45th IEEE Conference on Decision and Control (CDC).   IEEE, 2006, pp. 4423–4428.
  • [8] M. J. Khojasteh, P. Tallapragada, J. Cortés, and M. Franceschetti, “The value of timing information in event-triggered control,” arXiv preprint arXiv:1609.09594, 2016.
  • [9] Q. Ling, “Bit rate conditions to stabilize a continuous-time scalar linear system based on event triggering,” IEEE Transactions on Automatic Control, 2016.
  • [10] M. J. Khojasteh, P. Tallapragada, J. Cortés, and M. Franceschetti, “Time-triggering versus event-triggering control over communication channels,” in 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Dec 2017, pp. 5432–5437.
  • [11] S. Linsenmayer, R. Blind, and F. Allgöwer, “Delay-dependent data rate bounds for containability of scalar systems,” IFAC-PapersOnLine, vol. 50, no. 1, pp. 7875–7880, 2017.
  • [12] M. J. Khojasteh, M. Hedayatpour, J. Cortes, and M. Franceschetti, “Event-triggered stabilization of disturbed linear systems over digital channels,” arXiv preprint arXiv:1801.08704, 2018.
  • [13] V. Anantharam and S. Verdu, “Bits through queues,” IEEE Transactions on Information Theory, vol. 42, no. 1, pp. 4–18, 1996.
  • [14] A. S. Bedekar and M. Azizoglu, “The information-theoretic capacity of discrete-time queues,” IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 446–461, 1998.
  • [15] T. J. Riedl, T. P. Coleman, and A. C. Singer, “Finite block-length achievable rates for queuing timing channels,” in Information Theory Workshop (ITW), 2011 IEEE.   IEEE, 2011, pp. 200–204.
  • [16] A. B. Wagner and V. Anantharam, “Zero-rate reliability of the exponential-server timing channel,” IEEE Transactions on Information Theory, vol. 51, no. 2, pp. 447–465, 2005.
  • [17] S. Fang, J. Chen, and I. Hideaki, Towards integrating control and information theories.   Springer, 2017.
  • [18] S. Tatikonda and S. Mitter, “Control under communication constraints,” IEEE Transactions on Automatic Control, vol. 49, no. 7, pp. 1056–1068, 2004.
  • [19] G. N. Nair and R. J. Evans, “Stabilizability of stochastic linear systems with finite feedback data rates,” SIAM Journal on Control and Optimization, vol. 43, no. 2, pp. 413–436, 2004.
  • [20] J. Hespanha, A. Ortega, and L. Vasudevan, “Towards the control of linear systems with minimum bit-rate,” in Proc. 15th Int. Symp. on Mathematical Theory of Networks and Systems (MTNS), 2002.
  • [21] P. Minero, M. Franceschetti, S. Dey, and G. N. Nair, “Data rate theorem for stabilization over time-varying feedback channels,” IEEE Transactions on Automatic Control, vol. 54, no. 2, p. 243, 2009.
  • [22] P. Minero, L. Coviello, and M. Franceschetti, “Stabilization over Markov feedback channels: the general case,” IEEE Transactions on Automatic Control, vol. 58, no. 2, pp. 349–362, 2013.
  • [23] S. Tatikonda and S. Mitter, “Control over noisy channels,” IEEE Transactions on Automatic Control, vol. 49, no. 7, pp. 1196–1201, 2004.
  • [24] A. S. Matveev and A. V. Savkin, “An analogue of Shannon information theory for detection and stabilization via noisy discrete communication channels,” SIAM Journal on Control and Optimization, vol. 46, no. 4, pp. 1323–1367, 2007.
  • [25] ——, “Shannon zero error capacity in the problems of state estimation and stabilization via noisy communication channels,” International Journal of Control, vol. 80, no. 2, pp. 241–255, 2007.
  • [26] G. Nair, “A non-stochastic information theory for communication and state estimation,” IEEE Transactions on Automatic Control, vol. 58, pp. 1497–1510, 2013.
  • [27] A. Sahai and S. Mitter, “The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link. Part I: Scalar systems,” IEEE Transactions on Information Theory, vol. 52, no. 8, pp. 3369–3395, 2006.
  • [28] G. Como, F. Fagnani, and S. Zampieri, “Anytime reliable transmission of real-valued information through digital noisy channels,” SIAM Journal on Control and Optimization, vol. 48, no. 6, pp. 3903–3924, 2010.
  • [29] R. Ostrovsky, Y. Rabani, and L. J. Schulman, “Error-correcting codes for automatic control,” IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 2931–2941, 2009.
  • [30] R. T. Sukhavasi and B. Hassibi, “Linear time-invariant anytime codes for control over noisy channels,” IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 3826–3841, 2016.
  • [31] A. Khina, W. Halbawi, and B. Hassibi, “(Almost) practical tree codes,” in Information Theory (ISIT), 2016 IEEE International Symposium on.   IEEE, 2016, pp. 2404–2408.
  • [32] P. Minero and M. Franceschetti, “Anytime capacity of a class of Markov channels,” IEEE Transactions on Automatic Control, vol. 62, no. 3, pp. 1356–1367, 2017.
  • [33] P. Tabuada, “Event-triggered real-time scheduling of stabilizing control tasks,” IEEE Transactions on Automatic Control, vol. 52, no. 9, pp. 1680–1685, 2007.
  • [34] W. P. M. H. Heemels, K. H. Johansson, and P. Tabuada, “An introduction to event-triggered and self-triggered control,” in 51st IEEE Conference on Decision and Control (CDC).   IEEE, 2012, pp. 3270–3285.
  • [35] P. Tallapragada and J. Cortés, “Event-triggered stabilization of linear systems under bounded bit rates,” IEEE Transactions on Automatic Control, vol. 61, no. 6, pp. 1575–1589, 2016.
  • [36] J. Pearson, J. P. Hespanha, and D. Liberzon, “Control with minimal cost-per-symbol encoding and quasi-optimality of event-based encoders,” IEEE Transactions on Automatic Control, vol. 62, no. 5, pp. 2286–2301, 2017.
  • [37] Q. Ling, “Bit rate conditions to stabilize a continuous-time linear system with feedback dropouts,” IEEE Transactions on Automatic Control, 2017.
  • [38] G. N. Nair, R. J. Evans, I. M. Mareels, and W. Moran, “Topological feedback entropy and nonlinear stabilization,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1585–1597, 2004.
  • [39] T. M. Cover and J. A. Thomas, Elements of information theory.   John Wiley & Sons, 2012.
  • [40] S. Janson, “Tail bounds for sums of geometric and exponential variables,” arXiv preprint arXiv:1709.08157, 2017.