Stabilizing Error Correction Codes for Controlling LTI Systems over Erasure Channels

01/14/2022
by   Jan Østergaard, et al.
Aalborg University

We propose (k,k') stabilizing codes, a class of delayless error correction codes that are useful for control over networks with erasures. For each input symbol, k output symbols are generated by the stabilizing code. Receiving any k' of these outputs guarantees stability. Thus, the system to be stabilized is taken into account in the design of the erasure codes. Our focus is on LTI systems, and we construct codes based on independent encodings and multiple descriptions. The theoretical efficiency and performance of the codes are assessed, and their practical performance is demonstrated in a simulation study. There is a significant gain over other delayless codes such as repetition codes.


I Introduction

There has been a vast amount of literature on networked control systems over erasure channels, cf. [29, 27, 33, 7, 24, 9, 21, 16, 6, 30, 2, 23, 5, 32, 22, 11, 3, 13, 18, 10, 8, 1]. In [27], it was shown that for a given unstable linear time invariant (LTI) system, there exists a critical limit on the packet dropout rate beyond which the system cannot be stabilized in the usual mean-square sense. To go beyond this critical limit, several techniques have been proposed ranging from error correction codes [24, 16] and multiple descriptions [13] to packetized predictive control [21] to name a few.

Assume the output of the plant is to be encoded and transmitted over a digital erasure channel, where packets are either completely lost or received without errors. To recover from erasures, error correction codes can be utilized [26, 16]. Error correction codes are often designed with a certain loss rate of the channel in mind, and do not necessarily take the plant into account (exceptions include the work in [16], which tracks the plant state). For example, an (n, k) erasure channel code takes k source packets and outputs n channel packets. If any k of the n channel packets are received, the original k source packets can be completely recovered. If more than k packets are received, the additional received packets are not useful, since they do not contain any further information about the plant state than what is already known. Finally, if fewer than k packets are received, the source packets can generally not be recovered at all, and all the transmitted information is in this case wasted.
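To make this all-or-nothing behavior concrete, the following sketch models the recovery rule of a generic (n, k) maximum-distance-separable erasure code; the function name and parameters are our own illustration, not an implementation from the paper:

```python
def mds_recoverable(n: int, k: int, num_received: int) -> bool:
    """An (n, k) MDS erasure code recovers all k source packets
    if and only if at least k of the n channel packets arrive."""
    assert 0 <= num_received <= n
    return num_received >= k

# Receiving k packets suffices; extra packets add nothing new,
# and fewer than k packets yield no recovery at all.
print([mds_recoverable(5, 3, r) for r in range(6)])
# [False, False, False, True, True, True]
```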

Fig. 1: Noisy LTI system that is controlled over a digital channel [25].

An alternative to error correction codes is multiple descriptions [4], which combines source and channel coding. With multiple descriptions, the source is encoded into a number of descriptions, which are individually transmitted over the channel. There is no priority on the descriptions, and any subset of the descriptions can be jointly decoded to achieve a desired performance. Multiple descriptions were, for example, used for state estimation in [33] and combined with packetized predictive control in [13]. One of the problems with multiple descriptions is that it is generally very hard to design good multiple-description codes. Another problem is that the descriptions generally contain redundant information, except in the limit of vanishing data rates or in the extreme asymmetric situation, where the descriptions are prioritized and a successive refinement scheme is obtained. It was shown in [31] and [20] that if one is able to construct a successive refinement source coder, then the layers in the successive refinement code can be combined with traditional error correction codes in order to obtain a (suboptimal) multiple-description code. It was recently shown that a combination of successive refinement and multiple descriptions with feedback becomes rate-distortion optimal under certain asymptotic conditions [15].

We will in this paper focus on discrete-time LTI plants, stationary Gaussian disturbances, a Gaussian initial state, and scalar-valued control inputs and sensor outputs. Thus, the plant state can have an arbitrary dimensionality, but the control signal as well as the output of the plant are both scalar valued. For such a system, the minimal information rate required to guarantee stability and a desired performance (measured in terms of the variance of the plant output) was completely characterized in [25] for the case of communications over error-free digital channels. An illustration of the system is shown in Fig. 1.

We show that simple stabilizing erasure codes can be obtained from properly designed independent encodings [15] or multiple descriptions [4]. Specifically, for a given LTI plant we design a (k,k') stabilizing code such that when combining any k' descriptions of the code, the resulting SNR is above a critical limit, which guarantees that the decoded control signal contains sufficient information to stabilize the plant. We show that simple codes based on independent encodings are asymptotically efficient for nearly stable plants. In general, for unstable plants, it is advantageous to use a design based on multiple descriptions. In a simulation study, we demonstrate that for the same sum-rate and delay, it is possible to achieve a significant gain in performance over that which is possible with repetition coding.

Fig. 2: Linear system that models the system of Fig. 1.

II Background

Let us begin by considering the networked control system presented in [25], which is shown in Fig. 1. The plant is an open-loop unstable LTI system with a scalar control input and a scalar sensor output. The plant is driven by an external disturbance, and an error signal is related to the output performance. The plant output is to be encoded by the causal encoder, transmitted over the ideal noiseless digital channel, and then decoded by the causal decoder. The encoder-decoder pair also contains the controller. Thus, the output of the decoder is the control signal to the plant. For a fixed data rate of the coder, the performance will be measured by the variance of the output. We have the following linear input-output relationship through the plant:

(1)

It was shown in [25] that if the initial state and the external disturbances are arbitrarily colored but jointly Gaussian, then the optimal encoder-decoder pair constitutes a linear system plus noise. This implies that the system in Fig. 1 can be modelled by the linear system shown in Fig. 2. In this system, the two filters are LTI systems, and the injected signal is additive white Gaussian noise, which models the coding noise due to source coding. In this equivalent form, we have the following relationship [25]:

(2)

where $z^{-1}$ indicates a one-step delay operator. The signal-to-noise ratio (SNR) of the system is defined as:

(3)

It was shown in [25] that for any proper LTI filters that make the system in Fig. 2 internally stable and well-posed, we have the following explicit expressions:

(4)
(5)
(6)
(7)

To find the optimal filters that minimize the performance subject to a constraint on the SNR, one needs to solve a convex optimization problem [25]. A lower bound on the minimal coding rate achievable when using optimal filters is given by:

(8)

It is clear from (4) that the SNR approaches a finite limit asymptotically, which shows that there is a minimum SNR required for stability and, correspondingly, a minimum rate required for stability.
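For reference, the classical characterization of these two limits for a plant with unstable poles $p_i$ (our notation) is:

```latex
\mathrm{SNR}_{\min} = \prod_{i}|p_i|^2 - 1,
\qquad
R_{\min} = \frac{1}{2}\log_2\!\left(1+\mathrm{SNR}_{\min}\right)
         = \sum_{i}\log_2|p_i| ,
```

where the product and sum run over the unstable plant poles. For a single real pole at 2, this gives a minimum SNR of 3 and a minimum rate of 1 bit, matching Example 1 below.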

III Causal Coders

The encoder at time $t$ is a (possibly) time-varying causal one-to-many map, which at each time instant produces $k$ outputs, that is:

(9)

where the $i$th output of the encoder at time $t$ is generated using only the sequence of current and past plant outputs. The encoder also has access to a sequence of side information. Thus, the encoder can be randomized via the side information, which for example allows one to obtain a stochastic encoder. The outputs of the encoder are discrete. However, by use of subtractive dithering techniques, the resulting reconstructed values at the decoder are continuous. With this, the quantizer can be modelled as an additive white noise source [34].
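A standard way to state this additive-noise model for a subtractively dithered uniform quantizer with step-size $\Delta$ (our notation) is:

```latex
\hat{y} = Q_\Delta(y + d) - d = y + q,
\qquad
q \sim \mathcal{U}\!\left[-\tfrac{\Delta}{2},\tfrac{\Delta}{2}\right],
\quad
\sigma_q^2 = \frac{\Delta^2}{12},
```

where the dither $d$ is known at both encoder and decoder, and the effective noise $q$ is white and independent of the input [34].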

At each time instant, the $k$ descriptions are produced and transmitted over the digital erasure channels, and the decoder observes the set of indices of the received descriptions. For each possible subset of received descriptions there is a causal decoder, and for a particular choice of decoder, the reconstructed signal at time $t$ is given by:

(10)

Section IV considers lower bounds on the coding rates based on Gaussian coding schemes. The operational data rates obtained when using a practical coding scheme are generally greater than these lower bounds. These operational issues regarding the stabilizing codes are treated in the longer version of the paper [14]. In particular, since we are here focusing on the situation with a scalar output, we need to use scalar quantizers. It is well known that scalar quantizers suffer from a rate loss compared to vector quantizers, except at very low bit rates. In addition, we need to entropy encode the output of the quantizer to further reduce the bitrate. Since the entropy coder operates on one sample at a time, it will generally not be possible to reach the entropy of the output.

IV Stabilizing Error Correction Codes

We first introduce some definitions that will be needed in the sequel.

Definition 1

We consider linear systems of the form shown in Fig. 2, each characterized by its coding rate and its performance, where the performance is given by (5).

Definition 2

A (k, k') stabilizing code for a given system produces k descriptions such that using any k' of them is sufficient to stabilize the system.

To quantify the efficiency of a stabilizing code when used on a particular system, we will compare the sum-rate of the k descriptions to the rate required for a single-description code to achieve the same performance as that obtained when using all k descriptions (without erasures). In the linear Gaussian case, the efficiency can be assessed by simple means, as shown in the definition below.

Definition 3

The efficiency of a (k, k') stabilizing code for a given system is defined as:

(11)

where $\mathrm{SNR}_1$ is the SNR when using any single description out of the $k$ descriptions, and $\mathrm{SNR}_k$ is the SNR when combining all $k$ descriptions.

When measuring efficiency in (11), we need to make sure that we compare the coding rates of systems having similar performance (in terms of SNR). The best performance of a stabilizing code is obtained when using all $k$ descriptions, which results in an SNR of $\mathrm{SNR}_k$. The rate of each description is the same, so the sum-rate is $k$ times the per-description rate. On the other hand, when not using a stabilizing code, we need the coding rate of a single-description system that achieves an SNR of $\mathrm{SNR}_k$.

For a classical error correction code that produces $n$ outputs for each $k$ input samples (or blocks of samples), the efficiency is $k/n$, and the delay is $k$ samples (blocks). A repetition code that duplicates the same source block $n$ times has efficiency $1/n$ and zero delay. The stabilizing codes that we propose are also delayless and are able to improve upon the efficiency of repetition codes due to the property that descriptions can synergistically improve upon each other.
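Under the Gaussian rate expression $R=\tfrac{1}{2}\log_2(1+\mathrm{SNR})$, one consistent reading of the efficiency in (11) is the ratio of the single-description rate needed to reach $\mathrm{SNR}_k$ to the sum-rate of the $k$ descriptions:

```latex
\eta \;=\; \frac{\log_2\!\left(1+\mathrm{SNR}_k\right)}
                {k\,\log_2\!\left(1+\mathrm{SNR}_1\right)} .
```

With this form, a repetition code (for which combining copies does not increase the SNR, so $\mathrm{SNR}_k=\mathrm{SNR}_1$) indeed has efficiency $1/k$.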

IV-A Stabilizing codes based on independent encodings

Definition 4

If two encodings of a source are conditionally independent given the source, then we refer to them as independent encodings [15].

Lemma 1

Stabilizing Code Based on Independent Encodings. Consider a system of the form illustrated in Fig. 2 with a Gaussian source, and let the $k$ descriptions be independent encodings of the source, whose coding noises are mutually independent, zero-mean Gaussian distributed, and have a common variance $\sigma^2$. If, for some $k' \leq k$, the common variance satisfies

(12)

where the quantity on the right-hand side is determined by (6), then the $k$ encodings form a (k, k') stabilizing code for the system.

When averaging the noises of any $m$ received encodings, the resulting noise variance is $\sigma^2/m$. The resulting SNR, when combining $k'$ descriptions, needs to satisfy:

(13)

since the right-hand side of (13) is the minimal SNR required to guarantee stability. We now use the relationship between the SNR and the combined noise variance, and from (4) we get:

(14)

Inserting into (13) and re-arranging terms leads to:

(15)

which leads to (12).
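The key averaging step, written out in our notation with $\sigma^2$ the common noise variance: if the decoder averages $k'$ received descriptions with mutually independent noises $q_i$, then

```latex
\operatorname{Var}\!\left(\frac{1}{k'}\sum_{i=1}^{k'} q_i\right)
  = \frac{\sigma^2}{k'},
\qquad\text{so}\qquad
\mathrm{SNR}_{k'} = k'\,\mathrm{SNR}_1 ,
```

i.e., every additional received description scales the SNR linearly, which is what allows a per-description SNR below the stability threshold to be combined into one above it.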

The following lemma provides a lower bound on the sum-rate required for a stabilizing code based on independent encodings. We note that if one is not interested in the performance when receiving fewer than $k'$ descriptions, then the sum-rate can generally be further reduced by use of distributed source coding techniques such as Slepian-Wolf coding [28]. However, at low coding rates, the bound becomes asymptotically optimal, as is shown by Lemma 3.

Lemma 2

The minimum sum-rate of a (k, k') stabilizing code based on independent encodings for a given system is:

(16)

Let $\sigma^2$ be the variance of the coding noise for a single description of the stabilizing code. Then, the resulting noise variance when linearly combining $k'$ descriptions is $\sigma^2/k'$. The resulting SNR must be at least the minimum SNR that guarantees stability. Isolating $\sigma^2$ leads to:

(17)

We can now express the minimum sum-rate in terms of this noise variance, which yields (16).
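One consistent way to spell out the resulting sum-rate, given that the per-description SNR equals $\mathrm{SNR}_{\min}/k'$ at the minimum noise variance (our reading of (16)), is:

```latex
R_{\mathrm{sum}} \;=\; k\,R_1
  \;=\; \frac{k}{2}\,\log_2\!\left(1+\frac{\mathrm{SNR}_{\min}}{k'}\right).
```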

Lemma 3

Consider a given system. The efficiency of a minimum sum-rate (k, k') stabilizing code based on independent encodings is given by:

(18)

and the code is asymptotically efficient in the sense that the efficiency tends to one as the coding rate tends to zero:

(19)

The first part follows immediately from (17), since the SNR for a single description is $\mathrm{SNR}_{\min}/k'$ and for $k'$ descriptions it is $\mathrm{SNR}_{\min}$. The second part follows since $\log_2(1+x)$ is approximately linear in $x$ when $x \approx 0$, i.e., for small SNRs.

If the minimum SNR required for stability is zero, the system is stable. Thus, the second part of Lemma 3 considers the situation where the plant is either stable or nearly stable, i.e., the unstable poles are near the unit circle. In this case, the coding rates are arbitrarily small, and the descriptions of the stabilizing code become mutually independent. Thus, there is no redundancy in using $k$ descriptions, each of a given rate, over a single description of $k$ times that rate [15].

Lemma 4

Consider a given system. The performance (in terms of the output variance) for this system when using $m$ descriptions of a minimum sum-rate (k, k') stabilizing code based on independent encodings is:

(20)

This follows from (5) by inserting (15) and using the fact that the combined noise variance scales as $\sigma^2/m$ when averaging $m$ descriptions.

IV-B Example 1

Consider a plant that provides the following input-output relationship between its input and output:

(21)

where the external disturbance has a standard normal distribution. Notice that the plant has an unstable pole at 2, which implies that the minimum SNR required for stability is 3, and equivalently the minimum coding rate is 1 bit. For this plant, we can choose a particular SNR, find the optimal filters and the associated noise variance by using the method described in [25], and from these find the performance using (5) as well as the coding rate. Changing the SNR leads to another set of filters and different performances and coding rates.

Let us now design a stabilizing code with four descriptions, so that receiving any $k'$ of them implies that the minimum SNR requirement is fulfilled. We choose the common noise variance so that the resulting SNR, when linearly combining $k'$ descriptions, meets the minimum of 3; combining additional descriptions only increases the SNR, and we therefore have a stabilizing code.

From the coding rate per description, the resulting sum-rate, and the coding rate required for a single-description system to achieve the same final SNR, the efficiency follows via (11). For comparison, a repetition code with 4 descriptions would have an efficiency of 1/4.

In Fig. 3, we have illustrated the resulting SNR when combining descriptions, as a function of the per-description SNR. It can be seen that a stabilizing code can be obtained once the per-description SNR (in dB) exceeds a threshold, and in particular a (4,2) stabilizing code is obtained above a certain SNR. Fig. 4 shows the efficiency as a function of the SNR. The efficiency is monotonically decreasing in the SNR; at very low SNRs, which correspond to small coding rates, the efficiency is highest.

Fig. 3: The resulting SNR obtained with the stabilizing code of Example 1, when combining descriptions.
Fig. 4: The efficiency of the stabilizing code of Example 1 as a function of the SNR.

IV-C Stabilizing codes based on multiple descriptions

It is possible to introduce correlation between the quantization noises of the encodings in Definition 4, which makes it possible to exploit the benefits of multiple descriptions. Of course, zero correlation is a special case of multiple descriptions, which is usually referred to as the no excess marginal rate case [35]. When introducing correlation, the sum-rate is no longer simply given by the sum of the optimal marginal (description) rates; it also becomes a function of the amount of correlation introduced: the greater the (negative) correlation, the greater the sum-rate [17].

Lemma 5

Stabilizing Code Based on Multiple Descriptions. Consider a system of the form illustrated in Fig. 2 with a Gaussian source, and let the $k$ coding noises be zero-mean Gaussian distributed with common variance $\sigma^2$ and pairwise correlated with correlation coefficient $\rho$. If, for some $k' \leq k$ and $\rho$, the common variance satisfies

(22)

where the quantity on the right-hand side is determined by (6), then the $k$ descriptions form a (k, k') stabilizing code for the system.

We need to ensure that the combined SNR meets the minimum required for stability when receiving at least $k'$ descriptions. The noise variance when averaging any $m$ descriptions is given by:

$\sigma_m^2 = \frac{\sigma^2}{m}\left(1 + (m-1)\rho\right)$. (23)

Using (23), the SNR when combining $m$ descriptions is given by:

(24)

Isolating $\sigma^2$ and inserting (14) leads to:

(25)
(26)

Let $\rho$ be the common correlation coefficient between all noise pairs, and let $\sigma^2$ be their common variance. If we are only interested in the performance when receiving $k'$ descriptions or all $k$ descriptions, then the sum-rate can be explicitly expressed [19]:

(27)

where it is assumed that the source is standard normal.
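The effect of the pairwise correlation on the combined noise can be checked numerically; the helpers below evaluate (23) and the resulting SNR for a given channel-input variance (the function names and the signal-variance parameter are our own, not from the paper):

```python
import numpy as np

def combined_noise_variance(sigma2: float, rho: float, m: int) -> float:
    """Variance of the average of m jointly Gaussian noises with
    common variance sigma2 and common pairwise correlation rho."""
    return (sigma2 / m) * (1.0 + (m - 1) * rho)

def combined_snr(signal_var: float, sigma2: float, rho: float, m: int) -> float:
    """SNR after averaging m received descriptions."""
    return signal_var / combined_noise_variance(sigma2, rho, m)

# Negative correlation between coding noises makes the combined noise
# variance fall faster than the 1/m of independent encodings (rho = 0),
# at the price of a larger sum-rate.
print(combined_snr(1.0, 0.5, 0.0, 2))   # independent: SNR doubles -> 4.0
print(combined_snr(1.0, 0.5, -0.3, 2))  # negatively correlated -> ~5.7
```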

V Simulation Study

We consider the same system as that of Example 1, and we will assume i.i.d. packet losses. The encoder is informed about the packet loss probability but does not know when an erasure occurs. Knowledge of the packet loss probability makes it possible to design an efficient entropy coder (lossless coder).

We will be using a subtractively dithered scalar quantizer, which is a stochastic quantizer that provides different outputs when encoding the same source multiple times [34]. We will use this to form the independent encodings.

Fig. 5: The performance of stabilizing and repetition codes as a function of packet-loss probability.

We encode the output of Fig. 2 using a subtractively dithered scalar quantizer with step-size $\Delta$ to obtain:

(28)

where $d$ denotes the dither signal. We choose the step-size so that the resulting SNR when using only a single description is below that required for stability. Combining any two descriptions yields an SNR above the critical value, and combining all three yields an even higher SNR. Thus, using at least two descriptions is sufficient to stabilize the system. Based on this we design a (3,2) stabilizing code, which for each input sample produces three outputs by using the quantizer three times. The theoretical efficiency of this scheme follows from (18). In practice, we suffer from a rate loss due to using a scalar quantizer. The theoretical rate is below one bit per sample; however, transmitting less than one bit per sample is only possible when encoding vectors. The measured entropy of the quantized output is therefore higher, in bits per description.
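A minimal sketch of the subtractively dithered scalar quantizer used here, with a hypothetical step-size and our own function names; a fresh, independent dither per description is what makes the three encodings independent:

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x: float, step: float, dither: float) -> float:
    """Subtractively dithered uniform quantization: the dither is added
    before rounding and subtracted again at the decoder, so the effective
    coding noise is uniform, white, and independent of the source."""
    return step * np.round((x + dither) / step) - dither

# Three independent encodings of the same sample: fresh dither each time.
x = 0.7
step = 0.5  # hypothetical step-size
descriptions = [dithered_quantize(x, step, rng.uniform(-step / 2, step / 2))
                for _ in range(3)]
print(descriptions)            # three distinct reconstructions of x
print(np.mean(descriptions))   # averaging reduces the noise variance
```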

We have plotted the performance of the stabilizing code in Fig. 5 as a function of the packet-loss probability. We assume i.i.d. packet losses, and simply average the received descriptions to form the reconstruction. Also shown is the performance when transmitting one of the descriptions three times. This corresponds to a repetition code having a sum-rate similar to that of the stabilizing code. For each packet-loss probability, the performance and rates are averages over a long realization. It can be observed that the stabilizing code outperforms the repetition code by several dB at low packet-loss probabilities.
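The decoding rule used in the simulation can be sketched as follows, assuming i.i.d. erasures and plain averaging of whatever arrives (the plant loop is omitted, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def receive(descriptions, loss_prob):
    """Return the subset of descriptions surviving i.i.d. erasures."""
    return [d for d in descriptions if rng.uniform() > loss_prob]

def decode(received, fallback=0.0):
    """Average the received descriptions; fall back if all were lost."""
    return float(np.mean(received)) if received else fallback

# Stabilizing code: three *different* noisy encodings of x, so averaging
# the survivors reduces the noise. Repetition: three identical copies,
# so extra received copies add nothing.
x, sigma = 1.0, 0.3
stab = [x + rng.normal(0.0, sigma) for _ in range(3)]
rep = [x + rng.normal(0.0, sigma)] * 3
p = 0.1
print(decode(receive(stab, p)), decode(receive(rep, p)))
```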

We also show in Fig. 5 the performance of a second stabilizing code, which is likewise compared to a repetition code. For this code, the SNR is below the critical value when using 1 description and above it when using 2 descriptions. The measured output entropy after scalar quantization is in this case higher per description, as is the sum-rate.

Finally, we design a stabilizing code based on multiple descriptions. It is not straightforward to obtain correlated noises between the descriptions, and we use here the approach described in [12], which is based on nested lattices and index assignments. The source is first quantized using a fine-grained quantizer referred to as the central quantizer. Then, a one-to-many map is applied, which maps the quantized value to points in a nested (coarser) lattice. If all coarser points are received, the map is invertible and the point of the central quantizer is used for reconstruction. If fewer than all descriptions are received, the reconstruction is given by the average of the received points in the coarser lattice [12]. We use a nesting factor of 5, which yields a common pairwise correlation coefficient between the descriptions. The SNR of a single description is below the critical value for stability, while that of two descriptions is above it, and the SNR when all descriptions are used is higher still. The step-size of the fine lattice is chosen such that the resulting bitrate is similar to that of the stabilizing code based on independent encodings.

It can be seen in Fig. 5 that the stabilizing codes outperform repetition coding. Moreover, using MD coding when constructing the stabilizing codes is better than using independent encodings, except at very low bitrates or very high packet-loss rates.

VI Conclusions

A new construction of error correction codes was proposed, which takes the stability of the plant into account. For linear systems with scalar input and output, explicit designs were provided, and it was shown that there is a significant gain over traditional repetition codes. Similar to repetition coding, the proposed codes do not add additional delay but operate on one sample at a time.

VII Acknowledgments

The author would like to thank Mohsen Barforooshan for discussions related to simulating the control system.

References

  • [1] M. Barforooshan, M. Nagahara, and J. Østergaard (2020) Sparse packetized predictive control over communication networks with packet dropouts and time delays. In IEEE 58th Conference on Decision and Control (CDC), pp. 8272 – 8277. Cited by: §I.
  • [2] N. Elia and J.N. Eisenbeis (2011) Limitations of linear control over packet drop networks. IEEE Trans. Automat. Contr. 56 (4), pp. 826 – 841. Cited by: §I.
  • [3] A. Farhadi (2015-12) Stability of linear dynamic systems over the packet erasure channel: a co-design approach. International Journal of Control 88 (12), pp. 2488 – 2498. External Links: Document Cited by: §I.
  • [4] A. El Gamal and T. M. Cover (1982) Achievable rates for multiple descriptions. IEEE Trans. Inf. Theory IT-28 (6), pp. 851 – 857. Cited by: §I, §I.
  • [5] E. Garone, B. Sinopoli, A. Goldsmith, and A. Casavola (2012) LQG control for mimo systems over multiple erasure channels with perfect acknowledgment. IEEE Transactions on Automatic Control 57 (2), pp. 450 – 456. Cited by: §I.
  • [6] V. Gupta, A. F. Dana, J. P. Hespanha, R. M. Murray, and B. Hassibi (2009) Data transmission over networks for estimation and control. IEEE Transactions on Automatic Control 54 (8), pp. 1807 – 1819. External Links: Document Cited by: §I.
  • [7] O. Imer, S. Yüksel, and T. Başar (2006) Optimal control of LTI systems over unreliable communication links. Automatica 42, pp. 1429 – 1439. Cited by: §I.
  • [8] A. Khina, V. Kostina, A. Khisti, and B. Hassibi (2019-06) Tracking and control of Gauss-Markov processes over packet-drop channels with acknowledgments. IEEE Trans. Control of Network Systems 6 (2). Cited by: §I.
  • [9] G. Liu, Y. Xia, J. Chen, D. Rees, and W. Hu (2007) Networked predictive control of systems with random network delays in both forward and feedback channels. IEEE Trans. Ind. Electron. 54 (3), pp. 1282 – 1297. Cited by: §I.
  • [10] A. Maass, F. Vargas, and E. Silva (2016) Optimal control over multiple erasure channels using a data dropout compensation scheme. Automatica 68, pp. 155 – 161. Cited by: §I.
  • [11] M. Nagahara, D. Quevedo, and J. Østergaard (2014) Sparse packetized predictive control for networked control over erasure channels. IEEE Transactions on Automatic Control 59 (7), pp. 1899 – 1905. Cited by: §I.
  • [12] J. Østergaard, J. Jensen, and R. Heusdens (2006) n-Channel entropy-constrained multiple-description lattice vector quantization. IEEE Transactions on Information Theory 52 (5), pp. 1956 – 1973. Cited by: §V.
  • [13] J. Østergaard and D. Quevedo (2016-04) Multiple descriptions for packetized predictive control. EURASIP J. Adv. Signal Proc. 2016 (45). Cited by: §I, §I.
  • [14] J. Østergaard (2021) Stabilizing error correction codes for control over erasure channels. Note: Submitted to IEEE Transactions on Control of Network Systems. Draft available at: https://arxiv.org/abs/2112.11717 Cited by: §III.
  • [15] J. Østergaard, U. Erez, and R. Zamir (2020) Incremental refinements and multiple descriptions with feedback. Submitted to IEEE Transactions on Information Theory. Note: Electronically available on arxiv.org: https://arxiv.org/abs/2011.02747 Cited by: §I, §I, §IV-A, Definition 4.
  • [16] R. Ostrovsky, Y. Rabani, and L. J. Schulman (2009) Error-correcting codes for automatic control. IEEE Transactions on Information Theory 55 (7), pp. 2931 – 2941. External Links: Document Cited by: §I, §I.
  • [17] L. Ozarow (1980) On a source-coding problem with two channels and three receivers. Bell Syst. Tech. J. 59 (10), pp. 1909 – 1921. Cited by: §IV-C.
  • [18] E. Peters, D. Quevedo, and J. Østergaard (2016) Shaped Gaussian dictionaries for quantized networked control systems with correlated dropouts. IEEE Transactions on Signal Processing 64 (1), pp. 203 – 213. Cited by: §I.
  • [19] S. S. Pradhan, R. Puri, and K. Ramchandran (2004-01) n-Channel symmetric multiple descriptions - part I: (n, k) source-channel erasure codes. IEEE Trans. Inf. Theory 50 (1), pp. 47 – 61. Cited by: §IV-C.
  • [20] R. Puri and K. Ramchandran (1999) Multiple description source coding using forward error correction codes. In Asilomar Conf Signals Syst. Comput., Cited by: §I.
  • [21] D. E. Quevedo, E. I. Silva, and G. C. Goodwin (2007) Packetized predictive control over erasure channels. In 2007 American Control Conference, Vol. , pp. 1003 – 1008. External Links: Document Cited by: §I.
  • [22] D. Quevedo, J. Østergaard, and A. Ahlen (2014) Power control and coding formulation for state estimation with wireless sensors. IEEE Transactions on Control Systems Technology 22, pp. 413 – 427. Cited by: §I.
  • [23] D. Quevedo, J. Østergaard, and D. Nesic (2011) Packetized predictive control of stochastic systems over bit-rate limited channels with packet loss. IEEE Transactions on Automatic Control 56 (12), pp. 2854 – 2868. Cited by: §I.
  • [24] A. Sahai (2006-08) The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link - part i: scalar systems. IEEE Transactions on Information Theory 52, pp. 3369 – 3395. Cited by: §I.
  • [25] E. Silva, M. Derpich, J. Østergaard, and M. Encina (2016) A characterization of the minimal average data rate that guarantees a given closed-loop performance level. IEEE Transactions on Automatic Control 61 (8), pp. 2171 – 2186. Cited by: Fig. 1, §I, §II, §II, §II, §IV-B.
  • [26] R.C. Singleton (1964) Maximum distance q-nary codes. IEEE Trans. Inf. Theory 10 (2), pp. 116 – 118. Cited by: §I.
  • [27] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. Jordan, and S.S. Sastry (2004) Kalman filtering with intermittent observations. IEEE Trans. Autom. Control. Cited by: §I.
  • [28] D.S. Slepian and J.K. Wolf (1973) Noiseless coding of correlated information sources. IEEE Transactions on Information Theory 19 (4), pp. 471 – 480. Cited by: §IV-A.
  • [29] S. Tatikonda and S. Mitter (2004) Control over noisy channels. IEEE Trans. Automatic Control. Cited by: §I.
  • [30] M. Trivellato and N. Benvenuto (2010) State control in networked control systems under packet drops and limited transmission bandwidth. IEEE Trans. Communications 58 (2), pp. 611 – 622. Cited by: §I.
  • [31] R. Yeung and R. Zamir (1996) Multilevel diversity coding via successive refinement. In Proceedings of IEEE International Symposium on Information Theory, Cited by: §I.
  • [32] S. Yüksel and S. P. Meyn (2013) Random-time, state-dependent stochastic drift for Markov chains and application to stochastic stabilization over erasure channels. IEEE Trans. Autom. Control. Cited by: §I.
  • [33] Z. Jin, V. Gupta, and R. M. Murray (2006) State estimation over packet dropping networks using multiple description coding. Automatica 42 (9), pp. 1441 – 1452. Cited by: §I, §I.
  • [34] R. Zamir (2014) Lattice coding for signals and networks a structured coding approach to quantization, modulation and multiuser information theory. Cambridge University Press. Cited by: §III, §V.
  • [35] Z. Zhang and T. Berger (1995) Multiple description source coding with no excess marginal rate. IEEE Trans. Inf. Theory 41, pp. 349–357. Cited by: §IV-C.