I Introduction
The demand for new and improved control techniques over unreliable communication links is constantly growing, driven by emerging opportunities in the Internet of Things, as well as by new and surprising applications in biology and neuroscience.
One of the most widely studied such networked control system (NCS) setups is that of control over discretized packetized communication channels [1, 2, 3, 4, 5]. This setup can be further divided into two regimes: fixed-rate feedback—where exactly $R$ bits can be noiselessly conveyed from the observer/encoder to the controller/decoder [6, 7] (see Fig. 1), and variable-rate feedback—where $R$ bits are available on average and the observer/encoder can decide how many bits to allocate at each time instant [8].
For each of these scenarios, additional information can be conveyed through event triggering, by allowing the encoder to remain silent, i.e., not to send any information; see, e.g., [9, 10, 11, 12] and the references therein.
Although much effort has been put into determining the conditions for the stabilizability of such systems, less has been done toward determining the optimal attainable control costs—which are of great importance in practice—with several notable exceptions [13, 14, 15].
Other effects that are encountered in practice when using packet-based protocols are those of packet erasures (or packet drops) and delayed packet arrivals. Consequently, much attention has been devoted to studying the impact these effects have on the performance of networked systems in an idealized setup where the quantization rate is infinite [16, 3, 2].
A noteworthy effort to treat the case of finite-rate packets with packet drops was made by Minero et al. [17]. To that end, they considered an even more general case where a time-varying rate "budget" (see Fig. 1) is provided at every time step, and is determined and revealed just before transmission; a packet erasure corresponds to a zero-rate budget, implying that this scenario encompasses the packet-erasure setting.
In this work, we construct algorithms for the setting of a time-varying feedback rate budget, presented in Sec. II, along with its important special case of fixed-rate feedback.
However, in contrast to the works of Minero et al. [17] and Yüksel [7], which concentrated on the conditions for system stabilizability using adaptive uniform and logarithmic quantizers, respectively (it is impossible to stabilize an unstable system using fixed-rate static quantization if the distributions of the disturbances or of the initial state have unbounded supports [4, Sec. III-A]), we attempt to optimize the control cost.
To that end, we concentrate our attention in Sec. III on the class of disturbances that have logarithmically concave (log-concave) probability density functions (PDFs), the Gaussian PDF being an important special case, for which the Lloyd–Max algorithm [18, Ch. 6] is known to converge to the optimal quantizer [19, 20, 21] (assuming contiguous cells; see Rem. 7). Using Lloyd–Max quantization at every step, proposed previously by Bao et al. [22] and by Nakahira [23] (albeit without any optimality claims), and proving (à la sequential Bayesian filtering [24]) that the resulting system state—which is composed of the scaled sums of quantization errors of the previous steps and the new disturbances—continues to have a log-concave PDF, leads us to an optimal greedy algorithm.

To support rates below one bit per sample, we extend the algorithm to the event-triggered control scenario in Sec. IV. By adding another cell that corresponds to "silence" and constraining the probability of this cell to a minimal value, we are able to control the average rate of the scheme (which is equal to the sum of the probabilities of the remaining cells).
To tackle the more challenging task of designing a globally optimal quantizer, we recast the problem as that of designing an optimal quantizer for the problem of sequential coding of correlated sources [25] (see also [26, 27] and references therein).
An extreme variant of this problem is provided by that of linear quadratic regulator (LQR) control, in which the only randomness in the system happens in the initial state (which is again assumed to have a log-concave PDF). We show in Sec. V that this problem is equivalent to that of successive refinement [28], which can be regarded as a special case of sequential coding of correlated sources.
Surprisingly, for the latter, a computationally plausible variant of the Lloyd–Max algorithm exists [29] that is known to achieve globally optimal performance for log-concave functions [30] (again assuming contiguous cells; see Rem. 7). Furthermore, using the classical Bennett approximated quantization law [18, Ch. 6.3], [31], we argue, in Sec. V-C, that in the limit of high rates, the greedy algorithm is in fact optimal.
Although greedy optimization was demonstrated to be suboptimal [32] (outside of the highrate regime), simulations for the LQR case show that the gain of the globally optimal algorithm over the optimal greedy one is modest even at low rates (for which the gain is expected to be the largest). This, in turn, suggests that the optimal greedy algorithm will remain close in performance to the optimum for the more general case where the state is driven by i.i.d. logconcave disturbances, which includes linear quadratic Gaussian (LQG) control.
II Problem Setup
In this work, we consider the control–communication setup depicted in Fig. 1. We use a discrete-time model spanning the time interval $t \in \{0, 1, \ldots, T\}$, with the initial condition $x_0 = 0$. The plant is a discrete-time linear scalar stochastic system

(1) $x_{t+1} = \alpha x_t + w_t - u_t,$

where $x_t$, $w_t$ and $u_t$ are the system state, disturbance and control action at time $t$, respectively. We consider two setups for the disturbance sequence $\{w_t\}$:

Independent and identically distributed (i.i.d.): $\{w_t\}$ are i.i.d. according to a known log-concave PDF $f_w$.

LQR: $w_0$ is distributed according to a known log-concave PDF $f_w$; $w_t = 0$ for all $t \geq 1$.
We further denote the variance of $w_t$ by $\sigma_t^2$ and assume, w.l.o.g., that it has zero mean.

Definition 1 (Log-concave function; see [33]).
A function $f : \mathbb{R} \to \mathbb{R}_{\geq 0}$ is said to be log-concave if its logarithm is concave:

(2) $f\left(\lambda x + (1 - \lambda) y\right) \geq f(x)^{\lambda} f(y)^{1 - \lambda}$

for all $x, y \in \mathbb{R}$ and $\lambda \in [0, 1]$; we use the extended definition that allows $f$ to assign zero values, i.e., $\log f$ is an extended real-valued function that can take the value $-\infty$.
Remark 1.
The Gaussian PDF is a log-concave function over $\mathbb{R}$ and constitutes an important special case.
We assume the observer has perfect access to the state $x_t$ at time $t$. However, in contrast to classical control settings, the observer is not co-located with the controller and communicates with it instead via a noiseless channel of data rate $R_t$. That is, at each time $t$, the observer, which also takes the role of the encoder $\mathcal{E}_t$, can perfectly convey a message (or "index") of $R_t$ bits, $i_t \in \{1, \ldots, 2^{R_t}\}$, of the past states, to the controller:

(3) $i_t = \mathcal{E}_t\left(x^t\right) \in \left\{1, \ldots, 2^{R_t}\right\},$

where we denote $a^t \triangleq (a_1, \ldots, a_t)$ and use the convention that $a^t = \emptyset$ for $t \leq 0$. We further set $R_0 = 0$.
The controller at time $t$, which also takes the role of the decoder $\mathcal{D}_t$, recovers the observed codeword and uses it to generate the control action

(4) $u_t = \mathcal{D}_t\left(i^t\right).$
The exact value of $R_t$ is revealed to the encoder prior to the computation of $i_t$ and is inferred by the decoder upon receiving $i_t$. The statistics of $\{R_t\}$ impact system performance but do not affect the greedy optimality guarantees of the proposed algorithm of Sec. III.
Remark 2 (Packet-erasure channel).
A packet erasure can be modeled by $R_t = 0$. Hence, the time-varying data rate model subsumes the packet-erasure scenario [17].
Our goal is to minimize the following average-stage linear quadratic (LQ) cost upon reaching the time horizon $T$:

(5) $\bar{J}_T \triangleq \frac{1}{T}\, \mathbb{E}\!\left[\sum_{t=1}^{T} J_t\right],$

where $J_t$ are the instantaneous costs

(6) $J_t \triangleq q_t x_t^2 + r_t u_t^2.$

The weights $q_t$ and $r_t$ penalize the state deviation and actuation effort, respectively.
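The average-stage cost can be estimated by Monte-Carlo simulation. The sketch below is our own illustration, assuming the reconstructed dynamics $x_{t+1} = \alpha x_t + w_t - u_t$ with standard Gaussian disturbances and constant weights; the function name and signature are hypothetical, not part of the paper.

```python
import random

def average_lq_cost(alpha, q, r, T, control, seed=0):
    """Simulate x_{t+1} = alpha*x_t + w_t - u_t (with x_0 = 0, u_0 = 0) and
    return the average-stage cost (1/T) * sum_{t=1}^{T} (q*x_t^2 + r*u_t^2)."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)  # x_1 = w_0, since x_0 = 0 and u_0 = 0
    cost = 0.0
    for _ in range(T):
        u = control(x)                      # controller under evaluation
        cost += q * x * x + r * u * u       # instantaneous cost (6)
        x = alpha * x + rng.gauss(0.0, 1.0) - u
    return cost / T
```

For instance, with full state feedback, $r_t = 0$, and the deadbeat law $u_t = \alpha x_t$, the state reduces to $x_{t+1} = w_t$ and the average cost approaches $q \sigma_w^2$.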
III Optimal Greedy Control
In this section we consider the i.i.d. disturbance setting. We recall the Lloyd–Max algorithm and its optimality guarantees in Sec. III-A, which are subsequently used in Sec. III-B to construct a greedy optimal control policy.
III-A Quantizer Design
Definition 2 (Scalar quantizer).
A scalar quantizer of rate $R$ is described by an encoder $\mathcal{E} : \mathbb{R} \to \{1, \ldots, 2^R\}$ and a decoder $\mathcal{D} : \{1, \ldots, 2^R\} \to \mathbb{R}$. We define the quantization operation as the composition of the encoding and decoding operations: $Q \triangleq \mathcal{D} \circ \mathcal{E}$ (the encoder and decoder that give rise to the same $Q$ are unique up to a permutation of the labeling of the index $i$). The reproduction points $c_i \triangleq \mathcal{D}(i)$ are assumed to be ordered, without loss of generality (if some inequalities are not strict, then the quantizer can be reduced to a lower-rate quantizer):

(7) $c_1 < c_2 < \cdots < c_{2^R}.$

We denote by $\mathcal{C}_i$ the collection of all points that are mapped to index $i$ (equivalently, to the reproduction point $c_i$):

(8) $\mathcal{C}_i \triangleq \left\{x \in \mathbb{R} : \mathcal{E}(x) = i\right\}$
(9) $\phantom{\mathcal{C}_i} = \left\{x \in \mathbb{R} : Q(x) = c_i\right\}.$
We shall concentrate on the class of regular quantizers, defined next.
Definition 3 (Regular quantizer).
A scalar quantizer is regular if every cell $\mathcal{C}_i$, $i = 1, \ldots, 2^R$, is a contiguous interval that contains its reproduction point $c_i$:

(10) $c_i \in \mathcal{C}_i = \left[l_{i-1}, l_i\right),$

where $\{l_i\}$ is the set of partition levels—the boundaries of the cells. Hence, a regular scalar quantizer can be represented by the input partition-level set $\mathcal{L} \triangleq \{l_i\}$ and the reproduction-point set $\mathcal{C} \triangleq \{c_i\}$. We further take $l_0$ and $l_{2^R}$ to be the leftmost and rightmost values of the support of the source's PDF.
The cost we wish to minimize is the mean squared error distortion between the source $x$ with a given PDF $f$ and its quantization $Q(x)$:

(11) $D \triangleq \mathbb{E}\left[\left(x - Q(x)\right)^2\right] = \sum_{i=1}^{2^R} \int_{\mathcal{C}_i} \left(x - c_i\right)^2 f(x)\, dx.$

Denote by $D^{\star}$ the minimal achievable distortion; the optimal quantizer is the one that achieves $D^{\star}$.
Remark 3.
We shall concentrate on log-concave PDFs $f$, which are therefore continuous [33]. Hence, the inclusion or exclusion of the boundary points in each cell does not affect the distortion of the quantizer, meaning that ties at the boundary points can be broken systematically.
Remark 4.
If the input PDF has an infinite/semi-infinite support, then the leftmost and/or rightmost intervals of the quantizer are open ($l_0$ and/or $l_{2^R}$ take infinite values).
The optimal quantizer satisfies the following necessary conditions [18, Ch. 6.2].
Proposition 1 (Centroid condition).
For a fixed partition-level set $\mathcal{L}$ (fixed encoder), the reproduction-point set $\mathcal{C}$ (decoder) that minimizes the distortion (11) is

(12) $c_i = \mathbb{E}\left[x \,\middle|\, x \in \mathcal{C}_i\right], \qquad i = 1, \ldots, 2^R.$

Proposition 2 (Nearest-neighbor condition).
For a fixed reproduction-point set $\mathcal{C}$ (fixed decoder), the partition-level set $\mathcal{L}$ (encoder) that minimizes the distortion (11) is

(13) $l_i = \frac{c_i + c_{i+1}}{2}, \qquad i = 1, \ldots, 2^R - 1,$

where the leftmost/rightmost boundary points $l_0$/$l_{2^R}$ are equal to the smallest/largest values of the support of $f$.
The optimal quantizer must simultaneously satisfy both (12) and (13); iterating between these two necessary conditions gives rise to the Lloyd–Max algorithm.
Algorithm 1 (Lloyd–Max quantization).
Initial step. Pick an initial partition-level set $\mathcal{L}^{(0)}$.
Iterative step. Repeat the following two steps, until the decrease in the distortion per iteration is below a desired accuracy threshold:
1) Fix $\mathcal{L}$ and set $\mathcal{C}$ as in (12);
2) Fix $\mathcal{C}$ and set $\mathcal{L}$ as in (13).
Props. 1 and 2 imply that the distortion decreases at every iteration; since the distortion is bounded from below by zero, the Lloyd–Max algorithm is guaranteed to converge to a local optimum.
Unfortunately, multiple local optima may exist in general (e.g., for Gaussian mixtures with well-separated components), rendering the algorithm sensitive to the initial choice $\mathcal{L}^{(0)}$. For log-concave PDFs, however, the local optimum is unique.

Theorem 1 ([19, 20, 21]).
Let the input PDF $f$ be log-concave. Then, the Lloyd–Max algorithm (Alg. 1) converges to the unique locally, and hence also globally, optimal quantizer.
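As an illustration, the iteration can be implemented in closed form for a standard Gaussian source, since the centroid of a truncated Gaussian is available analytically. The sketch below is our own minimal implementation (function names are hypothetical); the centroid step realizes (12) and the midpoint step realizes (13).

```python
import math

def phi(x):
    # standard normal PDF
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def centroid(a, b):
    # E[x | a < x < b] for a standard Gaussian: (phi(a) - phi(b)) / (Phi(b) - Phi(a))
    return (phi(a) - phi(b)) / (Phi(b) - Phi(a))

def lloyd_max_gaussian(rate, iters=200):
    """Lloyd-Max quantizer of the given rate for a standard Gaussian source."""
    n = 2 ** rate
    # initial partition levels: uniform over [-4, 4]; extreme cells are open
    levels = [-4 + 8 * i / n for i in range(1, n)]
    for _ in range(iters):
        bounds = [-math.inf] + levels + [math.inf]
        # centroid condition (12): each point is the conditional mean of its cell
        points = [centroid(bounds[i], bounds[i + 1]) for i in range(n)]
        # nearest-neighbor condition (13): each level is a midpoint of adjacent points
        levels = [(points[i] + points[i + 1]) / 2 for i in range(n - 1)]
    return levels, points
```

At rate 1 this recovers the known optimum, reproduction points $\pm\sqrt{2/\pi} \approx \pm 0.798$ with a single partition level at $0$.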
III-B Controller Design
We now describe the optimal greedy control policy. To that end, we make use of the following lemma that extends the separation principle of estimation and control to networked control.
Lemma 1 (Control–estimation separation [34], [13]).
Consider the general cost problem (5) with independent disturbance elements $w_t$ of variances $\sigma_t^2$. Then, the optimal controller has the structure

(14) $u_t = K_t \hat{x}_t, \qquad \hat{x}_t \triangleq \mathbb{E}\left[x_t \,\middle|\, i^t\right],$

where

(15) $K_t = \frac{\alpha s_{t+1}}{s_{t+1} + r_t}$

is the optimal LQR control gain and $s_t$ satisfies the dynamic Riccati backward recursion [35]:

(16) $s_t = q_t + \frac{\alpha^2 s_{t+1} r_t}{s_{t+1} + r_t},$

with the terminal condition $s_{T+1} = 0$. Moreover, this controller achieves a cost of (recall that $R_0 = 0$ and $u_0 = 0$, as no transmission or control action is performed at time $t = 0$)

(17) $\bar{J}_T = \frac{1}{T} \left( \sum_{t=0}^{T-1} s_{t+1} \sigma_t^2 + \sum_{t=1}^{T} \Gamma_t D_t \right)$

with

(18) $\Gamma_t \triangleq K_t^2 \left(s_{t+1} + r_t\right), \qquad D_t \triangleq \mathbb{E}\left[\left(x_t - \hat{x}_t\right)^2\right].$
Remark 5.
Lem. 1 holds true for any memoryless channel, with $\hat{x}_t = \mathbb{E}\left[x_t \,\middle|\, y^t\right]$, where $y_t$ is the channel output at time $t$.
The optimal greedy algorithm minimizes the estimation distortion $D_t$ at time $t$, without regard to its effect on future distortions. To that end, at time $t$, the encoder and the decoder calculate the PDF of $x_t$ conditioned on $i^{t-1}$, via sequential Bayesian filtering [24], and apply the Lloyd–Max quantizer to this PDF. We refer to $f_{x_t|i^{t-1}}$ and to $f_{x_t|i^t}$ as the prior and posterior PDFs, respectively.
Algorithm 2 (Optimal greedy control).
Initialization. Both the encoder and the decoder set
1) $\{K_t\}$ and $\{s_t\}$ as in Lem. 1, for the given $\alpha$, $\{q_t\}$, and $\{r_t\}$.
2) $\hat{x}_0 = 0$ and $u_0 = 0$.
3) The prior PDF: $f_{x_1|i^0} = f_w$.
Observer/Encoder. At time $t$:
1) Observes the current state $x_t$.
2) Runs the Lloyd–Max algorithm (Alg. 1) with respect to the prior PDF $f_{x_t|i^{t-1}}$ to obtain the quantizer of rate $R_t$; denote its partition and reproduction sets by $\mathcal{L}_t$ and $\mathcal{C}_t$, respectively, and the cell corresponding to $x_t$—by $\mathcal{C}(x_t)$.
3) Transmits the quantization index $i_t$ of the cell $\mathcal{C}(x_t)$.
4) Calculates the posterior PDF $f_{x_t|i^t}$:

(20) $f_{x_t|i^t}(x) = \frac{1}{p_{i_t}}\, f_{x_t|i^{t-1}}(x)\, \mathbb{1}\left\{x \in \mathcal{C}(x_t)\right\}$

(we use here the regularity assumption), where $p_{i_t} \triangleq \Pr\left(x_t \in \mathcal{C}(x_t) \,\middle|\, i^{t-1}\right)$ is a normalization factor.
5) Calculates the next prior PDF by propagating the posterior (20) through the dynamics (1):

(21) $f_{x_{t+1}|i^t}(x) = \int_{-\infty}^{\infty} \frac{1}{|\alpha|}\, f_{x_t|i^t}\!\left(\frac{y + u_t}{\alpha}\right) f_w(x - y)\, dy.$

Controller/Decoder. At time $t$:
1) Receives the index $i_t$.
2) Reconstructs the quantized value: $\hat{x}_t = c_{i_t}$.
3) Generates the control actuation

(22) $u_t = K_t \hat{x}_t.$

4) Calculates the posterior (20) and the next prior (21), as does the encoder.
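The prior/posterior recursion of (20) and (21) can be prototyped numerically on a grid. The sketch below is our own illustration (function and variable names are hypothetical): densities are represented by their samples on a uniform grid, the posterior step truncates and renormalizes, and the next-prior step pushes the posterior through $x_{t+1} = \alpha x_t + w_t - u_t$ by an affine change of variables followed by a convolution with the disturbance PDF.

```python
import math

def phi(x):
    # standard normal PDF, used here as the disturbance PDF f_w
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def posterior(prior, grid, cell):
    # (20): truncate the prior to the transmitted cell and renormalize
    lo, hi = cell
    dx = grid[1] - grid[0]
    post = [p if lo <= x < hi else 0.0 for x, p in zip(grid, prior)]
    z = sum(post) * dx  # normalization factor (the cell probability)
    return [p / z for p in post]

def next_prior(post, grid, alpha, u, f_w):
    # (21): density of alpha*x - u, convolved with the disturbance PDF f_w
    dx = grid[1] - grid[0]
    out = [sum(p * f_w(x - (alpha * y - u)) for y, p in zip(grid, post)) * dx
           for x in grid]
    z = sum(out) * dx  # renormalize to compensate for grid truncation
    return [v / z for v in out]
```

For a standard Gaussian prior truncated to the positive half-line, the posterior mean is $\sqrt{2/\pi} \approx 0.798$, and convolving with a zero-mean disturbance leaves the mean unchanged.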
Theorem 2.
Let the i.i.d. disturbance PDF $f_w$ be log-concave. Then, Alg. 2 is the optimal greedy control policy.
The following is an immediate consequence of the log-concavity of the Gaussian PDF.
Corollary 1.
Let $f_w$ be a Gaussian PDF. Then, Alg. 2 is the optimal greedy control policy.
Recall that the Lloyd–Max algorithm converges to the global minimum for log-concave PDFs (Thm. 1). Consequently, in order to prove Thm. 2, it suffices to show that all the prior PDFs are log-concave. This, in turn, relies on the following log-concavity properties.
Assertion 1 (Log-concave function properties [33]).
Let $f$ and $g$ be log-concave functions over $\mathbb{R}$. Then, the following are also log-concave functions:
1) Affinity: $f(ax + b)$, for any constants $a, b \in \mathbb{R}$.
2) Truncation: $f(x)\, \mathbb{1}\{x \in I\}$, for any interval $I$, possibly (semi-)infinite.
3) Convolution: $(f * g)(x) = \int_{-\infty}^{\infty} f(\tau)\, g(x - \tau)\, d\tau$.
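These closure properties can be sanity-checked numerically on sampled densities: a positive sequence is (discretely) log-concave iff its logarithm has nonpositive second differences. The sketch below is our own construction (names are hypothetical); it truncates a sampled Gaussian to $[0, 2]$ and convolves it with a Gaussian kernel, mirroring the truncation and convolution steps of (20) and (21).

```python
import math

def is_log_concave(vals, tol=1e-6):
    # discrete log-concavity test: nonpositive second differences of log(vals)
    logs = [math.log(v) for v in vals]
    return all(logs[i - 1] + logs[i + 1] - 2 * logs[i] <= tol
               for i in range(1, len(logs) - 1))

h = 0.05  # grid step
ks = range(-100, 101)
gauss = [math.exp(-(h * k) ** 2 / 2) for k in ks]   # sampled Gaussian
trunc = [g if 0 <= h * k <= 2 else 0.0              # truncation to [0, 2]
         for k, g in zip(ks, gauss)]
# discrete convolution of the truncated density with the Gaussian kernel
conv = [sum(t * gauss[k - j] for j, t in enumerate(trunc)
            if 0 <= k - j < len(gauss))
        for k in range(2 * len(gauss) - 1)]
```

Both the Gaussian samples and the (strictly positive part of the) convolution pass the test, while a bimodal Gaussian mixture with well-separated components, as mentioned after Alg. 1, fails it.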
Now we are ready to prove Thm. 2.
Proof:
We use mathematical induction to show that both of the following conditions hold for any time $t$:
(i) The prior PDF $f_{x_t|i^{t-1}}$ is log-concave in $x$ for any realization $i^{t-1}$.
(ii) The quantizer and control action generated by Alg. 2 at time $t$ minimize the instantaneous cost $J_t$.
Basic step ($t = 1$). From the initial condition $x_0 = 0$, the optimal control action for $t = 0$ is $u_0 = 0$, and hence $x_1 = w_0$. Since $f_w$ is log-concave from the model assumption, $x_1$ also has a log-concave PDF, yielding Cond. (i). Consequently, the quantizer generated by the Lloyd–Max algorithm and the controller (22) minimize the instantaneous cost $J_1$, yielding Cond. (ii).
Inductive step. Assuming Cond. (i) holds at time $t$, we show below that Cond. (ii) holds at time $t$ and that Cond. (i) holds at time $t + 1$. By the induction hypothesis, the prior $f_{x_t|i^{t-1}}$ is log-concave. Consequently, by Thm. 1, the Lloyd–Max algorithm generates the quantizer that minimizes the distortion $D_t$, and hence the cost $J_t$. This leads to Cond. (ii).
It only remains to show that Cond. (i) holds at time $t + 1$. Since log-concavity is preserved under truncation and affinity, and $f_{x_t|i^{t-1}}$ is log-concave by the induction hypothesis, the posterior PDF (20) is also log-concave for any realization of $i_t$; this, along with the log-concavity of $f_w$ and the preservation of log-concavity under affine transformations and convolution, guarantees the log-concavity of the next prior (21), $f_{x_{t+1}|i^t}$, and completes the proof. ∎
IV Event-Triggered Control
In this section, we extend the greedy algorithm to the event-triggered control scenario. Under this scenario, the encoder may either send a packet of a fixed predetermined rate or avoid transmission altogether. Avoiding transmission helps alleviate network congestion by conveying information "by silence".
We concentrate on the case of packets of a single bit, as in this regime the advantage of the algorithm is most pronounced and the exposition of the algorithm is the simplest. The two cells corresponding to the single-bit packet, along with the silence symbol, form a three-level quantizer. We add a constraint $p$ on the minimal probability of the silent symbol; clearly, the average transmission rate is equal to $1 - \Pr(\text{silence})$ in this case. To minimize the average transmission rate, the silence symbol needs to be assigned to the cell with the maximal probability:

(23) $\max_{i \in \{1, 2, 3\}} \Pr\left(x \in \mathcal{C}_i\right) \geq p,$

where the cell index that achieves the maximum in (23) corresponds to the silent cell; we denote this index by $i_{\mathrm{s}}$.
Hence, the standard Lloyd–Max quantizer of Alg. 1 in each time step should be replaced by the following algorithm, which first checks whether standard three-level Lloyd–Max quantization satisfies the constraint (23) and, if not, runs the algorithm with the constraint (23) imposed on a different cell each time, and chooses the one that achieves the minimal distortion. With the constraint imposed on a particular cell, the algorithm iterates between two steps: choosing the optimal $\mathcal{C}$ for a fixed $\mathcal{L}$, and choosing the optimal $\mathcal{L}$ for a fixed $\mathcal{C}$. The first step is the same as in the standard Lloyd–Max quantizer. For the second step, the Karush–Kuhn–Tucker (KKT) conditions are employed [36, Ch. 5].
Algorithm 3 (Minimal cell-probability constrained quantization).
Unconstrained algorithm. Apply Alg. 1. If the constraint (23) is satisfied for the resulting quantizer, use this quantization law. Else, set $l_0$ and $l_3$ to the smallest and largest values of the support of $f$, and run the following for every cell index $j \in \{1, 2, 3\}$.
Initial step. Pick an initial partition-level set $\mathcal{L}^{(0)}$.
Iterative step. Repeat the following steps
1) Fix $\mathcal{L}$ and set $\mathcal{C}$ as in (12),
2) Fix $\mathcal{C}$ and set $\mathcal{L}$ as in (13),
3) If $\mathcal{L}$ does not satisfy the constraint (23), set $\mathcal{L}$, in accordance with the KKT conditions, as the solution of

(25) $\min_{\mathcal{L}} D \quad \text{subject to} \quad \Pr\left(x \in \mathcal{C}_j\right) = p,$

until the decrease in the distortion per iteration is below a desired accuracy threshold. Denote the resulting quantizer and distortion by $Q_j$ and $D_j$, respectively.
Set the quantizer to $Q_{j^{\star}}$, where $j^{\star} = \operatorname{argmin}_j D_j$.
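For a standard Gaussian prior, the outcome of such a constrained design can be cross-checked by brute force: search over the two boundaries of a regular three-level quantizer and keep the best pair whose most probable (silent) cell has probability at least $p$. The sketch below is our own (it is a grid search, not the KKT iteration of Alg. 3; names are hypothetical).

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def partial_mean(a, b):
    # integral of x*f(x) over (a, b) for the standard Gaussian: phi(a) - phi(b)
    return phi(a) - phi(b)

def constrained_three_level(p_min, steps=80):
    """Brute-force search for the best regular 3-level quantizer whose most
    probable cell (the silent cell) has probability >= p_min, cf. (23)."""
    best = None
    grid = [-3 + 6 * i / steps for i in range(steps + 1)]
    for i, l1 in enumerate(grid):
        for l2 in grid[i + 1:]:
            bounds = [(-math.inf, l1), (l1, l2), (l2, math.inf)]
            masses = [Phi(b) - Phi(a) for a, b in bounds]
            if min(masses) <= 0 or max(masses) < p_min:
                continue
            # D = E[x^2] - sum_i mass_i * centroid_i^2, centroids per (12)
            d = 1.0 - sum(partial_mean(a, b) ** 2 / m
                          for (a, b), m in zip(bounds, masses))
            if best is None or d < best[0]:
                best = (d, l1, l2, masses)
    return best
```

With $p = 0$ this recovers the unconstrained three-level Gaussian optimum (distortion $\approx 0.190$, boundaries $\approx \pm 0.612$); with $p = 0.5$ the silent (middle) cell widens and the distortion increases slightly, illustrating the rate-distortion trade-off of silence.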
Replacing the Lloyd–Max quantizer of Alg. 1 with the constrained variant of Alg. 3 gives rise to the following event-triggered variant of Alg. 2.
Algorithm 4 (Greedy event-triggered control).
Initialization. Both the encoder and the decoder
1) Set $\hat{x}_0 = 0$ and $u_0 = 0$ (recall that we assume $x_0 = 0$).
Observer/Encoder. At time $t$:
1) Observes $x_t$.
2) Runs Alg. 3 with respect to the prior PDF $f_{x_t|i^{t-1}}$ and the constraint (23) to obtain the quantizer; denote its partition and reproduction sets by $\mathcal{L}_t$ and $\mathcal{C}_t$, respectively, the index of the silent cell—by $i_{\mathrm{s}}$, and the index of the cell corresponding to $x_t$—by $i_t$.
3) If $i_t \neq i_{\mathrm{s}}$, transmits the index $i_t$; otherwise, remains silent.
Controller/Decoder. At time $t$:
1) Receives the index $i_t$; in case of silence, recovers $i_t = i_{\mathrm{s}}$.
2) Reconstructs the quantized value: $\hat{x}_t = c_{i_t}$.
3) Generates the control actuation $u_t = K_t \hat{x}_t$.
V Globally Optimal LQR Control
In this section, we study the LQR control setting, namely, the case where $w_0$ has a log-concave PDF and $w_t = 0$ for all $t \geq 1$. Clearly, this is equivalent to the case of a random initial condition $x_1 = w_0$ and zero disturbances, and is therefore referred to as LQR control.
We construct a globally optimal control policy in Sec. V-B by connecting the problem to that of scalar successive refinement [29, 30], which is formulated and reviewed in Sec. V-A. The resulting quantizers are commonly referred to as multiresolution scalar quantizers (MRSQs).
V-A Successive Refinement
A $T$-step MRSQ successively quantizes a single source sample $x$ with PDF $f$ using a series of quantizers of rates $R_1, \ldots, R_T$: At stage $t$, $R_t$ bits are available for the requantization of the source $x$, and are encoded into an index $i_t \in \{1, \ldots, 2^{R_t}\}$. $i_t$, along with all previous indices $i^{t-1}$, is then used for the construction of a refined description $\hat{x}_t$.
Definition 4 (MRSQ).
A $T$-step MRSQ of rates $R_1, \ldots, R_T$ is described by a series of encoders $\mathcal{E}_t : \mathbb{R} \to \{1, \ldots, 2^{R_t}\}$ and a series of decoders $\mathcal{D}_t : \{1, \ldots, 2^{R_1}\} \times \cdots \times \{1, \ldots, 2^{R_t}\} \to \mathbb{R}$, with $\mathcal{E}_t$ and $\mathcal{D}_t$ serving as the encoder and decoder at time $t$, respectively. We define the quantization operation $Q_t$, at time step $t$, as the composition of all the encodings until time $t$ with the decoding at time $t$: $Q_t(x) \triangleq \mathcal{D}_t\left(\mathcal{E}_1(x), \ldots, \mathcal{E}_t(x)\right)$.
This definition means that, although the overall effective rate of the quantizer at time $t$ is $R_1 + \cdots + R_t$, only the last $R_t$ bits, corresponding to $i_t$, are determined during time step $t$. At the decoder, these bits are appended to the previously determined and received bits (corresponding to $i^{t-1}$), for the construction of a description of $x$ at time $t$, $\hat{x}_t = Q_t(x)$.
Definition 5 (Regular MRSQ).
A $T$-step MRSQ is regular if the quantizer at each step is regular and the partitions of subsequent stages are nested, as follows. For each time $t > 1$:

(27) $\mathcal{L}_{t-1} \subseteq \mathcal{L}_t,$

where $\mathcal{L}_t$ is the partition-level set of the quantizer at time $t$.
Remark 6.
The relation in (27) implies that, given the cell of $x$ at time $t$, the cells of all the previous stages can be deduced.
Remark 7 (Optimality of regular MRSQs).
Counterexamples for both discrete and continuous PDFs have been devised, for which regular MRSQs are strictly suboptimal [37, 38]. However, no such counterexamples are known for the case of log-concave input PDFs [39]. Furthermore, we shall see in Sec. V-C that such quantizers become optimal in the limit of high rates.
Our goal here is to design an MRSQ that minimizes the weighted time-average squared quantization error of an input $x$ with a given PDF $f$ and positive weights $\Gamma_1, \ldots, \Gamma_T$:

(28) $\bar{D} \triangleq \sum_{t=1}^{T} \Gamma_t\, \mathbb{E}\left[\left(x - \hat{x}_t\right)^2\right].$
Unfortunately, greedy-optimal quantizers are not globally optimal in general [30, 32], since there might be a tension between optimizing the distortion at stage $t$ and at later stages $\tau > t$. When such a tension does not exist, the source is said to be successively refinable [28], [40, Ch. 13.5.13].
We next present a Generalized Lloyd–Max Algorithm due to Brunk and Farvardin [29] for constructing MRSQs, which is in turn an adaptation of an algorithm for scalar multiple descriptions by Vaishampayan [41]. Similarly to the standard Lloyd–Max algorithm (Alg. 1), the generalized variant iterates between structuring the reproduction point sets given the partition (recall Rem. 6), and vice versa.
Furthermore, the centroid condition of Prop. 1 remains unaltered, as it does not have any direct effect on other stages, and is calculated separately for each stage. The partition of earlier stages, on the other hand, has a direct effect on the boundaries of newer stages, due to the nesting property (27). Consequently, the nearest neighbor condition of Prop. 2 is replaced by a weighted variant [29, 41].
Proposition 3 (Weighted nearest neighbor).
The optimal partition for a given sequence of reproduction-point sets $\mathcal{C}_1, \ldots, \mathcal{C}_T$ is determined by the weighted nearest-neighbor condition: a partition level $l \in \mathcal{L}_t$ that first appears at stage $t$ separates source values that are mapped to reproduction points $c_\tau^-$ (to its left) and $c_\tau^+$ (to its right) at stages $\tau = t, \ldots, T$, and is given by the point at which the weighted distortions of the two sides coincide:

(29) $\sum_{\tau=t}^{T} \Gamma_\tau \left(l - c_\tau^-\right)^2 = \sum_{\tau=t}^{T} \Gamma_\tau \left(l - c_\tau^+\right)^2, \quad\text{i.e.,}\quad l = \frac{\sum_{\tau=t}^{T} \Gamma_\tau \left[\left(c_\tau^+\right)^2 - \left(c_\tau^-\right)^2\right]}{2 \sum_{\tau=t}^{T} \Gamma_\tau \left(c_\tau^+ - c_\tau^-\right)}.$
Remark 8.
Similarly to the optimal one-stage quantizer of Sec. III-A, the optimal MRSQ has to satisfy both the centroid condition of Prop. 1 and the weighted nearest-neighbor condition of Prop. 3, simultaneously. Furthermore, iterating between these conditions gives rise to the generalized Lloyd–Max algorithm.
Algorithm 5 (Generalized Lloyd–Max).
Initial step. Pick an initial partition $\mathcal{L}^{(0)} = \left(\mathcal{L}_1^{(0)}, \ldots, \mathcal{L}_T^{(0)}\right)$.
Iterative step. Repeat the following two steps, until the decrease in the distortion (28) per iteration is below a desired accuracy threshold:
1) Fix the partition and set the reproduction points of every stage as in (12);
2) Fix the reproduction points and set the partition as in Prop. 3.
As in the standard Lloyd–Max algorithm, Alg. 5 may converge to different local minima for different initializations $\mathcal{L}^{(0)}$. And similarly, sufficient conditions can be derived for the existence of a unique local—and thus also global—minimum [30]. Log-concave PDFs satisfy these conditions, implying that Alg. 5 is globally optimal for such PDFs.
V-B Controller Design
By Lem. 1, in order to construct a globally optimal control policy, we need to find a quantizer that minimizes the weighted sum of estimation distortions

(30) $\sum_{t=1}^{T} \Gamma_t\, \mathbb{E}\left[\left(x_t - \hat{x}_t\right)^2\right].$
The following simple result connects this problem with that of designing an MRSQ that minimizes (28).
Lemma 2.
Minimizing (30) is equivalent to designing a $T$-step MRSQ for the source $w_0$ that minimizes the weighted distortion (28) with the weights

(31) $\Gamma_t\, \alpha^{2(t-1)}, \qquad t = 1, \ldots, T,$

where the state estimates are recovered from the MRSQ descriptions $\hat{w}_t \triangleq \mathbb{E}\left[w_0 \,\middle|\, i^t\right]$ via

(32) $\hat{x}_t = \alpha^{t-1} \hat{w}_t - \sum_{\tau=1}^{t-1} \alpha^{t-1-\tau} u_\tau.$
Proof:
Recall that $x_t$ is given by the recursion

(33) $x_1 = w_0, \qquad x_{t+1} = \alpha x_t - u_t,$

as $w_t = 0$ for all $t \geq 1$. The corresponding explicit expression for $x_t$ in this case is

(34) $x_t = \alpha^{t-1} w_0 - \sum_{\tau=1}^{t-1} \alpha^{t-1-\tau} u_\tau.$

This suggests, in turn, that the estimate of $x_t$ at time $t$ can be expressed as

(35) $\hat{x}_t = \mathbb{E}\left[x_t \,\middle|\, i^t\right] = \alpha^{t-1}\, \mathbb{E}\left[w_0 \,\middle|\, i^t\right] - \sum_{\tau=1}^{t-1} \alpha^{t-1-\tau} u_\tau = \alpha^{t-1} \hat{w}_t - \sum_{\tau=1}^{t-1} \alpha^{t-1-\tau} u_\tau,$

which proves the relation in (32); here the first equality follows from Lem. 1, the second holds by (34) and since the actions $u_\tau$ are functions of $i^\tau$, and the last holds by the definition of $\hat{w}_t$. Consequently,

(36) $x_t - \hat{x}_t = \alpha^{t-1} \left(w_0 - \hat{w}_t\right),$

and hence minimizing (30) amounts to minimizing $\sum_{t=1}^{T} \Gamma_t\, \alpha^{2(t-1)}\, \mathbb{E}\left[\left(w_0 - \hat{w}_t\right)^2\right]$, which is exactly the MRSQ design problem (28) with the weights (31). ∎
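The closed form (34) can be checked against the recursion (33) directly. The sketch below is our own (names are hypothetical), assuming the LQR dynamics with no disturbances after $w_0$.

```python
def recursive_state(alpha, w0, u):
    # (33): x_1 = w0, x_{t+1} = alpha * x_t - u_t
    xs = [w0]
    for ut in u:
        xs.append(alpha * xs[-1] - ut)
    return xs  # xs[t - 1] holds x_t

def explicit_state(alpha, w0, u, t):
    # (34): x_t = alpha^(t-1) * w0 - sum_{tau=1}^{t-1} alpha^(t-1-tau) * u_tau
    return alpha ** (t - 1) * w0 - sum(alpha ** (t - 1 - tau) * u[tau - 1]
                                       for tau in range(1, t))
```

The two agree for every $t$; since the last sum depends only on past control actions, which the decoder knows, estimating $x_t$ reduces to estimating $w_0$, as the proof argues.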
We are now ready to present the globally optimal control policy for the LQR problem.
Algorithm 6 (Globally optimal LQR control).
Initialization. Both the encoder and the decoder:
1) Set $\{K_t\}$ and $\{s_t\}$ as in Lem. 1.
2) Run the generalized Lloyd–Max algorithm (Alg. 5) with respect to the PDF of $w_0$ and the weights (31) to obtain the MRSQ encoders $\{\mathcal{E}_t\}$ and decoders $\{\mathcal{D}_t\}$.
Observer/Encoder. Observes $x_1 = w_0$. At time $t$:
1) Generates the quantizer index: $i_t = \mathcal{E}_t(w_0)$.
2) Transmits $i_t$.
Controller/Decoder. At each time $t$:
1) Receives $i_t$.
2) Generates the description: $\hat{w}_t = \mathcal{D}_t\left(i^t\right)$.
3) Generates $\hat{x}_t$ as in (32).
4) Generates the control actuation $u_t = K_t \hat{x}_t$.
V-C High-Rate Limit
We now consider the high-resolution case, viz. the case in which the rates $R_t$ are large (the exact notion of a large rate will become clear in the sequel). We start by treating the case of a single rate ($T = 1$).
We follow the exposition in [18, Ch. 6.3] of Bennett's approximated quantization law for a single target rate $R$ (recall Sec. III-A).
For a large enough rate $R$, and consequently small enough cell widths (except for maybe the extreme cells), the sum in (11) can be approximated by a Riemann integral by defining a reproduction-point PDF $\lambda$:

(37) $\lambda(x)\, dx \approx \frac{\#\left\{i : c_i \in [x, x + dx)\right\}}{N},$

where $N = 2^R$ is the number of reproduction points $c_i$, and $N \lambda(x)\, dx$ is the approximate number of points in $[x, x + dx)$ for a small $dx$. In this limit, the size $\Delta_i$ of cell $i$ is approximated by

(38) $\Delta_i \approx \frac{1}{N \lambda\left(c_i\right)}.$
Theorem 5 (Bennett's law).
In the limit of high rates ($R \to \infty$), the distortion (11) is approximated by

(39) $D \approx \frac{1}{12 N^2} \int \frac{f(x)}{\lambda^2(x)}\, dx,$

which is minimized by the choice $\lambda(x) \propto f^{1/3}(x)$, yielding $D \approx \frac{1}{12 N^2} \left(\int f^{1/3}(x)\, dx\right)^3$.
An immediate consequence of this theorem is that, in the limit of high rates ($R_t \to \infty$), the source is successively refinable, since the approximation of (38) and (39) is tightened by each subsequent additional rate [30, Thm. 5].
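For a standard Gaussian, the minimized high-rate distortion evaluates to $D \approx (\sqrt{3}\pi/2)\, 2^{-2R}$ (the Panter–Dite constant). The sketch below is our own grid-based check (names are hypothetical): it compares this approximation with the distortion reached by Lloyd–Max iterations on a finely discretized Gaussian.

```python
import math
from bisect import bisect

def bennett_gaussian(rate):
    # high-rate approximation for a standard Gaussian:
    # D ~ (1/(12*N^2)) * (integral of f^(1/3))^3 = (sqrt(3)*pi/2) * 2^(-2*rate)
    return math.sqrt(3) * math.pi / 2.0 * 2.0 ** (-2 * rate)

def lloyd_distortion(rate, lo=-6.0, hi=6.0, m=2001, iters=500):
    """Distortion of a Lloyd-Max quantizer computed on a discretized Gaussian."""
    xs = [lo + (hi - lo) * i / (m - 1) for i in range(m)]
    w = [math.exp(-x * x / 2) for x in xs]  # unnormalized Gaussian weights
    n = 2 ** rate
    pts = [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]  # uniform init
    for _ in range(iters):
        lv = [(pts[i] + pts[i + 1]) / 2 for i in range(n - 1)]  # levels (13)
        sums, mass = [0.0] * n, [0.0] * n
        for x, p in zip(xs, w):
            j = bisect(lv, x)  # cell index of x
            sums[j] += p * x
            mass[j] += p
        # centroid condition (12) on the discretized source
        pts = [s / mm if mm > 0 else c for s, mm, c in zip(sums, mass, pts)]
    lv = [(pts[i] + pts[i + 1]) / 2 for i in range(n - 1)]
    num = den = 0.0
    for x, p in zip(xs, w):
        j = bisect(lv, x)
        num += p * (x - pts[j]) ** 2
        den += p
    return num / den
```

At moderate rates the two values are already within tens of percent of each other, in line with the claim that the approximation tightens as the rate grows.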
Corollary 2 (Successive refinability).
In the limit of high rates $R_1, \ldots, R_T \to \infty$, the source $x$ is successively refinable, and the greedy quantizer design is globally optimal.
Remark 9.
Bennett's law holds true for a wider class of PDFs and distortion measures; see Sec. VII-D for further discussion.
VI Numerical Calculations
VI-A Greedy LQG Control
We now evaluate the instantaneous costs (6) of Alg. 2 for a standard Gaussian i.i.d. disturbance sequence and fixed choices of $\alpha$, $T$, $\{q_t\}$, and $\{r_t\}$. These costs are depicted in Fig. 2 along with the costs for all admissible transmit-rate sequences $\{R_t\}$. We compare them to the following upper and lower bounds, also depicted in Fig. 2, which are valid for the less restrictive case of variable-rate feedback [18, Ch. 9.9], where the average rate across time is constrained by $R$.