Networked control systems exhibit an inherent tension between control performance and the communication resources allocated to the different nodes of the system. Despite great advances on important questions in this area, such as data rate theorems for the stabilizability of dynamical systems [2, 3, 4, 5, 6, 7, 8, 9], fundamental questions remain open, such as the trade-off between communication resources and the control cost [10, 11, 12, 13, 14, 15, 16]. In this paper, we investigate this question on a simple topology consisting of the classical Linear Quadratic Gaussian (LQG) setting with a single communication link.
The networked control setting investigated in this paper (Fig. 1) aims to reduce the achievable control cost at the expense of communication resources. The communication link introduced between an encoder and a decoder (co-located with the controller) serves as an information pipeline to the controller, which also has access to the LQG measurements. Based on its (full) observation of the state, the encoder transmits extra information to the controller, resulting in a reduction in the LQG cost. One can also view this setting as the standard rate-constrained LQG setting, but with side information available to the controller (the measurement) [18, 15, 19, 20]. The objective of this paper is to characterize the minimal communication resources subject to a strict constraint on the control performance measured by a quadratic cost.
The communication (information) resources are measured with the conditional directed information. The directed information is suitable for scenarios where the operations of the involved units are sequential, e.g., channels with feedback in communication [21, 22, 23] and the causal rate-distortion function in the context of control problems [11, 14]. Here too, the mappings of both the encoder and the controller are sequential, and the directed information serves as a lower bound to the operational variable-length (prefix) coding problem [24, 11] (see also Section VI). The control performance is measured by a quadratic cost function of the state and control signals. The optimization problem is formulated for two scenarios corresponding to the finite-horizon and infinite-horizon regimes.
For the finite-horizon problem, time-varying linear dynamical systems are investigated, and the minimal conditional directed information is formulated as a convex optimization problem. The optimization problem has a semidefinite programming (SDP) form (more precisely, a max log-det form) and can be implemented using standard solvers even for large horizons. We also show that the solution to the optimization problem can be realized by three design steps: computation of the controller gains, solution of the convex optimization problem, and a standard Kalman filter. For the infinite-horizon problem, where the dynamical system matrices are time-invariant, we show that the optimization problem can also be formulated as an SDP whose optimization variables are two positive semidefinite matrices of finite dimensions. Most importantly, we show that the optimal encoding policy is a simple, time-invariant Gaussian measurement of the state that can be computed from the convex optimization.
Our results generalize the work by Tanaka et al. [17], which introduced the SDP approach for solving control-communication problems. Specifically, we investigate the full LQG setting, while [17] assumed that the LQG measurement is absent (in Fig. 1). Thus, the control performance in our setting relies on the fusion of the communication link information and the LQG Gaussian measurement.
Two key changes in the SDP formulation are the objective function, which includes a new term due to the study of conditional directed information rather than the (unconditional) directed information, and a new linear matrix inequality (LMI) constraint that represents the error covariance reduction due to the LQG measurement. To find the optimal policy structure, we study a relaxed optimization problem where the LQG measurements are available to the encoder as well. We then show that even in this relaxed scenario, the optimal encoder signaling is a memoryless Gaussian measurement of the state. Thus, knowledge of the LQG measurements at the encoder cannot reduce the minimal communication resources. This extends the observation made in the scalar setting to the vector one.
The problem of control under communication constraints with side information has recently attracted much interest [18, 19, 15, 26, 20]. In , a scalar version of the problem in Fig. 1 was solved. In , a slightly less general problem than that of Fig. 1 was considered; the authors conjectured that a linear, memoryless policy is optimal and provided a semidefinite programming solution. The conjecture and the SDP formulation are subsumed in the conference version of the current paper, published prior to that work. Additionally, it was shown there that the conditional directed information is within a constant gap from the operational problem of variable-length coding with side information available to the controller and the encoder; this is obtained by constructing a practical coding scheme and analyzing its performance. In , the rate-distortion counterpart of the control problem studied here is considered. It is shown that if the optimal policy is assumed to be linear and the LQG cost admits an upper bound at all times, a simple optimization problem can be realized for the corresponding rate-distortion problem. The result presented below in Theorem 1 confirms the optimality of the policy conjectured in the former work and of the linear policy assumed in the latter. It should be remarked that the objective considered in [19, 15] and in the current paper is the conditional directed information, which is a lower bound to the operational problem in the case of a fixed rate, or in the case of a variable rate with prefix-free codebooks. In , it is shown that the directed information is a tighter lower bound, but it is also illustrated that a Gaussian policy does not attain its minimum; it is therefore unclear whether a computable form of the directed information can be obtained. Finally, coding schemes for the scalar LQG setting with a Gaussian communication channel were studied based on the joint source-channel schemes in [27, 28].
II The setting and problem definition
A linear dynamical system is described by
$$x_{t+1} = A_t x_t + B_t u_t + w_t, \qquad t = 0, 1, \dots,$$
where the process noises $w_t \sim \mathcal{N}(0, W_t)$ are mutually independent. The initial state is distributed according to $x_0 \sim \mathcal{N}(0, X_0)$ and is independent of the noise sequence. A noisy measurement of the state is available to the controller,
$$y_t = C_t x_t + v_t,$$
with $v_t \sim \mathcal{N}(0, V_t)$ independent of the process noises. For a fixed time horizon $T$, the LQG quadratic cost is defined as
$$J_T = \mathbb{E}\left[\sum_{t=0}^{T-1} \left(x_t^\top Q_t x_t + u_t^\top R_t u_t\right) + x_T^\top Q_T x_T\right],$$
with $Q_t \succeq 0$ and $R_t \succ 0$; superscripts denote sequences of variables up to the indicated time, e.g., $x^t = (x_0, \dots, x_t)$.
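To make the setup concrete, here is a short simulation sketch of a linear system with quadratic cost; all matrices and the (non-optimized) controller below are hypothetical placeholders:

```python
import numpy as np

def simulate_lqg_cost(T=50, seed=0):
    """Simulate x_{t+1} = A x_t + B u_t + w_t under a fixed linear
    controller and return the empirical average quadratic cost."""
    rng = np.random.default_rng(seed)
    A = np.array([[1.1, 0.3], [0.0, 0.9]])   # hypothetical dynamics
    B = np.array([[1.0], [0.5]])             # hypothetical input matrix
    Q, R = np.eye(2), np.eye(1)              # quadratic cost weights
    K = np.array([[0.6, 0.2]])               # placeholder stabilizing gain
    x, cost = rng.normal(size=2), 0.0
    for _ in range(T):
        u = -K @ x                           # u_t = -K x_t
        cost += x @ Q @ x + u @ R @ u        # accumulate x'Qx + u'Ru
        x = A @ x + B @ u + rng.normal(size=2)   # process noise w_t ~ N(0, I)
    return cost / T
```

The placeholder gain keeps the closed loop stable, so the empirical average cost stays bounded as the horizon grows.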
The objective is to design a system such that the LQG cost does not exceed a cost target denoted by . Naturally, if the measurements are sufficient to attain , the classical solution to the LQG problem is satisfactory, and there is no need to expand the system with a communication link. At the other extreme, the LQG cost cannot be reduced below the cost attained by a full observer. Our interest lies in the scenario where the target is below the optimal LQG cost attainable with the partial observer (2) but above the optimal LQG cost attainable with the full observer. In this case, the introduction of a communication/information link (see the dashed line in Fig. 1) between a full observer (encoder) and a controller (decoder) helps to attain the desired LQG cost.
The encoder is characterized by the set of stochastic mappings that can be compactly represented by the causal conditioning
Similarly, the decoder (controller) is a causally conditioned probability distribution
By construction, the encoder-decoder pair satisfies at all times
The overall joint distribution can be summarized using the one-step update
where is the mutual information between and conditioned on .
The objective of this paper is to solve the optimization problem:
where the minimum is over policies of the form (II).
When the measurement is absent, the optimization problem in (II) simplifies to the directed information investigated in [10, 17]. To see that the conditional directed information measures the information conveyed by the encoding policy, assume that the -th element of the conditional directed information satisfies:
Then, the right-hand side extracts the state uncertainty at the controller with and without the encoding variable. Specifically, the difference reflects the fact that the encoder signal is costly, while the measurement is a natural occurrence of the dynamical system without any cost. These arguments are formalized in Theorem 1 and Lemma 1. We will also show a relation between the optimal conditional directed information and Kalman filtering theory with two independent measurements.
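This difference of uncertainties can be written out explicitly; the following is a sketch in generic notation, with $x_t$ the state, $e_t$ the encoder output, and $y_t$ the LQG measurement (the precise conditioning follows the paper's definitions):

```latex
I\!\left(x_t;\, e_t \,\middle|\, e^{t-1}, y^{t}\right)
  \;=\; h\!\left(x_t \,\middle|\, e^{t-1}, y^{t}\right)
       \;-\; h\!\left(x_t \,\middle|\, e^{t}, y^{t}\right),
```

so each summand is exactly the drop in the controller's state uncertainty caused by the costly signal $e_t$, while the conditioning on $y^t$ removes the uncertainty reduction that the free measurement provides on its own.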
This section presents our results. First, we provide a simple structure for the optimal policy in Theorem 1. Then, we present preliminaries on Kalman filtering theory in order to express the conditional directed information in its terms. We then provide a semidefinite programming formulation of the optimization problem and present the optimal system design. Finally, Section III-E includes the formulation and the solution of the infinite-horizon problem.
III-A Optimal policy structure
The first result is the optimal structure of the observer (encoder) and controller (decoder) policies:
Theorem 1 (Optimal policy structure).
An optimal policy for the optimization problem in (II) is given by
where the additive noise is independent of the state and the gain is the constant given by the LQR controller (see (III-D1) below).
Moreover, knowledge of the measurements at the encoder does not reduce the optimal value of the directed information control problem in (II).
The theorem significantly simplifies the minimization domain from the general policy in (II) to the set . The encoding rule reveals that the encoder reduces the communication resources by introducing additive noise to the state observation. We emphasize that our problem formulation does not impose any structural constraints on the encoding policy, such as being linear, memoryless, or Gaussian. The control signal is the standard LQG certainty-equivalence controller. Thus, similar to the scalar case in , the separation between the control gain and the estimation is preserved in our setting. The proof of Theorem 1 appears in Section V.
Theorem 1 extends [17, Th. ] and recovers it when the observation is absent. The extension to our setting is not trivial (see, e.g., [19, 15] for progress on that problem) and involves the study of a relaxed optimization problem where, at time , the measurement history is also available to the encoder. For this relaxed optimization problem, we show that the optimal policy is of the form (1). In other words, even if the side information is available at the encoder, it cannot reduce the conditional directed information. This is consistent with the observation made in the context of the scalar system.
III-B Kalman filter with two (independent) measurements
As is evident from the optimal structure in Theorem 1, the encoding function is a noisy measurement of the system state, and its additive noise is independent of the other measurement. Thus, the optimal system has the structure of an LQG setting with two independent observations. However, for the purpose of optimizing the communication resources, the encoder signal has a cost, while the plant measurement is a natural occurrence of the system. In this section, we provide brief preliminaries on Kalman filtering and present the conditional directed information in Kalman filtering terms.
Following a standard convention, we denote the error covariance matrices with respect to both measurements and as
Since the communication resources should be measured with respect to the observation only, we define the intermediate error covariance matrix corresponding to the prediction error after observing only:
The following lemma formalizes several relations between the error covariances.
Lemma 1 (Error covariance matrices).
Let be the covariance matrix of . Then, for a fixed policy , the error covariance matrices can be updated as
where , and .
The identities are standard in Kalman filtering theory, and their proofs are omitted. It now follows that the conditional directed information can be expressed as
Note that the matrix is the multiplicative term of the error reduction when computing from . Therefore, the conditional directed information measures the reduction in error covariance with respect to only, as desired.
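For comparison, in the unconditional case of [17] the directed information takes a log-det-ratio form; the conditional objective here replaces the one-step prediction covariance with the intermediate covariance of Lemma 1. The following is a sketch in our notation, not a verbatim restatement:

```latex
\frac{1}{2}\sum_{t=1}^{T} \log \frac{\det \bar{P}_{t}}{\det P_{t}},
```

where $\bar{P}_t$ denotes the error covariance after fusing the free measurement only and $P_t$ the covariance after also fusing the encoder signal, so each term is nonnegative and vanishes when the encoder sends nothing.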
III-C SDP formulation
Despite the elegant representation of the objective function in (III-B), it is not clear whether (II) can be formulated as a convex optimization problem, since the inverse includes a product of two optimization variables. Our next result provides a convex optimization formulation of (II).
Theorem 2 (SDP formulation).
The optimization problem in Theorem 2 is a convex optimization problem with respect to the decision variables and can be solved using standard solvers, e.g., [30, 31, 32] (some solvers require writing the determinant in a symmetric form using Sylvester's determinant theorem). It will be shown in the proof of Theorem 2 in Section V below that the auxiliary decision variable evaluated at the optimal point is equal to the corresponding error covariance. However, it is necessary to introduce this variable in order to convert the objective to a standard convex form. Then, the equality constraint resulting from the change of variable can be (optimally) relaxed to an inequality that is equivalent to the LMI above. The optimization problem extends [17, Th. ] to the case where the LQG measurement is available to the controller, and recovers it in the limit of an uninformative measurement, in which case the constraints simplify accordingly.
III-D System design
In this section, we construct a three-step realizable policy using the results from the previous section.
III-D1 The controller gain
The controller gains are independent of the measurements and of the variables from the optimization problem. The gains can be computed from a backward Riccati recursion, with the initial condition $S_T = Q_T$, as
$$K_t = \left(R_t + B_t^\top S_{t+1} B_t\right)^{-1} B_t^\top S_{t+1} A_t, \qquad S_t = Q_t + A_t^\top S_{t+1} A_t - A_t^\top S_{t+1} B_t K_t.$$
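A sketch of this backward recursion for the time-invariant case; the symbols A, B, Q, R are stand-ins, and the sign convention u_t = -K[t] x_t is an assumption:

```python
import numpy as np

def lqr_gains(A, B, Q, R, T):
    """Backward Riccati recursion for finite-horizon LQR gains;
    returns K[0..T-1] under the convention u_t = -K[t] x_t."""
    S = Q.copy()                               # terminal condition S_T = Q_T
    K = [None] * T
    for t in reversed(range(T)):
        # K_t = (R + B' S B)^{-1} B' S A
        K[t] = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        # S_t = Q + A' S (A - B K_t)  (equivalent to the standard Riccati update)
        S = Q + A.T @ S @ (A - B @ K[t])
    return K
```

Far from the horizon the gains converge to the stationary LQR gain, which is the quantity used in the infinite-horizon design of Section III-E.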
III-D2 Covariance matrices
III-D3 Kalman filter
The Kalman gain is defined as
The Kalman update is done in two steps:
where the control signal is .
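The two-step update above can be sketched as a covariance recursion that fuses the two independent measurements sequentially; the matrices G and N below are hypothetical stand-ins for the optimized encoding policy, and all other matrices are placeholders:

```python
import numpy as np

def error_covariances(A, C, G, W, V, N, P0, T):
    """Propagate prediction-error covariances for a Kalman filter that
    fuses two independent measurements per step: the free plant output
    y_t = C x_t + v_t and a costly encoder signal e_t = G x_t + n_t.
    Returns (P_bar, P): covariance after fusing y_t only, and after both."""
    def fuse(P, H, Rm):
        # Measurement update: P <- P - P H' (H P H' + Rm)^{-1} H P
        Kg = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
        return (np.eye(P.shape[0]) - Kg @ H) @ P
    P = P0
    for _ in range(T):
        Pp = A @ P @ A.T + W       # time update (prediction)
        P_bar = fuse(Pp, C, V)     # intermediate covariance: y_t only
        P = fuse(P_bar, G, N)      # fuse the costly encoder measurement
    return P_bar, P
```

Since each fusion step can only shrink the covariance in the positive semidefinite order, P_bar - P is always positive semidefinite, which is exactly the error reduction attributed to the communication link.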
III-E The infinite-horizon setting
In this section, we formulate and solve the optimization problem (II) in the infinite-horizon regime. In this scenario, we consider time-invariant system matrices and noise covariances, as well as time-invariant cost matrices. The optimization problem is defined as:
where the infimum is taken with respect to the sequence of stochastic policies given in (II).
The solution structure is similar to the finite-horizon solution in Theorem 2. In particular, we construct a controller based on a solution to a convex optimization problem. We begin with the controller description.
III-E1 Controller gain
Assume that the pair is stabilizable and the pair is observable on the unit circle. Then, we define the unique stabilizing solution of the Riccati equation
$$S = Q + A^\top S A - A^\top S B \left(R + B^\top S B\right)^{-1} B^\top S A.$$
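Numerically, a stabilizing solution of this kind can be obtained with SciPy's discrete algebraic Riccati solver; the matrices below are hypothetical placeholders:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.1, 0.3], [0.0, 0.9]])   # hypothetical time-invariant dynamics
B = np.array([[1.0], [0.5]])
Q, R = np.eye(2), np.eye(1)

S = solve_discrete_are(A, B, Q, R)       # unique stabilizing DARE solution
K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)   # induced controller gain
```

The stabilizing property means precisely that the closed-loop matrix A - BK is Schur stable (all eigenvalues inside the unit circle).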
Given the stabilizing solution, we can present the SDP-based system design in the infinite-horizon regime.
If the pair is stabilizable and the pair is observable on the unit circle, the infinite-horizon optimization problem (III-E) can be formulated as the convex optimization
where is given in (23), and .
Theorem 3 shows that the optimization problem in the infinite-horizon regime is computationally simpler than the finite-horizon problem solved in Theorem 2. In the proof of Theorem 3, Theorem 1 is used for the structure of the optimal policy; it is interesting to note that we also show that a time-invariant law is optimal, while in Theorem 2 the optimal policy is time-varying. The main idea behind this property is the convexity of the objective: by Jensen's inequality, the objective evaluated at a convex combination of the decision variables is smaller than the time-averaged sum of the objectives. This fact can be exploited in the infinite-horizon regime to show that the convex combination of the decision variables satisfies the stationary constraints presented in Theorem 3. The proof of Theorem 3 is given in Section V-C.
IV-A Side information reduces the minimal directed information
In this section, we study a numerical example to show the benefits of side information and discuss the trade-offs between communication resources and control performance. We set the matrices to be the same as those in [17, Sec. V]
and the cost matrices are set to be identity matrices.
We start by studying an LQG system in which the side information at the decoder is a noisy observation with varying noise level. For each noise level, we solve (3) for each LQG cost constraint in the range and plot the optimal value of (3) as a function of the cost constraint in Fig. 2. The case without side information, studied in [17], can be equivalently viewed as the limit of uninformative side information.
In Fig. 2, we can see that for any fixed cost constraint, the minimal conditional directed information decreases as the signal-to-noise ratio of the side information increases. The red vertical line corresponds to the minimal cost that can be attained with a clean observation available at the controller. The intersection with the LQG-constraint axis corresponds to the LQG cost achieved without communication, that is, using the side information only. It is also interesting to note that at a fixed information level, the gain due to the presence of side information increases with the control cost.
In all curves with side information, the minimal directed information converges to zero as the LQG cost increases to infinity. In the case without side information, however, the curve converges to a constant known as the minimal rate needed to stabilize the system. This rate can be computed as the sum of the logarithms of the magnitudes of the unstable eigenvalues of the system matrix. The fact that the curves converge to zero follows from the detectability of the pair (indeed, the observation matrix is full-rank, so the pair is observable). We proceed to study a scenario in which the side information implies that the pair is not detectable, i.e., the side information does not observe the eigenvector whose corresponding eigenvalue is unstable. One choice of such a matrix is
In Fig. 3, the minimal directed information is plotted as a function of the LQG cost. As expected, the communication resources decrease as the side information dimension increases. For the observability matrices that preserve detectability, the curves tend to zero as the cost grows. On the other hand, the curves that correspond to the remaining choices tend to a constant when the cost is large. This constant can be calculated as the minimal rate needed to stabilize the system; in each case it is determined by the unstable eigenvalue that cannot be observed via the side information.
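The minimal stabilization rate invoked above is simple to compute; a sketch (in bits per sample, with a hypothetical example matrix):

```python
import numpy as np

def min_stabilization_rate_bits(A):
    """Minimal average data rate (bits per sample) needed to stabilize
    x_{t+1} = A x_t + ...: the sum of log2|lambda_i(A)| over the
    unstable eigenvalues (|lambda_i| > 1)."""
    return float(sum(np.log2(abs(l)) for l in np.linalg.eigvals(A) if abs(l) > 1.0))

# Example: one unstable mode at 2 contributes log2(2) = 1 bit per sample.
A = np.array([[2.0, 0.0], [0.0, 0.5]])
```

When some unstable modes are already observed through the side information, only the remaining unobserved unstable eigenvalues contribute to the limiting constant.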
IV-B Scalar systems
where is the unique solution to the Riccati equation and can be solved in closed-form as
In the following result, we provide a closed-form solution for the scalar problem. The proof is in Section V-D below.
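As background for the scalar case, the scalar Riccati fixed point itself admits a closed form: rearranging the scalar DARE p = a^2 p - a^2 b^2 p^2 / (b^2 p + r) + q yields a quadratic in p. A sketch of this standard computation, with generic symbols (not necessarily the paper's notation):

```python
import math

def scalar_dare(a, b, q, r):
    """Positive root of b^2 p^2 + (r - a^2 r - q b^2) p - q r = 0,
    which is the scalar discrete Riccati equation rearranged into a
    quadratic in p."""
    c = a * a * r + q * b * b - r
    return (c + math.sqrt(c * c + 4.0 * b * b * q * r)) / (2.0 * b * b)
```

The discriminant is always positive for q, r > 0 and b != 0, so the positive root exists and is the stabilizing solution.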
In this section, we prove our results. We start with Theorem 1 on the optimal policy structure.
V-A Proof of Theorem 1 (Optimal policy structure)
The proof follows from the claims below, which are then proven consecutively.
Instead of minimizing over stochastic kernels in (5), it is sufficient to minimize over that is a deterministic function of .
The minimization domain is relaxed by allowing encoders of the form instead of (in (4)). That is, the new encoder has additional access to the observation .
It is sufficient to minimize the relaxed optimization problem over , i.e., to let the encoder depend on rather than on the tuple .
It is sufficient to minimize the relaxed optimization problem over Gaussian encoder outputs, i.e.,
It is sufficient to minimize the relaxed optimization problem over
The optimal control is , where is the control gain.
By claim , the minimizer of the relaxed optimization problem lies in the original minimization domain (II). Thus, both optimization problems have a common minimizer, which is a composition of a Kalman filter and a certainty-equivalence controller.
Claim : From the functional representation lemma , one can write the control signal as a deterministic function of the available information and a random variable that is independent of it. Let , and note that . Moreover, the joint distribution of the state and control is unaffected by absorbing the controller's randomness into the encoder's (stochastic) mapping, so the LQG cost remains the same. This procedure can be repeated inductively to de-randomize the controller at all times.
Claim : Trivial, since the minimization domain is increased.
Claim : Consider a simple lower bound on the objective function,
For a fixed sequence of deterministic mappings characterizing , the lower bound (V-A) and the LQG cost are fully determined by .
We will now show by induction that is determined by . For , this claim is trivial. For the inductive step, assume that is determined by . Now, consider
and note that can be written as
which is fixed by the sequence
due to the measurement characteristics (2), the fact that is a deterministic function of and the induction hypothesis.
Claim : First, the differential entropy in (V-A) is rewritten as
We now lower-bound the mutual information using (V-A),