# A Separation Theorem for Joint Sensor and Actuator Scheduling with Guaranteed Performance Bounds

We study the problem of jointly designing a sparse sensor and actuator schedule for linear dynamical systems while guaranteeing a control/estimation performance that approximates the fully sensed/actuated setting. We further prove a separation principle, showing that the problem can be decomposed into finding sensor and actuator schedules separately. However, it is shown that this problem cannot be efficiently solved or approximated in polynomial, or even quasi-polynomial time for time-invariant sensor/actuator schedules; instead, we develop deterministic polynomial-time algorithms for a time-varying sensor/actuator schedule with guaranteed approximation bounds. Our main result is to provide a polynomial-time joint actuator and sensor schedule that on average selects only a constant number of sensors and actuators at each time step, irrespective of the dimension of the system. The key idea is to sparsify the controllability and observability Gramians while providing approximation guarantees for Hankel singular values. This idea is inspired by recent results in theoretical computer science literature on sparsification.


## 1 Introduction

One of the main challenges in realizing the promise of smart urban mobility is localization, perception, mapping and control with a myriad of sensors and actuators, e.g., camera sensors, data from 3-D mapping, LIDAR, electric motor, valve, etc. A key obstacle to this vision is the information overload, and the computational complexity of perception, mapping, and control using a large set of sensing and actuating modalities. A possible solution is to find a sparse yet important subset of sensors (actuators) and use those instead of using all available measurements (actuators) (Matni and Chandrasekaran, 2016; Argha et al., 2017; Fahroo and Demetriou, 2000). When the dimension of the state is large, finding the optimal yet low cardinality subset of features is like finding a needle in a haystack: the problem is computationally difficult and provably NP-Hard (Olshevsky, 2014; Tzoumas et al., 2016a).

Oftentimes, we are interested in reducing the control complexity, operation cost, and maintenance cost by not using all available actuators and sensors. The choice of sensors and actuators affects the performance, computational cost, and costs of the control system. As shown recently in Olshevsky (2014) and Tzoumas et al. (2016a), the problem of finding a sparse set of input variables such that the resulting system is controllable is NP-hard. Even the presumably easier problem of approximating this minimum number to within a constant multiplicative factor is also NP-hard (Olshevsky, 2014). Other results in the literature have explored network controllability via approximation algorithms for the closely related subset selection problem (Olshevsky, 2014; Summers et al., 2016; Pequito et al., 2015; Nozari et al., 2019; Bopardikar, 2017). More recently, some of the authors showed that even the problem of finding a sparse set of actuators to guarantee reachability of a particular state is hard and even hard to approximate (Jadbabaie et al., 2019).

Over the past few years, controllability and observability properties of complex dynamical networks have been the subject of intense study in the controls community (Olshevsky, 2014; Pasqualetti et al., 2014; Liu and Barabási, 2016; Chanekar et al., 2017; Müller and Weber, 1972; Tzoumas et al., 2016b; Olshevsky, 2015; Pequito et al., 2017; Nozari et al., 2017; Yazıcıoğlu et al., 2016; Summers et al., 2016; Pequito et al., 2015). This interest stems from the need to steer or observe the state of large-scale, networked systems such as power grids (Chakrabortty and Ilić, 2011), social networks, biological and genetic regulatory networks (Chandra et al., 2011; Marucci et al., 2009; Rajapakse et al., 2012), and traffic networks (Siami and Skaf, 2018). Previous studies have mainly focused on solving the optimal sensor/actuator selection problem using greedy heuristics, as approximations of the corresponding sparse-subset selection problem. However, in Jadbabaie et al. (2018a), we developed a framework to design a sparse actuator schedule for a given large-scale linear system with guaranteed performance bounds, using deterministic polynomial-time and randomized approximately linear-time algorithms, and we gained new fundamental insights into approximating various performance metrics compared to the case when all actuators are chosen. In Tzoumas et al. (2018), the authors show that a separation principle holds for the Linear-Quadratic-Gaussian (LQG) control problem.

In this paper, we build upon our previous work (Jadbabaie et al., 2018a) and consider the problem of jointly designing a sparse sensor and actuator schedule for linear dynamical systems, to ensure desired performance and sparsity levels of active sensors and actuators in time and space. The joint sensor and actuator (S/A) scheduling problem involves selecting an appropriate number, activation time, position, and type of sensors and actuators. The idea is to essentially sparsify the choice of sensors and actuators both spatially and temporally. We show that by carefully designing a time-varying joint S/A selection strategy, one can choose, on average, a constant number of sensors and actuators at each time to approximate the Hankel singular values of the system, while sparsifying the sensor and actuator sets. One of our main contributions is to show that the classical time-varying joint S/A scheduling problem (originally studied by Athans (1972)) can be solved via random sampling. We also propose an alternative to submodularity-based methods, using instead recent advances in theoretical computer science.

More importantly, we prove that a separation principle holds for the problem of jointly sparsifying the sensor and actuator set with performance guarantees. We show that the joint S/A scheduling problem can be divided into two separate problems: the sparse sensor schedule and the sparse actuator schedule.

A preliminary version of some of the results in this article was submitted for possible publication in a conference proceeding (Siami and Jadbabaie, 2019); however, their proofs are presented here for the first time. The manuscript also contains several new results, including numerical examples, figures, and tables.

## 2 Preliminaries and Definitions

### 2.1 Mathematical Notations

Throughout the paper, the discrete time index is denoted by k. The sets of real (integer) and non-negative real (integer) numbers are represented by ℝ (ℤ) and ℝ₊ (ℤ₊), respectively. The set of natural numbers is denoted by ℕ. The cardinality of a set σ is denoted by card(σ). Capital letters, such as A or B, stand for real-valued matrices. The n-by-n identity matrix is denoted by I_n. Notation A ⪰ B is equivalent to matrix A − B being positive semi-definite. The eigenvalues of A are shown by λ_i(A). λmax(·) and σmax(·) show the largest eigenvalue and singular value of a matrix, respectively. The transpose of matrix A is denoted by A⊤. The rank of matrix A is referred to by rank(A).

### 2.2 Linear Systems, Gramian and Hankel Matrices

Consider the discrete-time linear time-invariant system

 x(k+1) = Ax(k) + Bu(k), (1)
 y(k) = Cx(k), (2)

where x(k) ∈ ℝⁿ, u(k) ∈ ℝᵐ, and y(k) ∈ ℝᵖ. The state matrix A describes the underlying structure of the system and the interaction strength between the agents, the input matrix B represents how the control input enters the system, and the output matrix C shows how the output vector y(k) relates to the state vector x(k).

The controllability and observability matrices at time t (with t ∈ ℕ) are given by

 R(t) = [B  AB  A²B  ⋯  A^{t−1}B], (3)

and

 O(t) = ⎡⎢ ⎢ ⎢⎣ C; CA; CA²; ⋮ ; CA^{t−1} ⎤⎥ ⎥ ⎥⎦, (4)

respectively. It is well-known that, from a numerical standpoint, it is better to characterize controllability and observability in terms of the Gramian matrices at time t, defined as follows:

 P(t) = ∑_{i=0}^{t−1} AⁱBB⊤(Aⁱ)⊤ = R(t)R⊤(t), (5)

and

 Q(t) = ∑_{i=0}^{t−1} (Aⁱ)⊤C⊤CAⁱ = O⊤(t)O(t). (6)
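As an illustrative sketch (not part of the paper), the finite-horizon Gramians (5)-(6) can be accumulated directly from their definitions, and the identities P(t) = R(t)R⊤(t) and Q(t) = O⊤(t)O(t) then serve as a sanity check. The system matrices below are arbitrary examples.

```python
import numpy as np

def gramians(A, B, C, t):
    """Finite-horizon Gramians: P(t) = sum A^i B B^T (A^i)^T, Q(t) = sum (A^i)^T C^T C A^i."""
    n = A.shape[0]
    P, Q, Ai = np.zeros((n, n)), np.zeros((n, n)), np.eye(n)
    for _ in range(t):
        P += Ai @ B @ B.T @ Ai.T
        Q += Ai.T @ C.T @ C @ Ai
        Ai = A @ Ai          # advance A^i -> A^{i+1}
    return P, Q

# Example system (arbitrary, for illustration only)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
t = 5
P, Q = gramians(A, B, C, t)

# Cross-check against the controllability/observability matrices (3)-(4)
R = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(t)])
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(t)])
assert np.allclose(P, R @ R.T) and np.allclose(Q, O.T @ O)
```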
###### Assumption

Throughout the paper, we assume that the system (1)-(2) is an n-state minimal realization (i.e., the controllability matrix has full row rank and the observability matrix has full column rank). However, all results presented in this paper can be modified/extended to uncontrollable and unobservable systems.

For given linear systems (1)-(2), the Hankel matrix is defined as the doubly infinite matrix

 H = ⎡⎢ ⎢⎣ H₁ H₂ H₃ ⋯ ; H₂ H₃ H₄ ⋯ ; H₃ H₄ H₅ ⋯ ; ⋮ ⋮ ⋮ ⋱ ⎤⎥ ⎥⎦ = ⎡⎢ ⎢⎣ C; CA; CA²; ⋮ ⎤⎥ ⎥⎦ [B  AB  A²B  ⋯] = OR,

where Hᵢ = CA^{i−1}B. The Hankel matrix can be viewed as a mapping between the past inputs and future outputs via the initial state x(0). Since H = OR and due to Assumption 2.2, it follows that rank(H) = n. The nonzero singular values of H can be computed by solving two Lyapunov equations (for the controllability and observability Gramians) as follows

 σᵢ(H) = √λᵢ(HH⊤) = √λᵢ(PQ) = √λᵢ(Q^{1/2}PQ^{1/2}).

The Hankel matrix has a special structure: the elements (blocks) on lines parallel to the anti-diagonal are identical. It is well-known that the singular values of the Hankel matrix of a linear system are fundamental invariants of the system, identifying its most controllable and observable modes (Antoulas, 2005). It is also well known that the states corresponding to small nonzero Hankel singular values are difficult¹ to control and observe at the same time.

¹The “difficulty of controllability” of the system can be viewed as the energy involved in moving the system from the origin to a uniformly random point on the unit sphere; it is well-known that this quantity can be characterized in terms of the controllability Gramian. Similarly, the “difficulty of observability” of the system can be viewed as “how observable” the initial state is over the observation horizon; this quantity is closely related to the covariance of the estimation errors in the standard linear least squares problem and can be obtained in terms of the observability Gramian.

The Hankel norm gives the ℓ₂-gain from past inputs to future outputs, and measures the extent to which past inputs affect future outputs of the system. If the input satisfies u(t) = 0 for t ≥ 0 and the resulting output is y(t) for t ≥ 1, then the Hankel norm is given by

 ∥G(z)∥_H := sup_{u∈ℓ₂(−∞,0)} √( ∑_{t=1}^{∞} |y(t)|² / ∑_{t=1}^{∞} |u(−t)|² ) = √λmax(PQ) = σmax(H),

where G(z) is the transfer function of dynamics (1)-(2), and ℓ₂(−∞,0) is the space of square-summable vector sequences on the interval (−∞,0) (which means ∑_{t=1}^{∞} |u(−t)|² < ∞). In this work, we focus on the time-t Hankel matrix

 H(t) = ⎡⎢ ⎢⎣ H₁ H₂ H₃ ⋯ Hₜ ; H₂ H₃ H₄ ⋯ H_{t+1} ; H₃ H₄ H₅ ⋯ H_{t+2} ; ⋮ ; Hₜ H_{t+1} H_{t+2} ⋯ H_{2t−1} ⎤⎥ ⎥⎦ = ⎡⎢ ⎢⎣ C; CA; CA²; ⋮ ; CA^{t−1} ⎤⎥ ⎥⎦ [B  AB  A²B  ⋯  A^{t−1}B] = O(t)R(t).

Particularly, we have

 σᵢ(H(t)) = √λᵢ(H(t)H⊤(t)) = √λᵢ(P(t)Q(t)) = √λᵢ(Q^{1/2}(t)P(t)Q^{1/2}(t)).
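Continuing the illustrative sketch from above (a made-up example, not from the paper), this identity can be checked numerically: the top n singular values of H(t) = O(t)R(t) coincide with the square roots of the eigenvalues of Q^{1/2}(t)P(t)Q^{1/2}(t).

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
n, t = 2, 5

R = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(t)])
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(t)])
H = O @ R                                   # time-t Hankel matrix
P, Q = R @ R.T, O.T @ O                     # time-t Gramians

sv_hankel = np.linalg.svd(H, compute_uv=False)[:n]            # top n singular values
Qh = psd_sqrt(Q)
sv_gramian = np.sqrt(np.linalg.eigvalsh(Qh @ P @ Qh))[::-1]   # descending order
assert np.allclose(sv_hankel, sv_gramian)
```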
###### Remark

One way to lower the computational complexity of simulating large-scale dynamical systems is to find a reduced-order model. A common technique for model order reduction is the optimal Hankel-norm approximation. This method provides the best approximation of the original system in the Hankel semi-norm and received significant attention and related development in the 1980s (Glover, 1987). The corresponding state-space realization is the balanced realization proposed by Moore (1981). In standard model reduction, we first obtain the balanced realization, and then the least observable and controllable modes are truncated. For a sparse S/A schedule, however, we sparsify inputs and outputs in space and time (the number of states does not change) and utilize a different canonical state realization (see Section 5).

### 2.3 Hankel-based Performance Metrics

Similar to the systemic notions introduced in Siami and Motee (2018) and Jadbabaie et al. (2018a), we define various performance metrics that capture both controllability and observability properties of the system. These measures are non-negative real-valued operators defined on the set of all linear dynamical systems governed by (1)-(2) and quantify various measures of the performance. All of the metrics depend on the symmetrized combination of Gramians Q^{1/2}(t)P(t)Q^{1/2}(t), which is a positive definite matrix. Therefore, one can define a systemic Hankel-based performance metric as an operator on the set of Gramian matrices of all n-state minimal realization systems, which we represent by ρ: S₊ⁿ → ℝ₊.² For many popular choices of ρ, one can see that they satisfy the following properties

²The positive semi-definite cone of n×n matrices is denoted by S₊ⁿ.

(i) Homogeneity: for all κ > 0,

 ρ(κA) = κρ(A);

(ii) Monotonicity: if A ⪯ B, then

 ρ(A) ≤ ρ(B);

and we call such metrics systemic. For example, the squared Hankel norm of the system at time t, which is defined by

 ρ(Q^{1/2}(t)P(t)Q^{1/2}(t)) := λmax(Q^{1/2}(t)P(t)Q^{1/2}(t)),

is systemic. We note that similar criteria have been developed in the experiment design literature (Ravi et al., 2016; Kempthorne, 1952; Allen-Zhu et al., 2017).

## 3 Matrix Reconstruction and Sparsification

The key idea in Jadbabaie et al. (2018b) and Jadbabaie et al. (2018a) is to approximate the time-t controllability Gramian as a sparse sum of rank-1 matrices, while controlling the approximation error. To this end, a key lemma from the sparsification literature (Boutsidis et al., 2014) is used in Jadbabaie et al. (2018a) to find sparse actuator or sensor schedules. However, in the present work, we are interested in designing a joint sparse schedule for both the sensor and actuator sets; for this, we need to modify a key lemma, known as the Dual Set Lemma in Boutsidis et al. (2014), to approximate the time-t Hankel singular values.

Our main result in this section shows how we can handle two sparse subsets with nonidentical indices. We then use this result later to design a deterministic algorithm for a joint sparse S/A schedule. More specifically, we need to control the singular values of the product of two matrices, which can be written as the symmetrized combination of the two matrices (see Section 5). Each of these matrices is a sparse sum of rank-1 matrices, and they reflect the controllability and observability properties of the chosen sparse S/A set.

Theorem 3 and Algorithm 1 formalize the procedure of iteratively adding one vector at a time and forming two Gramian matrices.

###### Theorem

Let V = {v₁, …, v_{t1}} ⊂ ℝⁿ and U = {u₁, …, u_{t2}} ⊂ ℝⁿ be such that ∑_{i=1}^{t1} v_i v_i⊤ = X ≻ 0 and ∑_{i=1}^{t2} u_i u_i⊤ = I_n (with n ≤ t1, t2). Given integers κ1 and κ2 with n < κ1 ≤ t1 and n < κ2 ≤ t2, Algorithm 1 computes sets of weights {s_i}_{i=1}^{t1} and {r_i}_{i=1}^{t2}, such that

 (∑_{i=1}^{t1} s_i v_i v_i⊤)^{1/2} (∑_{i=1}^{t2} r_i u_i u_i⊤) (∑_{i=1}^{t1} s_i v_i v_i⊤)^{1/2} ⪰ e^{−(ϵ1+ϵ2)} X,
 (∑_{i=1}^{t1} s_i v_i v_i⊤)^{1/2} (∑_{i=1}^{t2} r_i u_i u_i⊤) (∑_{i=1}^{t1} s_i v_i v_i⊤)^{1/2} ⪯ e^{ϵ1+ϵ2} X,
 card{si≠0 | i∈[t1]} ≤ κ1,

and

 card{ri≠0 | i∈[t2]} ≤ κ2,

where

 ϵ1 := 2 tanh^{−1}(√(n/κ1)), and ϵ2 := 2 tanh^{−1}(√(n/κ2)).
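To see why the approximation factors take this form, note that 2 tanh⁻¹(x) = log((1+x)/(1−x)), so e^{ϵ1} is exactly the ratio (1+√(n/κ1))/(1−√(n/κ1)) that appears in the two-sided spectral bounds of the underlying sparsification lemma. A quick numeric check (with arbitrary n and κ1, an illustration rather than anything from the paper):

```python
import numpy as np

n, kappa = 4, 25
x = np.sqrt(n / kappa)            # here x = 0.4 < 1 since kappa > n
eps = 2.0 * np.arctanh(x)
assert np.isclose(np.exp(eps), (1 + x) / (1 - x))
# Rescaling the raw bounds (1-x)^2 .. (1+x)^2 by 1/(1-x^2) gives e^{-eps} .. e^{eps}:
assert np.isclose((1 - x) ** 2 / (1 - x ** 2), np.exp(-eps))
assert np.isclose((1 + x) ** 2 / (1 - x ** 2), np.exp(eps))
```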

Due to space limitations, we refer the interested reader to Boutsidis et al. (2014) for more details on Algorithm 1. Roughly speaking, the algorithm greedily chooses, at each step, vectors that satisfy a set of desired properties, leading to bounds on the Hankel singular values. We first define two barriers, or potential functions, as follows:

 ϕ_(μ_, A_) = ∑_{i=1}^{n} 1/(λ_i(A_) − μ_), (7)

and

 ¯ϕ(¯μ, ¯A) = ∑_{i=1}^{n} 1/(¯μ − λ_i(¯A)). (8)

These potential functions quantify how far the eigenvalues of A_ and ¯A are from the barriers μ_ and ¯μ. The potentials blow up as any eigenvalue nears its barrier; moreover, they reflect the locations of all the eigenvalues simultaneously. We then define two quantities L and U as follows:

 L(v, δ_, A_, μ_) = [v⊤(A_ − (μ_+δ_)I_n)^{−2} v] / [ϕ_(μ_+δ_, A_) − ϕ_(μ_, A_)] − v⊤(A_ − (μ_+δ_)I_n)^{−1} v,

and

 U(u, ¯δ, ¯A, ¯μ) = [u⊤((¯μ+¯δ)I_n − ¯A)^{−2} u] / [¯ϕ(¯μ, ¯A) − ¯ϕ(¯μ+¯δ, ¯A)] + u⊤((¯μ+¯δ)I_n − ¯A)^{−1} u.

The Sherman-Morrison-Woodbury formula inspires the structure of the above quantities; for more details on the barrier method, see Boutsidis et al. (2014). The potential functions (7) and (8) are chosen to guide the selection of vectors and scalings at each step and to ensure steady progress of the algorithm. Small values of these potentials indicate that the eigenvalues of A_ and ¯A do not gather near μ_ and ¯μ, respectively. At each iteration, we increase the upper barrier by a fixed constant ¯δ and the lower barrier by another fixed constant δ_. It can be shown that, as long as the potentials remain bounded, there must exist (at every step) a choice of an index and weights so that the addition of the associated rank-1 matrices to the running sums, together with the barrier increments, does not increase either potential and keeps all the eigenvalues of the updated matrices between the barriers (see Algorithm 1). Repeating these steps ensures steady growth of all the eigenvalues and yields the desired result.
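A minimal sketch of the two potential functions (7)-(8), on an arbitrary example matrix (illustrative only): each potential is finite while the eigenvalues stay on the correct side of its barrier, and it grows as any eigenvalue approaches the barrier.

```python
import numpy as np

def lower_potential(mu, A):
    """phi_(mu, A) = sum_i 1/(lambda_i(A) - mu); meaningful when mu < lambda_min(A)."""
    return float(np.sum(1.0 / (np.linalg.eigvalsh(A) - mu)))

def upper_potential(mu, A):
    """phibar(mu, A) = sum_i 1/(mu - lambda_i(A)); meaningful when mu > lambda_max(A)."""
    return float(np.sum(1.0 / (mu - np.linalg.eigvalsh(A))))

A = np.diag([1.0, 2.0, 3.0])    # eigenvalues 1, 2, 3
# Moving the upper barrier down toward lambda_max = 3 inflates the upper potential,
# and moving the lower barrier up toward lambda_min = 1 inflates the lower potential.
assert upper_potential(10.0, A) < upper_potential(3.5, A)
assert lower_potential(-10.0, A) < lower_potential(0.9, A)
```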

This algorithm is tailored from a deterministic, polynomial-time algorithm in Boutsidis et al. (2014) to handle joint sparse S/A selections. We view this algorithm as a subroutine acting on the sets V and U as

 s,r=GenDualSet(V,U,κ1,κ2).

We now present the proof of Theorem 3.

###### Proof

To prove this theorem, we first use (Boutsidis et al., 2014, Lemma 1) and define an isotropic set of vectors based on the set V as follows

 ¯V = {¯v_i = X^{−1/2} v_i | v_i ∈ V}. (9)

Using (9), we have

 ∑_{i=1}^{t1} ¯v_i ¯v_i⊤ = I_n. (10)

Then, according to the Dual Set Lemma in Boutsidis et al. (2014), applied in Lines 1 to 10 of Algorithm 1 with parameter κ1, we get

 (1 − √(n/κ1))² I_n ⪯ ∑_{i=1}^{t1} ¯s_i ¯v_i ¯v_i⊤ ⪯ (1 + √(n/κ1))² I_n, (11)

where ¯s := s(κ1) is the weight vector computed in Lines 1 to 10 of Algorithm 1. This can be rewritten as follows

 e^{−ϵ1} I_n ⪯ ∑_{i=1}^{t1} s_i ¯v_i ¯v_i⊤ ⪯ e^{ϵ1} I_n, (12)

where ϵ1 := 2 tanh^{−1}(√(n/κ1)), and

 s = (1 − √(n/κ1))^{−1} (1 + √(n/κ1))^{−1} s(κ1),

where s(κ1) is defined in Algorithm 1. Next, based on (9), (10), and (12), we get

 e^{−ϵ1} X ⪯ ∑_{i=1}^{t1} s_i v_i v_i⊤ ⪯ e^{ϵ1} X, (13)

where (13) is obtained by multiplying the positive definite matrix X^{1/2} on both sides of (12) and using ¯v_i = X^{−1/2} v_i. Similarly, according to the Dual Set Lemma in Boutsidis et al. (2014), applied in Lines 11 to 21 of Algorithm 1 with parameter κ2, we get

 (1 − √(n/κ2))² I_n ⪯ ∑_{i=1}^{t2} ¯r_i u_i u_i⊤ ⪯ (1 + √(n/κ2))² I_n, (14)

where ¯r := r(κ2) is the weight vector computed in Lines 11 to 21 of Algorithm 1. This can be rewritten as follows

 e^{−ϵ2} I_n ⪯ ∑_{i=1}^{t2} r_i u_i u_i⊤ ⪯ e^{ϵ2} I_n, (15)

where ϵ2 := 2 tanh^{−1}(√(n/κ2)), and

 r = (1 − √(n/κ2))^{−1} (1 + √(n/κ2))^{−1} r(κ2),

where r(κ2) is defined in Algorithm 1. Using (15) and conjugating both sides by the positive semi-definite matrix (∑_{i=1}^{t1} s_i v_i v_i⊤)^{1/2}, it follows that

 e^{−ϵ2} ∑_{i=1}^{t1} s_i v_i v_i⊤ ⪯ (∑_{i=1}^{t1} s_i v_i v_i⊤)^{1/2} (∑_{i=1}^{t2} r_i u_i u_i⊤) (∑_{i=1}^{t1} s_i v_i v_i⊤)^{1/2} ⪯ e^{ϵ2} ∑_{i=1}^{t1} s_i v_i v_i⊤. (16)

Finally, combining (13) and (16), we get the desired result.
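The conjugation step used at the end of the proof can be illustrated numerically: if e^{−ϵ}I ⪯ N ⪯ e^{ϵ}I, then conjugating by M^{1/2} for any positive definite M gives e^{−ϵ}M ⪯ M^{1/2}NM^{1/2} ⪯ e^{ϵ}M. The matrices below are random examples, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 0.3

# N with spectrum inside [e^{-eps}, e^{eps}]
Vq, _ = np.linalg.qr(rng.standard_normal((n, n)))
N = Vq @ np.diag(rng.uniform(np.exp(-eps), np.exp(eps), n)) @ Vq.T

# Arbitrary positive definite M and its symmetric square root
G = rng.standard_normal((n, n))
M = G @ G.T + np.eye(n)
w, V = np.linalg.eigh(M)
Mh = V @ np.diag(np.sqrt(w)) @ V.T

S = Mh @ N @ Mh
# Two-sided Loewner bounds: S - e^{-eps} M >= 0 and e^{eps} M - S >= 0
assert np.linalg.eigvalsh(S - np.exp(-eps) * M).min() >= -1e-9
assert np.linalg.eigvalsh(np.exp(eps) * M - S).min() >= -1e-9
```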

In the next section, we show how various Hankel-based measures can be approximated by selecting a sparse set of actuators and sensors.

## 4 Joint Sparse S/A Scheduling Problems

For a given linear system (1)-(2) with a general underlying structure, the joint S/A scheduling problem seeks to construct a schedule of the control inputs and sensor outputs that keeps the number of active actuators and sensors much smaller than in the fully sensed/actuated system, while the Hankel-based performance metrics of the original and the new systems remain similar in an appropriately defined sense. Specifically, given a canonical linear time-invariant system (1)-(2) with m actuators, p sensors, and Gramians P(t) and Q(t) at time t, our goal is to find a joint sparse S/A schedule such that the resulting system, with Gramians Ps(t) and Qs(t), is well-approximated, i.e.,

 | log( ρ(Q^{1/2}(t)P(t)Q^{1/2}(t)) / ρ(Q_s^{1/2}(t)P_s(t)Q_s^{1/2}(t)) ) | ≤ ϵ, (17)

where ρ is any systemic performance metric that quantifies the performance of the system, for example as the ℓ₂-gain from past inputs to future outputs, and ϵ ≥ 0 is the approximation factor. The systemic performance metrics are defined based on the Hankel singular values, and we will show that “close” controllability and observability Gramian matrices result in approximately the same metric values. Our goal here is to answer the following questions: (1) What are the minimum numbers of actuators and sensors that need to be chosen to achieve a good approximation of the system in which the full sets of actuators and sensors are utilized? (2) What is the relation between the numbers of selected actuators and sensors and the performance loss? (3) Does a sparse approximation schedule exist with at most constant numbers of active actuators and sensors at each time? (4) What is the time complexity of choosing the subsets of actuators and sensors with guaranteed performance bounds?

In the rest of this paper, we show how some fairly recent advances in theoretical computer science can be utilized to answer these questions. Recently, Marcus, Spielman, and Srivastava introduced a new variant of the probabilistic method that resolved the so-called Kadison-Singer (KS) conjecture (Marcus et al., 2015). We use their solution approach together with a combination of tools from Sections 4 and 5 to find a sparse approximation for the S/A scheduling problem with algorithms that have favorable time-complexity.

## 5 A Weighted Joint Sparse S/A Schedule

As a starting point, we allow for scaling of the selected input and output signals while keeping the scalings bounded. The input and output scalings provide an extra degree of freedom that may allow for further sparsification of the sensor/actuator set. Given (1)-(2), we define a weighted, joint sensor and actuator schedule by

 σ = ({σ^{(s)}_k}_{k=0}^{t−1}, {σ^{(a)}_k}_{k=0}^{t−1}),

where

 σ^{(s)}_k = {i | s_i(k) > 0, i ∈ [p]} ⊆ [p],
 σ^{(a)}_k = {i | a_i(k) > 0, i ∈ [m]} ⊆ [m],

and non-negative input and output scalings (i.e., a_i(k) ≥ 0 for i ∈ [m] and s_i(k) ≥ 0 for i ∈ [p]). The resulting system with this schedule is

 x(k+1) = Ax(k) + ∑_{i∈σ^{(a)}_k} a_i(k) b_i u_i(k), (18)
 y(k) = ∑_{i∈σ^{(s)}_k} s_i(k) e_i c_i x(k), (19)

where the b_i's are columns of matrix B, the c_i's are rows of matrix C, and the e_i's are the standard basis vectors of ℝᵖ; the scaling a_i(k) shows the strength of the i-th control input at time k, and similarly s_i(k) shows the strength of the i-th output at time k. Equivalently, the dynamics can be rewritten as

 x(k+1) = Ax(k) + B(k)u(k), (20)
 y(k) = C(k)x(k), (21)

with time-varying input and output matrices

 B(k) = BΛ(k),

and

 C(k) = Γ(k)C,

where Λ(k) and Γ(k) are diagonal, and their nonzero diagonal entries indicate the selected actuators and sensors at time k, which means

 Λ(k)=diag(a1(k),⋯,am(k)),

and

 Γ(k)=diag(s1(k),⋯,sp(k)).

The controllability Gramian and observability Gramian at time t for this system can be rewritten as

 Ps(t) = ∑_{k=0}^{t−1} ∑_{j∈σ^{(a)}_k} a_j²(k) (A^{t−k−1}b_j)(A^{t−k−1}b_j)⊤, (22)

and

 Qs(t) = ∑_{k=0}^{t−1} ∑_{j∈σ^{(s)}_k} s_j²(k) (c_jA^{t−k−1})⊤(c_jA^{t−k−1}). (23)
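The rank-1 decomposition (22) can be checked against the equivalent time-varying Gramian built from B(k) = BΛ(k); the two must agree. The system and the sparse scalings below are random placeholders, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, t = 3, 4, 6
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
# Sparse non-negative scalings a_i(k): roughly half are switched off at each time
a = rng.uniform(0.0, 1.0, (t, m)) * (rng.random((t, m)) < 0.5)

# Rank-one form (22): Ps(t) = sum_k sum_j a_j(k)^2 (A^{t-k-1} b_j)(A^{t-k-1} b_j)^T
Ps = np.zeros((n, n))
for k in range(t):
    Atk = np.linalg.matrix_power(A, t - k - 1)
    for j in range(m):
        v = a[k, j] * (Atk @ B[:, j])
        Ps += np.outer(v, v)

# Time-varying form: Ps(t) = sum_k A^{t-k-1} B(k) B(k)^T (A^{t-k-1})^T, B(k) = B Lambda(k)
Ps2 = np.zeros((n, n))
for k in range(t):
    Atk = np.linalg.matrix_power(A, t - k - 1)
    Bk = B @ np.diag(a[k])
    Ps2 += Atk @ Bk @ Bk.T @ Atk.T

assert np.allclose(Ps, Ps2)
```

The observability side (23) admits the same check with C(k) = Γ(k)C.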

Our goal is to keep the numbers of active sensors and actuators on average less than ds and da, i.e.,

 (1/t) ∑_{k=0}^{t−1} card(σ^{(s)}_k) ≤ ds, (24)

and

 (1/t) ∑_{k=0}^{t−1} card(σ^{(a)}_k) ≤ da, (25)

such that the Hankel matrix of the fully actuated/sensed system is “close” to the Hankel matrix of the new sparsely actuated/sensed system. Of course, this approximation will require horizon lengths that are potentially longer than the dimension of the state.

###### Assumption

Throughout this paper, we assume that the horizon length t is fixed.

The definition below formalizes the meaning of approximation.

###### Definition ((ϵ,ds)-sensor schedule)

We call system (18)-(19) an (ϵ, ds)-sensor schedule for system (1)-(2) if and only if

 e−ϵQ(t) ⪯ Qs(t) ⪯ eϵQ(t), (26)

where Q(t) and Qs(t) are the observability Gramian matrices of systems (1)-(2) and (18)-(19), respectively. Parameter ds is defined by (24) as an upper bound on the average number of active sensors, and ϵ ≥ 0 is the approximation factor.

Next, we define the (ϵ, da)-actuator schedule for dynamical system (1)-(2).

###### Definition ((ϵ,da)-actuator schedule)

We call system (18)-(19) an (ϵ, da)-actuator schedule of system (1)-(2) if and only if

 e−ϵP(t) ⪯ Ps(t) ⪯ eϵP(t), (27)

where P(t) and Ps(t) are the controllability Gramian matrices of (1)-(2) and (18)-(19), respectively; parameter da is defined by (25) as an upper bound on the average number of active actuators, and ϵ ≥ 0 is the approximation factor.

###### Remark

While it might appear that allowing for the choice of the scalings a_i(k) and s_i(k) might lead to amplification of output and input signals, we note that the scalings cannot be too large because the approximations (26) and (27) are two-sided. Specifically, by taking the trace of both sides of (26) and (27), we can see that the weighted summations of the a_i²(k)'s and s_i²(k)'s are bounded. Moreover, based on the two definitions above, the ranks of matrices Q(t) and Qs(t) are the same, and similarly for matrices P(t) and Ps(t). Thus, the resulting approximation remains controllable and observable (recall that we assume that the original system is controllable and observable).

We now define the joint sparse S/A schedule for system (1)-(2) based on the Hankel singular values of the system.

###### Definition ((ϵ,ds,da)-joint S/A schedule)

We call system (18)-(19) an (ϵ, ds, da)-joint S/A schedule of system (1)-(2) if and only if

 e^{−ϵ} (Q^{1/2}(t)P(t)Q^{1/2}(t)) ⪯ Q_s^{1/2}(t)Ps(t)Q_s^{1/2}(t) ⪯ e^{ϵ} (Q^{1/2}(t)P(t)Q^{1/2}(t)),

where P(t), Q(t), Ps(t), and Qs(t) are the controllability and observability Gramians of (1)-(2) and (18)-(19), respectively; parameters ds and da are upper bounds on the average numbers of active sensors and actuators, and ϵ ≥ 0 is the approximation factor.³

³We should note that, according to this definition, if system (18)-(19) is an (ϵ, ds, da)-joint S/A schedule, then it is also an (˜ϵ, ds, da)-joint S/A schedule for any ˜ϵ ≥ ϵ.

The Hankel singular values can be computed from the reachability and observability Gramians. Note that P(t)Q(t) and Q^{1/2}(t)P(t)Q^{1/2}(t) share the same eigenvalues. Therefore, the i-th largest Hankel singular value of system (20)-(21) is bounded from below and above by e^{−ϵ/2} and e^{ϵ/2} times the i-th largest Hankel singular value of system (1)-(2), respectively.
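The shared-spectrum claim follows from the similarity Q^{1/2}(PQ)Q^{−1/2} = Q^{1/2}PQ^{1/2}, and is easy to verify numerically; random positive definite matrices (placeholders, not tied to any system in the paper) suffice for the check:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# Random positive definite "Gramians" P and Q
Gp = rng.standard_normal((n, n)); P = Gp @ Gp.T + np.eye(n)
Gq = rng.standard_normal((n, n)); Q = Gq @ Gq.T + np.eye(n)

# Symmetric square root of Q
w, V = np.linalg.eigh(Q)
Qh = V @ np.diag(np.sqrt(w)) @ V.T

ev_pq = np.sort(np.linalg.eigvals(P @ Q).real)      # spectrum of PQ (real, positive)
ev_sym = np.sort(np.linalg.eigvalsh(Qh @ P @ Qh))   # spectrum of Q^{1/2} P Q^{1/2}
assert np.allclose(ev_pq, ev_sym)
```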

###### Remark

The Hankel singular values reflect the joint controllability and observability of the balanced states. The Hankel singular values of the fully actuated and sensed system (1)-(2) are well-approximated by the Hankel singular values of the joint S/A schedule.

#### Construction Results

The next theorem constructs a solution for the sparse weighted S/A scheduling problem in polynomial time.