1 Introduction
One of the main challenges in realizing the promise of smart urban mobility is localization, perception, mapping, and control with a myriad of sensors and actuators, e.g., cameras, 3D mapping data, LIDAR, electric motors, valves, etc. A key obstacle to this vision is information overload and the computational complexity of perception, mapping, and control using a large set of sensing and actuating modalities. A possible solution is to find a sparse yet important subset of sensors (actuators) and use those instead of all available measurements (actuators) (Matni and Chandrasekaran, 2016; Argha et al., 2017; Fahroo and Demetriou, 2000). When the dimension of the state is large, finding the optimal yet low-cardinality subset of features is like finding a needle in a haystack: the problem is computationally difficult and provably NP-hard (Olshevsky, 2014; Tzoumas et al., 2016a).
Oftentimes, we are interested in reducing the control complexity, operation cost, and maintenance cost by not using all available actuators and sensors. The choice of sensors and actuators affects the performance, computational cost, and operating costs of the control system. As recently shown in Olshevsky (2014) and Tzoumas et al. (2016a), the problem of finding a sparse set of input variables such that the resulting system is controllable is NP-hard. Even the presumably easier problem of approximating the minimum number of inputs to within a constant multiplicative factor is NP-hard (Olshevsky, 2014). Other results in the literature have studied network controllability by exploring approximation algorithms for the closely related subset-selection problem (Olshevsky, 2014; Summers et al., 2016; Pequito et al., 2015; Nozari et al., 2019; Bopardikar, 2017). More recently, some of the authors showed that even the problem of finding a sparse set of actuators that guarantees reachability of a particular state is hard, and even hard to approximate (Jadbabaie et al., 2019).
Over the past few years, controllability and observability properties of complex dynamical networks have been subjects of intense study in the controls community (Olshevsky, 2014; Pasqualetti et al., 2014; Liu and Barabási, 2016; Chanekar et al., 2017; Müller and Weber, 1972; Tzoumas et al., 2016b; Olshevsky, 2015; Pequito et al., 2017; Nozari et al., 2017; Yazıcıoğlu et al., 2016; Summers et al., 2016; Pequito et al., 2015). This interest stems from the need to steer or observe the state of large-scale, networked systems such as power grids (Chakrabortty and Ilić, 2011), social networks, biological and genetic regulatory networks (Chandra et al., 2011; Marucci et al., 2009; Rajapakse et al., 2012), and traffic networks (Siami and Skaf, 2018). Previous studies have mainly focused on solving the optimal sensor/actuator selection problem using the greedy heuristic, as an approximation of the corresponding sparse-subset selection problem. However, in
Jadbabaie et al. (2018a), we develop a framework to design a sparse actuator schedule for a given large-scale linear system with guaranteed performance bounds, using deterministic polynomial-time and randomized approximately linear-time algorithms, and we gain new fundamental insights into approximating various performance metrics compared to the case when all actuators are chosen. In Tzoumas et al. (2018), the authors show that a separation principle holds for the Linear-Quadratic-Gaussian (LQG) control problem. In this paper, we build upon our previous work (Jadbabaie et al., 2018a) and consider the problem of jointly designing a sparse sensor and actuator schedule for linear dynamical systems, to ensure desired performance and sparsity levels of active sensors and actuators in time and space. The joint sensor and actuator (S/A) scheduling problem involves selecting an appropriate number, activation time, position, and type of sensors and actuators. The idea is essentially to sparsify the choice of sensors and actuators both spatially and temporally. We show that by carefully designing a time-varying joint S/A selection strategy, one can choose, on average, a constant number of sensors and actuators at each time to approximate the Hankel singular values of the system, while sparsifying the sensor and actuator sets. One of our main contributions is to show that the classical time-varying joint S/A scheduling problem (originally studied by Athans (1972)) can be solved via random sampling. We also propose an alternative to submodularity-based methods, using instead recent advances in theoretical computer science.
More importantly, we prove that a separation principle holds for the problem of jointly sparsifying the sensor and actuator sets with performance guarantees: the joint S/A scheduling problem can be divided into two separate problems, the sparse sensor schedule and the sparse actuator schedule.
A preliminary version of some of our results in this article was submitted for possible publication in a conference proceeding (Siami and Jadbabaie, 2019); however, the proofs are presented here for the first time. The manuscript also contains several new results, including numerical examples, figures, tables, and proofs.
2 Preliminaries and Definitions
2.1 Mathematical Notations
Throughout the paper, the discrete time index is denoted by $k$. The sets of real and nonnegative real numbers are represented by $\mathbb{R}$ and $\mathbb{R}_+$, and the sets of integer and nonnegative integer numbers by $\mathbb{Z}$ and $\mathbb{Z}_+$, respectively. The set of natural numbers is denoted by $\mathbb{N}$. The cardinality of a set $\mathcal{E}$ is denoted by $|\mathcal{E}|$. Capital letters, such as $A$ or $B$, stand for real-valued matrices. The $n$-by-$n$ identity matrix is denoted by $I_n$. The notation $A \succeq 0$ means that the symmetric matrix $A$ is positive semidefinite. The eigenvalues of $A$ are denoted by $\lambda_1(A), \ldots, \lambda_n(A)$. $\lambda_{\max}(\cdot)$ and $\sigma_{\max}(\cdot)$ denote the largest eigenvalue and the largest singular value of a matrix, respectively. The transpose of matrix $A$ is denoted by $A^\top$. The rank of matrix $A$ is referred to by $\operatorname{rank}(A)$.
2.2 Linear Systems, Gramian and Hankel Matrices
We start with the canonical linear discrete-time, time-invariant dynamics
(1) $x(k+1) = A\,x(k) + B\,u(k)$,
(2) $y(k) = C\,x(k)$,
where $x(k) \in \mathbb{R}^n$ is the state, $u(k) \in \mathbb{R}^m$ is the control input, and $y(k) \in \mathbb{R}^p$ is the output. The state matrix $A \in \mathbb{R}^{n \times n}$ describes the underlying structure of the system and the interaction strength between the agents, the input matrix $B \in \mathbb{R}^{n \times m}$ represents how the control input enters the system, and the output matrix $C \in \mathbb{R}^{p \times n}$ shows how the output vector $y(k)$ relates to the state vector. The controllability and observability matrices at time $t$ (where $t \geq n$) are given by
(3) $\mathcal{C}_t = \big[\,B,\; AB,\; \ldots,\; A^{t-1}B\,\big] \in \mathbb{R}^{n \times tm}$
and
(4) $\mathcal{O}_t = \big[\,C^\top,\; (CA)^\top,\; \ldots,\; (CA^{t-1})^\top\,\big]^\top \in \mathbb{R}^{tp \times n},$
respectively. It is well-known that, from a numerical standpoint, it is better to characterize controllability and observability in terms of the Gramian matrices at time $t$, defined as
(5) $W_c(t) \,=\, \sum_{k=0}^{t-1} A^k B B^\top (A^\top)^k \,=\, \mathcal{C}_t\, \mathcal{C}_t^\top$
and
(6) $W_o(t) \,=\, \sum_{k=0}^{t-1} (A^\top)^k C^\top C A^k \,=\, \mathcal{O}_t^\top\, \mathcal{O}_t.$
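As a quick numerical sketch (not part of the paper's development), the finite-horizon Gramians in (5) and (6) can be accumulated directly from their defining sums and checked against the factorization through the controllability matrix; the system matrices below are arbitrary illustrative choices:

```python
import numpy as np

def gramians(A, B, C, t):
    """Finite-horizon Gramians: W_c(t) = sum_k A^k B B^T (A^T)^k
    and W_o(t) = sum_k (A^T)^k C^T C A^k, accumulated term by term."""
    n = A.shape[0]
    Wc, Wo = np.zeros((n, n)), np.zeros((n, n))
    Ak = np.eye(n)                     # running power A^k
    for _ in range(t):
        Wc += Ak @ B @ B.T @ Ak.T
        Wo += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    return Wc, Wo

# Arbitrary small stable system (placeholder values).
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
t = 10
Wc, Wo = gramians(A, B, C, t)

# Cross-check against W_c(t) = C_t C_t^T with C_t = [B, AB, ..., A^{t-1}B].
Ct = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(t)])
assert np.allclose(Wc, Ct @ Ct.T)
```

The same cross-check applies on the observability side with $W_o(t) = \mathcal{O}_t^\top \mathcal{O}_t$.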
Assumption: System (1)-(2) is stable, i.e., the spectral radius of $A$ is strictly less than one, and the realization $(A, B, C)$ is minimal (i.e., the system is controllable and observable).
For the given linear system (1)-(2), the Hankel matrix is defined as the doubly infinite matrix
$\mathcal{H} \,=\, \begin{bmatrix} h_1 & h_2 & h_3 & \cdots \\ h_2 & h_3 & h_4 & \cdots \\ h_3 & h_4 & h_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},$
where $h_k = C A^{k-1} B$ are the Markov parameters of the system. The Hankel matrix can be viewed as a mapping between the past inputs and future outputs via the initial state $x(0)$: it factors as $\mathcal{H} = \mathcal{O}\,\mathcal{C}$, where $\mathcal{O}$ and $\mathcal{C}$ are the infinite-horizon observability and controllability matrices. Since $\mathcal{H} = \mathcal{O}\,\mathcal{C}$ and due to Assumption 2.2, it follows that $\operatorname{rank}(\mathcal{H}) = n$. The nonzero singular values of $\mathcal{H}$ can be computed by solving two Lyapunov equations (for the controllability and observability Gramians),
$A W_c A^\top - W_c + B B^\top = 0, \qquad A^\top W_o A - W_o + C^\top C = 0,$
as $\sigma_i(\mathcal{H}) = \lambda_i^{1/2}(W_c W_o)$ for $i = 1, \ldots, n$.
The Hankel matrix has a special structure: the elements (blocks) on lines parallel to the antidiagonal are identical. It is well-known that the singular values of the Hankel matrix of a linear system are fundamental invariants of the system, identifying the most controllable and observable modes (Antoulas, 2005). It is also well-known that the states corresponding to small nonzero Hankel singular values are difficult to control and observe at the same time.^1

^1 The "difficulty of controllability" of the system can be viewed as the energy involved in moving the system from the origin to a uniformly random point on the unit sphere; it is well-known that this quantity can be characterized in terms of the controllability Gramian. Similarly, the "difficulty of observability" of the system can be viewed as "how observable" the initial state is over the observation horizon; this quantity is closely related to the covariance of the estimation error in the standard linear least-squares problem and can be obtained in terms of the observability Gramian.
The Hankel norm gives the gain from past inputs to future outputs and measures the extent to which past inputs affect future outputs of the system. If the input satisfies $u(k) = 0$ for $k \geq 0$ and the output is observed for $k \geq 0$, then the Hankel norm is given by
$\|G\|_H \,=\, \sup_{u \in \ell_2(-\infty,-1]} \frac{\big(\sum_{k=0}^{\infty} \|y(k)\|_2^2\big)^{1/2}}{\big(\sum_{k=-\infty}^{-1} \|u(k)\|_2^2\big)^{1/2}},$
where $G$ is the transfer function of dynamics (1)-(2), and $\ell_2(-\infty,-1]$ is the space of square-summable vector sequences on the interval $(-\infty,-1]$. In this work, we focus on the time-$t$ Hankel matrix
$\mathcal{H}_t \,=\, \mathcal{O}_t\, \mathcal{C}_t.$
In particular, we have
$\sigma_i(\mathcal{H}_t) \,=\, \lambda_i^{1/2}\big(W_c(t)\, W_o(t)\big), \qquad i = 1, \ldots, n.$
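The identity between the singular values of the time-$t$ Hankel matrix and the eigenvalues of the Gramian product is easy to verify numerically; a minimal sketch with arbitrary illustrative system matrices:

```python
import numpy as np

# Hankel singular values of a small stable system, computed two ways:
# (i) from the SVD of the time-t Hankel matrix H_t = O_t C_t, and
# (ii) as square roots of the eigenvalues of W_c(t) W_o(t).
A = np.array([[0.6, 0.2], [0.0, 0.4]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, -1.0]])
n, t = A.shape[0], 30

Ct = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(t)])
Ot = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(t)])
Ht = Ot @ Ct                       # time-t Hankel matrix

Wc, Wo = Ct @ Ct.T, Ot.T @ Ot      # Gramians via their factorizations
sv_hankel = np.linalg.svd(Ht, compute_uv=False)[:n]
sv_gramian = np.sqrt(np.sort(np.linalg.eigvals(Wc @ Wo).real)[::-1])

# Both routes give the same n nonzero Hankel singular values.
assert np.allclose(sv_hankel, sv_gramian)
```

The equivalence follows because the nonzero eigenvalues of $\mathcal{H}_t^\top \mathcal{H}_t = \mathcal{C}_t^\top W_o(t)\, \mathcal{C}_t$ coincide with those of $W_c(t) W_o(t)$.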
Remark
One way to lower the computational complexity of simulations of large-scale dynamical systems is to find a reduced-order model. A common technique for model-order reduction is the optimal Hankel-norm approximation. This method provides the best approximation of the original system in the Hankel seminorm and received significant attention and related development in the 1980s (Glover, 1987). The corresponding state-space realization is the balanced realization, in which the controllability and observability Gramians are equal and diagonal, as proposed by Moore (1981). In standard model reduction, we first obtain the balanced realization, and then the least observable and controllable modes are truncated. For a sparse S/A schedule, however, we sparsify inputs and outputs in space and time (the number of states does not change), and we utilize a different canonical state realization (see Section 5).
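For readers unfamiliar with balancing, the following is a minimal sketch of the standard square-root balancing construction (a textbook algorithm, not the sparsification method of this paper): given any pair of positive definite Gramians, it returns a state transformation that makes both Gramians equal to the diagonal matrix of Hankel singular values.

```python
import numpy as np

def balance(Wc, Wo):
    """Square-root balancing: return T and sigma such that
    T^{-1} Wc T^{-T} = T^T Wo T = diag(sigma)."""
    R = np.linalg.cholesky(Wc)             # Wc = R R^T
    lam, U = np.linalg.eigh(R.T @ Wo @ R)  # R^T Wo R = U diag(lam) U^T
    lam, U = lam[::-1], U[:, ::-1]         # sort Hankel values descending
    T = R @ U @ np.diag(lam ** -0.25)
    return T, np.sqrt(lam)                 # sigma_i = lam_i^{1/2}

# Arbitrary positive definite Gramians for illustration.
rng = np.random.default_rng(0)
M1 = rng.standard_normal((3, 3)); Wc = M1 @ M1.T + np.eye(3)
M2 = rng.standard_normal((3, 3)); Wo = M2 @ M2.T + np.eye(3)

T, sigma = balance(Wc, Wo)
Tinv = np.linalg.inv(T)
# In the balanced coordinates, both Gramians equal diag(sigma).
assert np.allclose(Tinv @ Wc @ Tinv.T, np.diag(sigma))
assert np.allclose(T.T @ Wo @ T, np.diag(sigma))
```

Truncating the states with the smallest entries of `sigma` then gives the standard balanced-truncation reduced model.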
2.3 Hankelbased Performance Metrics
Similar to the systemic notions introduced in Siami and Motee (2018) and Jadbabaie et al. (2018a), we define various performance metrics that capture both controllability and observability properties of the system. These measures are nonnegative real-valued operators defined on the set of all linear dynamical systems governed by (1)-(2), and they quantify various measures of performance. All of the metrics depend on the symmetric combination of Gramians (i.e., $W_o^{1/2} W_c W_o^{1/2}$), which is a positive definite matrix. Therefore, one can define a systemic Hankel-based performance measure as an operator on the set of Gramian matrices of all state-minimal realization systems, which we represent by $\rho : \mathcal{S}_{++}^{n} \to \mathbb{R}_+$.^2 For many popular choices of $\rho$, one can see that they satisfy the following properties:

^2 The positive-semidefinite cone is denoted by $\mathcal{S}_{+}^{n}$.
(i) Homogeneity: $\rho(\kappa W) = \kappa\, \rho(W)$ for all $\kappa > 0$;
(ii) Monotonicity: if $W_1 \succeq W_2$, then $\rho(W_1) \geq \rho(W_2)$;
and we call such measures systemic. For example, the squared Hankel norm of the system at time $t$, which is defined by
$\|G\|_H^2 \,=\, \lambda_{\max}\big(W_o^{1/2}(t)\, W_c(t)\, W_o^{1/2}(t)\big),$
is systemic. We note that similar criteria have been developed in the experiment-design literature (Ravi et al., 2016; Kempthorne, 1952; Allen-Zhu et al., 2017).
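Both systemic properties are easy to check numerically for the $\lambda_{\max}$-based measure; a small sketch with randomly generated positive definite matrices (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def rho(W):
    """Candidate systemic measure: largest eigenvalue of a PSD matrix
    (the squared Hankel norm uses W = Wo^{1/2} Wc Wo^{1/2})."""
    return np.linalg.eigvalsh(W)[-1]

M = rng.standard_normal((n, n))
W1 = M @ M.T + np.eye(n)          # positive definite
P = rng.standard_normal((n, n))
W2 = W1 + P @ P.T                  # W2 dominates W1 in the Loewner order

# (i) Homogeneity: rho(kappa * W) = kappa * rho(W) for kappa > 0.
assert np.isclose(rho(3.0 * W1), 3.0 * rho(W1))
# (ii) Monotonicity: W2 >= W1 implies rho(W2) >= rho(W1),
# since lambda_max(W) = max_{|x|=1} x^T W x.
assert rho(W2) >= rho(W1)
```

The same two checks can be run for other common choices of $\rho$, such as the trace or the trace of the inverse (with the appropriate sign convention for monotonicity).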
3 Matrix Reconstruction and Sparsification
The key idea in Jadbabaie et al. (2018b) and Jadbabaie et al. (2018a) is to approximate the time-$t$ controllability Gramian as a sparse sum of rank-one matrices while controlling the approximation error. To this end, a key lemma from the sparsification literature (Boutsidis et al., 2014) is used in Jadbabaie et al. (2018a) to find sparse actuator or sensor schedules. In the present work, however, we are interested in designing a joint sparse schedule for both the sensor and actuator sets; for this, we need to modify a key lemma, known as the Dual Set Lemma in Boutsidis et al. (2014), to approximate the time-$t$ Hankel singular values.
Our main result in this section shows how we can handle two sparse subsets with nonidentical indices. We then use this result to design a deterministic algorithm for a joint sparse S/A schedule. More specifically, we need to control the singular values of the product of two matrices, which can be written as the symmetrized combination of the two matrices (see Section 5). Each of these matrices is a sparse sum of rank-one matrices, and they reflect the controllability and observability properties of the chosen sparse S/A set.
Theorem 3 and Algorithm 1 formalize the procedure of iteratively adding one vector at a time and forming two Gramian matrices.
Theorem
Let $\mathcal{B} = \{b_1, \ldots, b_{M_1}\} \subset \mathbb{R}^n$ and $\mathcal{C} = \{c_1, \ldots, c_{M_2}\} \subset \mathbb{R}^n$ be such that $\sum_{i=1}^{M_1} b_i b_i^\top = W_c$ and $\sum_{j=1}^{M_2} c_j c_j^\top = W_o$, where $W_c, W_o \succ 0$ ($\operatorname{rank} W_c = \operatorname{rank} W_o = n$). Given integer numbers $k_a$ and $k_s$ with $n < k_a \leq M_1$ and $n < k_s \leq M_2$, Algorithm 1 computes sets of weights $\{s_i\}_{i=1}^{M_1}$ and $\{q_j\}_{j=1}^{M_2}$, at most $k_a$ and $k_s$ of which are nonzero, respectively, such that
$\big(1 - \sqrt{n/k_a}\big)^2\, W_c \;\preceq\; \sum_{i=1}^{M_1} s_i\, b_i b_i^\top \;\preceq\; \big(1 + \sqrt{n/k_a}\big)^2\, W_c$
and
$\big(1 - \sqrt{n/k_s}\big)^2\, W_o \;\preceq\; \sum_{j=1}^{M_2} q_j\, c_j c_j^\top \;\preceq\; \big(1 + \sqrt{n/k_s}\big)^2\, W_o.$
Due to space limitations, we refer the interested readers to (Boutsidis et al., 2014) for more details on Algorithm 1. However, roughly speaking, the algorithm is based on choosing vectors in a greedy fashion that satisfy a set of desired properties at each step, leading to bounds on Hankel singular values. We first define two barriers or potential functions as follows:
(7) $\Phi(u, A) \,\triangleq\, \operatorname{tr}\big[(uI - A)^{-1}\big]$ (upper-barrier potential)
and
(8) $\hat{\Phi}(l, A) \,\triangleq\, \operatorname{tr}\big[(A - lI)^{-1}\big]$ (lower-barrier potential).
These potential functions quantify how far the eigenvalues of the two running sums are from the upper barrier $u$ and the lower barrier $l$: they blow up as any eigenvalue nears its barrier, and they reflect the locations of all the eigenvalues simultaneously. We then define two parameters, $\delta_U$ and $\delta_L$, which govern the increments of the upper and lower barriers at each step. The Sherman-Morrison-Woodbury formula inspires the structure of the above quantities; for more details on the barrier method, see Boutsidis et al. (2014). The potential functions (7) and (8) are chosen to guide the selection of vectors and scalings at each step and to ensure steady progress of the algorithm. Small values of these potentials indicate that the eigenvalues of the running sums do not gather near $u$ and $l$, respectively. At each iteration, we increase the upper barrier by the fixed constant $\delta_U$ and the lower barrier by the fixed constant $\delta_L$. It can be shown that, as long as the potentials remain bounded, there exists at every step a choice of an index and weights so that adding the associated rank-one matrices to the running sums, together with the barrier increments, does not increase either potential and keeps all eigenvalues of the updated matrices between the barriers (see Algorithm 1). Repeating these steps ensures steady growth of all the eigenvalues and yields the desired result.
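A small numerical sketch of barrier potentials of this form (following the generic barrier-method construction in Boutsidis et al. (2014); the exact bookkeeping inside Algorithm 1 may differ in detail) shows the blow-up behavior as an eigenvalue approaches a barrier:

```python
import numpy as np

def upper_potential(u, A):
    """Phi(u, A) = tr[(uI - A)^{-1}]: grows without bound as an
    eigenvalue of A approaches the upper barrier u from below."""
    return np.trace(np.linalg.inv(u * np.eye(A.shape[0]) - A))

def lower_potential(l, A):
    """Phihat(l, A) = tr[(A - lI)^{-1}]: grows without bound as an
    eigenvalue of A approaches the lower barrier l from above."""
    return np.trace(np.linalg.inv(A - l * np.eye(A.shape[0])))

# A matrix with known eigenvalues 1, 2, 3 (illustrative placeholder).
A = np.diag([1.0, 2.0, 3.0])

# Moving the upper barrier down toward the top eigenvalue (3.0)
# inflates the upper potential.
assert upper_potential(10.0, A) < upper_potential(3.1, A)
# Moving the lower barrier up toward the bottom eigenvalue (1.0)
# inflates the lower potential.
assert lower_potential(-10.0, A) < lower_potential(0.9, A)
```

In the algorithm, bounded potentials certify that all eigenvalues of the running sums stay strictly between the two moving barriers.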
This algorithm is adapted from a deterministic algorithm in Boutsidis et al. (2014) for joint sparse S/A selection. We view it as a subroutine acting on the vector sets $\mathcal{B}$ and $\mathcal{C}$.
We now present the proof of Theorem 3.
Proof
To prove this theorem, we first use (Boutsidis et al., 2014, Lemma 1). We define an isotropic set of vectors based on the set $\mathcal{B}$ as follows:
(9) $v_i \,=\, W_c^{-1/2}\, b_i, \qquad i = 1, \ldots, M_1.$
Using (9), we have
(10) $\sum_{i=1}^{M_1} v_i v_i^\top \,=\, W_c^{-1/2} \Big(\sum_{i=1}^{M_1} b_i b_i^\top\Big) W_c^{-1/2} \,=\, I_n.$
Then, according to the Dual Set Lemma in Boutsidis et al. (2014) and Lines 1 to 10 of Algorithm 1, we get
(11) $\big(1 - \sqrt{n/k_a}\big)^2\, I_n \;\preceq\; \sum_{i=1}^{M_1} s_i\, v_i v_i^\top \;\preceq\; \big(1 + \sqrt{n/k_a}\big)^2\, I_n,$
where at most $k_a$ of the weights $s_i$ are nonzero. This can be rewritten as
(12) $\big(1 - \sqrt{n/k_a}\big)^2\, I_n \;\preceq\; V S V^\top \;\preceq\; \big(1 + \sqrt{n/k_a}\big)^2\, I_n,$
where $V = [v_1, \ldots, v_{M_1}]$ and $S = \operatorname{diag}(s_1, \ldots, s_{M_1})$ is the diagonal weight matrix computed in Algorithm 1. Next, based on (10) and (11), we get
(13) $\big(1 - \sqrt{n/k_a}\big)^2\, W_c \;\preceq\; \sum_{i=1}^{M_1} s_i\, b_i b_i^\top \;\preceq\; \big(1 + \sqrt{n/k_a}\big)^2\, W_c,$
where (13) is obtained by multiplying both sides of (11) by the positive definite matrix $W_c^{1/2}$. Similarly, according to the Dual Set Lemma in Boutsidis et al. (2014) and Lines 11 to 21 of Algorithm 1, with the isotropic vectors $u_j = W_o^{-1/2} c_j$, we get
(14) $\big(1 - \sqrt{n/k_s}\big)^2\, I_n \;\preceq\; \sum_{j=1}^{M_2} q_j\, u_j u_j^\top \;\preceq\; \big(1 + \sqrt{n/k_s}\big)^2\, I_n,$
where at most $k_s$ of the weights $q_j$ are nonzero. This can be rewritten as
(15) $\big(1 - \sqrt{n/k_s}\big)^2\, I_n \;\preceq\; U Q U^\top \;\preceq\; \big(1 + \sqrt{n/k_s}\big)^2\, I_n,$
where $U = [u_1, \ldots, u_{M_2}]$ and $Q = \operatorname{diag}(q_1, \ldots, q_{M_2})$ is the diagonal weight matrix computed in Algorithm 1. Using (14) and the fact that $\sum_{j=1}^{M_2} u_j u_j^\top = I_n$, it follows that
(16) $\big(1 - \sqrt{n/k_s}\big)^2\, W_o \;\preceq\; \sum_{j=1}^{M_2} q_j\, c_j c_j^\top \;\preceq\; \big(1 + \sqrt{n/k_s}\big)^2\, W_o.$
Finally, combining (13) and (16), we obtain the desired result.
In the next section, we show how various Hankelbased measures can be approximated by selecting a sparse set of actuators and sensors.
4 Joint Sparse S/A Scheduling Problems
For a given linear system (1)-(2) with a general underlying structure, the joint S/A scheduling problem seeks to construct a schedule of the control inputs and sensor outputs that keeps the number of active actuators and sensors much smaller than in the fully sensed/actuated system, while the Hankel-based performance metrics of the original and the new systems remain similar in an appropriately defined sense. Specifically, given a canonical linear time-invariant system (1)-(2) with $m$ actuators, $p$ sensors, and Gramians $W_c(t)$, $W_o(t)$ at time $t$, our goal is to find a joint sparse S/A schedule such that the resulting system, with Hankel matrix $\tilde{\mathcal{H}}_t$, is well-approximated, i.e.,
(17) $(1 - \epsilon)\, \rho(\mathcal{H}_t) \;\leq\; \rho(\tilde{\mathcal{H}}_t) \;\leq\; (1 + \epsilon)\, \rho(\mathcal{H}_t),$
where $\rho$ is any systemic performance metric that quantifies the performance of the system, for example as the gain from past inputs to future outputs, and $\epsilon$ is the approximation factor. The systemic performance metrics are defined based on the Hankel singular values, and we will show that "close" controllability and observability Gramian matrices result in approximately the same metric values. Our goal here is to answer the following questions: (1) What are the minimum numbers of actuators and sensors that need to be chosen to achieve a good approximation of the system in which the full sets of actuators and sensors are utilized? (2) What is the relation between the numbers of selected actuators and sensors and the performance loss? (3) Does a sparse approximation schedule exist with at most a constant number of active actuators and sensors at each time? (4) What is the time complexity of choosing the subsets of actuators and sensors with guaranteed performance bounds?
In the rest of this paper, we show how some fairly recent advances in theoretical computer science can be utilized to answer these questions. Recently, Marcus, Spielman, and Srivastava introduced a new variant of the probabilistic method that resulted in the solution of the so-called Kadison-Singer (KS) conjecture (Marcus et al., 2015). We use the solution approach to the KS conjecture, together with a combination of tools from Sections 4 and 5, to find a sparse solution to the S/A scheduling problem with algorithms that have favorable time complexity.
5 A Weighted Joint Sparse S/A Schedule
As a starting point, we allow for scaling of the selected input and output signals while keeping the input and output scalings bounded. The input and output scalings provide an extra degree of freedom that could allow for further sparsification of the sensor/actuator set. Given (1)-(2), we define a weighted, joint sensor and actuator schedule by the sets of active actuators and sensors at each time,
$\sigma(k) \subseteq \{1, \ldots, m\}, \qquad \varsigma(k) \subseteq \{1, \ldots, p\}, \qquad k = 0, 1, \ldots, t-1,$
and nonnegative input and output scalings (i.e., $s_i(k) \geq 0$, $q_j(k) \geq 0$). The resulting system with this schedule is
(18) $x(k+1) \,=\, A\,x(k) + \sum_{i \in \sigma(k)} s_i(k)\, b_i\, u_i(k),$
(19) $y(k) \,=\, \sum_{j \in \varsigma(k)} q_j(k)\, e_j\, c_j^\top\, x(k),$
where the $b_i$'s are columns of matrix $B$, the $c_j^\top$'s are rows of matrix $C$, and the $e_j$'s are the standard basis vectors of $\mathbb{R}^p$; the scaling $s_i(k)$ shows the strength of the $i$th control input at time $k$, and similarly $q_j(k)$ shows the strength of the $j$th output at time $k$. Equivalently, the dynamics can be rewritten as
(20) $x(k+1) \,=\, A\,x(k) + B(k)\,u(k),$
(21) $y(k) \,=\, C(k)\,x(k),$
with time-varying input and output matrices
$B(k) \,=\, B\, S(k)$
and
$C(k) \,=\, Q(k)\, C,$
where $S(k) = \operatorname{diag}\big(s_1(k), \ldots, s_m(k)\big)$ and $Q(k) = \operatorname{diag}\big(q_1(k), \ldots, q_p(k)\big)$ are diagonal, and their nonzero diagonal entries indicate the selected actuators and sensors at time $k$, which means
$\sigma(k) \,=\, \{\, i : s_i(k) \neq 0 \,\}$
and
$\varsigma(k) \,=\, \{\, j : q_j(k) \neq 0 \,\}.$
The controllability and observability Gramians at time $t$ for this system can be rewritten as
(22) $\tilde{W}_c(t) \,=\, \sum_{k=0}^{t-1} A^{t-1-k}\, B\, S^2(k)\, B^\top\, (A^\top)^{t-1-k}$
and
(23) $\tilde{W}_o(t) \,=\, \sum_{k=0}^{t-1} (A^\top)^{k}\, C^\top\, Q^2(k)\, C\, A^{k}.$
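A minimal numerical sketch (with randomly generated system matrices and an arbitrary 0/1 schedule, purely for illustration) of scheduled Gramians of this form and the average-sparsity bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, t = 3, 4, 4, 12
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Random 0/1 schedules: S(k), Q(k) are diagonal selection matrices whose
# nonzero diagonal entries mark the actuators/sensors active at time k.
S = [np.diag(rng.integers(0, 2, m).astype(float)) for _ in range(t)]
Q = [np.diag(rng.integers(0, 2, p).astype(float)) for _ in range(t)]

Ak = lambda k: np.linalg.matrix_power(A, k)
Wc_full = sum(Ak(k) @ B @ B.T @ Ak(k).T for k in range(t))
Wc_sched = sum(Ak(t-1-k) @ B @ S[k]**2 @ B.T @ Ak(t-1-k).T for k in range(t))
Wo_sched = sum(Ak(k).T @ C.T @ Q[k]**2 @ C @ Ak(k) for k in range(t))

# With 0/1 weights, each term is dominated by its unscheduled counterpart,
# so the scheduled Gramian satisfies Wc_sched <= Wc_full (Loewner order).
assert np.all(np.linalg.eigvalsh(Wc_full - Wc_sched) >= -1e-9)

# Average number of active actuators, i.e., (1/t) * sum_k |sigma(k)|.
avg_active_actuators = np.mean([np.trace(Sk) for Sk in S])
assert 0 <= avg_active_actuators <= m
```

The scheduling results in the paper go further: with well-chosen nonzero scalings, the scheduled Gramians also dominate a constant fraction of the full ones from below.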
Our goal is to keep the numbers of active sensors and actuators on average less than $k_s$ and $k_a$, i.e.,
(24) $\frac{1}{t} \sum_{k=0}^{t-1} |\varsigma(k)| \;\leq\; k_s$
and
(25) $\frac{1}{t} \sum_{k=0}^{t-1} |\sigma(k)| \;\leq\; k_a,$
such that the Hankel matrix of the fully actuated/sensed system, is “close” to the Hankel matrix of the new sparsely actuated/sensed system. Of course, this approximation will require horizon lengths that are potentially longer than the dimension of the state.
Assumption
Throughout this paper, we assume the horizon length $t$ is fixed.
The definition below formalizes the meaning of approximation.
Definition (sensor schedule)
We call system (18)-(19) a $(k_s, \epsilon)$ sensor schedule for system (1)-(2) if and only if
(26) $(1 - \epsilon)\, W_o(t) \;\preceq\; \tilde{W}_o(t) \;\preceq\; (1 + \epsilon)\, W_o(t),$
where $W_o(t)$ and $\tilde{W}_o(t)$ are the observability Gramian matrices of systems (1)-(2) and (18)-(19), respectively. The parameter $k_s$ is defined by (24) as an upper bound on the average number of active sensors, and $\epsilon$ is the approximation factor.
Definition (actuator schedule)
We call system (18)-(19) a $(k_a, \epsilon)$ actuator schedule for system (1)-(2) if and only if
(27) $(1 - \epsilon)\, W_c(t) \;\preceq\; \tilde{W}_c(t) \;\preceq\; (1 + \epsilon)\, W_c(t),$
where $W_c(t)$ and $\tilde{W}_c(t)$ are the controllability Gramian matrices of (1)-(2) and (18)-(19), respectively; the parameter $k_a$ is defined by (25) as an upper bound on the average number of active actuators, and $\epsilon$ is the approximation factor.
Remark
While it might appear that allowing for the choice of the scalings $s_i(k)$ and $q_j(k)$ could lead to amplification of the output and input signals, we note that the scalings cannot be too large because the approximations (26) and (27) are two-sided. Specifically, by taking the trace of both sides of (26) and (27), we can see that the weighted summations of the $s_i(k)$'s and $q_j(k)$'s are bounded. Moreover, based on the definitions above, the ranks of matrices $W_o(t)$ and $\tilde{W}_o(t)$ are the same, and similarly for $W_c(t)$ and $\tilde{W}_c(t)$. Thus, the resulting approximation remains controllable and observable (recall that we assume that the original system is controllable and observable).
We now define the joint sparse S/A schedule for system (1)(2) based on the Hankel singular values of the system.
Definition (joint S/A schedule)
We call system (18)-(19) a $(k_s, k_a, \epsilon)$ joint S/A schedule for system (1)-(2) if and only if
$(1 - \epsilon)\, W_o(t) \preceq \tilde{W}_o(t) \preceq (1 + \epsilon)\, W_o(t) \quad \text{and} \quad (1 - \epsilon)\, W_c(t) \preceq \tilde{W}_c(t) \preceq (1 + \epsilon)\, W_c(t),$
where $W_c(t)$, $\tilde{W}_c(t)$, $W_o(t)$, and $\tilde{W}_o(t)$ are the controllability and observability Gramians of (1)-(2) and (18)-(19), respectively; parameters $k_s$ and $k_a$ are upper bounds on the average numbers of active sensors and actuators, and $\epsilon$ is the approximation factor.^3

^3 We note that, according to this definition, if system (18)-(19) is a $(k_s, k_a, \epsilon)$ joint S/A schedule, then it is also a $(k_s', k_a', \epsilon')$ joint S/A schedule for any $k_s' \geq k_s$, $k_a' \geq k_a$, and $\epsilon' \geq \epsilon$.
The Hankel singular values can be computed from the controllability and observability Gramians. Note that $\tilde{W}_c(t)\,\tilde{W}_o(t)$ and $\tilde{W}_o^{1/2}(t)\,\tilde{W}_c(t)\,\tilde{W}_o^{1/2}(t)$ share the same eigenvalues. Therefore, the $i$th largest Hankel singular value of system (20)-(21) is bounded from below and above by $(1 - \epsilon)$ and $(1 + \epsilon)$ times the $i$th largest Hankel singular value of system (1)-(2), respectively.
Construction Results
The next theorem constructs a solution for the sparse weighted S/A scheduling problem in polynomial time.