Structured decentralized control of positive systems with applications to combination drug therapy and leader selection in directed networks

We study a class of structured optimal control problems in which the main diagonal of the dynamic matrix is a linear function of the design variable. While such problems are in general challenging and nonconvex, for positive systems we prove convexity of the H_2 and H_∞ optimal control formulations which allow for arbitrary convex constraints and regularization of the control input. Moreover, we establish differentiability of the H_∞ norm when the graph associated with the dynamical generator is weakly connected and develop a customized algorithm for computing the optimal solution even in the absence of differentiability. We apply our results to the problems of leader selection in directed consensus networks and combination drug therapy for HIV treatment. In the context of leader selection, we address the combinatorial challenge by deriving upper and lower bounds on optimal performance. For combination drug therapy, we develop a customized subgradient method for efficient treatment of diseases whose mutation patterns are not connected.


I Introduction

Modern applications require structured controllers that cannot be designed using traditional approaches. Except in special cases, e.g., funnel causal and quadratically invariant systems [1, 2], and the system level synthesis approach [3] in which spatial and temporal sparsity constraints are imposed on the closed-loop response, posing optimal control problems in coordinates that preserve controller structure compromises convexity of the performance metric. In this paper, we study structured decentralized control of positive systems. While structured decentralized control is challenging in general, we show that, for positive systems, the convexity of the H_2 and H_∞ optimal control formulations is not lost by imposing structural constraints. We also derive a graph-theoretic condition that guarantees continuous differentiability of the H_∞ performance metric and develop techniques to address combination drug therapy design and leader selection in directed consensus networks.

Positive systems arise in the modeling of systems with inherently nonnegative variables (e.g., probabilities, concentrations, and densities). Such systems have nonnegative states and outputs for nonnegative initial conditions and inputs [4]. In this work, we examine models of HIV mutation [5, 6, 7, 8, 9, 10] and consensus networks [11, 12, 13, 14, 15, 16], where positivity comes from the nonnegativity of populations and from the structure of the underlying dynamics, respectively. Decentralized control, in which only the diagonal of the dynamical generator may be modified, is a suitable paradigm for modeling the effect of drugs on HIV [5] and the influence of leaders on the dynamics of leader-follower consensus networks [12]. In these applications, the structure of decentralized control is important for capturing the efficacy of the drugs on different HIV mutants and the influence of noise on the quality of consensus, respectively. This model can also be used to study chemical reaction networks and transportation networks.

The mathematical properties of positive systems can be exploited for efficient or structured controller design. In [17], the authors show that the KYP lemma greatly simplifies for positive systems, thereby enabling decentralized synthesis via semidefinite programming (SDP). In [18], it is shown that a static output-feedback problem can be solved via linear programming (LP) for a class of positive systems. In [19, 20], the authors develop necessary and sufficient conditions for robust stability of positive systems with respect to induced norm-bounded perturbations. In [21, 22], it is shown that the structured singular value is equal to its convex upper bound for positive systems, so assessing robust stability with respect to induced norm-bounded perturbations becomes tractable.

It has recently been shown that the design of unconstrained decentralized controllers for positive systems can be cast as a convex problem [18, 23]. However, since structural constraints cannot be handled by the LP or LMI approaches of [18, 23], references [7, 8, 9, 10] design H_2 and H_∞ controllers that satisfy such constraints but achieve suboptimal performance. Furthermore, convexity of a weighted norm for structured decentralized control of positive systems was established in [24, 25], and optimized switching strategies were considered in [26, 27, 28].

The paper is organized as follows. In Section II, we formulate the regularized optimal control problem for a class of positive systems. In Section III, we establish convexity of both the H_2 and H_∞ control problems and derive a graph-theoretic condition that guarantees continuous differentiability of the objective function. In Section IV, we study leader selection in directed networks and, in Section V, we design combination drug therapy for HIV treatment. Finally, in Section VI, we conclude the paper and summarize future research directions.

II Problem formulation and background

We first provide background on graph theory and positive systems, describe the system under study, and formulate the structured decentralized H_2 and H_∞ optimal control problems.

II-A Background material

Notation

The set of real numbers is denoted by R, and the sets of nonnegative and positive reals are denoted by R_+ and R_{++}, respectively. A matrix is Metzler if its off-diagonal elements are nonnegative. We write A > 0 (A ≥ 0) if the matrix A has positive (nonnegative) entries, and P ≻ 0 (P ⪰ 0) if P is symmetric and positive (semi)definite. The sparsity pattern of a vector u is the set of indices at which u is nonzero, and ‖·‖ denotes the Euclidean norm. The adjoint D† of a linear operator D satisfies ⟨D(u), M⟩ = ⟨u, D†(M)⟩ for all u and M.

Definition 1 (Graph associated with a matrix)

G(A) is the graph associated with a matrix A, with the set of nodes (vertices) {1, …, n} and the set of edges determined by the nonzero entries of A, where (i, j) denotes an edge pointing from node i to node j.

Definition 2 (Strongly connected graph)

A graph G is strongly connected if there is a directed path between any two distinct nodes in G.

Definition 3 (Weakly connected graph)

A graph G is weakly connected if replacing its directed edges with undirected edges results in a connected graph.

Definition 4 (Balanced graph)

A graph is balanced if, for every node i, the sum of the weights on the edges pointing to node i is equal to the sum of the weights on the edges pointing from node i.

Definition 5

A dynamical system is positive if, for any nonnegative initial condition and any nonnegative input, the state and output remain nonnegative for all time. A linear time-invariant system,

	ẋ = A x + B d,    z = C x,

is positive if and only if A is Metzler and the matrices B and C are nonnegative.
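As a quick numerical illustration of this characterization, the conditions can be verified directly from the problem data; the following minimal Python sketch uses an arbitrary example system rather than any of the models considered later.

import numpy as np

def is_metzler(A, tol=1e-12):
    # all off-diagonal entries of A must be nonnegative
    off_diag = A - np.diag(np.diag(A))
    return bool(np.all(off_diag >= -tol))

def is_positive_system(A, B, C, tol=1e-12):
    # an LTI system (A, B, C) is positive iff A is Metzler and B, C are nonnegative
    return is_metzler(A, tol) and bool(np.all(B >= -tol)) and bool(np.all(C >= -tol))

# arbitrary illustrative data: Metzler A with negative diagonal, nonnegative B and C
A = np.array([[-2.0, 1.0], [0.5, -3.0]])
B = np.eye(2)
C = np.eye(2)
print(is_positive_system(A, B, C))   # True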

We now state three lemmas that are useful for the analysis of positive LTI systems.

Lemma 1 (from [29])

Let A be a Metzler matrix and let M be a positive definite matrix with nonnegative entries. Then

  • e^{At} is elementwise nonnegative for all t ≥ 0;

  • For Hurwitz A, the solution X to the algebraic Lyapunov equation,

    A^T X + X A + M = 0,

    is elementwise nonnegative.

Lemma 2

The left and right principal singular vectors, w and v, of a nonnegative matrix M are nonnegative. If M has strictly positive entries, then w and v are positive and unique.

Proof:

The result follows from the application of the Perron theorem [30, Theorem 8.2.11] to the matrices M M^T and M^T M.

Lemma 3 (from [24])

Let the vectors x and y be nonnegative and let A be a Metzler matrix. Then, for any t ≥ 0, x^T e^{(A + D) t} y is a convex function of the diagonal matrix D.

II-B Decentralized optimal control

We consider a class of control problems

	ẋ = (A + D(u)) x + B d,    z = C x,        (1)

where x is the state vector, z is the performance output, d is the disturbance input, and u is the control input that enters through the diagonal matrix D(u). Since the control enters into the dynamics in a multiplicative fashion, optimal design of u for system (1) is, in general, a challenging nonconvex problem. In what follows, we introduce an assumption which implies that system (1) is positive for any u. As we demonstrate in Section III, under this assumption both the H_2 and H_∞ structured decentralized optimal control problems are convex.

This class of problems can be used to model a variety of control challenges that arise in, e.g., chemical reaction networks and transportation networks. In this paper, we consider leader selection in directed consensus networks as well as combination drug therapy design for HIV treatment.

Assumption 1

The matrix A in (1) is Metzler, the matrices B and C are nonnegative, and the diagonal matrix D(u) is a linear function of the control input u.

Our objective is to design a stabilizing diagonal matrix D(u) that minimizes the amplification from d to z. To quantify the average effect of the impulsive disturbance input d, we consider the energy (squared L_2 norm) of the resulting impulse response, i.e.,

	J_2(u) := ∫_0^∞ trace ( C e^{(A + D(u)) t} B B^T e^{(A + D(u))^T t} C^T ) dt.        (2a)

This performance metric is equivalent to the square of the H_2 norm of system (1), which also has a well-known stochastic interpretation [31]. To quantify the worst-case input-output amplification of (1), we consider the H_∞ norm, defined as

	J_∞(u) := sup_ω  σ_max ( C ( jωI − (A + D(u)) )^{−1} B )        (2b)

when A + D(u) is Hurwitz and as J_∞(u) := ∞ otherwise, where σ_max(·) denotes the largest singular value of a given matrix. To limit the size of the control input and promote desired structural properties, we consider the regularized optimal control problem,

	minimize_u   J(u) + g(u),        (3)

where J denotes either J_2 or J_∞.

The regularization function g in (3) can be any convex function, e.g., a quadratic penalty with a positive definite weight to limit the magnitude of u, an ℓ_1 penalty to promote sparsity of u, or the indicator function associated with a convex set to ensure that u lies in that set. We refer the reader to [32, 33] for an overview of recent uses of regularization in control-theoretic problems.
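For concreteness, the regularizers mentioned above are conveniently handled through their proximal operators, which also reappear in the customized algorithm of Section III-B3. The following minimal Python sketch lists them; the box constraint used to illustrate the indicator function is an arbitrary choice.

import numpy as np

def prox_quadratic(v, gamma, weight=1.0):
    # prox of g(u) = (weight/2)*||u||^2 with step gamma
    return v / (1.0 + gamma * weight)

def prox_l1(v, gamma, weight=1.0):
    # prox of g(u) = weight*||u||_1 (soft thresholding); promotes sparsity of u
    return np.sign(v) * np.maximum(np.abs(v) - gamma * weight, 0.0)

def prox_box_indicator(v, lo=0.0, hi=1.0):
    # prox of the indicator of the box {u : lo <= u <= hi} is projection onto the box
    return np.clip(v, lo, hi)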

We now review some recent results. Under Assumption 1, the matrix A + D(u) is Metzler and its largest eigenvalue is real and a convex function of u [34]. Recently, it has been shown that a weighted norm of the response of system (1) from a nonnegative initial condition is a convex function of u at every time instant [24, 25]. Furthermore, the approach in [17] can be used to cast the problem of unstructured decentralized control of positive systems as a semidefinite program (SDP) and [23] can be used to cast it as a linear program (LP). However, both the SDP and the LP formulations require a change of variables that does not preserve the structure of D(u). Consequently, it is often difficult to design controllers that are feasible for a given noninvertible operator D or to impose structural constraints or penalties on u.

III Convexity of optimal control problems

We next establish the convexity of the H_2 and H_∞ norms for systems that satisfy Assumption 1, derive a graph-theoretic condition that guarantees continuous differentiability of J_∞, and develop a customized algorithm for solving optimization problem (3) even in the absence of differentiability.

III-A Convexity of J_2 and J_∞

We first establish convexity of the H_2 optimal control problem and provide an expression for the gradient of J_2.

Proposition 4

Let Assumption 1 hold and let A + D(u) be a Hurwitz matrix. Then, J_2(u) is a convex function of u and its gradient is given by

	∇J_2(u) = 2 D†( Q P ),        (4)

where D† is the adjoint of the operator D and P and Q are the controllability and observability gramians of the closed-loop system (1),

	(A + D(u)) P + P (A + D(u))^T + B B^T = 0,        (5a)

	(A + D(u))^T Q + Q (A + D(u)) + C^T C = 0.        (5b)
Proof:

We first establish convexity of J_2 and then derive its gradient. The square of the H_2 norm is given by

	J_2(u) = trace ( C P C^T ),

where the controllability gramian P of the closed-loop system is determined by the solution to Lyapunov equation (5a). For Hurwitz A + D(u), P can be expressed as

	P = ∫_0^∞ e^{(A + D(u)) t} B B^T e^{(A + D(u))^T t} dt.

Substituting into J_2 and rearranging terms yields

	J_2(u) = Σ_{i,j} ∫_0^∞ ( c_i e^{(A + D(u)) t} b_j )^2 dt,

where c_i is the ith row of C and b_j is the jth column of B. From Lemma 3, it follows that c_i e^{(A + D(u)) t} b_j is a convex function of u for nonnegative vectors c_i and b_j. Since the range of this function is the set of nonnegative reals and the square function is nondecreasing over this set, the composition rules for convex functions [35] imply that each integrand is convex in u. Convexity of J_2 follows from the linearity of the sum and integral operators.

To derive ∇J_2, we form the associated Lagrangian,

	L(P, u; Q) = trace ( C P C^T ) + trace ( Q ( (A + D(u)) P + P (A + D(u))^T + B B^T ) ),

where Q is the Lagrange multiplier associated with equality constraint (5a). Taking variations of L with respect to Q and P yields Lyapunov equations (5a) and (5b) for the controllability and observability gramians, respectively. Using the definition of the adjoint D† of D, the terms of the Lagrangian that depend on u can be rewritten as ⟨D†(P Q + Q P), u⟩, which equals ⟨2 D†(Q P), u⟩ because the main diagonals of P Q and Q P coincide for symmetric P and Q. Taking the variation of L with respect to u yields (4).
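As a numerical illustration of Proposition 4, the cost and gradient can be evaluated with two Lyapunov solves. The sketch below assumes the simplest case D(u) = diag(u), so that the adjoint D† extracts the main diagonal of its matrix argument; the function name and the plain eigenvalue-based stability check are our own choices.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_cost_and_gradient(A, B, C, u):
    # J_2(u) and its gradient for the closed loop A + diag(u) (assumes D(u) = diag(u))
    Acl = A + np.diag(u)
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf, None                       # A + D(u) must be Hurwitz
    # controllability gramian: Acl P + P Acl' + B B' = 0
    P = solve_continuous_lyapunov(Acl, -B @ B.T)
    # observability gramian:  Acl' Q + Q Acl + C' C = 0
    Q = solve_continuous_lyapunov(Acl.T, -C.T @ C)
    J2 = float(np.trace(C @ P @ C.T))
    grad = 2.0 * np.diag(Q @ P)                   # D^dagger(2 Q P) when D(u) = diag(u)
    return J2, grad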

Remark 1

The quadratic cost associated with the state of system (1) is also convex over a finite or infinite time horizon for a piecewise constant control input. This follows from [24, Lemma 4] and suggests that an approach inspired by the Model Predictive Control (MPC) framework can be used to compute a time-varying optimal control input for a finite-horizon problem.

Remark 2

The expression for ∇J_2 in Proposition 4 remains valid for any linear system and any linear operator D that maps the control input u to a diagonal matrix. However, convexity of J_2 holds under Assumption 1 and is not guaranteed in general.

We now establish the convexity of the H_∞ control problem and provide an expression for the subdifferential set of J_∞.

Proposition 5

Let Assumption 1 hold and let A + D(u) be a Hurwitz matrix. Then, J_∞(u) is a convex function of u and its subdifferential set is given by

	∂J_∞(u) = { D†( ( −(A + D(u)) )^{−1} B V diag(λ) W^T C ( −(A + D(u)) )^{−1} ) | λ ∈ Λ },        (6)

where D† is the adjoint of the operator D, the columns of W and V are the left and right singular vectors associated with the largest singular value of C ( −(A + D(u)) )^{−1} B, and Λ is the simplex, Λ := { λ | Σ_i λ_i = 1, λ ≥ 0 }.

Proof:

We first establish convexity of J_∞ and then derive the expression for its subdifferential set. For positive systems, the H_∞ norm achieves its largest value at zero frequency [17] and from (2b) we thus have J_∞(u) = σ_max( C ( −(A + D(u)) )^{−1} B ). To show convexity of J_∞, we show that each entry of C ( −(A + D(u)) )^{−1} B is a convex and nonnegative function of u, that σ_max is a convex function that is nondecreasing over nonnegative matrix arguments, and we leverage the composition rules for convex functions [35].

Since A + D(u) is Hurwitz, the inverse of −(A + D(u)) can be expressed as

	( −(A + D(u)) )^{−1} = ∫_0^∞ e^{(A + D(u)) t} dt.        (7)

Convexity of each entry of e^{(A + D(u)) t} in u (Lemma 3) and linearity of integration imply that each element of the matrix

	C ( −(A + D(u)) )^{−1} B

is convex in u and, by part (a) of Lemma 1, nonnegative.

The largest singular value is a convex function of the entries of its matrix argument [35],

	σ_max(M) = sup { w^T M v  |  ‖w‖_2 = ‖v‖_2 = 1 },        (8)

and Lemma 2 implies that the principal singular vectors w and v that achieve the supremum in (8) are nonnegative when M is nonnegative. Thus, for nonnegative matrices M_1 ≤ M_2 (elementwise),

	σ_max(M_1) = w_1^T M_1 v_1 ≤ w_1^T M_2 v_1 ≤ σ_max(M_2),

which implies that σ_max is nondecreasing over the set of nonnegative matrices. Since each element of C ( −(A + D(u)) )^{−1} B is convex in u, these properties of σ_max and the composition rules for convex functions [35] imply convexity of J_∞.

To derive ∂J_∞, we note that the subdifferential set of the supremum over a family of differentiable functions,

	f(u) = sup_θ f_θ(u),

is the convex hull of the gradients of the functions that achieve the supremum [36, Theorem 1.13],

	∂f(u) = conv { ∇f_θ(u) | f_θ(u) = f(u) }.

Thus, each subgradient of J_∞ with respect to u is generated by a pair of left and right principal singular vectors (w, v) that achieves the supremum in (8). Finally, differentiating the matrix C ( −(A + D(u)) )^{−1} B with respect to u and applying the chain rule yields (6).
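A corresponding sketch for the H_∞ metric of a positive system evaluates the transfer function at zero frequency and forms one element of the subdifferential set (6) from the principal singular vectors. It again assumes D(u) = diag(u) and is meant as an illustration rather than a definitive implementation.

import numpy as np

def hinf_cost_and_subgradient(A, B, C, u):
    # for positive systems, J_inf(u) = sigma_max(C (-(A + diag(u)))^{-1} B)
    Acl = A + np.diag(u)
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf, None                       # A + D(u) must be Hurwitz
    Ainv = np.linalg.inv(-Acl)                    # elementwise nonnegative (Metzler, Hurwitz)
    M = C @ Ainv @ B                              # transfer function at zero frequency
    W, S, Vt = np.linalg.svd(M)
    w, v = W[:, 0], Vt[0, :]                      # principal left/right singular vectors
    # chain rule: d sigma_max = trace(Ainv B v w' C Ainv D(du)); keep the main diagonal
    G = Ainv @ B @ np.outer(v, w) @ C @ Ainv
    return float(S[0]), np.diag(G)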

Remark 3

For a positive system, all induced norms are convex functions of the entries of the transfer function matrix evaluated at zero frequency, and Lemma 3 thus implies that all induced norms of system (1) are convex functions of u. We show convexity of the H_∞ norm as it is of particular interest in our study and the proof facilitates the derivation of the gradient.

Remark 4

The adjoint of the linear operator D, introduced in Assumption 1, with respect to the standard inner product depends only on the main diagonal of its matrix argument; in particular, for D(u) = diag(u) it returns the vector of diagonal entries. For positive systems, Lemma 1 implies that the gramians P and Q are nonnegative matrices. Thus, the main diagonal of the matrix Q P is nonnegative and it follows from (4) that J_2 is a monotone function of the diagonal matrix D(u). Similarly, the matrices and singular vectors that form (6) are nonnegative and thus J_∞ is also a monotone function of D(u).

III-B Differentiability of the H_∞ norm

In general, the H_∞ norm is a nondifferentiable function of the control input u. Even though, under Assumption 1, the decentralized optimal control problem (3) for positive systems is convex, it is still difficult to solve because of the lack of differentiability of J_∞. Nondifferentiable objective functions often necessitate the use of subgradient methods, which can converge slowly to the optimal solution.

In what follows, we prove that J_∞ is a continuously differentiable function of u when the graph associated with A is weakly connected. Then, by noting that J_∞ is nondifferentiable only when this graph contains disconnected components, we develop a method for solving (3) that outperforms the standard subgradient algorithm.

III-B1 Differentiability under weak connectivity

In this section, we assume that the matrices B and C are square and that their main diagonals are positive. To show the result, we first require two technical lemmas.

Lemma 6

Let M be a nonnegative matrix whose main diagonal is strictly positive and whose associated graph is weakly connected. Then, the graphs associated with M M^T and M^T M have self-loops at every node and are strongly connected.

Proof:

Positivity of the main diagonal of M implies that if M_{ij} is nonzero, then (M M^T)_{ij} and (M^T M)_{ij} are nonzero; by symmetry, (M M^T)_{ji} and (M^T M)_{ji} are also nonzero. Thus, the graphs associated with M M^T and M^T M contain all the edges of the graph associated with M as well as their reversed counterparts. Since the graph associated with M is weakly connected, the graphs associated with M M^T and M^T M are strongly connected. The presence of self-loops follows directly from the positivity of the main diagonal of M.

Lemma 7

Let M be a nonnegative matrix whose main diagonal is strictly positive and whose associated graph is weakly connected. Then, the principal singular value and the principal singular vectors of M are unique.

Proof:

Note that the graph associated with (M M^T)^k has an edge from node i to node j if the graph associated with M M^T contains a directed path of length k from i to j [37, Lemma 1.32]. Since the graphs associated with M M^T and M^T M are strongly connected with self-loops, Lemma 6 implies the existence of a k_0 such that (M M^T)^k > 0 and (M^T M)^k > 0 for all k ≥ k_0, and the Perron theorem [30, Theorem 8.2.11] implies that (M M^T)^k and (M^T M)^k have unique maximum eigenvalues for all such k; hence M M^T and M^T M themselves have unique maximum eigenvalues.

The eigenvalues of M M^T and M^T M are related to the singular values of M by

	λ_i ( M M^T ) = λ_i ( M^T M ) = σ_i^2 ( M ),

and the eigenvectors of M M^T and M^T M are determined by the left and the right singular vectors of M, respectively. Since the principal eigenvalues and eigenvectors of these matrices are unique, the principal singular value and the associated singular vectors of M are also unique.

Theorem 8

Let Assumption 1 hold, let A + D(u) be a Hurwitz matrix, and let the matrices B and C have strictly positive main diagonals. If the graph associated with A is weakly connected, then J_∞ is continuously differentiable.

Proof:

Lemma 1 implies that e^{(A + D(u)) t} is elementwise nonnegative for every t ≥ 0. From [37, Lemma 1.32], the graph associated with e^{(A + D(u)) t} has an edge from node i to node j if there is a directed path from i to j in the graph associated with A + D(u). Weak connectivity of the graph associated with A implies weak connectivity of the graphs associated with A + D(u), e^{(A + D(u)) t}, and, by (7), ( −(A + D(u)) )^{−1}.

Since A + D(u) is Hurwitz and Metzler, its main diagonal must be strictly negative; otherwise, the largest eigenvalue of A + D(u), which is bounded below by its largest diagonal entry, would be nonnegative, contradicting stability and thus the Hurwitz assumption. Equation (7) and Lemma 1 imply ( −(A + D(u)) )^{−1} ≥ 0 and, since A + D(u) is Metzler, the identity ( −(A + D(u)) ) ( −(A + D(u)) )^{−1} = I can only hold if the main diagonal of ( −(A + D(u)) )^{−1} is strictly positive.

Moreover, since the main diagonals of B and C are strictly positive, the graph associated with C ( −(A + D(u)) )^{−1} B is weakly connected and its main diagonal is also strictly positive. Thus, Lemma 7 implies that the principal singular value and singular vectors of C ( −(A + D(u)) )^{−1} B are unique, that the subdifferential set (6) contains a single element for each stabilizing u, and that J_∞ is continuously differentiable.
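The weak-connectivity condition of Theorem 8 is straightforward to check numerically from the sparsity pattern of A; a minimal sketch using scipy's connected-components routine (the tolerance is an arbitrary choice):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def graph_is_weakly_connected(A, tol=1e-12):
    # edges correspond to nonzero off-diagonal entries of A
    Adj = (np.abs(A - np.diag(np.diag(A))) > tol).astype(int)
    n_comp, _ = connected_components(csr_matrix(Adj), directed=True, connection='weak')
    return n_comp == 1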

III-B2 Nondifferentiability for disconnected graphs

Theorem 8 implies that, under a mild assumption on B and C, J_∞ is nondifferentiable only when the graph associated with A has disjoint components. Proximal methods and their accelerated variants [38] generalize gradient descent to nonsmooth problems when the proximal operator of the nondifferentiable term in the objective function is readily available. However, since there is no explicit expression for the proximal operator of J_∞, in general we have to use subgradient methods to solve (3).

To a large extent, subgradient methods are inefficient because they do not guarantee descent of the objective function. However, under the following mild assumption, the subgradient of J_∞ can be conveniently expressed and a descent direction can be obtained by solving a linear program.

Assumption 2

Without loss of generality, let the state coordinates be permuted such that A is block diagonal and let the graph associated with each diagonal block A_i be weakly connected. Moreover, the matrices B and C are block diagonal and partitioned conformably with the matrix A.

Theorem 9

Let Assumptions 1 and 2 hold and let A + D(u) be a Hurwitz matrix. Then,

	J_∞(u) = max_i  J_∞^i(u),        (9a)

where J_∞^i(u) denotes the H_∞ norm of the ith decoupled subsystem determined by the blocks A_i, B_i, C_i, and the corresponding block of D(u). Moreover, every element of the subdifferential set of J_∞ can be expressed as a convex combination of a finite number of vectors corresponding to the gradients of the functions that achieve the maximum in (9a), i.e.,

	∂J_∞(u) = { Φ λ | λ ∈ Λ },        (9b)

where the columns of Φ are given by the gradients ∇J_∞^i(u) of the functions that achieve the maximum in (9a) and Λ is the simplex.

Proof:

Since A is a block diagonal matrix, so is A + D(u), and Assumption 2 implies that system (1) decouples into independent subsystems with a block diagonal transfer function. Thus,

	J_∞(u) = σ_max ( C ( −(A + D(u)) )^{−1} B ) = max_i  σ_max ( C_i ( −(A_i + D_i(u)) )^{−1} B_i ) = max_i  J_∞^i(u),

which proves (9a). Theorem 8 implies that each J_∞^i is continuously differentiable, which establishes (9b).

When each J_∞^i is differentiable, we leverage the above convenient expression for ∂J_∞(u) to select an element of the subdifferential set which, with an abuse of terminology, we call the optimal subgradient. The negative of the optimal subgradient is guaranteed to be a descent direction for (3), and the optimal subgradient is defined as the member of ∂J_∞(u) that solves

	minimize_λ   max_i  ⟨ ∇J_∞^i(u), −Φ λ ⟩        (10a)
	subject to   λ ∈ Λ        (10b)
	             ⟨ ∇J_∞^i(u), Φ λ ⟩ ≥ 0  for every function that achieves the maximum in (9a),        (10c)

where Φ and Λ are defined as in Theorem 9. By (9a), J_∞ is the maximum of differentiable functions and problem (10) forms a search direction using the gradients of the functions that achieve that maximum. While constraint (10b) ensures that Φλ ∈ ∂J_∞(u), constraint (10c) ensures that −Φλ is a descent direction for each of these functions and thereby guarantees that −Φλ is a descent direction for J_∞. Finally, objective function (10a) is the maximum of the directional derivatives of these functions in the direction −Φλ, i.e., the directional derivative of the objective function in (3) in the search direction.

Problem (10) can be solved efficiently because it can be cast as a linear program. Moreover, the optimality condition 0 ∈ ∂J_∞(u) for the unregularized problem can be checked by solving a linear program that verifies the existence of a λ ∈ Λ such that Φλ = 0.
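As an illustration, the search-direction computation can be set up as a small linear program along the lines of (10). In the sketch below, the columns of Phi are assumed to hold the gradients of the functions that achieve the maximum in (9a); the function name and the use of scipy.optimize.linprog are our choices.

import numpy as np
from scipy.optimize import linprog

def optimal_subgradient(Phi):
    # variables x = [lambda (k entries), t]; minimize the worst directional derivative t
    n, k = Phi.shape
    G = Phi.T @ Phi                               # G[i, j] = <grad J_i, grad J_j>
    c = np.concatenate([np.zeros(k), [1.0]])
    # <grad J_i, -Phi lam> <= t  and  <grad J_i, Phi lam> >= 0 for every active i
    A_ub = np.vstack([np.hstack([-G, -np.ones((k, 1))]),
                      np.hstack([-G,  np.zeros((k, 1))])])
    b_ub = np.zeros(2 * k)
    A_eq = np.concatenate([np.ones(k), [0.0]]).reshape(1, -1)   # sum(lambda) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * k + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return Phi @ res.x[:k] if res.success else None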

III-B3 Customized algorithm

Ensuring a descent direction enables principled step-size selection rules and makes problem (3) with a nondifferentiable regularizer g tractable via augmented-Lagrangian-based approaches. Reformulating (3) as

	minimize_{u,v}  J(u) + g(v)   subject to   u − v = 0        (11)

leads to the associated augmented Lagrangian

	L_ρ(u, v; y) = J(u) + g(v) + y^T (u − v) + (ρ/2) ‖u − v‖_2^2,

where v is an auxiliary optimization variable, y is the Lagrange multiplier, and ρ is a positive parameter. Formulation (11) is convenient for the alternating direction method of multipliers (ADMM) [39], which minimizes L_ρ separately over u and v and updates y until convergence. ADMM is highly sensitive to the choice of ρ and it may require many iterations to converge. In contrast, the more mature and robust method of multipliers (MM) [40] has effective rules for adaptively updating ρ, which leads to faster convergence. It is difficult to directly apply MM to (11) because it requires joint minimization of L_ρ over u and v and both J_∞ and g are nondifferentiable. However, when the proximal operator of g is readily available, e.g., when g is the ℓ_1 norm or the indicator function of a convex set with a simple projection [41], explicit minimization over v is achieved by v*(u, y) = prox_{g/ρ}(u + y/ρ). Substitution of v* into L_ρ yields the proximal augmented Lagrangian [42],

	L_ρ(u; y) = J(u) + M_{g/ρ}(u + y/ρ) − (1/(2ρ)) ‖y‖_2^2,

where M_{g/ρ} is the Moreau envelope of g, which is continuously differentiable even when g is not [41]; see [42] for details. Since J_∞ is the only nondifferentiable component of the proximal augmented Lagrangian, the optimal subgradient (10) can be used to minimize it over u. This equivalently minimizes L_ρ over u and v and leads to a tractable MM algorithm,

	u^{k+1} := argmin_u  L_ρ(u; y^k),
	y^{k+1} := y^k + ρ ( u^{k+1} − v*(u^{k+1}, y^k) ).

This algorithm minimizes the proximal augmented Lagrangian over u, updates y via gradient ascent, and represents an appealing alternative to ADMM for problems of the form (3). In particular, an adaptive selection of the parameter ρ leads to improved practical performance relative to ADMM [42].
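A minimal sketch of the resulting MM iteration is given below for the case g(u) = γ‖u‖_1, whose proximal operator is soft thresholding. The inner minimization uses plain (sub)gradient steps with fixed parameters, which are placeholders for the adaptive rules mentioned above; grad_J is any routine returning a (sub)gradient of J, e.g., the optimal subgradient of (10).

import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def mm_proximal_augmented_lagrangian(grad_J, u0, gamma=1.0, rho=10.0,
                                     step=1e-2, outer=50, inner=200):
    # method of multipliers on the proximal augmented Lagrangian for J(u) + gamma*||u||_1
    u, y = u0.astype(float).copy(), np.zeros_like(u0, dtype=float)
    for _ in range(outer):
        for _ in range(inner):                            # minimize L_rho(.; y) over u
            v = soft_threshold(u + y / rho, gamma / rho)  # prox of g/rho
            u = u - step * (grad_J(u) + y + rho * (u - v))
        v = soft_threshold(u + y / rho, gamma / rho)
        y = y + rho * (u - v)                             # gradient ascent in y
    return u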

IV Leader selection in directed networks

We now consider the special case of system (1) in which the dynamic matrix is determined by a graph Laplacian, and study the leader selection problem for directed consensus networks. The question of how to optimally assign a predetermined number of nodes to act as leaders in a network of dynamical systems with a given topology has recently emerged as a useful proxy for identifying important nodes in a network [11, 13, 14, 12, 15, 16]. Even though significant theoretical and algorithmic advances have been made for undirected networks, the leader selection problem in directed networks remains open.

IV-A Problem formulation

We describe consensus dynamics and state the problem.

IV-A1 Consensus dynamics

A weighted directed network with n nodes obeys consensus dynamics in which each node updates its state using relative information exchange with its neighbors,

	ẋ_i = − Σ_j  w_{ij} ( x_i − x_j ) + d_i.

Here, w_{ij} ≥ 0 is a weight that quantifies the importance of the information that node i receives from node j, d_i is a disturbance, and the aggregate dynamics are [43]

	ẋ = − L x + d,

where L is the graph Laplacian of the directed network [44].

The graph Laplacian always has an eigenvalue at zero that corresponds to the right eigenvector of all ones, L 1 = 0. If this eigenvalue is simple, all node values converge to a constant x̄ in the absence of an external input d. When the graph is balanced, x̄ is the average of the initial node values. In general, x̄ = (w^T x(0)) / (w^T 1), where w is the left eigenvector of L corresponding to the zero eigenvalue, w^T L = 0. If the graph is not strongly connected, L may have additional eigenvalues at zero and the node values converge to distinct groups whose number is equal to or smaller than the multiplicity of the zero eigenvalue.

IV-A2 Leader selection

In consensus networks, the dynamics are governed by relative information exchange and the node values converge to a constant determined by the initial conditions. In the leader selection paradigm [12], certain “leader” nodes are additionally equipped with absolute information about their own states, which introduces negative feedback on the states of these nodes. If a suitable set of leader nodes is present, the dynamical generator becomes a Hurwitz matrix and the states of all nodes asymptotically converge to zero.

The node dynamics in a network with leaders are

	ẋ_i = − Σ_j  w_{ij} ( x_i − x_j ) − u_i x_i + d_i,

where u_i ≥ 0 is the weight that node i places on its absolute information; node i is a leader if u_i > 0 and a follower otherwise. The aggregate dynamics can be written as

	ẋ = − ( L + D_u ) x + d,    D_u := diag(u),

and placed in the form (1) by taking A = −L, D(u) = −D_u, and B = C = I. We evaluate the performance of a leader vector u using the H_2 or H_∞ performance metrics J_2(u) or J_∞(u), respectively. We note that this system is marginally stable in the absence of leaders and that much work on consensus networks focuses on driving the deviations from the average node value to zero [45]. Instead, we here focus on driving the node values themselves to zero.
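For illustration, the H_2 performance of a candidate leader vector in this setup (A = −L, B = C = I) can be evaluated with a single Lyapunov solve; the sketch below returns infinity when u does not stabilize the network.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def leader_h2_performance(L, u):
    # squared H_2 norm of x' = -(L + diag(u)) x + d with output z = x
    Acl = -(L + np.diag(u))
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf                                     # u does not stabilize the network
    P = solve_continuous_lyapunov(Acl, -np.eye(L.shape[0]))   # Acl P + P Acl' + I = 0
    return float(np.trace(P))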

We formulate the combinatorial problem of selecting N_l leaders that optimize either the H_2 or the H_∞ norm as follows.

Problem 1

Given a network with graph Laplacian L and a fixed leader weight κ > 0, find the set of N_l leaders that solves

	minimize_u  J(u)   subject to   u_i ∈ {0, κ},   Σ_i u_i = N_l κ,

where J is one of the performance metrics described in Section II-B, with A = −L, D(u) = −diag(u), and B = C = I.

In [13, 14], the authors derive explicit expressions for optimal leaders in undirected networks. However, these expressions are efficient to evaluate only for very few or very many leaders. Instead, we follow [12] and develop an algorithm which relaxes the integer constraint to obtain a lower bound on Problem 1 and uses greedy heuristics to obtain an upper bound.

Considering leader selection in directed networks adds the challenge of ensuring stability. At the same time, we can leverage existing results on leader selection in undirected networks to derive efficient upper bounds on Problem 1.

IV-B Stability for directed networks

For a vector of leader weights u to be feasible for Problem 1, it must stabilize system (1); i.e., −(L + D_u) must be a Hurwitz matrix. When the network is undirected and connected, any single leader stabilizes (1). However, this is not the case for directed networks: for the network in Fig. 1, some choices of a single leader yield stability while others do not. Theorem 10 provides a necessary and sufficient condition for stability.

Fig. 1: A directed network and the sparsity pattern of the corresponding graph Laplacian. This network is stabilized if and only if at least one of two particular nodes is made a leader.
Theorem 10

Let L be the graph Laplacian of a weighted directed network and let D_u = diag(u) with u ≥ 0. The matrix −(L + D_u) is Hurwitz if and only if u ∘ w ≠ 0 for every nonzero w with w^T L = 0, where ∘ denotes the elementwise product.

Proof:

If w^T L = 0, then w^T ( L + D_u ) = w^T D_u = (u ∘ w)^T. If, in addition, u ∘ w = 0, we have

	w^T ( L + D_u ) = 0,        (12)

and therefore zero is an eigenvalue of −(L + D_u).

Since the rows of the graph Laplacian sum to zero, its off-diagonal entries are nonpositive, and D_u is diagonal and nonnegative, the Gershgorin circle theorem [30] implies that the eigenvalues of −(L + D_u) have nonpositive real parts. To show that −(L + D_u) is Hurwitz, we show that it has no eigenvalue at zero. Assume that there is a nonzero w for which (12) holds; since −(L + D_u) is Metzler and has no eigenvalue with positive real part, w can be taken to be nonnegative. Then either w^T D_u = 0, in which case w^T L = 0 as well, or w^T L = −w^T D_u ≠ 0. The first case is not possible because, by assumption, u ∘ w ≠ 0 for any nonzero w such that w^T L = 0. If the second case were true, then w^T L v = −w^T D_u v would also have to hold for every vector v. However, if we take v = 1, then w^T L 1 = 0 while w^T D_u 1 > 0 because w and u are nonnegative and w^T D_u ≠ 0. This completes the proof.

Remark 5

Only the set of leader nodes is relevant to the question of stability. If a leader vector u does not stabilize (1), then no other nonnegative vector with the same sparsity pattern will stabilize (1). Similarly, if u stabilizes (1), then every nonnegative vector with the same sparsity pattern will.

Corollary 11

If the network is strongly connected, any choice of leader node will stabilize (1).

Proof:

Since the graph Laplacian associated with a strongly connected graph is irreducible, the Perron-Frobenius theorem [30] implies that the left eigenvector w of L associated with the zero eigenvalue is positive. Thus, u ∘ w ≠ 0 for any nonzero nonnegative u and system (1) is stable by Theorem 10.

Remark 6

The condition in Theorem 10 requires that there is a directed path from the set of leader nodes to every node in the network. This can be enforced by extracting disjoint “leader subsets” V_1, …, V_r, i.e., strongly connected components of the original network that receive no information from the rest of the network. Stability is guaranteed if there is at least one leader node in each such subset V_j; e.g., the network in Fig. 1 has a single leader subset. By Corollary 11, this subset contains all nodes when the network is strongly connected.
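The leader subsets can be extracted with standard graph routines; a sketch is given below. It assumes the convention that a nonzero off-diagonal entry L[i, j] indicates that node i is influenced by node j, so a leader subset is a strongly connected component whose rows of L have no nonzero entries outside the component.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def leader_subsets(L, tol=1e-12):
    # strongly connected components that receive no influence from the rest of the network
    Adj = np.abs(L - np.diag(np.diag(L))) > tol
    _, labels = connected_components(csr_matrix(Adj.astype(int)),
                                     directed=True, connection='strong')
    subsets = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        outside = np.where(labels != c)[0]
        if outside.size == 0 or not Adj[np.ix_(members, outside)].any():
            subsets.append(members)
    return subsets    # stability requires at least one leader in each subset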

IV-C Bounds for Problem 1

To approach combinatorial Problem 1, we derive bounds on its optimal objective value. These bounds can also be used to implement a branch-and-bound approach [46].

IV-C1 Lower bound

By relaxing the combinatorial constraint in Problem 1, we formulate the optimization problem,

	minimize_u  J(u)   subject to   u ∈ C_s := { u | 0 ≤ u_i ≤ κ,  Σ_i u_i = N_l κ },        (13)

where C_s is the “capped” simplex. The results of Section III establish the convexity of problem (13). Using a recent result on efficient projection onto the capped simplex [47], this problem can be solved via proximal gradient methods [38] to provide a lower bound on Problem 1.
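A minimal sketch of the resulting projected-gradient scheme is given below. The projection onto the capped simplex is computed by bisection on a scalar dual variable; the cap and total correspond to the leader weight κ and the budget N_l κ in (13), and the step size and iteration counts are placeholders rather than tuned values.

import numpy as np

def project_capped_simplex(v, cap, total, iters=100):
    # Euclidean projection onto {u : 0 <= u_i <= cap, sum(u) = total}; assumes total <= cap*len(v)
    lo, hi = np.min(v) - cap, np.max(v)
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if np.sum(np.clip(v - theta, 0.0, cap)) > total:
            lo = theta
        else:
            hi = theta
    return np.clip(v - 0.5 * (lo + hi), 0.0, cap)

def lower_bound_relaxation(grad_J, n, cap, total, step=1e-2, iters=500):
    # projected gradient on the relaxed problem (13); grad_J returns the gradient of J
    u = project_capped_simplex(np.full(n, total / n), cap, total)
    for _ in range(iters):
        u = project_capped_simplex(u - step * grad_J(u), cap, total)
    return u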

When the network is not strongly connected, additional constraints can be added to enforce the condition in Theorem 10 and thus guarantee stability. Let the sets V_1, …, V_r denote the “leader subsets” from which a leader must be chosen, as discussed in Remark 6. Then, the convex problem

	minimize_u  J(u)   subject to   u ∈ C_s,   Σ_{i ∈ V_j} u_i ≥ κ   for  j = 1, …, r,        (14)

relaxes the combinatorial constraint and guarantees stability. The optimal values of the resulting relaxations provide lower bounds on the optimal values of the H_2 and H_∞ versions of Problem 1 with N_l leaders.

IV-C2 Upper bounds for Problem 1

If r denotes the number of leader subsets V_j, a stabilizing candidate solution to Problem 1 can be obtained by “rounding” the solution to (14): the leader set consists of the largest element from each subset V_j together with the N_l − r largest remaining elements. The greedy swapping algorithm proposed in [12] can further tighten this upper bound.
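A sketch of the rounding and greedy-swap upper bound is given below; cost is any routine that maps a set of leader indices to the chosen performance metric (for example, the leader H_2 evaluation sketched earlier) and is assumed to return infinity for destabilizing sets. The simple rounding shown here keeps the N_l largest entries; the subset-aware rounding described above additionally forces one leader per leader subset.

import numpy as np

def round_relaxation(u_relaxed, N_l):
    # candidate leader set: indices of the N_l largest entries of the relaxed solution
    return set(np.argsort(-u_relaxed)[:N_l])

def greedy_swap(leaders, n, cost, max_passes=10):
    # exchange a leader and a follower whenever the exchange lowers the cost (cf. [12])
    leaders, best = set(leaders), cost(set(leaders))
    for _ in range(max_passes):
        improved = False
        for i in sorted(leaders):
            for j in sorted(set(range(n)) - leaders):
                candidate = (leaders - {i}) | {j}
                c = cost(candidate)
                if c < best:
                    leaders, best, improved = candidate, c, True
                    break
            if improved:
                break
        if not improved:
            break
    return sorted(leaders), best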

Recent work on leader selection in undirected networks can also provide upper bounds for Problem 1 when the graph is balanced. The symmetric component of the Laplacian of a balanced graph, L_s := (L + L^T)/2, is the Laplacian of an undirected network. The exact optimal leader set for an undirected network can be computed efficiently when the number of leaders is either very small or very large [13, 14]. Since the performance of the symmetric component of a system provides an upper bound on the performance of the original system, these sets of leaders achieve better performance with L than with L_s for both the H_2 [48, Corollary 3] and H_∞ norms [49, Proposition 4].

Even when L does not represent a balanced network, J_2 and J_∞ are respectively upper bounded by the trace and the maximum eigenvalue of the inverse of the symmetric part of L + D_u. For small numbers of leaders, these bounds can be computed efficiently using rank-one inversion updates. A similar approach was used in [13, 14] to derive optimal leaders for undirected networks. Moreover, this approach yields a stabilizing set of leaders [48, Lemma 1].

IV-D Additional comments

We now provide additional discussion of several aspects of Problem 1. We first consider the gradients of J_2 and J_∞.

Remark 7

When D(u) = diag(u), we have ∇J_2(u) = 2 diag(Q P), where diag(·) extracts the main diagonal of its matrix argument. The matrix Q P often appears in model reduction, and its ith diagonal entry corresponds to the inner product between the ith columns of the gramians P and Q.

Remark 8

When D(u) = diag(u), the ith entry of the gradient of J_∞ is given by the product of the ith entries of ( −(A + D(u)) )^{−1} B v and ( −(A + D(u)) )^{−T} C^T w, where w and v are the principal left and right singular vectors of C ( −(A + D(u)) )^{−1} B. The former quantifies how much the forcing that causes the largest overall response of system (1) affects node i, and the latter captures how much the forcing at node i affects the direction of the largest output response.

The optimal leader sets for balanced graphs are interesting because they are invariant under reversal of all edge directions.

Proposition 12

Let the graph associated with the Laplacian L be balanced, let L' := L^T so that the graph associated with L' contains the reversed edges of the graph associated with L, and let