Abstraction of Linear Consensus Networks with Guaranteed Systemic Performance Measures

by   Milad Siami, et al.
Lehigh University

A proper abstraction of a large-scale linear consensus network with a dense coupling graph is one whose number of coupling links is proportional to its number of subsystems and whose performance is comparable to that of the original network. Optimal design problems for an abstracted network are more amenable to efficient optimization algorithms. From the implementation point of view, maintaining such networks is usually more favorable and cost-effective due to their reduced communication requirements across the network. Therefore, approximating a given dense linear consensus network by a suitable abstract network is an important analysis and synthesis problem. In this paper, we develop a framework to compute an abstraction of a given large-scale linear consensus network with guaranteed performance bounds using a nearly-linear time algorithm. First, the existence of abstractions of a given network is proven. Then, we present an efficient and fast algorithm for computing a proper abstraction of a given network. Finally, we illustrate the effectiveness of our theoretical findings via several numerical simulations.








I Introduction

Reducing design complexity in interconnected networks of dynamical systems by means of abstraction is central in several real-world applications [1, 2, 3, 4, 5, 6]. Various notions of abstraction for dynamical systems have been widely used by researchers in the context of control systems in past decades, see [7, 8, 9, 10] and references therein, where the notion of reduction mainly implies projecting the dynamics of a system onto a lower-dimensional state space. In this paper, we employ a related notion of abstraction in the context of interconnected dynamical networks: for a given dynamical network that is defined over a coupling graph, find another dynamical network whose coupling graph is significantly sparser and whose performance quality remains close to that of the original network. In this sense, abstraction can be regarded as a notion of network reduction. There are several valid reasons why reduction in this sense is useful in the design, maintenance, and implementation of dynamical networks. Real-time state estimation in large-scale dynamical networks can be performed much more efficiently and faster if proper abstractions are utilized. Optimal control problems that involve controller design, feedback gain adjustment, rewiring of existing feedback loops, etc. are more amenable to efficient computational tools that are specifically tailored to optimization problems with sparse structures. In security- or privacy-sensitive applications, such as formation control of a group of autonomous drones, it is usually required to minimize communication across the network to reduce the risk of external intrusion. In power network applications, network authorities periodically provide access to their network data and parameters for academic (or public) studies and evaluations. In order to reduce the possibility of planned malicious attacks, network authorities can perform abstractions that hide the actual values of network parameters while preserving all other important characteristics of the network that interest researchers.

The goal of this paper is to address the abstraction problem for the class of linear consensus networks. In [1], we introduce a class of operators, so-called systemic performance measures, for linear consensus networks that provides a unified framework for network-wide performance assessment. Several existing and popular performance measures in the literature, such as the $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ norms of a consensus network from a disturbance input to its output, are examples of systemic performance measures. This class of operators is obtained through our close examination of the functional properties of several existing gold-standard measures of performance in the context of network engineering and science. An important contribution of this reference paper is that it enables us to optimize the performance of a consensus network solely based on its intrinsic features. The authors formulate several optimal design problems, such as weight adjustment as well as rewiring of coupling links, with respect to this general class of systemic performance measures and propose efficient algorithms to solve them. In [11, 12], we quantify several fundamental tradeoffs between an $\mathcal{H}_2$-based performance measure and sparsity measures of a linear consensus network. The problem of sparse consensus network design has been considered before in [5, 13, 14, 15], where the authors formulate an $\ell_1$-regularized optimal control problem. The main common shortcoming of the existing works in this area is that they rely heavily on computational tools with no analytical performance guarantees for the resulting solution. More importantly, the methods proposed in these papers suffer from high computational complexity as the network size grows.

For a given linear consensus network with an undirected connected graph, the network abstraction problem seeks to construct a new network with a reasonably sparser graph compared to the original network such that the dynamical behavior of the two networks remains similar in an appropriately defined sense. We develop a methodology that computes abstractions of a given consensus network with guaranteed systemic performance bounds using a nearly-linear time algorithm, i.e., one running in $\tilde{O}(m)$ time, where $m$ is the number of links and $\tilde{O}(\cdot)$ hides $\mathrm{poly}\log$ factors from the asymptotic bounds; thus, $f(m) = \tilde{O}(g(m))$ means that there exists a constant $c$ such that $f(m) = O(g(m) \log^{c} g(m))$. Unlike other existing work on this topic in the literature, our proposed framework: (i) works for a broad class of systemic performance measures, including $\mathcal{H}_2$-based performance measures; (ii) does not involve any sort of relaxation, such as $\ell_0$ to $\ell_1$ (we discuss some of the shortcomings of the $\ell_0$/$\ell_1$-regularization based sparsification methods in Section VIII); (iii) provides guarantees for the existence of a sparse solution; (iv) can partially sparsify predetermined portions of a given network; and, most importantly, (v) gives guaranteed levels of performance. While our approach relies on several existing works in algebraic graph theory [16, 17], our control-theoretic contributions are threefold. First, we show that there exist proper abstractions for every given linear consensus network. Second, we develop a framework to compute a proper abstraction of a network using a fast randomized algorithm. One of the main features of our method is that, while the coupling graph of the abstracted network is a subgraph of the coupling graph of the original network, the link weights (the strength of each coupling) in the sparsified network are adjusted accordingly to reach predetermined levels of systemic performance. Third, we prove that our method can also be applied for partial abstraction of large-scale networks, which means that we can abstract a prespecified subgraph of the original network. This is practically plausible as our algorithm can obtain an abstraction using only spatially localized information. Moreover, this allows parallel implementation of the abstraction algorithm in order to achieve comparably lower time complexity.

II Notation and Preliminaries

The sets of real, positive real, and strictly positive real numbers are represented by $\mathbb{R}$, $\mathbb{R}_{+}$, and $\mathbb{R}_{++}$, respectively. A matrix is generally represented by an upper-case letter, say $A$, where $a_{ij}$ is the $(i,j)$ element of matrix $A$ and $A^{\mathsf{T}}$ indicates its transpose. The vector of all ones and the identity matrix are denoted by $\mathbb{1}_n$ and $I_n$, respectively. The centering matrix is defined by $M_n = I_n - \frac{1}{n} J_n$, in which $J_n$ is the matrix of all ones. Notation $A \succeq 0$ means that matrix $A$ is positive semi-definite. A graph is represented by $\mathcal{G} = (\mathcal{V}, \mathcal{E}, w)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of links, and $w$ is the weight function. The value of the weight function is zero for non-links and positive for every link in $\mathcal{E}$. The weighted degree of node $i$ is defined by

$$d_i = \sum_{e = \{i, j\} \in \mathcal{E}} w(e).$$
The neighborhood of node $i$ is denoted by the set $\mathcal{N}(i)$, which consists of all nodes adjacent to $i$; its cardinality $|\mathcal{N}(i)|$ is equal to the number of neighbors of node $i$. In unweighted graphs, $|\mathcal{N}(i)|$ is equal to the degree of node $i$. The adjacency matrix $A = [a_{ij}]$ of graph $\mathcal{G}$ is defined by setting $a_{ij} = w(\{i, j\})$ if $\{i, j\} \in \mathcal{E}$, and $a_{ij} = 0$ otherwise. The Laplacian matrix of graph $\mathcal{G}$ with $n$ nodes is defined by

$$L = \operatorname{diag}(d_1, \ldots, d_n) - A.$$

An $n$-by-$m$ oriented incidence matrix $B$ for $\mathcal{G}$ with $m = |\mathcal{E}|$ links can be formed by assigning an arbitrary direction to every link of $\mathcal{G}$, labeling every link by a number $e_1, \ldots, e_m$, and letting $B_{ij} = 1$ whenever node $i$ is the head of (directed) link $e_j$, $B_{ij} = -1$ if node $i$ is the tail of (directed) link $e_j$, and $B_{ij} = 0$ when link $e_j$ is not attached to node $i$, for all possible orientations of links. The weight matrix $W$ is the $m$-by-$m$ diagonal matrix with diagonal elements $w(e_j)$ for $j = 1, \ldots, m$. It follows that

$$L = B W B^{\mathsf{T}}.$$
Assumption 1

All graphs in this paper are assumed to be finite, simple, undirected, and connected.

According to this assumption, every Laplacian matrix considered in this paper has exactly $n-1$ positive eigenvalues and one zero eigenvalue, which allows us to index them in ascending order

$$0 = \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_n.$$

The set of Laplacian matrices of all connected weighted graphs over $n$ nodes is represented by $\mathfrak{L}_n$. The Moore-Penrose pseudo-inverse of $L$ is denoted by $L^{\dagger} = [l^{\dagger}_{ij}]$, which is a square, symmetric, doubly-centered, and positive semi-definite matrix. The resistance matrix corresponding to Laplacian matrix $L$ is defined by setting

$$r_{ij} = l^{\dagger}_{ii} + l^{\dagger}_{jj} - 2\, l^{\dagger}_{ij},$$

in which $r_{ij}$ is called the effective resistance between nodes $i$ and $j$. Moreover, we denote the effective resistance of link $e = \{i, j\}$ by $r_e = r_{ij}$. The sparsity measure $\|A\|_{\ell_0}$ of matrix $A$ is defined by

$$\|A\|_{\ell_0} = \text{the total number of nonzero elements of } A.$$

The sparsity measure $\|A\|_{\ell_0 / \ell_\infty}$ of matrix $A$ is defined by

$$\|A\|_{\ell_0 / \ell_\infty} = \max_{i, j} \big\{ \|A_{i,*}\|_{\ell_0},\, \|A_{*,j}\|_{\ell_0} \big\},$$

where $A_{i,*}$ represents the $i$'th row and $A_{*,j}$ the $j$'th column of matrix $A$. The value of the $\ell_0/\ell_\infty$-measure of a matrix is the maximum number of nonzero elements among all rows and columns of that matrix [18].
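Both sparsity measures described above (the total nonzero count, and the maximum number of nonzeros over all rows and columns) are elementary to compute; here is a minimal numpy sketch, using a hypothetical star-graph Laplacian as input:

```python
import numpy as np

def l0_measure(A, tol=1e-12):
    """Total number of nonzero elements of A."""
    return int(np.sum(np.abs(A) > tol))

def l0_inf_measure(A, tol=1e-12):
    """Maximum number of nonzeros among all rows and columns of A."""
    nz = np.abs(A) > tol
    return int(max(nz.sum(axis=0).max(), nz.sum(axis=1).max()))

# Hypothetical Laplacian of a star graph on 4 nodes (hub = node 0)
L = np.array([[ 3., -1., -1., -1.],
              [-1.,  1.,  0.,  0.],
              [-1.,  0.,  1.,  0.],
              [-1.,  0.,  0.,  1.]])
print(l0_measure(L), l0_inf_measure(L))   # hub row/column dominates the second measure
```

A numerical tolerance is used instead of exact comparison with zero, since Laplacians assembled in floating point may carry rounding noise.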

III Problem Statement

III-A Network model

We consider a class of consensus networks that consist of a group of $n$ subsystems whose state variables $x_i(t)$, control inputs $u_i(t)$, and output variables $y_i(t)$ are scalar and whose dynamics evolve with time according to

$$\dot{x}_i(t) = u_i(t) + \xi_i(t), \qquad y_i(t) = x_i(t) - \bar{x}(t),$$

for all $i = 1, \ldots, n$, where $x_i(0)$ is the initial condition and

$$\bar{x}(t) = \frac{1}{n} \sum_{i=1}^{n} x_i(t)$$

is the average of all states at time instant $t$. The impact of the uncertain environment on each agent's dynamics is modeled by the exogenous noise/disturbance input $\xi_i(t)$. By applying the following linear feedback control law to the agents of this network

$$u_i(t) = \sum_{j=1}^{n} k_{ij} \big( x_j(t) - x_i(t) \big),$$
where $k_{ij}$ is the feedback gain between subsystems $i$ and $j$, the closed-loop dynamics of the network can be written in the following compact form

$$\dot{x}(t) = -L\, x(t) + \xi(t), \qquad y(t) = M_n\, x(t), \qquad (7)$$

with initial condition $x(0) = x_0$, where $x(t)$, $\xi(t)$, and $y(t)$ denote the state vector of the entire network, the exogenous disturbance input, and the output vector of the network, respectively. The Laplacian matrix $L = [l_{ij}]$ is defined by

$$l_{ij} = \begin{cases} -\,k_{ij} & \text{if } i \neq j, \\[2pt] \sum_{j \neq i} k_{ij} & \text{if } i = j. \end{cases}$$

The coupling graph of the consensus network (7) is a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, w)$ with node set $\mathcal{V} = \{1, \ldots, n\}$, link set

$$\mathcal{E} = \big\{ \{i, j\} \;\big|\; k_{ij} \neq 0 \big\},$$

and weight function $w(\{i, j\}) = k_{ij}$. One may verify that the Laplacian matrix of graph $\mathcal{G}$ is equal to $L$.

Assumption 2

All feedback gains (weights) satisfy the following properties for all $i, j \in \mathcal{V}$:

(i) non-negativity: $k_{ij} \geq 0$;
(ii) symmetry: $k_{ij} = k_{ji}$;
(iii) simpleness: $k_{ii} = 0$.

Property (ii) implies that feedback gains are symmetric and (iii) means that there is no self-feedback loop in the network.

Assumption 3

The coupling graph of the consensus network (7) is time-invariant.

Based on Assumption 3, the eigenvector corresponding to the only marginally stable mode of the network is $\frac{1}{\sqrt{n}} \mathbb{1}_n$. This mode is unobservable from the performance output, as the output matrix of the network satisfies $M_n \mathbb{1}_n = 0$.
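The noise-free behavior of the closed-loop dynamics (7) can be illustrated with a short simulation sketch (hypothetical path graph and initial condition; forward-Euler integration): every state converges to the initial average, which is itself invariant under the dynamics.

```python
import numpy as np

# Path graph on 5 nodes (hypothetical example) with unit link weights
n = 5
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1

x = np.array([3.0, -1.0, 4.0, 1.0, 5.0])   # initial condition
avg = x.mean()                              # conserved quantity (1'L = 0)

dt = 0.01
for _ in range(5000):                       # integrate dx/dt = -Lx up to t = 50
    x = x - dt * (L @ x)

print(x, avg)                               # states cluster around the initial average
```

The step size must satisfy the usual explicit-Euler stability condition $\Delta t < 2/\lambda_n$; here $\lambda_n < 4$, so $\Delta t = 0.01$ is comfortably stable.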

III-B Homogeneous Systemic Performance Measures


Fig. 1: A Venn diagram that shows the relationship among the sets of homogeneous, spectral, and general systemic measures. The set of general systemic measures is a superset of both the set of homogeneous systemic measures and the set of spectral systemic measures. While the intersection of the homogeneous and spectral sets is nonempty, there are some systemic measures that belong only to one of these sets.

A systemic measure in this paper refers to a real-valued operator over the set of all consensus networks governed by (7), with the purpose of quantifying the performance of this class of networks in the presence of exogenous uncertainties. Since every network with dynamics (7) is uniquely determined by its Laplacian matrix, it is reasonable to define a systemic performance measure as an operator on the set of Laplacian matrices of connected graphs.

Definition 1

An operator $\rho$ is called a homogeneous systemic measure of order $\alpha$, where $\alpha > 0$, if it satisfies the following properties for all Laplacian matrices $L$, $L_1$, $L_2$ of connected graphs:

1. Homogeneity: For all $\kappa \geq 1$,

$$\rho(\kappa L) = \kappa^{-\alpha} \rho(L).$$

2. Monotonicity: If $L_1 \preceq L_2$, then

$$\rho(L_1) \geq \rho(L_2).$$

3. Convexity: For all $c \in [0, 1]$,

$$\rho\big(c L_1 + (1 - c) L_2\big) \leq c\, \rho(L_1) + (1 - c)\, \rho(L_2).$$

The set of all homogeneous systemic performance measures is denoted by $\mathfrak{H}$. We adopt an axiomatic approach to introduce and categorize a general class of performance measures that captures the quintessence of a meaningful measure of performance in large-scale dynamical networks [19]. Property 1 implies that intensifying the coupling weights by a ratio $\kappa$ results in $\kappa^{\alpha}$ times better performance. Property 2 guarantees that strengthening couplings in a consensus network never worsens the network performance with respect to a given systemic measure. The monotonicity property induces a partial ordering on all linear consensus networks with dynamics (7): adding new coupling links or strengthening the existing couplings will result in better performance. Property 3 is imposed for the pure purpose of obtaining favorable (convex) network design optimization problems.

The class of systemic performance measures can be classified based on their functional properties according to Definition 1. Let us denote the set of spectral systemic performance measures by $\mathfrak{S}$. This class consists of all measures that satisfy Properties 2 and 3 as well as orthogonal invariance, i.e., $\rho(U L U^{\mathsf{T}}) = \rho(L)$ for every orthogonal matrix $U$ for which $U \mathbb{1}_n = \mathbb{1}_n$. We refer to [20] for a comprehensive study of this class of performance measures. It is proven that all measures in $\mathfrak{S}$ depend only on the Laplacian eigenvalues. Let us represent the set of all general systemic performance measures that only satisfy Properties 2 and 3 by $\mathfrak{G}$. Fig. 1 shows the relationship between the sets of spectral, homogeneous, and general systemic performance measures.

Definition 2

For a given linear consensus network endowed with a homogeneous systemic measure $\rho$ of order $\alpha$, its corresponding normalized performance index is defined by


III-C Network Abstraction Problem

Our goal is to develop a framework to compute an abstraction of a given linear consensus network with predetermined levels of performance and sparsity (i.e., link reduction).

Definition 3

Let us consider a network $\mathcal{N}$ that is governed by (7). For a properly chosen pair of design parameters $d > 1$ and $\epsilon \in (0, 1)$, another network $\hat{\mathcal{N}}$ is said to be a $(d, \epsilon)$-abstraction of $\mathcal{N}$ if and only if:

(i) $\hat{\mathcal{N}}$ has at most $dn$ feedback links;

(ii) $\hat{\mathcal{N}}$ is an $\epsilon$-approximation of $\mathcal{N}$ in the following sense

$$(1 - \epsilon)\, \rho(L) \;\leq\; \rho(\hat{L}) \;\leq\; (1 + \epsilon)\, \rho(L) \qquad (12)$$

for every homogeneous systemic performance measure $\rho$, where $L$ and $\hat{L}$ denote the Laplacian matrices of $\mathcal{N}$ and $\hat{\mathcal{N}}$, respectively.

Property (i) implies that the average number of neighbors over the nodes of $\hat{\mathcal{N}}$ is at most $2d$, i.e.,

$$\frac{1}{n} \sum_{i \in \mathcal{V}} |\hat{\mathcal{N}}(i)| \;=\; \frac{2\, |\hat{\mathcal{E}}|}{n} \;\leq\; 2d,$$

where $\hat{\mathcal{N}}(i)$ and $\hat{\mathcal{E}}$ denote the set of all nodes adjacent to $i$ and the set of all links in the abstraction, respectively. Therefore, one can think of the design parameter $d$ as an upper bound on the desired average number of neighbors of nodes in the abstracted network, which is independent of the network size. For Property (ii), inequality (12) indicates that the resulting abstracted network has guaranteed performance bounds with respect to $\rho$. The design constant $\epsilon$ is referred to as the permissible performance loss parameter.

TABLE I: Some important examples of homogeneous systemic performance measures: the spectral Riemann zeta function, the gamma entropy, the system Hankel norm, the Hardy-Schatten (system $\mathcal{H}_p$) norm, the local deviation error for first-order consensus networks, the local deviation error for second-order consensus networks, and the $\mathcal{H}_\infty$-norm of second-order consensus networks.

IV Examples of Relevant Homogeneous Systemic Performance Measures

We now present some existing and widely-used systemic performance measures for linear consensus networks; a list of these measures is summarized in Table I.

IV-A Sum of Homogeneous Spectral Functions

This class of performance measures is generated by forming the sum of a given function of the Laplacian eigenvalues. Suppose that $f$ is a decreasing homogeneous convex function on the positive reals. Then, the spectral function

$$\rho(L) = \sum_{i=2}^{n} f(\lambda_i)$$

is a homogeneous systemic measure [20]. Moreover, if $f$ is a homogeneous function of order $-\alpha$, where $\alpha > 0$, then its corresponding normalized index is also a homogeneous systemic performance measure [20]. Some notable examples of this class of measures are discussed in the following parts.

IV-A1 Spectral Riemann Zeta Measures

For a given network (7), its corresponding spectral Riemann zeta function of order $q \geq 1$ is defined by

$$\zeta_q(L) = \left( \sum_{i=2}^{n} \lambda_i^{-q} \right)^{1/q},$$

where $0 = \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_n$ are the eigenvalues of $L$ [21]. Since the coupling graph is connected, all nontrivial Laplacian eigenvalues are strictly positive and, as a result, the function (15) is well-defined. According to the result presented in Subsection IV-A, since $f(\lambda) = \lambda^{-q}$ is a decreasing homogeneous convex function on the positive reals, the spectral function (15) is a homogeneous systemic performance measure. The measure $\zeta_1$ coincides, up to a constant factor, with the $\mathcal{H}_2$-norm squared of a first-order consensus network (7), and a closely related spectral expression yields the corresponding norm of a second-order consensus model of a network of multiple agents (cf. [11]).
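A minimal numpy sketch of this measure, assuming the spectral form $\zeta_q(L) = \big(\sum_{i \geq 2} \lambda_i^{-q}\big)^{1/q}$ over the positive Laplacian eigenvalues (the complete graph is used because its spectrum has a closed form):

```python
import numpy as np

def spectral_zeta(L, q):
    """Spectral Riemann zeta measure: (sum of lambda_i^{-q})**(1/q),
    summed over the n-1 positive Laplacian eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(L))[1:]   # drop the single zero eigenvalue
    return float(np.sum(lam ** (-float(q))) ** (1.0 / q))

# Complete graph K_n: the n-1 positive Laplacian eigenvalues all equal n
n = 6
L = n * np.eye(n) - np.ones((n, n))
z = spectral_zeta(L, q=2)
# Closed form for K_n: ((n-1) * n**(-q))**(1/q)
print(z, ((n - 1) * n ** (-2.0)) ** 0.5)
```

Doubling all coupling weights halves the measure, which is consistent with homogeneity of order one for this particular functional form.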

IV-A2 Gamma Entropy

The notion of gamma entropy arises in various applications, such as the design of minimum entropy controllers and interior-point polynomial-time methods in convex programming with matrix norm constraints [22]. As shown in [23], the gamma entropy can be interpreted as a performance measure for linear time-invariant systems with random feedback controllers, by relating it to the mean-square value of the closed-loop gain of the system. The $\gamma$-entropy of network (7) is defined in terms of the transfer matrix $G(s)$ of network (7) from the disturbance input $\xi$ to the output $y$ [23]. In [20], it is shown that the value of the $\gamma$-entropy for a given linear consensus network (7) can be explicitly computed in terms of the Laplacian spectrum. Furthermore, the $\gamma$-entropy is a homogeneous systemic performance measure.

IV-B Uncertainty volume

The uncertainty volume of network (7) is defined via the determinant of the steady-state covariance matrix of the output $y(t)$. This quantity is widely used as an indicator of network performance [2], [24]. Since $y(t)$ is the error vector that measures the distance from consensus, this quantity can be interpreted as the volume of the steady-state error ellipsoid. It is straightforward to show that this measure satisfies all properties of Definition 1.

IV-C Hankel Norm

The Hankel norm of network (7), with transfer matrix $G(s)$ from the disturbance input $\xi$ to the output $y$, is defined as the $\mathcal{L}_2$-gain from past inputs to future outputs, i.e.,

$$\|G\|_{H} = \sup_{\xi \in \mathcal{L}_2(-\infty, 0]} \frac{\|y\|_{\mathcal{L}_2[0, \infty)}}{\|\xi\|_{\mathcal{L}_2(-\infty, 0]}}.$$

The value of the Hankel norm of network (7) can be equivalently computed using the Hankel norm of its disagreement form [3], in which the disagreement vector is defined by

$$x_{\delta}(t) = M_n\, x(t).$$

The disagreement network is stable, as the real part of every eigenvalue of its state matrix is strictly negative. One can verify that the transfer matrices from $\xi$ to $y$ in both realizations are identical. Therefore, the Hankel norm of the system from $\xi$ to $y$ in both representations is well-defined and equal, and is given by [25]

$$\|G\|_{H} = \sqrt{\lambda_{\max}(PQ)},$$

where the controllability Gramian $P$ and the observability Gramian $Q$ are the unique solutions of the corresponding Lyapunov equations of the disagreement form. It is shown in [20] that the value of the Hankel norm of network (7) admits an explicit expression in terms of the Laplacian spectrum. One can verify that this measure is a homogeneous systemic performance measure.
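The Gramian characterization above can be evaluated numerically; the following sketch (hypothetical cycle-graph example, Lyapunov equations solved by vectorization rather than any paper-specific routine) restricts the dynamics to the disagreement subspace and recovers the Hankel norm, which for this first-order model agrees with the spectral quantity $1/(2\lambda_2)$:

```python
import numpy as np

def hankel_norm(L):
    """Hankel norm of the disagreement form of a consensus network,
    computed from its controllability/observability Gramians."""
    n = L.shape[0]
    # Orthonormal basis V of the disagreement subspace (orthogonal complement of 1)
    V = np.linalg.qr(np.eye(n) - np.ones((n, n)) / n)[0][:, :n - 1]
    Ar = -V.T @ L @ V                        # reduced, Hurwitz (and symmetric) state matrix
    I = np.eye(n - 1)
    # Solve the Lyapunov equations Ar P + P Ar' + I = 0 via vectorization
    K = np.kron(I, Ar) + np.kron(Ar, I)
    P = np.linalg.solve(K, -I.flatten()).reshape(n - 1, n - 1)
    Q = P                                    # symmetry makes both Gramians identical here
    return float(np.sqrt(np.max(np.linalg.eigvals(P @ Q).real)))

# Cycle graph C_4 (hypothetical example); lambda_2 = 2
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
lam2 = np.sort(np.linalg.eigvalsh(L))[1]
print(hankel_norm(L), 1.0 / (2.0 * lam2))   # both equal 1/(2*lambda_2) = 0.25
```

The Kronecker-product solve is a generic stand-in for a dedicated Lyapunov solver and is adequate only for small examples.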

Remark 1

One may also consider the sum of the $k$ largest eigenvalues of $L^{\dagger}$ as a performance measure. This is equivalent to evaluating the $k$ slowest modes of the network, which are the most energetic modes. This measure satisfies the properties of Definition 1, as it is convex and symmetric with respect to the Laplacian eigenvalues (cf. [26, Ch. 5.2] and [27]).

IV-D Hardy-Schatten or System Norms

The $\mathcal{H}_p$-norm of network (7) for $p \in [1, \infty]$ is defined by

$$\|G\|_{\mathcal{H}_p} = \left( \frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_{i=1}^{n} \sigma_i^{p}\big(G(j\omega)\big)\, d\omega \right)^{1/p},$$

where $G(s)$ is the transfer matrix from the disturbance input $\xi$ to the output $y$ and $\sigma_i(G(j\omega))$ for $i = 1, \ldots, n$ are its singular values. To ensure well-definedness of this performance measure, the marginally stable mode of the network must be unobservable through the output. Thus, this performance measure remains well-defined as long as the coupling graph of the network stays connected. This class of system norms captures several important performance and robustness features of linear control systems. For instance, a direct calculation reveals that the squared $\mathcal{H}_2$-norm of network (7) is

$$\|G\|_{\mathcal{H}_2}^{2} = \sum_{i=2}^{n} \frac{1}{2 \lambda_i}.$$

This system norm quantifies the quality of noise propagation throughout the network [12]. The $\mathcal{H}_\infty$-norm of a network is an input-output system norm, and its value for network (7) is

$$\|G\|_{\mathcal{H}_\infty} = \frac{1}{\lambda_2},$$

where $\lambda_2$ is known as the algebraic connectivity of the network [3]. The value of the $\mathcal{H}_\infty$-norm of network (7) can be interpreted as the worst attainable performance over all square-integrable disturbance inputs.

In [20], the authors prove that the $\mathcal{H}_p$-norm of a given network can be expressed in closed form in terms of the Laplacian eigenvalues and the well-known Beta function, $B(x, y) = \int_0^1 t^{x-1} (1-t)^{y-1}\, dt$ for $x, y > 0$. Moreover, this measure is a homogeneous systemic performance measure for all $p \in [1, \infty]$.
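For first-order consensus networks of the form (7), the squared $\mathcal{H}_2$-norm equals $\sum_{i \geq 2} 1/(2\lambda_i)$ and the $\mathcal{H}_\infty$-norm equals $1/\lambda_2$; a minimal numpy sketch evaluates both from the Laplacian spectrum (a complete graph is used because its eigenvalues have a closed form):

```python
import numpy as np

def h2_norm(L):
    """H2 norm of the consensus network: sqrt(sum over i>=2 of 1/(2*lambda_i))."""
    lam = np.sort(np.linalg.eigvalsh(L))[1:]       # positive eigenvalues only
    return float(np.sqrt(np.sum(1.0 / (2.0 * lam))))

def hinf_norm(L):
    """H-infinity norm: reciprocal of the algebraic connectivity lambda_2."""
    lam2 = np.sort(np.linalg.eigvalsh(L))[1]
    return float(1.0 / lam2)

# Complete graph K_5: all positive Laplacian eigenvalues equal 5
n = 5
L = n * np.eye(n) - np.ones((n, n))
print(h2_norm(L), hinf_norm(L))   # sqrt(4/10) and 1/5
```

Both quantities improve (decrease) as the coupling graph becomes better connected, which is the monotonicity property of Definition 1 in action.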

IV-E Local Deviation Error

In network (7), the local deviation of subsystem $i$ is equal to the deviation of the state of subsystem $i$ from the weighted average of the states of its immediate neighbors, which can be formally defined by

$$e_i(t) = x_i(t) - \frac{1}{d_i} \sum_{j \in \mathcal{N}(i)} k_{ij}\, x_j(t).$$

The expected cumulative local deviation is then defined as the steady-state expected value of the sum of the squared local deviations, with respect to the input $\xi(t)$ being a white noise process with identity covariance. The notion of local deviation can be extended and defined for the velocity variables in the second-order consensus network (96)-(97) (cf. [11]); the resulting quantity is equal to the deviation of the velocity of subsystem $i$ from the weighted average of the velocities of its neighbors. The corresponding expected cumulative local deviation is then defined analogously, where it is assumed that the input in network model (96)-(97) is a white noise process with identity covariance.

Theorem 1

The operators defined by (27) and (29) are homogeneous systemic performance measures. Moreover, they admit explicit characterizations, given in (30) and (31), in terms of the Laplacian matrix, in which $d_i$ is the weighted degree of node $i$.

Proof 1

Let us define the total local deviation at time $t$ by


We reformulate (26) as


Therefore, we get

where the deviation vector is the concatenation of the elements $e_i(t)$ for $i = 1, \ldots, n$. Also, we can rewrite (32) as follows


Thus, according to [12, Thm. 5], the steady-state value is given by


Now we show this measure is a homogeneous systemic performance measure. We first show that (34) has property 1, which means

Furthermore, it is monotone, because if $L_1 \preceq L_2$, then we have

where $e_i$ for $i = 1, \ldots, n$ are the standard basis vectors of the $n$-dimensional Euclidean space. This guarantees the monotonicity of the measure. Moreover, its convexity follows from the convexity of the underlying spectral function. To see this, consider two Laplacian matrices $L_1$ and $L_2$ with node degrees $d_i^{(1)}$ and $d_i^{(2)}$, respectively, for $i = 1, \ldots, n$. Then, we get

for all $c \in [0, 1]$. This completes the proof of the first part. For the second part, let us define the total local deviation error at time $t$ by


We similarly reformulate (28) as

Therefore, we have

where the error vector is the concatenation of the elements $e^{v}_i(t)$ for all $i = 1, \ldots, n$. Moreover, we can rewrite (35) as follows


Therefore, the steady-state value of the expected cumulative local deviation can be characterized as


This measure is a homogeneous systemic performance measure. It is straightforward to show that (37) satisfies property 1 by verifying that

It is monotone, as if $L_1 \preceq L_2$, then we have

As a result, the monotonicity of the measure follows. Finally, its convexity can be concluded from the convexity of the underlying spectral function.

Remark 2

For first-order consensus networks (7) that are defined over $k$-regular coupling graphs, the corresponding microscopic measure (30) scales linearly with the network size. For regular lattices, which are $k$-regular graphs, our result subsumes the reported result of [28] as a special case.

Fig. 2: Two isospectral graphs with six nodes [29].
Remark 3

Fig. 2 shows an example of two isospectral graphs that are not isomorphic; that is, their Laplacian matrices have the same multiset of eigenvalues, but their adjacency matrices are not permutation-similar. While the value of any spectral systemic performance measure is equal for both graphs, the value of an expected cumulative local deviation measure is different for each of these graphs and depends on their specific interconnection topology. This simple observation implies that the systemic performance measures (30) and (31) are suitable tools to differentiate among networks with isospectral coupling graphs.

V Abstraction with Guaranteed Bounds

In this section, we develop a fast abstraction algorithm for the class of linear consensus networks (7) with guaranteed bounds with respect to the class of homogeneous systemic performance measures.

V-A Intrinsic Tradeoffs on the Best Achievable Abstractions

The abstraction goals are to reduce the number of feedback links while preserving a desired level of performance. One can easily verify that the value of the $\ell_0/\ell_\infty$-measure of a Laplacian matrix is determined by the maximum number of neighbors $|\mathcal{N}(i)|$ over all nodes $i \in \mathcal{V}$, which makes it a suitable surrogate for the design parameter $d$. The next result reveals an inherent interplay between sparsity and performance.

Theorem 2

For a given network (7) that is endowed with a homogeneous systemic performance measure of order $\alpha$, there are fundamental tradeoffs between the normalized performance and graph sparsity measures, in the sense of inequalities (38) and (39), in which $A$ is the adjacency matrix of the coupling graph and $L_{K_n}$ is the Laplacian matrix of the unweighted complete graph.

Proof 2

Since the coupling graph of the network is assumed to be connected, the sparsity measure is bounded from below, with equality if the coupling graph is a tree. Thus, the following inequality holds on the cone of positive semidefinite matrices

From the monotonicity property, it follows that

By taking the $\alpha$'th root of both sides, one can conclude the desired inequality (38). Moreover, the localized sparsity measure is bounded from below as well. Therefore, the following relation holds

By utilizing the monotonicity property, we get

The desired inequality (39) follows from taking the $\alpha$'th root of both sides of the inequality.

The monotonicity property of a systemic performance measure implies that link removal leads to performance deterioration. Theorem 2 quantifies this inherent interplay by showing that sparsity and performance cannot both be improved indefinitely at the same time. As we will see in the following subsection, this is exactly why we need to perform reweighting after the link elimination procedure in order to achieve an approximation that meets (12).
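The reweighting idea can be made concrete with an illustrative effective-resistance sampling sketch in the spirit of the spectral sparsification literature [16, 17] on which this paper builds. This is not the paper's exact algorithm: the graph, the sample budget `q`, and all helper names are hypothetical. Links are sampled with probability proportional to weight times effective resistance, and each kept link is reweighted so that the sparsified Laplacian matches the original in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(links, weights, n, q):
    """Sample q links with probability proportional to w_e * R_eff(e),
    reweighting kept links so the sparsifier matches L in expectation."""
    m = len(links)
    B = np.zeros((n, m))
    for j, (u, v) in enumerate(links):
        B[u, j], B[v, j] = 1.0, -1.0
    L = B @ np.diag(weights) @ B.T
    Lp = np.linalg.pinv(L)
    # Effective resistance of each link e = (u, v)
    R = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for (u, v) in links])
    p = weights * R / np.sum(weights * R)          # sampling distribution
    new_w = np.zeros(m)
    for j in rng.choice(m, size=q, p=p):           # sample with replacement
        new_w[j] += weights[j] / (q * p[j])        # reweight kept links
    return new_w

# Hypothetical dense example: complete graph on 12 nodes, unit weights
n = 12
links = [(i, j) for i in range(n) for j in range(i + 1, n)]
weights = np.ones(len(links))
w_hat = sparsify(links, weights, n, q=40)
print(int(np.sum(w_hat > 0)), len(links))          # kept links vs. original links
```

For the complete graph the sampling distribution is uniform by symmetry, so the total reweighted link weight is preserved exactly; in general only the expectation of the sparsified Laplacian equals the original, and the paper's contribution lies in turning such samples into guaranteed systemic performance bounds.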

V-B Existence and Algorithms

The next theorem enables us to harness the monotonicity property of homogeneous systemic measures in our network approximations.

Theorem 3

Suppose that two linear consensus networks $\mathcal{N}_1$ and $\mathcal{N}_2$ are endowed with a homogeneous systemic performance measure of order $\alpha$. For a given constant $\epsilon \in (0, 1)$, the two networks are $\epsilon$-approximations of each other, i.e., property (12) holds, if and only if their state matrices satisfy condition (40).

Proof 3

According to the monotonicity and homogeneity properties of systemic measures, it follows that if (40) holds, then we have


Therefore, according to (41) and (12), the two networks are $\epsilon$-approximations of each other. Conversely, let us consider the following measures


This operator is a homogeneous systemic performance measure of order $\alpha$, and inequality (12) yields

Thus, it follows that


Since the measure is homogeneous of order $\alpha$, inequalities (43) can be rewritten as


We know that $L_1$ and $L_2$ are Laplacian matrices and that (44) holds for all admissible choices; therefore, we get

This inequality can be rewritten to obtain the desired result

The result of the above theorem is crucial as it enables us to take advantage of the monotonicity property of systemic performance measures in our approximations. For two given networks and