Estimating and increasing the structural robustness of a network

The capability of a network to cope with threats and survive attacks is referred to as its robustness. This paper discusses one kind of robustness, commonly denoted structural robustness, which increases when the spectral radius of the adjacency matrix associated with the network decreases. We discuss computational techniques for identifying edges whose removal may significantly reduce the spectral radius. Nonsymmetric adjacency matrices are studied with the aid of their pseudospectra. In particular, we consider nonsymmetric adjacency matrices that arise when people seek to avoid being infected by Covid-19 by wearing facial masks of different qualities.

1 Introduction

Networks appear in many areas, including transportation, social science, and chemistry; see, e.g., Estrada [5] and Newman [15] for many examples. An edge-weighted network is represented by a graph G, which consists of a set of nodes V, a set of edges E that connect the nodes, and a set of edge weights W that indicate the importance of the edges. The weights are assumed to be positive. For instance, in a road network, the nodes may represent cities, the edges may represent roads between the cities, and the weight of an edge may be proportional to the amount of traffic on the road that the edge represents. We refer to a graph as undirected if for each edge e(h,k), pointing from node h to node k, there is an edge e(k,h) that points in the opposite direction and has the same weight as e(h,k). If this is not the case, then the graph is said to be directed.

The adjacency matrix A = [a_{hk}] ∈ R^{n×n} associated with the graph G has the entry a_{hk} > 0, equal to the corresponding edge weight, if there is an edge emerging from node h and ending at node k; if the graph is undirected, then also a_{kh} = a_{hk}. All other matrix entries vanish. Thus, the matrix A is symmetric if and only if the graph is undirected. We will assume that there are no self-loops and no multiple edges. The former implies that the diagonal entries of A vanish. Typically, the number of edges, m, satisfies m ≪ n². Then the matrix A is sparse.

The maximum of the magnitudes of the eigenvalues of A is known as the spectral radius of A. We will denote it by ρ(A). It has been shown that the spectral radius is an important indicator of how flu-type infections spread in the network associated with the adjacency matrix A: the smaller ρ(A), the less spread; see, e.g., [11, 14] and below. This paper seeks to shed light on how the spectral radius of an adjacency matrix can be reduced by targeted edge perturbations, i.e., by reducing edge weights or removing edges. It is well known that reducing an edge weight, or removing an edge, does not increase the spectral radius of a nonnegative matrix; see, e.g., [9, Corollary 8.1.19]. We are interested in identifying which weights should be reduced, or which edges should be removed, to achieve a possibly significant decrease of the spectral radius.
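Before turning to the methods discussed later in the paper, it may help to see the quantity ρ(A) computed concretely. The following pure-Python sketch is an illustration only; the toy matrix, the shift, and plain power iteration are our choices here, not the methods used in this paper. Power iteration is applied to A + I rather than A, since ρ(A + I) = ρ(A) + 1 for nonnegative A, and the shift avoids the oscillation caused by eigenvalue pairs ±ρ(A) that occur for bipartite graphs.

```python
def spectral_radius(A, iters=2000):
    """Estimate rho(A) for a nonnegative square matrix A (list of rows) by
    power iteration on the shifted matrix A + I; rho(A) = rho(A + I) - 1."""
    n = len(A)
    x = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        # y = (A + I) x
        y = [sum(A[i][j] * x[j] for j in range(n)) + x[i] for i in range(n)]
        lam = max(abs(t) for t in y)   # max-norm eigenvalue estimate
        x = [t / lam for t in y]
    return lam - 1.0

# undirected 4-cycle; its eigenvalues are 2, 0, 0, -2, so rho = 2
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(spectral_radius(C4))  # -> 2.0
```

Without the shift, iteration on the 4-cycle would oscillate, since the eigenvalues 2 and −2 have equal magnitude.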

Howard et al. [10] discuss the benefits of wearing facial masks to reduce Covid-19 transmission. Several studies found facial masks to be quite effective in protecting the wearer from Covid-19 infection. They also found that wearing a mask protects the people around the wearer, as well as the wearer, though the latter to a lesser degree. Also the type of mask is important; see Gandhi et al. [6, 7] for related discussions.

Let the nodes in a graph represent people and the edge weights represent the possibility of receiving a viral load sufficient to become ill with Covid-19. Modeling facial masks of different quality results in a nonsymmetric adjacency matrix A associated with the graph; see Section 2. We are interested in investigating which weights should be reduced or which edges should be removed to reduce ρ(A) significantly.

This paper is organized as follows. Section 2 discusses the structural robustness of a network with adjacency matrix A. In particular, the sensitivity of the eigenvalues to perturbations of A is considered. Also, adjacency matrices that model the role of face masks are described. Section 3 is concerned with the calculation of the spectral radius of a large matrix, and with the determination of edges that should be eliminated, or whose weight should be reduced, to reduce the spectral radius of A. Properties of the pseudospectrum of a matrix are reviewed and applied. Some large-scale computed examples are presented in Section 4, and concluding remarks can be found in Section 5.

2 Structural robustness

A formulation of structural robustness comes from spectral graph theory. Epidemiological theory predicts that if the effective infection rate of a virus in an epidemic is below the reciprocal of the spectral radius of the adjacency matrix associated with the graph that represents the network, then the virus contamination in the network dies out over time. In more detail, assume a universal virus birth rate β along each edge that is connected to an infected node, and a virus death rate δ for each infected node. If the effective infection rate, given by β/δ, is below the epidemic threshold 1/ρ(A) for the network, i.e., if

  β/δ < 1/ρ(A),

then the infections tend to zero exponentially over time; see, e.g., [11] and references therein. The smaller ρ(A), the higher is the structural robustness of the network against the spread of a virus. Hence, in order to enhance the structural robustness of a network, one may want to reduce the weights of suitable edges of the graph G, or eliminate certain edges; see [14].

Let e_j denote the jth axis vector and consider the matrix

  E_{hk} = −a_{hk} e_h e_k^T,

where the superscript T denotes transposition. Consider the perturbed adjacency matrix

  Ã = A + εE_{hk},

where ε > 0 is chosen small enough so that the matrix Ã is nonnegative. Assume for the moment that the graph G associated with the adjacency matrix A is strongly connected, i.e., that starting at any node of the graph, one can reach any other node of the graph by traversing the edges along their directions; this is equivalent to A being irreducible. Then the Perron–Frobenius theorem applies; see, e.g., [9, Chapter 8]. By this theorem, the eigenvalue of A of largest magnitude is unique and equals ρ(A). This eigenvalue is commonly referred to as the Perron root of A. Moreover, right and left eigenvectors of A associated with the Perron root,

  u = [u_1, u_2, …, u_n]^T  and  v = [v_1, v_2, …, v_n]^T,

respectively, are unique up to scaling. They can be normalized to be of unit Euclidean norm and to have only positive entries. These normalized vectors are known as the right and left Perron vectors, respectively, of A. We define the spectral impact of the directed edge e(h,k) on the spectral radius of A as the relative change of the spectral radius induced by the perturbation of the edge, i.e.,

  s(ρ(A))_{hk} = (ρ(A) − ρ(Ã)) / ρ(A).

A first order approximation of s(ρ(A))_{hk}, valid when 0 < ε ≪ 1, is derived in [14, Eq. (17)] as follows. Observe that

  ρ(A) − ρ(Ã) ≈ −(v^T (εE_{hk}) u)/(v^T u) = ε a_{hk} v_h u_k / (v^T u) > 0.

The condition number of the largest eigenvalue of A is given by

  κ(ρ(A)) = 1/(v^T u).

Therefore,

  s(ρ(A))_{hk} ≈ α_{hk} ε κ(ρ(A)) / ρ(A),  (1)

where

  α_{hk} = a_{hk} v_h u_k.  (2)

Notice that the first order approximation (1) of the spectral impact of the edge e(h,k) depends on the right and left Perron vectors of A, as well as on the weight a_{hk} of the edge. To make ρ(A) smaller, we may consider reducing the weight(s) associated with the largest coefficients (2). To determine these coefficients, one needs the Perron vectors u and v.
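The first order estimate (1)–(2) can be checked numerically. The sketch below is a pure-Python illustration with an assumed toy matrix; plain power iteration on A + I (and on its transpose) stands in for the two-sided Arnoldi method used later in the paper. It reduces one edge weight by the factor 1 − ε and compares the actual spectral impact with its first order approximation.

```python
def perron(A, iters=4000):
    """Perron root and unit right Perron vector of a nonnegative irreducible
    matrix A via power iteration on the shifted matrix A + I."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) + x[i] for i in range(n)]
        s = max(y)
        x = [t / s for t in y]
    nrm = sum(t * t for t in x) ** 0.5
    return s - 1.0, [t / nrm for t in x]

# assumed toy example: nonsymmetric tridiagonal Toeplitz matrix,
# superdiagonal 0.3, subdiagonal 0.1
n, t1, tm1 = 5, 0.3, 0.1
A = [[t1 if j == i + 1 else tm1 if j == i - 1 else 0.0
      for j in range(n)] for i in range(n)]
At = [[A[j][i] for j in range(n)] for i in range(n)]
rhoA, u = perron(A)    # right Perron vector
_,    v = perron(At)   # left Perron vector

# reduce the weight of the edge (h,k) = (3,4) (1-based) by the factor 1 - eps
h, k, eps = 2, 3, 0.05          # 0-based indices
Atil = [row[:] for row in A]
Atil[h][k] -= eps * A[h][k]
rhoT, _ = perron(Atil)

actual = (rhoA - rhoT) / rhoA                     # spectral impact s(rho(A))_hk
vTu = sum(a * b for a, b in zip(v, u))
# alpha_hk * eps * kappa(rho(A)) / rho(A), cf. (1)-(2)
predicted = eps * A[h][k] * v[h] * u[k] / (vTu * rhoA)
print(abs(actual - predicted) < 0.1 * actual)  # -> True
```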

When the matrix A is symmetric, it is meaningful to require the perturbation of A to be symmetric as well. We therefore define the symmetric perturbation matrix

  E^(S)_{hk} = −a_{hk}(e_h e_k^T + e_k e_h^T).

Consider the perturbed matrix

  Ã = A + εE^(S)_{hk}

for some small ε > 0. Then a first order approximation of the spectral impact of the undirected edge between nodes h and k on the spectral radius is given by

  s(ρ(A))_{hk} ≈ α_{hk} ε / ρ(A),

where we have used the fact that the right and left Perron vectors coincide, and

  α_{hk} = 2a_{hk} u_h u_k;

see [14, Eq. (21)].

Remark 1

Let the adjacency matrix A be diagonalizable, i.e., A = XΛX^{−1}, where the columns of X are linearly independent eigenvectors of A, and the diagonal matrix Λ contains the eigenvalues. Then

  ρ(A)^k ≤ ‖A^k‖ ≤ κ(X) ρ(A)^k,  k = 1, 2, …,  (3)

where ‖·‖ denotes the spectral matrix norm and κ(X) = ‖X‖‖X^{−1}‖ is the spectral condition number of X. In particular, when A is symmetric, we have ‖A^k‖ = ρ(A)^k for all k.

A walk of length ℓ starting at node h and ending at node k is a sequence of nodes v_0, v_1, …, v_ℓ with v_0 = h and v_ℓ = k, such that there is an edge that points from node v_{j−1} to node v_j for j = 1, 2, …, ℓ; see [5, 15]. Edges in a walk may be repeated. If the graph is unweighted, then the (h,k)-entry of A^ℓ equals the number of walks of length ℓ from node h to node k. In view of the bounds (3), it may be a good idea to eliminate, or reduce the weight of, edges in long walks. For weighted graphs, the entries of A^ℓ are suitably modified.
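For a small unweighted graph, the walk-counting property of matrix powers is easy to verify; a brief pure-Python sketch (the path graph below is our toy example):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# unweighted path graph on 4 nodes: 1 - 2 - 3 - 4
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

A3 = matmul(matmul(A, A), A)
# (A^3)[0][1] counts walks of length 3 from node 1 to node 2:
# 1-2-1-2 and 1-2-3-2, so there are 2 of them
print(A3[0][1])  # -> 2
```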

Consider the Frobenius matrix norm ‖A‖_F = (Σ_{h,k} a_{hk}²)^{1/2}. The inequalities ρ(A) ≤ ‖A‖ ≤ ‖A‖_F and (3) suggest that, in order to reduce ‖A^k‖ the most, it may be a good idea to remove nodes of G with many edges, or to reduce the weights of edges that emerge from or end at these nodes.

We conclude this section with a few illustrations for some weighted graphs that are associated with tridiagonal adjacency matrices. First consider the case when each node represents a person, and all persons wear the same kind of facial mask. Then the adjacency matrix is the symmetric tridiagonal Toeplitz matrix

  A = tridiag(σ, 0, σ) ∈ R^{n×n},  (4)

with the entry σ > 0 on the sub- and superdiagonals and zeros elsewhere, where the edge weight σ depends on properties of the mask. A high-quality mask corresponds to a small value of σ. The graph associated with the matrix (4) is undirected, (strongly) connected, and weighted.

Proposition 2

The Perron root of the nonnegative symmetric tridiagonal Toeplitz matrix (4) is ρ(A) = 2σ cos(π/(n+1)). The Perron vector u, suitably scaled, has the entries u_j = sin(jπ/(n+1)), j = 1, 2, …, n. In particular, when n is odd, the largest entry is u_{(n+1)/2}, and when n is even, the two largest entries, u_{n/2} and u_{n/2+1}, have the same size.

Explicit formulas for the eigenvalues and eigenvectors of tridiagonal Toeplitz matrices can be found in, e.g., [19].
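These formulas are easy to verify numerically; a pure-Python sketch (the values n = 11 and σ = 0.5 are our assumptions, and power iteration on A + I stands in for an eigensolver):

```python
import math

n, sigma = 11, 0.5
A = [[sigma if abs(i - j) == 1 else 0.0 for j in range(n)] for i in range(n)]

# power iteration on A + I (the shift handles the +/- eigenvalue pairs
# of this bipartite graph)
x = [1.0] * n
for _ in range(2000):
    y = [sum(A[i][j] * x[j] for j in range(n)) + x[i] for i in range(n)]
    s = max(y)
    x = [t / s for t in y]
rho = s - 1.0

rho_formula = 2 * sigma * math.cos(math.pi / (n + 1))
print(abs(rho - rho_formula) < 1e-10)  # -> True
# the Perron vector follows sin(j*pi/(n+1)): largest entry at the middle node
assert max(range(n), key=lambda j: x[j]) == n // 2  # node 6 (0-based index 5)
```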

Note that the Perron vector in Proposition 2 is independent of the numerical value of the nontrivial entries σ of (4). Moreover, the Perron vector suggests that the node (n+1)/2 for n odd, and the nodes n/2 and n/2+1 for n even, are the most important nodes of the graph; see, e.g., Bonacich [3]. This is in agreement with the intuition that the nodes “in the middle” of the graph are the best connected nodes and, therefore, the most important ones. According to the estimate (1), edges that connect these nodes to the graph have the largest spectral impact. Consequently, to decrease the spectral radius of the matrix (4) maximally, we should reduce the weights of the edges e(h,k), where

• h = (n−1)/2 and k = (n+1)/2, or h = (n+1)/2 and k = (n+3)/2, if n is odd;

• h = n/2 and k = n/2+1 if n is even.

Note that setting these edge weights to zero results in a disconnected graph. It is often meaningful to keep a small positive weight instead. This results in an irreducible adjacency matrix. Properties of tridiagonal matrices with some “tiny” positive off-diagonal entries have been studied by Parlett and Vömel [21].

Example 2.1

Let A be the symmetric tridiagonal Toeplitz matrix (4) with σ = 1 and n = 26. Thanks to Proposition 2, one easily computes the spectral radius ρ(A) = 2 cos(π/27) ≈ 1.9865 and the unit norm Perron vector u. If one chooses to reduce the weights for the edges e(13,14) and e(14,13), as suggested in the above discussion, then one obtains the perturbed adjacency matrix for a weighted graph,

  Ã = A + εE^(S)_{13,14} = A − ε(e_{13} e_{14}^T + e_{14} e_{13}^T).

For a small ε > 0, the spectral impact of reducing the weights for the edges e(13,14) and e(14,13) is s(ρ(A))_{13,14} = (ρ(A) − ρ(Ã))/ρ(A), with first order approximation α_{13,14} ε/ρ(A), where α_{13,14} = 2u_{13}u_{14}.

If, instead, one chooses to reduce the weights for the edges e(1,2) and e(2,1) and constructs the perturbed adjacency matrix

  Ã = A + εE^(S)_{1,2} = A − ε(e_1 e_2^T + e_2 e_1^T)

with the same ε, one obtains the spectral impact s(ρ(A))_{1,2} with first order approximation α_{1,2} ε/ρ(A), where α_{1,2} = 2u_1 u_2.

The spectral impact s(ρ(A))_{13,14} can be seen to be significantly larger than s(ρ(A))_{1,2}. This example shows the reduction of the spectral radius of the adjacency matrix to be much larger when the weight of an “important” edge is reduced than when the weight of a less important edge is reduced by the same amount. This illustrates the importance of well-connected people wearing high-quality face masks, which correspond to small edge weights.

We now turn to a more accurate model of the role of facial masks. Let node j represent a person who wears a mask, and assume that the fraction w^(o)_j of viruses penetrates the mask from the outside in unit time, and the fraction w^(i)_j penetrates the mask from the inside in unit time. Let again the adjacency matrix be tridiagonal. The edge from node j to node j+1 has the weight w^(i)_j w^(o)_{j+1} for j = 1, 2, …, n−1, and the edge from node j+1 to node j has the weight w^(i)_{j+1} w^(o)_j for j = 1, 2, …, n−1. This yields the nonsymmetric tridiagonal adjacency matrix A ∈ R^{n×n} with the nontrivial entries

  a_{j,j+1} = w^(i)_j w^(o)_{j+1},  a_{j+1,j} = w^(i)_{j+1} w^(o)_j,  j = 1, 2, …, n−1,  (5)

where 0 < w^(i)_j, w^(o)_j ≤ 1 for all j; if person j does not wear a mask, then w^(i)_j = w^(o)_j = 1. This model assumes that all interactions are of the same duration and that the distance between the people is the same; a rescaling of the w^(i)_j and w^(o)_j is required to model interactions of different durations and of people being at different distances from each other. In any case, the matrix (5) typically is nonsymmetric.

We obtain an adjacency matrix that is simpler to analyze by projecting the matrix (5) orthogonally onto the subspace of tridiagonal Toeplitz matrices of order n with zero diagonal. Let T be the orthogonal projection of the matrix (5) onto this subspace. Then T is the tridiagonal Toeplitz matrix

  T = tridiag(t_{−1}, 0, t_1) ∈ R^{n×n},  (6)

with subdiagonal entry t_{−1}, zero diagonal, and superdiagonal entry t_1, where t_1 is the average of the superdiagonal entries of the matrix (5), and t_{−1} is the average of the subdiagonal entries of (5); see, e.g., [18]. When all the w^(i)_j and w^(o)_j are positive, so are t_1 and t_{−1}, and it follows that the matrix (6) is irreducible.
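The projection can be sketched in a few lines of Python; the matrix below is assumed toy data standing in for (5):

```python
# orthogonal projection of a nonsymmetric tridiagonal matrix onto tridiagonal
# Toeplitz matrices with zero diagonal: average each nontrivial diagonal
n = 4
A = [[0.0, 0.2, 0.0, 0.0],
     [0.4, 0.0, 0.6, 0.0],
     [0.0, 0.2, 0.0, 0.4],
     [0.0, 0.0, 0.3, 0.0]]
t1  = sum(A[i][i + 1] for i in range(n - 1)) / (n - 1)  # superdiagonal average
tm1 = sum(A[i + 1][i] for i in range(n - 1)) / (n - 1)  # subdiagonal average
T = [[t1 if j == i + 1 else tm1 if j == i - 1 else 0.0
      for j in range(n)] for i in range(n)]

# sanity check: among such Toeplitz matrices, (t1, tm1) minimizes the
# squared Frobenius distance to A
def dist2(ts, tb):
    return sum((A[i][j] - (ts if j == i + 1 else tb if j == i - 1 else 0.0)) ** 2
               for i in range(n) for j in range(n))

assert dist2(t1, tm1) <= dist2(t1 + 0.01, tm1)
assert dist2(t1, tm1) <= dist2(t1, tm1 - 0.01)
```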

Assume for the moment that w^(i)_j = w^(o)_j for all j. Then the matrices (5) and (6) are symmetric. It follows from a result due to Bhatia [1] that if the relative distance between the symmetric matrices (5) and (6) is small in the Frobenius norm, then the relative difference in their spectra also is small. In detail, let the matrices M_1 and M_2 be symmetric, and consider the relative distance between these matrices in the Frobenius norm,

  d_{M_1,M_2} = ‖M_1 − M_2‖_F / ‖M_1‖_F.

Order the eigenvalues of M_1 and of M_2 so that λ_1(M_1) ≥ λ_2(M_1) ≥ … ≥ λ_n(M_1) and λ_1(M_2) ≥ λ_2(M_2) ≥ … ≥ λ_n(M_2). Then

  (Σ_{i=1}^n (λ_i(M_1) − λ_i(M_2))²)^{1/2} / (Σ_{i=1}^n λ_i(M_1)²)^{1/2} ≤ d_{M_1,M_2}.

However, as the following example shows, the spectral radius of the projected matrix (6) may be much smaller than the spectral radius of (5), also when the relative distance between the matrices is small.

Example 2.2

Let A ∈ R^{100×100} be a symmetric tridiagonal irreducible matrix with uniformly distributed random entries in the interval (0,1). These entries were generated with the random number generator rand in MATLAB. Let T denote the closest symmetric tridiagonal Toeplitz matrix to A. We obtain a fairly small relative distance d_{A,T} and

  (Σ_{i=1}^{100} (λ_i(A) − λ_i(T))²)^{1/2} / (Σ_{i=1}^{100} λ_i(A)²)^{1/2} = 0.19,

where the eigenvalues of A and T are ordered in non-increasing order. Figure 1 shows the eigenvalues of A and T as functions of their index. The extreme eigenvalues of A and T are seen not to be close. In particular, the spectral radius of T is quite a bit smaller than the spectral radius of A.

The following result is an analogue of Proposition 2 for nonsymmetric tridiagonal Toeplitz matrices.

The Perron root of the nonnegative tridiagonal Toeplitz matrix (6) is ρ(T) = 2(t_1 t_{−1})^{1/2} cos(π/(n+1)). The right Perron vector u, suitably scaled, has the entries u_j = (t_{−1}/t_1)^{j/2} sin(jπ/(n+1)), j = 1, 2, …, n. The left Perron vector v, suitably scaled, has the entries v_j = (t_1/t_{−1})^{j/2} sin(jπ/(n+1)), j = 1, 2, …, n.

Explicit formulas for the eigenvalues and eigenvectors of tridiagonal Toeplitz matrices can be found in, e.g., [19].

Remark 2

Symmetrizing the matrix (6), i.e., considering an undirected graph instead of the directed graph represented by the adjacency matrix (6), gives the symmetric adjacency matrix tridiag((t_{−1}+t_1)/2, 0, (t_{−1}+t_1)/2) with Perron root

  ρ(A) = 2 ((t_{−1} + t_1)/2) cos(π/(n+1)).

Thus, this Perron root is determined by the arithmetic mean of t_{−1} and t_1, while the Perron root of the matrix (6) is determined by the geometric mean of these quantities; cf. Proposition 2.
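The contrast between the two means can be checked numerically; a pure-Python sketch (the values n = 9, t_1 = 0.3, t_{−1} = 0.1 are our assumptions, and power iteration on the shifted matrix stands in for an eigensolver):

```python
import math

def rho(A, iters=3000):
    """Perron root of a nonnegative matrix via power iteration on A + I."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) + x[i] for i in range(n)]
        s = max(y)
        x = [t / s for t in y]
    return s - 1.0

n, t1, tm1 = 9, 0.3, 0.1
# nonsymmetric tridiagonal Toeplitz matrix and its symmetrization
T = [[t1 if j == i + 1 else tm1 if j == i - 1 else 0.0
      for j in range(n)] for i in range(n)]
S = [[(t1 + tm1) / 2 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]

c = math.cos(math.pi / (n + 1))
assert abs(rho(T) - 2 * math.sqrt(t1 * tm1) * c) < 1e-8  # geometric mean
assert abs(rho(S) - (t1 + tm1) * c) < 1e-8               # arithmetic mean
print("ok")  # -> ok
```

Since the arithmetic mean is never smaller than the geometric mean, symmetrizing can only increase the Perron root here.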

Example 2.3

Let A be the tridiagonal Toeplitz matrix (6) with t_1 ≠ t_{−1}, one of them three times as large as the other. This matrix may model a situation where the probability of inhaling infected droplets is three times larger than the probability of exhaling them; e.g., people wearing surgical masks. The nonsymmetric analogue of Proposition 2 yields the spectral radius ρ(A) = 2(t_1 t_{−1})^{1/2} cos(π/(n+1)) and the unit norm right and left Perron vectors u and v. It is easy to see that the edge e(12,13) is a maximizer of α_{hk} = a_{hk} v_h u_k over h, k ∈ {1, 2, …, n}; see (1)–(2). In order to reduce the weight of this edge, one constructs the perturbed matrix

  Ã = A + εE_{12,13} = A − ε a_{12,13} e_{12} e_{13}^T.

For a small ε > 0, the spectral impact of the perturbation is s(ρ(A))_{12,13}; its first order approximation

  α_{12,13} ε κ(ρ(A)) / ρ(A) = a_{12,13} v_{12} u_{13} ε κ(ρ(A)) / ρ(A) = 0.003846

is fairly close.

This example shows, if only one high-quality facial mask is available, which person should wear it to reduce the spectral radius the most. Notice that symmetrizing the matrix A would have given the matrix of Example 2.1 and the results reported there.

3 Estimating and reducing the spectral radius

This section discusses several ways to estimate the spectral radius, as well as the right and left Perron vectors, of a large adjacency matrix A. If A is only required to be nonnegative, then there is a nonnegative vector u ≠ 0 such that Au = ρ(A)u. However, this vector is not necessarily unique up to scaling; see [9, Theorem 8.3.1 and p. 505]. In this section, we will assume that A is a nonnegative irreducible adjacency matrix. Then its right and left Perron vectors are unique up to scaling, and can be scaled to have only positive entries. These vectors are used to determine which edge weights to reduce to obtain a new adjacency matrix with, hopefully, a significantly reduced spectral radius. If our aim just is to determine the spectral radius of A, then irreducibility is not required.

We first describe a computational method that is well suited for large networks whose associated adjacency matrix is nonnegative and irreducible, but does not have other structure that can be exploited. Subsequently, we will discuss methods that are able to exploit certain structural properties.

3.1 Computation of the left and right Perron vectors of a nonnegative irreducible matrix

Let A be a large nonnegative irreducible adjacency matrix. The approach of this section does not exploit any additional structure that A may possess. We determine approximations of the right and left Perron vectors of A by the two-sided Arnoldi method. This method was first described by Ruhe [22] and has more recently been studied and improved by Zwaan and Hochstenbach [27].

We carry out the following steps:

• Apply the two-sided Arnoldi method to A to compute the Perron root ρ(A), and the unit right and left Perron vectors u and v, respectively, with positive entries.

• Let

  E = vu^T.  (7)

The Perron root of the matrix A + εE satisfies

  ρ(A + εE) = ρ(A) + ε (v^T E u)/(v^T u) + O(ε²)

for ε > 0 sufficiently small; see Wilkinson [25, Chapter 2]. We refer to the matrix (7) as a Wilkinson perturbation. It is the worst perturbation of A in the following sense: for any nonnegative matrix E with ‖E‖ = 1, one has

  (v^T E u)/(v^T u) = |v^T E u|/(v^T u) ≤ ‖v‖ ‖E‖ ‖u‖/(v^T u) = 1/(v^T u),

with equality for the matrix (7). Moreover,

  ρ(A + εE) − ρ(A) ≈ ε (v^T E u)/(v^T u) = ε κ(ρ(A)).

We will let Ã = A + εE.

Note that the spectrum of Ã may be considered a very sparse approximation of the ε-pseudospectrum of A. The size of ε used in the computations may depend on whether the adjacency matrix is contaminated by errors. For instance, the edge weights may not be known exactly; see Trefethen and Embree [24] for insightful discussions on pseudospectra.

• Typically, the first order approximation

  ρ(A) + ε (v^T E u)/(v^T u)

of ρ(A + εE) is sufficiently accurate. On the rare occasions when this is not the case, we can compute an improved approximation by applying the (standard) Arnoldi method described, e.g., by Saad [23], or the implicitly restarted (standard) Arnoldi method described in [13] and implemented by the MATLAB function eigs.

We note that the perturbed matrix A + εE is nonnegative and irreducible if this holds for A. Indeed, if all entries of the Perron vectors are positive, then so are all entries of εvu^T, and therefore of A + εE, for ε > 0.

The Perron root ρ(A + εE) is a rightmost ε-pseudoeigenvalue of A. We note that ρ(A + εE) may be much larger than ρ(A) when the Perron root is ill-conditioned, i.e., when v^T u is small.

The analysis in Section 2 suggests that, in order to reduce ρ(A) by removing an edge of G, we should choose an edge with a large weight that corresponds to a large entry of the matrix E in (7); see (1)–(2). Removing an edge corresponds to setting its edge weight to zero. We can in the same manner choose which edge weight to reduce to a smaller positive value in order to reduce the spectral radius.
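The steps above can be sketched in pure Python; a toy nonsymmetric matrix and power iteration replace the two-sided Arnoldi method, and all numerical values below are assumptions made for illustration:

```python
def perron(A, iters=3000):
    """Perron root and unit right Perron vector of a nonnegative irreducible
    matrix A via power iteration on A + I."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) + x[i] for i in range(n)]
        s = max(y)
        x = [t / s for t in y]
    nrm = sum(t * t for t in x) ** 0.5
    return s - 1.0, [t / nrm for t in x]

# assumed toy adjacency matrix: nonsymmetric, irreducible
A = [[0.0, 0.3, 0.0],
     [0.1, 0.0, 0.3],
     [0.2, 0.1, 0.0]]
At = [[A[j][i] for j in range(3)] for i in range(3)]

rhoA, u = perron(A)    # right Perron vector
_,    v = perron(At)   # left Perron vector
vTu = sum(a * b for a, b in zip(v, u))
kappa = 1.0 / vTu      # eigenvalue condition number kappa(rho(A))

eps = 1e-3
E = [[v[i] * u[j] for j in range(3)] for i in range(3)]  # Wilkinson pert. (7)
Apert = [[A[i][j] + eps * E[i][j] for j in range(3)] for i in range(3)]
rhoP, _ = perron(Apert)

# first-order prediction: rho(A + eps*E) - rho(A) ~ eps * kappa(rho(A))
assert abs((rhoP - rhoA) - eps * kappa) < 1e-4
print("ok")  # -> ok
```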

Example 3.1

Consider a matrix A of the form (6) with n even. The eigenvalues of A are real and appear in ± pairs. Thus, there are two eigenvalues of largest magnitude; the positive one is ρ(A) = 2(t_1 t_{−1})^{1/2} cos(π/(n+1)). Now we add suitable entries in the (1,n) and (n,1) positions to transform A into a circulant matrix C. Then ρ(C) = t_1 + t_{−1}, and C also has the eigenvalue −(t_1 + t_{−1}). The remaining eigenvalues are complex-valued.

The large perturbation induced in the spectrum of A by this modification can be explained by analyzing the structure of the matrix E in (7), which we construct by using the left and right Perron vectors given in Proposition 2. Figure 2 visualizes the size of the entries in E. Notice that the largest entries are confined to the bottom left corner. Thus, adding an entry in the sensitive (n,1) position induces a large perturbation in the Perron root.

When the adjacency matrix A is very large, we may consider replacing the vectors u and v in (7) by the normalized vector of all ones, e, and compute ρ(A) and ρ(A + εee^T) by the (standard) Arnoldi or restarted Arnoldi methods to determine the structural robustness of the graph with adjacency matrix A. This approach was applied in [20] to estimate pseudospectra of large matrices.

The large perturbation of the spectrum illustrated in Example 3.1 would not have occurred if the sparsity structure of the matrix A had been taken into account, i.e., if one only allowed perturbations of positive edge weights. We therefore are interested in determining perturbations of A that take the sparsity structure of A into account.

3.2 Approximation of the spectral radius taking the sparsity structure into account

The method in this subsection is suitable when it is desirable that the perturbation of the adjacency matrix A has the same sparsity structure as A. Let S denote the cone of all nonnegative matrices in R^{n×n} with the same sparsity structure as A, and let M|_S be the matrix in S that is closest to a given nonnegative matrix M with respect to the Frobenius norm. It is straightforward to verify that the matrix M|_S is obtained by replacing all the entries of M outside the sparsity structure by zero. This approach takes possible uncertainty in the available edge weights into account. The analysis in [20] leads to the following numerical method:

• Apply the two-sided Arnoldi method to A to compute the Perron root ρ(A), as well as the unit right and left Perron vectors u and v, respectively, with positive entries.

• Project vu^T into S and normalize the projected matrix to have unit Frobenius norm. Let

  E = vu^T|_S / ‖vu^T|_S‖_F.  (8)

We refer to the matrix (8) as an S-structured analogue of the Wilkinson perturbation. It is the worst S-structured perturbation for A: for any E ∈ S with ‖E‖_F = 1, we have

  (v^T E u)/(v^T u) = |v^T E u|/(v^T u) ≤ ‖v‖ ‖vu^T|_S‖_F ‖u‖/(v^T u) = ‖vu^T|_S‖_F/(v^T u),

with equality for the matrix (8); see [16]. Hence,

  ρ(A + εE) − ρ(A) ≈ ε (v^T E u)/(v^T u) = ε κ_S(ρ(A)),

where

  κ_S(ρ(A)) = ‖vu^T|_S‖_F / (v^T u)

denotes the S-structured condition number of ρ(A); see [16, 12]. We let Ã = A + εE. Similarly as above, the spectrum of Ã is a very sparse approximation of the S-structured ε-pseudospectrum of A; see, e.g., [20].

• If desired, compute ρ(A + εE) by the (standard) Arnoldi or restarted Arnoldi methods. We note that the perturbed matrix A + εE is nonnegative and irreducible if this holds for A, and exhibits the same sparsity structure as A.

The Perron root ρ(A + εE) helps us to estimate the structural robustness of the network. Indeed, it represents an approximate S-structured ε-pseudospectral radius of the adjacency matrix A.

We note that ρ(A + εE) may be much larger than ρ(A) when the Perron root has a large S-structured condition number κ_S(ρ(A)).
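A minimal sketch of the structured perturbation (8) and of the structured condition number; the vectors u and v and the sparsity pattern S below are assumed toy data, standing in for Perron vectors computed by the two-sided Arnoldi method:

```python
import math

# assumed unit right/left Perron vectors with positive entries
u = [0.5, 0.5, 0.5, 0.5]
v = [0.1, 0.7, 0.1, 0.7]
nv = math.sqrt(sum(t * t for t in v))
v = [t / nv for t in v]

# assumed sparsity pattern S of the adjacency matrix: True where an edge exists
S = [[False, True,  False, False],
     [True,  False, True,  False],
     [False, True,  False, True ],
     [False, False, True,  False]]

n = len(u)
W = [[v[i] * u[j] if S[i][j] else 0.0 for j in range(n)] for i in range(n)]  # vu^T|_S
fro = math.sqrt(sum(W[i][j] ** 2 for i in range(n) for j in range(n)))
E = [[W[i][j] / fro for j in range(n)] for i in range(n)]  # structured pert. (8)

vTu = sum(a * b for a, b in zip(v, u))
kappa_S = fro / vTu  # S-structured condition number of the Perron root
# projecting can only shrink the Frobenius norm, so kappa_S <= 1/(v^T u)
assert kappa_S <= 1.0 / vTu + 1e-14
print(round(kappa_S, 4))  # -> 0.7655
```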

As mentioned above, when the network is very large, we may consider replacing the vectors u and v in (8) by the normalized vector of all ones, e. The analogous S-structured perturbation of the adjacency matrix is given by

  ee^T|_S / ‖ee^T|_S‖_F.

We may apply the (standard) Arnoldi or implicitly restarted Arnoldi methods to estimate ρ(A) and

  ρ(A + ε ee^T|_S / ‖ee^T|_S‖_F).

This approach has been applied in [20] to estimate structured pseudospectra of large matrices.

3.3 Approximation of the spectral radius for perturbations of tridiagonal Toeplitz matrices

Structure-respecting projections, analogous to the ones discussed in the above subsection, also can be applied to impose other structures. This subsection illustrates how they can be used to impose tridiagonal Toeplitz structure. Let T be a nonnegative tridiagonal Toeplitz matrix (6). We denote by 𝒯 the cone of all nonnegative tridiagonal Toeplitz matrices with zero diagonal in R^{n×n}, and by M|_𝒯 the matrix in 𝒯 closest to a given nonnegative matrix M with respect to the Frobenius norm. It is straightforward to verify that M|_𝒯 is obtained by replacing the sub- and superdiagonal entries of M by their respective arithmetic means, and all other entries by zero.

To approximate the spectral radius of a structured perturbation of T, we carry out the following steps:

• Apply the explicit formulas for tridiagonal Toeplitz matrices (cf. Proposition 2 and its nonsymmetric analogue) to T to compute the Perron root ρ(T) and the unit right and left Perron vectors u and v, respectively, with positive entries.

• Project vu^T into the cone 𝒯 of nonnegative tridiagonal Toeplitz matrices with zero diagonal, and normalize the projected matrix to have unit Frobenius norm. Let

  E = vu^T|_𝒯 / ‖vu^T|_𝒯‖_F.  (9)

We refer to the matrix (9) as a 𝒯-structured analogue of the Wilkinson perturbation. Similarly as above, we have, for any E ∈ 𝒯 with ‖E‖_F = 1, that

  (v^T E u)/(v^T u) = |v^T E u|/(v^T u) ≤ ‖v‖ ‖vu^T|_𝒯‖_F ‖u‖/(v^T u) = ‖vu^T|_𝒯‖_F/(v^T u),

with equality for the matrix (9); see [17]. It follows that

  ρ(T + εE) − ρ(T) ≈ ε (v^T E u)/(v^T u) = ε κ_𝒯(ρ(T)),

where

  κ_𝒯(ρ(T)) = ‖vu^T|_𝒯‖_F / (v^T u)

denotes the 𝒯-structured condition number of ρ(T); see [17, 12].

We will let T̃ = T + εE. The spectrum of T̃ is a sparse approximation of the 𝒯-structured ε-pseudospectrum of T; see, e.g., [20].

• Determine ρ(T + εE) by applying the explicit formulas for tridiagonal Toeplitz matrices to T + εE. The latter matrix is nonnegative and irreducible if this holds for T, and exhibits the same structure as T.

The Perron root ρ(T + εE) may be regarded as an approximate 𝒯-structured ε-pseudospectral radius and provides an estimate of the structural robustness of the structured network. It may be much larger than ρ(T). It is known that, when considering the class of tridiagonal Toeplitz matrices, the most ill-conditioned eigenvalues with regard to 𝒯-structured perturbations are the eigenvalues of largest magnitude; see, e.g., [19]. We remark that an algorithm for computing the 𝒯-structured pseudospectrum of a tridiagonal Toeplitz matrix and its rightmost pseudoeigenvalue is described in [4]. However, the computational cost of this algorithm can be quite large for the matrices considered in this paper.

Finally, replacing u and v in (9) by the normalized vector of all ones, as described above, is particularly efficient when the considered subspace is the cone of tridiagonal Toeplitz matrices; see Section 4.2.2.

4 Numerical tests

4.1 Complex networks

4.1.1 Air500

Consider the adjacency matrix A for the network Air500, which describes 24009 flight connections between the top 500 airports within the United States, based on total passenger volume during the year from July 1, 2007, to June 30, 2008; see [2]. Thus, the airports are nodes and the flights are edges in the graph determined by the network. The matrix A has the entry a_{hk} = 1 if there is a flight that leaves from airport h to airport k. Generally, but not always, a_{hk} = 1 implies that a_{kh} = 1. This makes A close to symmetric.

We apply the computational steps described in Section 3.1. The two-sided Arnoldi method yields the Perron root ρ(A) and its eigenvalue condition number κ(ρ(A)). For a small ε > 0, the Perron root ρ(A + εE), where E is the matrix in (7), exceeds ρ(A) by approximately εκ(ρ(A)), as we could have foreseen. The value ρ(A + εE) is an accurate approximation of the ε-pseudospectral radius of A. This is seen by determining the ε-pseudospectral radius with the MATLAB program package Eigtool [26]. Our approximation of the ε-pseudospectral radius agrees with the value determined by Eigtool in all decimal digits returned by Eigtool. Pseudospectra of A are visualized in Figure 3.

Assume we are interested in removing a single route so that the structural robustness of the network is increased the most. Then this route should be an edge e(h,k) that maximizes α_{hk} = a_{hk} v_h u_k over h and k; see (1)–(2). For the present network, we find the edge that should be removed; it corresponds to flights from the JFK airport in New York to the Hartsfield–Jackson airport in Atlanta. The adjacency matrix obtained by removing this edge is irreducible and has a reduced spectral radius.

Finally, we observe that if one replaces vu^T in (7) by the matrix of all ones, normalized to have unit Frobenius norm, the predicted increase of the spectral radius is noticeably different. Thus, this perturbation gives a significantly less accurate estimate of the sensitivity of ρ(A) to worst-case perturbations.

4.1.2 Airlines

Consider the adjacency matrix A determined by the network Airlines, with 235 nodes and 2101 edges. The nodes represent airports and the directed edges represent flights between them. This network is available at [8].

The computations described in Section 3.1 yield the Perron root ρ(A) and its condition number κ(ρ(A)). For a small ε > 0, the Perron root ρ(A + εE), where E is the matrix (7), exceeds ρ(A) by approximately εκ(ρ(A)), as we could have expected. The spectral radius ρ(A + εE) approximates the ε-pseudospectral radius of A and agrees, in the digits displayed, with the value returned by Eigtool. Pseudospectra of A are shown in Figure 4.

The route to remove, in order to increase the structural robustness of the network the most, is represented by the edge e(h,k) that maximizes α_{hk}. The adjacency matrix Ã, obtained by setting the corresponding entry of A to zero, is irreducible and has a reduced spectral radius.

Finally, we observe that if one replaces vu^T in (7) by the matrix of all ones, normalized to have unit Frobenius norm, one obtains a noticeably different estimate of the increase of the spectral radius.

4.2 Synthetic networks

This subsection considers projections of the adjacency matrix for the Air500 network.

4.2.1 The tridiagonal part of Air500

We set all entries of the adjacency matrix for the Air500 network outside its tridiagonal part to zero. This reduces the number of flight connections and yields a nonsymmetric tridiagonal matrix A. We carry out the computations described in Section 3.2, with S the cone of all nonnegative tridiagonal matrices with zero diagonal in R^{n×n}. This yields the Perron root ρ(A) and its S-structured condition number κ_S(ρ(A)).

Let ε > 0 be small. The Perron root ρ(A + εE), where E is the matrix in (8), exceeds ρ(A) by approximately εκ_S(ρ(A)), as we could have foreseen.

Computations similar to those of Subsection 4.1 suggest that, in order to increase the structural stability the most by removing one edge, we should choose one of two particular edges of the graph determined by A. However, removal of one or both of these edges would result in a graph with a reducible adjacency matrix. To preserve irreducibility of the adjacency matrix, one may instead schedule fewer flights on the routes that correspond to these edges. This reduces the weights associated with them.

Finally, we observe that if one replaces the matrix vu^T in (8) by the matrix of all ones, normalized to be of unit Frobenius norm, then the predicted increase of the spectral radius differs markedly. Clearly, this does not give an accurate estimate of the actual worst-case sensitivity of ρ(A) to structured perturbations.

4.2.2 Projection of Air500 into a tridiagonal Toeplitz structure

We construct a tridiagonal Toeplitz matrix with zero diagonal by averaging the sub- and superdiagonals of the matrix in Section 4.2.1. We then carry out the computations described in Section 3.3 and make use of Proposition 2. Then and its -structured condition number is .
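
A sketch of this construction: average the sub- and superdiagonals of a zero-diagonal tridiagonal matrix to obtain a tridiagonal Toeplitz matrix, whose Perron root then has the classical closed form 2*sqrt(b*c)*cos(pi/(n+1)) when the averaged weights b and c are positive. The function name and the random test matrix are ours.

```python
import numpy as np

def toeplitz_projection(T):
    """Average the off-diagonals of T into a tridiagonal Toeplitz matrix."""
    n = len(T)
    b = np.diag(T, 1).mean()            # averaged superdiagonal weight
    c = np.diag(T, -1).mean()           # averaged subdiagonal weight
    C = np.zeros((n, n))
    idx = np.arange(n - 1)
    C[idx, idx + 1] = b
    C[idx + 1, idx] = c
    return C, b, c

rng = np.random.default_rng(2)
n = 20
T = np.zeros((n, n))
idx = np.arange(n - 1)
T[idx, idx + 1] = rng.random(n - 1) + 0.5   # positive super- and
T[idx + 1, idx] = rng.random(n - 1) + 0.5   # subdiagonal weights
C, b, c = toeplitz_projection(T)

rho = max(abs(np.linalg.eigvals(C)))
print(rho, 2 * np.sqrt(b * c) * np.cos(np.pi / (n + 1)))   # the two agree
```

The closed form comes from the known eigenvalues of a tridiagonal Toeplitz matrix, 2*sqrt(b*c)*cos(k*pi/(n+1)) for k = 1, ..., n, which is what makes this projection attractive computationally.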

Let . Then , where is the matrix in (9). Thus, the spectral radius increases by . This is in agreement with .

Finally, we observe that, if one replaces the matrix in (8) by the matrix of all ones, scaled to be of unit Frobenius norm, then increases by . Thus, the latter perturbation provides a very accurate estimate of the spectral radius when the matrix in (9) is used.

5 Conclusion

It is important to be able to estimate the structural robustness of a network, and to determine which edges to remove, or which weights to decrease, in order to increase the structural robustness. This paper describes several iterative methods that can be applied to fairly large networks to gain insight into these issues. Both the sensitivity of the structural robustness to worst-case Wilkinson perturbations and its sensitivity to structured perturbations are discussed and illustrated.

Acknowledgment

The authors would like to thank Ian Zwaan for MATLAB code for the two-sided Arnoldi method used in the numerical experiments.

References

• [1] R. Bhatia, The distance between the eigenvalues of Hermitian matrices, Proc. Amer. Math. Soc., 96 (1986), pp. 41–42.
• [2] Biological Networks Data Sets of Newcastle University. Available at http://www.biological-networks.org/
• [3] P. Bonacich, Power and centrality: A family of measures, Am. J. Sociol., 92 (1987), pp. 1170–1182.
• [4] P. Buttà, N. Guglielmi, and S. Noschese, Computing the structured pseudospectrum of a Toeplitz matrix and its extreme points, SIAM J. Matrix Anal. Appl., 33 (2012), pp. 1300–1319.
• [5] E. Estrada, The Structure of Complex Networks: Theory and Applications, Oxford University Press, Oxford, 2011.
• [6] M. Gandhi and L. C. Marr, Uniting infectious disease and physical science principles on the importance of face masks for COVID-19, Med, 2 (2021), pp. 21–32. https://doi.org/10.1016/j.medj.2020.12.008
• [7] M. Gandhi and G. W. Rutherford, Facial masking for Covid-19 – potential for “variolation” as we await a vaccine, N. Engl. J. Med., 383 (18), Oct. 29, 2020. https://www.nejm.org/doi/pdf/10.1056/NEJMp2026913?articleTools=true
• [8] Gephi Sample Data Sets, http://wiki.gephi.org/index.php/Datasets
• [9] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
• [10] J. Howard et al., An evidence review of face masks against COVID-19, PNAS, 118 (2021), Art. e2014564118.
• [11] A. Jamakovic, R. E. Kooij, P. Van Mieghem, and E. R. van Dam, Robustness of networks against viruses: The role of the spectral radius, in Symposium on Communications and Vehicular Technology, 2006, pp. 35–38.
• [12] M. Karow, D. Kressner, and F. Tisseur, Structured eigenvalue condition numbers, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 1052–1068.
• [13] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users’ Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods, SIAM, Philadelphia, 1998.
• [14] A. Milanese, J. Sun, and T. Nishikawa, Approximating spectral impact of structural perturbations in large networks, Phys. Rev. E, 81 (2010), Art. 046112.
• [15] M. E. J. Newman, Networks: An Introduction, Oxford University Press, Oxford, 2010.
• [16] S. Noschese and L. Pasquini, Eigenvalue condition numbers: Zero-structured versus traditional, J. Comput. Appl. Math., 185 (2006), pp. 174–189.
• [17] S. Noschese and L. Pasquini, Eigenvalue patterned condition numbers: Toeplitz and Hankel cases, J. Comput. Appl. Math., 206 (2007), pp. 615–624.
• [18] S. Noschese, L. Pasquini, and L. Reichel, The structured distance to normality of an irreducible real tridiagonal matrix, Electron. Trans. Numer. Anal., 28 (2007), pp. 65–77.
• [19] S. Noschese, L. Pasquini, and L. Reichel, Tridiagonal Toeplitz matrices: Properties and novel applications, Numer. Linear Algebra Appl., 20 (2013), pp. 302–326.
• [20] S. Noschese and L. Reichel, Approximated structured pseudospectra, Numer. Linear Algebra Appl., 24 (2017), Art. e2082.
• [21] B. Parlett and C. Vömel, The spectrum of a glued matrix, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 114–132.
• [22] A. Ruhe, The two-sided Arnoldi algorithm for nonsymmetric eigenvalue problems, in Matrix Pencils, eds. B. Kågström and A. Ruhe, Springer, Berlin, 1983, pp. 104–120.
• [23] Y. Saad, Numerical Methods for Large Eigenvalue Problems, 2nd ed., SIAM, Philadelphia, 2011.
• [24] L. N. Trefethen and M. Embree, Spectra and Pseudospectra, Princeton University Press, Princeton, 2005.
• [25] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
• [26] T. G. Wright, EigTool. http://www.comlab.ox.ac.uk/pseudospectra/eigtool/, 2002.
• [27] I. N. Zwaan and M. E. Hochstenbach, Krylov–Schur-type restarts for the two-sided Arnoldi method, SIAM J. Matrix Anal. Appl., 38 (2017), pp. 297–321.