# A block tangential Lanczos method for model reduction of large-scale first and second order dynamical systems

In this paper, we present a new approach for the model reduction of large-scale first and second order dynamical systems with multiple inputs and multiple outputs (MIMO). The approach is based on projecting the initial problem onto tangential Krylov subspaces to produce a simpler reduced-order model that approximates the behavior of the original model well. We present an algorithm named the Adaptive Block Tangential Lanczos-type (ABTL) algorithm, give some of its algebraic properties, and present numerical experiments to show the effectiveness of the proposed algorithm.


## 1 Introduction

Consider a linear time-invariant (LTI) multi-input multi-output dynamical system described by the state-space equations

 Σ : { ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t) },  (1)

where x(t) ∈ R^n denotes the state vector, and u(t) ∈ R^p and y(t) ∈ R^p are the input and the output signal vectors, respectively. The matrix A ∈ R^{n×n} is assumed to be large and sparse, and B, C^T ∈ R^{n×p}. The transfer function associated to the system in (1) is given as

 H(ω) := C(ωI_n − A)^{-1}B ∈ R^{p×p}.  (2)
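For intuition, H(ω) in (2) can be evaluated at a given point ω by solving one linear system with p right-hand sides instead of forming the inverse explicitly. A minimal sketch on a random toy system (the matrices and sizes below are illustrative, not from the paper):

```python
import numpy as np

# Evaluate H(w) = C (w I - A)^{-1} B by a linear solve (no explicit inverse).
def transfer_function(A, B, C, w):
    n = A.shape[0]
    X = np.linalg.solve(w * np.eye(n) - A, B)  # X solves (w I - A) X = B
    return C @ X

rng = np.random.default_rng(0)
n, p = 8, 2
A = rng.standard_normal((n, n)) - 4 * np.eye(n)  # shifted so w I - A stays well conditioned
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))
H = transfer_function(A, B, C, 1.0)              # a p x p matrix
```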

The goal of our model reduction approach consists in defining two bi-orthonormal matrices V_m, W_m ∈ R^{n×ms} (with W_m^T V_m = I_{ms} and ms ≪ n) to produce a much smaller order system with the state-space form

 Σ_m : { ẋ_m(t) = A_m x_m(t) + B_m u(t),  y_m(t) = C_m x_m(t) },  (3)

and its transfer function is defined by

 H_m(ω) := C_m(ωI_{ms} − A_m)^{-1}B_m ∈ R^{p×p},  (4)

where A_m = W_m^T A V_m, B_m = W_m^T B and C_m = C V_m, such that the reduced system has an output y_m(t) as close as possible to the one of the original system for any given input u(t); that is, ‖y − y_m‖ is small for some chosen norm.
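The projection step itself is inexpensive once the bases are available. The following sketch uses a single orthonormal basis V = W (a Galerkin projection) on a random system purely for illustration; the Lanczos-type method described below instead builds two distinct bi-orthonormal bases.

```python
import numpy as np

# Reduced matrices by projection: A_m = W^T A V, B_m = W^T B, C_m = C V.
rng = np.random.default_rng(1)
n, p, m = 20, 2, 6
A = rng.standard_normal((n, n)) - 5 * np.eye(n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))

V, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal basis
W = V                                             # so W^T V = I_m
Am, Bm, Cm = W.T @ A @ V, W.T @ B, C @ V

def H(Aa, Bb, Cc, w):
    return Cc @ np.linalg.solve(w * np.eye(Aa.shape[0]) - Aa, Bb)

# H_m is cheap to evaluate and approximates H.
err = np.linalg.norm(H(A, B, C, 2.0) - H(Am, Bm, Cm, 2.0))
```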

Various model reduction techniques, such as Padé approximation [17, 27], balanced truncation [28], optimal Hankel norm [16] and Krylov subspace methods [8, 9, 14, 21], have been used for large multi-input multi-output (MIMO) dynamical systems; see [25, 16, 20]. The Balanced Truncation Model Reduction (BTMR) method is very popular [1, 15]; it preserves stability and provides a bound for the approximation error. For small to medium systems, BTMR can be implemented efficiently. However, in large-scale settings the method is quite expensive, because it requires the solution of two Lyapunov equations, and results in a computational complexity of O(n^3) and a storage requirement of O(n^2); see [1, 5, 18]. In this paper, we project the system (1) onto the following block tangential Krylov subspaces, defined as

 K̃_m(A,B) = Range{(σ_1I_n − A)^{-1}BR_1, ..., (σ_mI_n − A)^{-1}BR_m},
 K̃_m(A^T,C^T) = Range{(μ_1I_n − A)^{-T}C^TL_1, ..., (μ_mI_n − A)^{-T}C^TL_m},

in order to obtain a small-scale dynamical system. The σ_i and μ_i are the right and left interpolation points, and the R_i and L_i are the right and left block tangent directions, with R_i, L_i ∈ R^{p×s} and s ≤ p. Later, we will show how to choose these tangent interpolation points and directions.
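Assembling the raw blocks of these subspaces costs one (sparse) solve with s right-hand sides per interpolation point. A minimal sketch, with points and directions chosen at random purely for illustration:

```python
import numpy as np

# Stack the blocks (sigma_i I - A)^{-1} B R_i into one basis matrix.
def tangential_krylov_basis(A, B, sigmas, Rs):
    n = A.shape[0]
    blocks = [np.linalg.solve(s * np.eye(n) - A, B @ R)
              for s, R in zip(sigmas, Rs)]
    return np.hstack(blocks)                      # n x (m*s)

rng = np.random.default_rng(2)
n, p, s = 15, 3, 2
A = rng.standard_normal((n, n)) - 5 * np.eye(n)
B = rng.standard_normal((n, p))
sigmas = [1.0, 2.0, 4.0]                          # illustrative shifts
Rs = [rng.standard_normal((p, s)) for _ in sigmas]
V = tangential_krylov_basis(A, B, sigmas, Rs)
```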

The paper is organized as follows: In Section 2 we give some definitions used later and we introduce tangential interpolation. In Section 3, we present the tangential block Lanczos-type method and the corresponding algorithm. Section 4 is devoted to the selection of the interpolation points and the tangential directions that are used in the construction of the block tangential Krylov subspaces, and we briefly present the adaptive tangential block Lanczos-type algorithm. In Section 5 we treat the model reduction of second-order systems. The last section is devoted to numerical tests and comparisons with some well-known model order reduction methods.

## 2 Moments and interpolation

We first give the following definition. Given the system (1), its associated transfer function H(ω) can be decomposed through a series expansion around a given shift point σ ∈ C as follows

 H(ω) = ∑_{i=0}^{∞} η_i^{(σ)}(σ − ω)^i,  (5)

where η_i^{(σ)} is called the i-th moment at σ associated to the system (1), and is defined as follows

 η_i^{(σ)} = C(σI_n − A)^{-(i+1)}B = ((−1)^i/i!) (d^i/dω^i) H(ω)|_{ω=σ},  i = 0, 1, ...  (6)

In the case where σ = ∞, the moments are called Markov parameters and are given by

 η_i = CA^iB.

Problem: Given the full-order model (1), assume that the following parameters are given:

• Left interpolation points {μ_i}_{i=1}^m ⊂ C and left block tangent directions {L_i}_{i=1}^m ⊂ R^{p×s}.

• Right interpolation points {σ_i}_{i=1}^m ⊂ C and right block tangent directions {R_i}_{i=1}^m ⊂ R^{p×s}.

The main problem is to find a reduced-order model (3) such that the associated transfer function H_m in (4) is a tangential interpolant to H in (2), i.e.

 H_m(σ_i)R_i = H(σ_i)R_i  and  L_i^T H_m(μ_i) = L_i^T H(μ_i),  for i = 1, ..., m.  (7)

The interpolation points and tangent directions are selected to realize the model reduction goals described later.

## 3 The block tangential Lanczos-type method

Let the original transfer function be expressed as H(ω) = CX(ω), where X(ω) ∈ R^{n×p} and Y(ω) ∈ R^{n×p} are such that

 (ωI_n − A)X = B,  and  (ωI_n − A)^TY = C^T.  (8)

Given the systems of matrices {V_1, ..., V_m} and {W_1, ..., W_m}, where V_i, W_i ∈ R^{n×s}, the approximate solutions X_m and Y_m of X and Y are computed such that

 X_m^i ∈ Range{V_1, ..., V_m}  and  Y_m^i ∈ Range{W_1, ..., W_m},  (9)

and

 R_B^i(ω) ⊥ Range{W_1, ..., W_m},  i = 1, ..., p,  (10)
 R_C^i(ω) ⊥ Range{V_1, ..., V_m},  i = 1, ..., p,  (11)

where R_B^i(ω), R_C^i(ω), X_m^i and Y_m^i are the i-th columns of R_B(ω), R_C(ω), X_m and Y_m, respectively. If we set V_m = [V_1, ..., V_m] and W_m = [W_1, ..., W_m], then from (9), (10) and (11), we obtain

 X_m = V_m(ωI_{ms} − A_m)^{-1}W_m^TB,
 Y_m = W_m(ωI_{ms} − A_m)^{-T}V_m^TC^T,

which gives the following approximate transfer function

 H_m(ω) = C_m(ωI_{ms} − A_m)^{-1}B_m,

where A_m = W_m^TAV_m, B_m = W_m^TB and C_m = CV_m. The matrices V_m and W_m are bi-orthonormal, i.e. W_m^TV_m = I_{ms}. Notice that the residuals can be expressed as

 R_B(ω) = B − (ωI_n − A)V_m(ωI_{ms} − A_m)^{-1}W_m^TB,  (12)
 R_C(ω) = C^T − (ωI_n − A)^TW_m(ωI_{ms} − A_m)^{-T}V_m^TC^T.  (13)

### 3.1 Tangential block Lanczos-type algorithm

This algorithm consists in constructing two bi-orthonormal bases, spanned by the columns of V_m and W_m, of the following block tangential Krylov subspaces

 K̃_m(A,B) = Range{(σ_1I_n − A)^{-1}BR_1, ..., (σ_mI_n − A)^{-1}BR_m},  (14)

and

 K̃_m(A^T,C^T) = Range{(μ_1I_n − A)^{-T}C^TL_1, ..., (μ_mI_n − A)^{-T}C^TL_m},  (15)

where the σ_i and μ_i are the right and left interpolation points, respectively, and R_i, L_i ∈ R^{p×s} are the right and left tangent directions, with s ≤ p. We present next the Block Tangential Lanczos (BTL) algorithm that allows us to construct such bases. It is summarized in the following steps.

Let V_m = [V_1, ..., V_m] and W_m = [W_1, ..., W_m]. Then, we should have the bi-orthogonality conditions, for i, j = 1, ..., m:

 W_i^TV_j = I_s if i = j,  and  W_i^TV_j = 0 if i ≠ j.  (16)
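For given bases whose cross-Gramian W_m^TV_m is nonsingular, the condition (16) can always be enforced by absorbing its inverse into one of the bases; the BTL algorithm does this incrementally, block by block. A standalone sketch on random matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 12, 4
V0 = rng.standard_normal((n, k))     # raw right basis
W0 = rng.standard_normal((n, k))     # raw left basis

M = W0.T @ V0                        # cross-Gramian, assumed nonsingular
V = V0 @ np.linalg.inv(M)            # rescale: Range(V) = Range(V0)
W = W0
assert np.allclose(W.T @ V, np.eye(k), atol=1e-10)   # bi-orthogonality holds
```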

Here we suppose that we already have the sets of interpolation points {σ_i}, {μ_i} and the tangential matrix directions {R_i} and {L_i}. The block upper Hessenberg matrices H̃^{(j)} and F̃^{(j)} are obtained from the BTL algorithm, with

 H̃^{(j)} = [H_{1,j}; … ; H_{j,j}; H_{j+1,j}; 0]  and  F̃^{(j)} = [F_{1,j}; … ; F_{j,j}; F_{j+1,j}; 0],  for j = 0, ..., m,

where the semicolons indicate vertically stacked blocks. The matrices H_{i,j} and F_{i,j} constructed in Step 3 of Algorithm 1 are of size s × s, and 0 is the zero matrix of size (m − j)s × s. We also define the following matrices,

 D̃_{m+1}^{(1)} = D_{m+1}^{(1)} ⊗ I_s,  D̃_{m+1}^{(2)} = D_{m+1}^{(2)} ⊗ I_s,

where D_{m+1}^{(1)} = diag(σ_1, ..., σ_{m+1}) and D_{m+1}^{(2)} = diag(μ_1, ..., μ_{m+1}). With all those notations, we have the following theorem. Let V_{m+1} and W_{m+1} be the bi-orthonormal matrices constructed by Algorithm 1. Then we have the following relations

 AV_{m+1} = [V_{m+1}G_{m+1}D̃_{m+1}^{(1)} − BR_{m+1}]G_{m+1}^{-1},  (17)

and

 A^TW_{m+1} = [W_{m+1}Q_{m+1}D̃_{m+1}^{(2)} − C^TL_{m+1}]Q_{m+1}^{-1}.  (18)

Let T_{m+1} and Y_{m+1} be the matrices

 T_{m+1} = [(σ_1I_n − A)^{-1}BR_1, ..., (σ_{m+1}I_n − A)^{-1}BR_{m+1}]  and
 Y_{m+1} = [(μ_1I_n − A)^{-T}C^TL_1, ..., (μ_{m+1}I_n − A)^{-T}C^TL_{m+1}],

then we have

 T_{m+1} = V_{m+1}G_{m+1}  and  Y_{m+1} = W_{m+1}Q_{m+1},  (19)

where G_{m+1} = [H̃^{(0)}, ..., H̃^{(m)}] and Q_{m+1} = [F̃^{(0)}, ..., F̃^{(m)}]. The matrices G_{m+1} and Q_{m+1} are block upper triangular matrices of size (m+1)s × (m+1)s and are assumed to be non-singular.

From Algorithm 1, we have

 V_{j+1}H_{j+1,j} = (σ_{j+1}I_n − A)^{-1}BR_{j+1} − ∑_{i=1}^{j} V_iH_{i,j},  j = 0, ..., m.  (20)

Multiplying (20) on the left by (σ_{j+1}I_n − A) and re-arranging terms, we get

 A ∑_{i=1}^{j+1} V_iH_{i,j} = σ_{j+1} ∑_{i=1}^{j+1} V_iH_{i,j} − BR_{j+1},  j = 0, ..., m,

which gives

 A[V_1, ..., V_{j+1}][H_{1,j}; … ; H_{j,j}; H_{j+1,j}] = σ_{j+1}[V_1, ..., V_{j+1}][H_{1,j}; … ; H_{j,j}; H_{j+1,j}] − BR_{j+1},  j = 0, ..., m,

which can be written as

 AV_{m+1}[H_{1,j}; … ; H_{j,j}; H_{j+1,j}; 0] = σ_{j+1}V_{m+1}[H_{1,j}; … ; H_{j,j}; H_{j+1,j}; 0] − BR_{j+1},  j = 0, ..., m,  (21)

where 0 is the zero matrix of size (m − j)s × s. Then for j = 0, ..., m, we have

 AV_{m+1}H̃^{(j)} = σ_{j+1}V_{m+1}H̃^{(j)} − BR_{j+1};  (22)

now, since G_{m+1} = [H̃^{(0)}, ..., H̃^{(m)}] and D̃_{m+1}^{(1)} = D_{m+1}^{(1)} ⊗ I_s, we can deduce from (22) the following expression

 AV_{m+1}[H̃^{(0)}, H̃^{(1)}, ..., H̃^{(m)}] = V_{m+1}[H̃^{(0)}, H̃^{(1)}, ..., H̃^{(m)}](D_{m+1}^{(1)} ⊗ I_s) − B[R_1, ..., R_{m+1}],

which ends the proof of (17). The same proof can be done for the relation (18).

For the proof of (19), we first use (20) to obtain

 ∑_{i=1}^{j+1} V_iH_{i,j} = (σ_{j+1}I_n − A)^{-1}BR_{j+1},  j = 0, ..., m,

which gives

 V_{m+1}H̃^{(j)} = (σ_{j+1}I_n − A)^{-1}BR_{j+1},  j = 0, ..., m;

it follows that

 V_{m+1}[H̃^{(0)}, H̃^{(1)}, ..., H̃^{(m)}] = [(σ_1I_n − A)^{-1}BR_1, ..., (σ_{m+1}I_n − A)^{-1}BR_{m+1}],

which ends the proof of the first relation of (19). In the same manner, we can prove the second relation.

Let σ ∈ C be such that ωI_n − A and ωI_{ms} − A_m are invertible for ω = σ. Let V_m and W_m have full rank, with W_m^TV_m = I_{ms}. Let R, L ∈ R^{p×s} be chosen tangential matrix directions. Then,

1. If (σI_n − A)^{-1}BR ∈ Range(V_m), then

 H_m(σ)R = H(σ)R.
2. If (σI_n − A)^{-T}C^TL ∈ Range(W_m), then

 L^TH_m(σ) = L^TH(σ).
3. If both (1) and (2) hold at the same point σ, then

 L^TH′_m(σ)R = L^TH′(σ)R.

1) We follow the same techniques as those given in [2] for the non-block case. Define

 P_m(ω) = V_m(ωI_{ms} − A_m)^{-1}W_m^T(ωI_n − A),

and

 Q_m(ω) = (ωI_n − A)P_m(ω)(ωI_n − A)^{-1} = (ωI_n − A)V_m(ωI_{ms} − A_m)^{-1}W_m^T.

It is easy to verify that P_m(ω) and Q_m(ω) are projectors. Moreover, for all ω in a neighborhood of σ we have

 Range{V_1, ..., V_m} = Range(P_m(ω)) = Ker(I − P_m(ω)),

and

 (Range{W_1, ..., W_m})^⊥ = Ker(Q_m(ω)) = Range(I − Q_m(ω)).

Observe that

 H(ω) − H_m(ω) = C(ωI_n − A)^{-1}(I − Q_m(ω))(ωI_n − A)(I − P_m(ω))(ωI_n − A)^{-1}B.  (23)

Evaluating the expression (23) at ω = σ and multiplying by R from the right yields the first assertion; evaluating the same expression at ω = σ and multiplying by L^T from the left yields the second assertion.

2) Now if both (1) and (2) hold at the same point σ, notice that

 ((σ + ε)I_n − A)^{-1} = (σI_n − A)^{-1} − ε(σI_n − A)^{-2} + O(ε^2),

and

 ((σ + ε)I_{ms} − A_m)^{-1} = (σI_{ms} − A_m)^{-1} − ε(σI_{ms} − A_m)^{-2} + O(ε^2).

Therefore, evaluating (23) at ω = σ + ε and multiplying by l_j^T and r_i from the left and the right, respectively, where r_i and l_j denote the i-th and j-th columns of R and L, we get

 l_j^TH(σ + ε)r_i − l_j^TH_m(σ + ε)r_i = O(ε^2).

Now notice that since l_j^TH(σ)r_i = l_j^TH_m(σ)r_i, we have

 lim_{ε→0} [ (1/ε)(l_j^TH(σ + ε)r_i − l_j^TH(σ)r_i) − (1/ε)(l_j^TH_m(σ + ε)r_i − l_j^TH_m(σ)r_i) ] = 0,

which proves the third assertion.
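The first two assertions are easy to observe numerically. In the sketch below (random toy system, s = 1, purely for illustration), the bases contain exactly the two tangential directions of interest and are rescaled so that W_m^TV_m = I; the reduced transfer function then matches H at σ along R and L:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, s = 20, 3, 1
A = rng.standard_normal((n, n)) - 6 * np.eye(n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))
sigma = 1.0
R = rng.standard_normal((p, s))
L = rng.standard_normal((p, s))

V0 = np.linalg.solve(sigma * np.eye(n) - A, B @ R)        # (sigma I - A)^{-1} B R
W0 = np.linalg.solve((sigma * np.eye(n) - A).T, C.T @ L)  # (sigma I - A)^{-T} C^T L
V = V0 @ np.linalg.inv(W0.T @ V0)                         # enforce W^T V = I
W = W0
Am, Bm, Cm = W.T @ A @ V, W.T @ B, C @ V

def H(Aa, Bb, Cc, w):
    return Cc @ np.linalg.solve(w * np.eye(Aa.shape[0]) - Aa, Bb)

# Tangential interpolation at sigma along R (right) and L (left).
assert np.allclose(H(A, B, C, sigma) @ R, H(Am, Bm, Cm, sigma) @ R, atol=1e-8)
assert np.allclose(L.T @ H(A, B, C, sigma), L.T @ H(Am, Bm, Cm, sigma), atol=1e-8)
```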

In the following, we give the exact expression of the residual norms in a simplified and computationally economical form.

## 4 An adaptive choice of the interpolation points and tangent directions

In this section, we will see how to choose the interpolation points {σ_i}, {μ_i} and the tangential directions {R_i}, {L_i}, i = 1, ..., m. In this paper we adopt an adaptive approach, inspired by the work in [10]. For this approach, we extend our subspaces K̃_m(A,B) and K̃_m(A^T,C^T) by adding new blocks Ṽ_{m+1} and W̃_{m+1} defined as follows

 Ṽ_{m+1} = (σ_{m+1}I_n − A)^{-1}BR_{m+1}  and  W̃_{m+1} = (μ_{m+1}I_n − A)^{-T}C^TL_{m+1},  (24)

where the new interpolation points σ_{m+1}, μ_{m+1} and the new tangent directions R_{m+1}, L_{m+1} are computed as follows

 (R_{m+1}, σ_{m+1}) = arg max_{ω ∈ S_m, R ∈ R^{p×s}, ‖R‖_2 = 1} ‖R_B(ω)R‖_2,  (25)
 (L_{m+1}, μ_{m+1}) = arg max_{ω ∈ S_m, L ∈ R^{p×s}, ‖L‖_2 = 1} ‖R_C(ω)L‖_2.  (26)

Here S_m is defined as the convex hull of {−λ_1, ..., −λ_n}, where the λ_i are the eigenvalues of the matrix A. For solving the problem (25), we proceed as follows. First we compute the next interpolation point σ_{m+1} by computing the norm of R_B(ω) for each ω in S_m, i.e. we solve the following problem,

 σ_{m+1} = arg max_{ω ∈ S_m} ‖R_B(ω)‖_2.  (27)

Then the tangent direction R_{m+1} is computed by evaluating (25) at ω = σ_{m+1},

 R_{m+1} = arg max_{R ∈ R^{p×s}, ‖R‖_2 = 1} ‖R_B(σ_{m+1})R‖_2.  (28)

We can easily prove that the tangent matrix direction R_{m+1} is given as

 R_{m+1} = [r_1^{(m+1)}, ..., r_s^{(m+1)}],

where the r_i^{(m+1)}'s are the right singular vectors corresponding to the s largest singular values of the matrix R_B(σ_{m+1}). This approach of maximizing the residual norm works efficiently for small to medium matrices, but cannot be used for large-scale systems. To overcome this problem, we give the following proposition. Let R_B(ω) and R_C(ω) be the residuals given in (12) and (13). Then we have the following new expressions

 R_B(ω) = (I_n − V_mW_m^T)B[I_p − R_mG_m^{-1}U_m^B(ω)],  (29)

and

 R_C(ω) = (I_n − W_mV_m^T)C^T[I_p − L_mQ_m^{-1}U_m^C(ω)].  (30)

The residual R_B(ω) can be written as

 R_B(ω) = B − (ωI_n − A)V_m(ωI_{ms} − A_m)^{-1}W_m^TB;

using Equation (17), we get

 A_m = W_m^TAV_m = [G_mD̃_m^{(1)} − B_mR_m]G_m^{-1},

which gives

 AV_m − V_mA_m = [I_n − V_mW_m^T]BR_mG_m^{-1},

which proves (29). In the same way we can prove (30).

The expression of R_B(ω) given in (29) allows us to significantly reduce the computational cost while seeking the next pole and direction. In fact, applying the skinny QR decomposition

 (I_n − V_mW_m^T)B = QL,

we get the simplified residual norm

 ‖R_B(ω)‖_2 = ‖L[I_p − R_mG_m^{-1}U_m^B(ω)]‖_2.  (31)

This means that solving the problem (25) requires only the computation of matrices of size
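The computational trick behind (31) is that a matrix Q with orthonormal columns leaves the spectral norm invariant: if F = QL is a skinny QR factorization, then ‖FG‖_2 = ‖LG‖_2 for any small factor G, so the large matrix (I_n − V_mW_m^T)B enters the repeated norm evaluations only through its small triangular factor. A standalone sketch of this identity on random matrices (illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 500, 3
F = rng.standard_normal((n, p))   # stands in for (I - V_m W_m^T) B
Q, Lfac = np.linalg.qr(F)         # skinny QR: F = Q @ Lfac, Q^T Q = I_p
G = rng.standard_normal((p, p))   # stands in for the small bracketed factor in (31)

# The spectral norm can be evaluated from the small p x p factor only.
assert np.isclose(np.linalg.norm(F @ G, 2), np.linalg.norm(Lfac @ G, 2))
```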