# Optimal robustness of passive discrete time systems

We construct optimally robust realizations of a given rational transfer function that represents a passive discrete-time system. We link it to the solution set of linear matrix inequalities defining passive transfer functions. We also consider the problem of finding the nearest passive system to a given non-passive one.


## 1 Introduction

We consider realizations of linear discrete-time dynamical systems for which the associated transfer function is passive. Such transfer functions play a fundamental role in systems and control theory: they represent, e.g., spectral density functions of stochastic processes, show up in spectral factorizations, and are also related to discrete-time algebraic Riccati equations. Passive transfer functions can be described using convex sets, and this property has led to the extensive use of convex optimization techniques in this area [5].

In this paper we show that in the set of possible realizations of a given passive transfer function, there is a subset that maximizes robustness, in the sense that their so-called passivity radius is nearly optimal. Related results for continuous-time systems were already obtained in a companion paper [15]. Here we consider the discrete-time system

 x_{k+1} = A x_k + B u_k,  x_0 = 0,  y_k = C x_k + D u_k, (1)

where u_k ∈ C^m, x_k ∈ C^n, and y_k ∈ C^m are vector-valued sequences denoting, respectively, the input, state, and output of the system. Denoting real and complex n-vectors (n×m matrices) by R^n, C^n (R^{n×m}, C^{n×m}), respectively, the coefficient matrices satisfy A ∈ C^{n×n}, B ∈ C^{n×m}, C ∈ C^{m×n}, and D ∈ C^{m×m}.

We restrict ourselves to systems which are minimal, i.e., the pair (A, B) is controllable (rank [ A − λI_n, B ] = n for all λ ∈ C), and the pair (A, C) is observable (i.e., (A^H, C^H) is controllable). Here, the Hermitian (or conjugate) transpose (transpose) of a vector or matrix V is denoted by V^H (V^T) and the identity matrix is denoted by I_n, or I if the dimension is clear. We furthermore require that input and output dimensions are equal to m.
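As a concrete illustration, the recursion (1) can be simulated directly. The matrices below are a small hypothetical example (n = 2 states, m = 1 input/output), not data from the paper:

```python
import numpy as np

# Hypothetical example data for system (1): n = 2, m = 1.
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])

def simulate(A, B, C, D, inputs):
    """Run x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k with x_0 = 0."""
    x = np.zeros(A.shape[0])
    outputs = []
    for u in inputs:
        outputs.append(C @ x + D @ u)
        x = A @ x + B @ u
    return np.array(outputs)

# Impulse input: the outputs are the Markov parameters D, CB, CAB, ...
impulse = [np.array([1.0])] + [np.array([0.0])] * 3
y = simulate(A, B, C, D, impulse)
```

The impulse response recovers the Markov parameters of the realization, which is a quick sanity check on the state-space recursion.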

Passive systems are well studied in the continuous-time case, starting with the works [23, 24]. Here we consider the equivalent definition in the discrete-time case and derive so-called normalized passive realizations that could be considered as “discrete-time port-Hamiltonian systems”. Similar attempts were already made in the literature [11],[19],[20].

The paper is organized as follows. After going over some preliminaries in Section 2, we characterize in Section 3 what we call normalized passive realizations of a discrete-time passive system. We then show in Section 4 their relevance in estimating the passivity radius of discrete-time passive systems, and construct in Section 5 realizations with nearly optimal robustness margin for passivity. In Section 7 we describe an algorithm to compute this robustness margin. In Section 8 we show how to use these ideas to estimate the distance to the set of discrete-time passive systems.

## 2 Passive systems

Throughout this article we will use the following notation. We denote the set of Hermitian matrices in C^{n×n} by H_n. Positive definiteness (semi-definiteness) of X ∈ H_n is denoted by X > 0 (X ≥ 0). The real and imaginary parts of a complex matrix Z are written as ℜ(Z) and ℑ(Z), respectively, and ı is the imaginary unit. We consider functions over H_n, which is a vector space if considered as a real subspace of C^{n×n}.

The concept of passivity is well studied. We briefly recall some important properties following [24], and refer to the literature for proofs and for a more detailed survey. Consider a discrete-time system (1) with minimal state-space model

 M:={A,B,C,D}

and transfer function T(z) := C(zI_n − A)^{-1}B + D, and define the complex analytic function of z ∈ C:

 Φ(z) := T^H(z^{-1}) + T(z),

which coincides with the Hermitian part of T(z) on the unit circle:

 Φ(e^{ıω}) = [T(e^{ıω})]^H + T(e^{ıω}).

The transfer function T(z) is called strictly positive-real if Φ(e^{ıω}) > 0 for all ω ∈ [0, 2π], and it is called positive-real if Φ(e^{ıω}) ≥ 0 for all ω ∈ [0, 2π]; T(z) is called asymptotically stable if the eigenvalues of A are in the open unit disc, and it is called stable if the eigenvalues of A are in the closed unit disc, with any eigenvalues occurring on the unit circle being semi-simple. With these two properties, T(z) is called strictly passive if it is strictly positive-real and asymptotically stable, and it is called passive if it is positive-real and stable.
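These two defining conditions can be checked numerically by inspecting the spectrum of A and sampling Φ on the unit circle. The system below is the same small hypothetical example as before, chosen so that both conditions hold:

```python
import numpy as np

# Hypothetical example system (not from the paper)
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])

def T(z):
    """Transfer function T(z) = C (zI - A)^{-1} B + D."""
    return C @ np.linalg.solve(z * np.eye(2) - A, B) + D

# asymptotic stability: spectral radius of A strictly inside the unit disc
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))

# strict positive-realness: Phi(e^{iw}) = T(e^{iw})^H + T(e^{iw}) > 0 on a grid
omegas = np.linspace(0.0, 2.0 * np.pi, 256)
phi_min = min(
    np.linalg.eigvalsh(T(np.exp(1j * w)).conj().T + T(np.exp(1j * w)))[0]
    for w in omegas
)
```

Sampling on a grid is of course only a practical proxy for the exact frequency-domain condition; the spectral tests of the later sections give exact certificates.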

The function Φ(z) is the Schur complement of the so-called system pencil

 S(z) := [ 0, A − zI_n, B ; zA^H − I_n, 0, C^H ; zB^H, C, D^H + D ] (2)

and if the model M is minimal, then the finite generalized eigenvalues of S(z) are the finite zeros of Φ(z). The following equivalence transformation, using an arbitrary matrix X ∈ H_n, leaves the Schur complement, and hence also the function Φ(z), unchanged:

 [ 0, A − zI_n, B ; zA^H − I_n, X − A^HXA, C^H − A^HXB ; zB^H, C − B^HXA, D^H + D − B^HXB ] = [ I_n, 0, 0 ; −A^HX, I_n, 0 ; −B^HX, 0, I_m ] S(z) [ I_n, −X, 0 ; 0, I_n, 0 ; 0, 0, I_m ]. (3)

Let us now define the submatrix of (3) given by

 W(X, M) := [ X − A^HXA, C^H − A^HXB ; C − B^HXA, D^H + D − B^HXB ], (4)

which we will also denote by W(X) when the underlying model M is obvious from the context. Then it follows by simple algebraic manipulation that

 Φ(z) = [ B^H(z^{-1}I_n − A^H)^{-1}, I_m ] W(X, M) [ (zI_n − A)^{-1}B ; I_m ],

and that T(z) is positive real if and only if there exists X ∈ H_n such that the Linear Matrix Inequality (LMI)

 W(X, M) ≥ 0 (5)

holds. Moreover, T(z) is stable if and only if the matrix X in this LMI is also positive definite. We will therefore make frequent use of the following sets

 X_> := { X ∈ H_n | W(X, M) ≥ 0, X > 0 }, (6a)
 X_≫ := { X ∈ H_n | W(X, M) > 0, X > 0 }. (6b)
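Both the factorization of Φ through W(X, M) and membership in the sets (6) can be verified numerically. The system and the candidate matrix X below are hypothetical illustrations:

```python
import numpy as np

# Hypothetical example system and a hypothetical candidate X for set (6b)
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
X = np.diag([1.5, 0.8])

# LMI matrix W(X, M) of the discrete-time KYP-type inequality
W = np.block([[X - A.T @ X @ A, C.T - A.T @ X @ B],
              [C - B.T @ X @ A, D.T + D - B.T @ X @ B]])

in_interior = bool(np.all(np.linalg.eigvalsh(W) > 0)
                   and np.all(np.linalg.eigvalsh(X) > 0))

# check Phi(z) = [B^H (z^{-1}I - A^H)^{-1}, I] W [ (zI - A)^{-1} B ; I ] on the circle
z = np.exp(0.7j)
left = np.hstack([B.conj().T @ np.linalg.inv(np.eye(2) / z - A.conj().T), np.eye(1)])
right = np.vstack([np.linalg.solve(z * np.eye(2) - A, B), np.eye(1)])
phi_factored = (left @ W @ right).item()

Tz = C @ np.linalg.solve(z * np.eye(2) - A, B) + D
phi_direct = (Tz.conj().T + Tz).item()
```

Note that the factored expression agrees with Φ(z) for every Hermitian X, which is exactly why the z-independent part of (3) can carry the passivity information.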

An important subset of X_> are those solutions to (5) for which the rank of W(X, M) is minimal. If D^H + D − B^HXB is invertible, then the minimum rank solutions in X_> are those for which rank W(X, M) = m, which in turn is the case if and only if the Schur complement of D^H + D − B^HXB in W(X, M) is zero. This Schur complement is associated with the discrete-time algebraic Riccati equation (ARE)

 Ricc(X) := X − A^HXA − (C^H − A^HXB)(D^H + D − B^HXB)^{-1}(C − B^HXA) = 0. (7)

Solutions to (7) produce a spectral factorization of Φ(z), and each solution X corresponds to an invariant subspace, spanned by the columns of U := [ I_n ; X ], that remains invariant under multiplication with the matrix

 S := [ I_n, B(D^H + D)^{-1}B^H ; 0, (A − B(D^H + D)^{-1}C)^H ]^{-1} [ A − B(D^H + D)^{-1}C, 0 ; C^H(D^H + D)^{-1}C, I_n ], (8)

i.e., U satisfies SU = U A_F, where the so-called closed-loop matrix is defined as A_F := A − BF with F := (D^H + D − B^HXB)^{-1}(C − B^HXA). Such a subspace is called a Lagrangian invariant subspace and the matrix S has a symplectic structure (see e.g., [14], [7]). Each solution of (7) can also be associated with an extended Lagrangian invariant subspace for the pencil S(z), spanned by the columns of a matrix Û. In particular, Û satisfies

 [ 0, A, B ; −I_n, 0, C^H ; 0, C, D^H + D ] Û = [ 0, −I_n, 0 ; A^H, 0, 0 ; B^H, 0, 0 ] Û A_F.

If D^H + D is singular then more complicated constructions are necessary, see [14].
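For a scalar system (n = m = 1) the ARE (7) reduces to a quadratic equation, which makes the connection between its solutions and the rank-deficiency of W(X, M) easy to check. The numbers below are a hypothetical example:

```python
import numpy as np

# Hypothetical scalar data: A = a, B = b, C = c, D = d
a, b, c, d = 0.4, 1.0, 0.2, 3.0

def Ricc(x):
    """Residual of the scalar ARE (7)."""
    return x - a * x * a - (c - a * x * b) ** 2 / (2.0 * d - b * x * b)

# Clearing the denominator 2d - b^2 x turns Ricc(x) = 0 into the quadratic
#   -b^2 x^2 + (2d(1 - a^2) + 2abc) x - c^2 = 0.
coeffs = [-(b ** 2), 2.0 * d * (1.0 - a ** 2) + 2.0 * a * b * c, -(c ** 2)]
x_minus, x_plus = sorted(np.roots(coeffs).real)

def W(x):
    """Scalar version of the LMI matrix W(X, M)."""
    return np.array([[x - a * x * a, c - a * x * b],
                     [c - b * x * a, 2.0 * d - b * x * b]])
```

Both roots make W singular, i.e., they are minimum-rank solutions of (5); the smaller root plays the role of X_− and the larger the role of X_+ in the extremal-solution ordering discussed below.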

In the continuous-time case, the definition of a passive system has its origin in network theory, but its formal definition is associated with the existence of a storage function and a particular dissipation inequality. The equivalent concept for the discrete-time case again follows from the LMI (5). If we define the vector z_k as the stacked vector of the state x_k above the input u_k, and construct the inner product z_k^H W(X) z_k, then we obtain the inequality

 x_k^H X x_k − x_{k+1}^H X x_{k+1} + y_k^H u_k + u_k^H y_k = z_k^H W(X) z_k ≥ 0. (9)

Using the quadratic storage function H(x) := ½ x^H X x, this yields a dissipation inequality

 H(x_k) − H(x_0) ≤ Σ_{i=0}^{k−1} ℜ(y_i^H u_i)

that is similar to the one of the continuous-time formulation. It follows from the continuous-time literature [24] and the bilinear transformation between continuous-time and discrete-time systems [1] that if the system model of (2) is minimal, then the LMI (5) has a solution X ∈ H_n if and only if T(z) is passive. Moreover, the solutions of (5) also satisfy the matrix inequalities

 0 < X_− ≤ X ≤ X_+, (10)

where X_− and X_+ are the extremal solutions of the Riccati equation (7). The matrices X satisfying the matrix inequalities (10) also form a convex set, which we call X_±. We thus have the following inclusions

 X_≫ ⊂ X_> ⊂ X_±,

which implies that all matrices in the sets X_≫ and X_> are bounded. Notice also that the (1,1) block in the LMI (4), (6) is a discrete-time Lyapunov inequality X − A^HXA ≥ 0 with X > 0. This implies that A is asymptotically stable if X ∈ X_≫ and is stable if X ∈ X_>, see also [13]. It is also known that if the system is strictly passive, meaning that Φ(e^{ıω}) > 0 on the whole unit circle, then X_≫ is nonempty.
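The step-wise identity (9) and the summed dissipation inequality can be verified by direct simulation; the system, input sequence, and matrix X below are hypothetical:

```python
import numpy as np

# Hypothetical example system and a hypothetical X in the LMI solution set
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
X = np.diag([1.5, 0.8])

W = np.block([[X - A.T @ X @ A, C.T - A.T @ X @ B],
              [C - B.T @ X @ A, D.T + D - B.T @ X @ B]])

rng = np.random.default_rng(0)
x = np.zeros(2)
supply = 0.0                       # running sum of Re(y_i^H u_i)
identity_ok = True
for _ in range(50):
    u = rng.uniform(-1.0, 1.0, size=1)
    y = C @ x + D @ u
    x_next = A @ x + B @ u
    z = np.concatenate([x, u])     # stacked state above input
    lhs = x @ X @ x - x_next @ X @ x_next + 2.0 * float(y @ u)
    identity_ok &= bool(np.isclose(lhs, z @ W @ z)) and z @ W @ z >= -1e-12
    supply += float(y @ u)
    x = x_next

H0, Hk = 0.0, 0.5 * float(x @ X @ x)   # storage H(x) = 0.5 x^H X x, x_0 = 0
```

Summing the nonnegative quadratic form in (9) over the trajectory telescopes into the dissipation inequality: the stored energy never exceeds the supplied energy.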

###### Remark 2.1.

The bilinear transformation between continuous-time and discrete-time systems preserves the solution sets X_> and X_≫ as well as the extremal solutions X_− and X_+ of the Riccati equation. It was shown, see e.g., [15], that the set X_> has a nonempty interior if and only if the system is strictly passive. Since X_≫ is a subset of X_>, it also follows that X_≫ is empty when D^H + D is singular, since then W(X, M) > 0 is impossible.

## 3 Normalized passive realizations

A special class of realizations of discrete-time passive systems are the ones associated with a normalized storage function H(x) := ½ x^H x, i.e., with X = I_n.

###### Definition 3.1.

A normalized passive system has the state-space form (1) where the system matrices satisfy the matrix inequality

 [ I_n, C^H ; C, D^H + D ] − [ A^H ; B^H ] [ A, B ] ≥ 0. (11)

We now show that every passive system has an equivalent normalized passive realization. Consider a minimal state-space model M of a passive linear time-invariant system and let X ∈ X_> be a solution of the LMI (5). We then use a (Cholesky-like) factorization X = T^HT, which implies det T ≠ 0, and define a new realization

 M_T := {A_T, B_T, C_T, D_T} := {TAT^{-1}, TB, CT^{-1}, D}

so that

 W(I_n, M_T) ≥ 0,

which expresses that the transformed realization is now normalized. Notice that the factor T is unique up to a unitary factor Q, since (QT)^H(QT) = T^HT. This unitary factor does not affect the normalization constraint, but we can choose it to put the realization in a special coordinate system. Notice that the inequality (11) implies that A_T is contractive and has a singular value decomposition A_T = UΣV^H, where Σ ≤ I_n. The additional unitary similarity transformation with V will then yield a new normalized coordinate system where, in addition, A_T = (V^HU)Σ, which is a polar decomposition with a positive semidefinite Hermitian factor Σ that is diagonal and satisfies Σ ≤ I_n [10].
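The normalization step can be carried out with a Cholesky factorization; the system and the matrix X are the same hypothetical example used throughout:

```python
import numpy as np

# Hypothetical example system and a hypothetical X in the LMI solution set
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
X = np.diag([1.5, 0.8])

# factor X = T^H T; numpy's cholesky returns lower-triangular L with X = L L^H
T = np.linalg.cholesky(X).conj().T
Ti = np.linalg.inv(T)
AT, BT, CT, DT = T @ A @ Ti, T @ B, C @ Ti, D

# normalized LMI (11) for the transformed realization M_T
W_norm = (np.block([[np.eye(2), CT.conj().T], [CT, DT.conj().T + DT]])
          - np.vstack([AT.conj().T, BT.conj().T]) @ np.hstack([AT, BT]))
```

Here W_norm is exactly W(I, M_T); its positive definiteness confirms that the transformed realization satisfies (11) strictly, as expected for an X in the interior of the solution set.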

Even after the normalization, there is typically still a lot of freedom in the representation of the system, since we could have used any matrix from the set to normalize our realization. In the remainder of this paper, we will focus on normalized passive realizations. The freedom remaining is thus the choice of the matrix from , which, as we will see, can be used to make the representation more robust, i.e., less sensitive to perturbations. The remainder of this paper will deal with the question of how to make use of this freedom in the state space transformation to determine a ’good’ or ‘nearly optimal’ normalized realization.

## 4 The passivity radius

Our goal is to achieve ‘good’ or ‘nearly optimal’ normalized realizations of a passive system. A natural measure for this is a large passivity radius ρ_M, which is the smallest perturbation (in an appropriate norm) to the coefficients of a model M that causes the perturbed system to lose this property.

Once we have determined a solution X ∈ X_> to the LMI (5), we can determine the normalized representations as discussed in Section 3. For each such representation we can determine the passivity radius and then choose the solution which is most robust under perturbations of the model parameters {A, B, C, D}. This is a suitable approach for perturbation analysis since, as soon as we fix X, we will see that we can solve for the smallest perturbation Δ_M := {Δ_A, Δ_B, Δ_C, Δ_D} to our model that makes det W(X, M + Δ_M) = 0. To measure the size of the perturbation of a state space model we will use the Frobenius norm or the 2-norm of the matrix Δ_S defined as

 Δ_S := [ Δ_A, Δ_B ; Δ_C, Δ_D ] (14)

and we also use the notion of the X-passivity radius, which was introduced in [2] and gives a bound for the usual passivity radius.

###### Definition 4.1.

For X ∈ X_≫, the X-passivity radius is defined as

 ρ_M(X) := inf_{Δ_S ∈ C^{n+m,n+m}} { ‖Δ_S‖ | det W(X, M + Δ_M) = 0 }.

Note that in order to compute ρ_M(X) for the model M, we must have a point X ∈ X_≫, since W(X, M) must be positive definite to start with and X should also be positive definite to obtain a state-space transformation from it. The following relation between the X-passivity radius and the usual passivity radius was already presented in [2].

###### Lemma 4.2.

The passivity radius ρ_M for a given model M satisfies

 ρ_M := sup_{X ∈ X_≫} inf_{Δ_S ∈ C^{n+m,n+m}} { ‖Δ_S‖ | det W(X, M + Δ_M) = 0 } = sup_{X ∈ X_≫} ρ_M(X).

We now provide an exact formula for the X-passivity radius, based on a one-parameter optimization problem. For this, we point out that the condition W(X, M + Δ_M) > 0 is equivalent to the condition

 Ŵ(X, M + Δ_M) := [ X^{-1}, A + Δ_A, B + Δ_B ; A^H + Δ_A^H, X, C^H + Δ_C^H ; B^H + Δ_B^H, C + Δ_C, D^H + Δ_D^H + D + Δ_D ] > 0, (15)

which is now an LMI in the unknown parameters of Δ_M (for a fixed X). Setting

 E := [ E_1, E_2 ],  E_1 := [ I_n, 0 ; 0, 0 ; 0, I_m ],  E_2 := [ 0, 0 ; I_n, 0 ; 0, I_m ], (16)

and using the matrix Δ_S in (14), this inequality can be written as the structured LMI

 Ŵ + E [ 0, Δ_S ; Δ_S^H, 0 ] E^T > 0 (17)

as long as the system is still passive. In order to violate this condition, we need to find the smallest Δ_S such that the determinant of (17) becomes 0. Since Ŵ := Ŵ(X, M) is positive definite, we can construct its Cholesky factorization Ŵ = R^HR. The matrix in (17) will become singular when the matrix

 I_{2n+m} + R^{-H} E [ 0, Δ_S ; Δ_S^H, 0 ] E^T R^{-1} (18)

becomes singular. The following theorem is analogous to results obtained for continuous-time systems [2, 15, 18], and we therefore omit the proof. It gives for this kind of problem the minimum norm perturbation, both in Frobenius norm and in 2-norm.

###### Theorem 4.3.

Consider the matrices E_1, E_2 in (16) and the pointwise positive semidefinite matrix function

 M(γ) := [ γF_1^H ; γ^{-1}F_2^H ] [ γF_1, γ^{-1}F_2 ],  F_1 := R^{-H}E_1,  F_2 := R^{-H}E_2, (19)

in the real parameter γ > 0. Then the largest eigenvalue λ_max(M(γ)) is a unimodal function of γ (i.e., it is first monotonically decreasing and then monotonically increasing with growing γ). At the minimizing value γ̲, M(γ̲) has an eigenvector [ u ; v ] associated with its largest eigenvalue, i.e.,

 M(γ̲) [ u ; v ] = λ̲_max [ u ; v ],

where λ̲_max := λ_max(M(γ̲)). The minimum norm perturbation Δ_S is of rank one and can be constructed from this eigenvector. It has norm 1/λ̲_max both in 2-norm and in Frobenius norm.
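Theorem 4.3 suggests a simple computational procedure: form F_1 and F_2, minimize λ_max(M(γ)) over γ > 0 (here by a crude grid search, exploiting unimodality; a golden-section search would also do), and invert. The system, the matrix X, and the block structure of the selector matrices E_1, E_2 (which encode where the blocks of Δ_S enter the perturbed LMI (15)) are assumptions for this hypothetical sketch:

```python
import numpy as np

# Hypothetical example system and X (assumed to lie in the interior set)
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
X = np.diag([1.5, 0.8])
n, m = 2, 1

What = np.block([[np.linalg.inv(X), A, B],
                 [A.T, X, C.T],
                 [B.T, C, D.T + D]])
R = np.linalg.cholesky(What).conj().T        # What = R^H R

# selector matrices: E1 marks the rows, E2 the columns where Delta_S enters
E1 = np.zeros((2 * n + m, n + m)); E1[:n, :n] = np.eye(n); E1[2 * n:, n:] = np.eye(m)
E2 = np.zeros((2 * n + m, n + m)); E2[n:2 * n, :n] = np.eye(n); E2[2 * n:, n:] = np.eye(m)
F1 = np.linalg.solve(R.conj().T, E1)         # F1 = R^{-H} E1
F2 = np.linalg.solve(R.conj().T, E2)         # F2 = R^{-H} E2

def lam_max(g):
    # largest eigenvalue of M(g); same nonzero spectrum as g^2 F1 F1^H + g^-2 F2 F2^H
    return np.linalg.eigvalsh(g ** 2 * F1 @ F1.T + g ** -2 * F2 @ F2.T)[-1]

gammas = np.logspace(-2, 2, 2001)            # crude grid over gamma > 0
lam_min_over_gamma = min(lam_max(g) for g in gammas)
rho = 1.0 / lam_min_over_gamma               # X-passivity radius estimate

alpha = np.linalg.norm(F1, 2)
beta = np.linalg.norm(F2, 2)
```

The result should respect the bounds of Corollary 4.4 and Theorem 4.5, i.e., land between 1/(2αβ) and 1/(αβ).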

A simple bound for can also be obtained, as pointed out in [2] for the continuous-time case. The proof is essentially the same and is therefore omitted.

###### Corollary 4.4.

Consider the matrices F_1, F_2, and M(γ) in Theorem 4.3, and define α := ‖F_1‖_2 and β := ‖F_2‖_2. Then the 2-norm of M(γ) is also the 2-norm of γ²F_1F_1^H + γ^{-2}F_2F_2^H, and

 λ̲_max = ‖M(γ̲)‖_2 = min_{γ>0} ‖M(γ)‖_2 = min_{γ>0} ‖γ²F_1F_1^H + γ^{-2}F_2F_2^H‖_2 ≤ 2‖F_1‖_2‖F_2‖_2 = 2αβ.

This upper bound is attained if and only if the matrices F_1F_1^H and F_2F_2^H have a common eigenvector associated with their maximal eigenvalues.

The following theorem is a variant of a result proven in [2]; it constructs a rank-one perturbation which makes the matrix (17) singular and therefore gives an upper bound for the X-passivity radius.

###### Theorem 4.5.

Let M be a given minimal passive discrete-time model and assume that we are given a matrix X ∈ X_≫. Then the X-passivity radius is bounded by

 1/(2αβ) ≤ ρ_M(X) ≤ 1/[(1 + |v̂^H û|)(αβ)] ≤ 1/(αβ),

where {u, û} and {v, v̂} are normalized dominant singular vector pairs of F_1 and F_2, respectively:

 F_1 u = α û,  F_1^H û = α u,  F_2 v = β v̂,  F_2^H v̂ = β v.

Moreover, if û and v̂ are linearly dependent, then ρ_M(X) = 1/(2αβ).

###### Proof.

The proof is analogous to the continuous-time case, see [15]. ∎

Finally, we point out here that in order to maximize the passivity radius of a system model M, one should maximize the smallest eigenvalue of a suitably scaled matrix. Let D_s := diag(I_n, I_n, I_m/√2) and let us scale the inequality (17) with the matrix D_s, which yields

 D_s Ŵ(X, M) D_s + D_s E [ 0, Δ_S ; Δ_S^H, 0 ] E^T D_s, (20)

where now E^T D_s is an isometry. It then follows that in order to have a perturbation Δ_S of norm ρ_M(X) that makes (20) singular, we must have

 λ_min( D_s Ŵ(X, M) D_s ) ≤ ρ_M(X). (21)

This bound expresses that if we want to maximize ρ_M(X) over all X ∈ X_≫, we should try to maximize λ_min( D_s Ŵ(X, M) D_s ). The following result shows that normalized passive realizations can be expected to have a larger minimal eigenvalue of the matrix D_s Ŵ(I, M_T) D_s than the corresponding minimal eigenvalue of the non-normalized matrix D_s Ŵ(X, M) D_s.
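The claim that E^T D_s is an isometry, and hence that the structured perturbation term in (20) can never exceed ‖Δ_S‖ in norm, is easy to confirm numerically. The dimensions n = 2, m = 1 and the selector matrices are the same assumptions as in the earlier sketch:

```python
import numpy as np

n, m = 2, 1
Ds = np.diag([1.0] * (2 * n) + [1.0 / np.sqrt(2.0)] * m)

# selector matrices E1, E2 and E = [E1, E2], as assumed in (16)
E1 = np.zeros((2 * n + m, n + m)); E1[:n, :n] = np.eye(n); E1[2 * n:, n:] = np.eye(m)
E2 = np.zeros((2 * n + m, n + m)); E2[n:2 * n, :n] = np.eye(n); E2[2 * n:, n:] = np.eye(m)
E = np.hstack([E1, E2])

# isometry check: (E^T Ds)^H (E^T Ds) = Ds E E^T Ds = I
gram = Ds @ E @ E.T @ Ds

# consequence: the scaled structured perturbation never amplifies Delta_S
rng = np.random.default_rng(1)
Delta = rng.standard_normal((n + m, n + m))
P = np.block([[np.zeros((n + m, n + m)), Delta],
              [Delta.T, np.zeros((n + m, n + m))]])
amplification = np.linalg.norm(Ds @ E @ P @ E.T @ Ds, 2) / np.linalg.norm(P, 2)
```

Since E E^T = diag(I_n, I_n, 2I_m), the scaling by D_s is exactly what turns the Gram matrix into the identity.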

###### Lemma 4.6.

Let X = T^HT ∈ X_≫. Then the trace of the congruence-transformed matrix diag(T, T^{-H}, I_m)( D_s Ŵ(X, M) D_s )diag(T^H, T^{-1}, I_m) is minimized over all invertible transformations by the matrices T such that T^HT = X, i.e.,

 min_{det T ≠ 0} trace[ diag(T, T^{-H}, I_m)( D_s Ŵ(X, M) D_s ) diag(T^H, T^{-1}, I_m) ] = trace( D_s Ŵ(I, M_T) D_s ),

while the determinant remains invariant.

###### Proof.

Note that the transformation applied to D_s Ŵ(X, M) D_s is a congruence transformation, which preserves the nonnegativity of its eigenvalues, and that the trace of the resulting matrix is trace(S + S^{-1}) + c, where S := TX^{-1}T^H and c is independent of T. It is well known that this is minimized when S = I_n, i.e., when T^HT = X. The fact that the congruence transformation preserves the determinant is obvious, since det diag(T, T^{-H}, I_m) has modulus one. ∎

This lemma suggests that the smallest eigenvalue should increase, since the product of all the eigenvalues remains constant while their sum is being minimized, but this is of course not guaranteed in general.
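A quick numerical experiment illustrating Lemma 4.6: among the congruence transformations diag(T, T^{-H}, I_m), the trace is smallest when T^H T = X, while the determinant is unchanged. All data are hypothetical:

```python
import numpy as np

# Hypothetical example system and X
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
X = np.diag([1.5, 0.8])
n, m = 2, 1

Ds = np.diag([1.0] * (2 * n) + [1.0 / np.sqrt(2.0)] * m)
What = np.block([[np.linalg.inv(X), A, B], [A.T, X, C.T], [B.T, C, D.T + D]])
S0 = Ds @ What @ Ds

def transformed(T):
    """Congruence transformation diag(T, T^{-H}, I_m) applied to S0."""
    Ti = np.linalg.inv(T)
    L = np.zeros((2 * n + m, 2 * n + m))
    L[:n, :n] = T; L[n:2 * n, n:2 * n] = Ti.T; L[2 * n:, 2 * n:] = np.eye(m)
    return L @ S0 @ L.T

T_opt = np.linalg.cholesky(X).T                          # T with T^T T = X
rng = np.random.default_rng(2)
T_rand = rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # some other invertible T

tr_opt = np.trace(transformed(T_opt))
```

For T = T_opt the transformed matrix is exactly D_s Ŵ(I, M_T) D_s, whose (1,1) and (2,2) blocks are identity matrices; this is why its trace is minimal.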

## 5 Maximizing the passivity radius

In this section we discuss another LMI in the matrices X ∈ H_n, with the same domain as W(X, M), given by

 W̃(X, M) := [ X, XA, XB ; A^HX, X, C^H ; B^HX, C, D^H + D ] ≥ 0.

It is clear that W̃(X, M) is congruent to diag(X, W(X, M)) and, since X > 0, it has the same solution set as the LMI (5). The LMI for the normalized passive realization corresponding to X = T^HT can be obtained via a congruence transformation as well:

 W̃(I_n, M_T) := [ I_n, A_T, B_T ; A_T^H, I_n, C_T^H ; B_T^H, C_T, D_T^H + D_T ] = diag(T^{-H}, T^{-H}, I_m) W̃(X, M) diag(T^{-1}, T^{-1}, I_m) ≥ 0.

Let us now consider the following constrained LMI

 W̃(X, M) ≥ ξ · diag(X, X, 2I_m). (22)

Then the following theorem gives a bound on how large we can choose ξ in this LMI.

###### Theorem 5.1.

Let M := {A, B, C, D} be a minimal realization of a discrete-time passive system, and let X be any matrix in X_>. Then there is a unique value ξ_*(X) which is maximal for the matrix inequality (22) to hold, and which is strictly smaller than 1. Moreover, ξ_*(X) = λ_min( D_s W̃(I, M_T) D_s ), where X = T^HT.

###### Proof.

It follows from (10) that every X ∈ X_> is positive definite. Therefore it can be factorized as X = T^HT with det T ≠ 0, and we can consider the normalized system M_T. It is easy to see that the condition (22) is equivalent to the corresponding LMI condition for the transformed system M_T, which is given by

 W̃(I, M_T) ≥ ξ · diag(I_n, I_n, 2I_m).

The largest value of ξ for which this holds is clearly equal to

 ξ_*(X) = max_ξ { ξ | D_s W̃(I, M_T) D_s ≥ ξ I_{2n+m} } = λ_min( D_s W̃(I, M_T) D_s ). (23)

Since W̃(I, M_T) is positive semi-definite, its diagonal must be non-negative, and therefore ξ_*(X) cannot be larger than 1. Moreover, ξ_*(X) = 1 would then imply that A_T, B_T, and C_T are zero, and the realization would not be minimal. ∎
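Formula (23) makes ξ_*(X) directly computable; continuing the hypothetical running example:

```python
import numpy as np

# Hypothetical example system and X = T^H T
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
X = np.diag([1.5, 0.8])
n, m = 2, 1

T = np.linalg.cholesky(X).T
Ti = np.linalg.inv(T)
AT, BT, CT, DT = T @ A @ Ti, T @ B, C @ Ti, D

Wt = np.block([[np.eye(n), AT, BT],
               [AT.T, np.eye(n), CT.T],
               [BT.T, CT, DT.T + DT]])
Ds = np.diag([1.0] * (2 * n) + [1.0 / np.sqrt(2.0)] * m)

xi_star = np.linalg.eigvalsh(Ds @ Wt @ Ds)[0]      # formula (23)

# at xi = xi_star the constrained LMI (22) is still satisfied but singular
slack = np.linalg.eigvalsh(Wt - xi_star * np.diag([1.0] * (2 * n) + [2.0] * m))
```

The smallest eigenvalue of the constrained LMI residual being (numerically) zero confirms that ξ_*(X) is indeed the maximal feasible value.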

###### Remark 5.2.

Note that Ŵ(I, M_T) = W̃(I, M_T). From (21) one then obtains the inequality

 λ_min( D_s W̃(I, M_T) D_s ) ≤ ρ_{M_T},

which shows the relevance of ξ_*(X) in the maximization of the passivity radius.

The use of the characterization in terms of the LMI (22) is crucial for the rest of this section. We also point out that Theorem 5.1 applies to all points of X_>, and therefore also to all points of X_≫. But we can distinguish between the two cases.

###### Corollary 5.3.

The maximal value ξ_*(X) for a given model M equals 0 if X is a boundary point of X_>, and it is strictly positive if and only if X is in X_≫.

###### Proof.

If X is a boundary point of X_>, then W̃(X, M) is singular, and for those X we thus have ξ_*(X) = 0. If X belongs to X_≫, then W(X, M) > 0 and X > 0, and hence W̃(X, M) > 0. Therefore there exists a ξ > 0 such that (22) holds, and hence ξ_*(X) > 0. Conversely, if ξ_*(X) > 0 then W̃(X, M) > 0, which implies that X ∈ X_≫. ∎

In order to maximize ξ_*(X), we consider for a given ξ ∈ [0, 1) the matrix

 W̃(X, M_ξ) := [ X, XA_ξ, XB_ξ ; A_ξ^HX, X, C_ξ^H ; B_ξ^HX, C_ξ, D_ξ^H + D_ξ ]

corresponding to the modified model M_ξ := {A_ξ, B_ξ, C_ξ, D_ξ} := {A, B, C, D − ξI_m}/(1 − ξ). It turns out that this matrix satisfies the identity

 (1 − ξ) W̃(X, M_ξ) = W̃(X, M) − ξ · diag(X, X, 2I_m), (24)

which is crucial for the following Lemma.
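The explicit form of the modified model used below, M_ξ := {A, B, C, D − ξI_m}/(1 − ξ), is reconstructed here as an assumption consistent with the identity (24) and the pencil (27); the identity itself can then be checked numerically:

```python
import numpy as np

# Hypothetical example system and X
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
X = np.diag([1.5, 0.8])
n, m = 2, 1

def Wt(X, A, B, C, D):
    return np.block([[X, X @ A, X @ B],
                     [A.T @ X, X, C.T],
                     [B.T @ X, C, D.T + D]])

xi = 0.1
# reconstructed modified model: scale {A, B, C, D - xi*I} by 1/(1 - xi)
Axi, Bxi, Cxi = A / (1 - xi), B / (1 - xi), C / (1 - xi)
Dxi = (D - xi * np.eye(m)) / (1 - xi)

lhs = (1 - xi) * Wt(X, Axi, Bxi, Cxi, Dxi)

Dblk = np.zeros((2 * n + m, 2 * n + m))
Dblk[:n, :n] = X; Dblk[n:2 * n, n:2 * n] = X; Dblk[2 * n:, 2 * n:] = 2.0 * np.eye(m)
rhs = Wt(X, A, B, C, D) - xi * Dblk
```

The block-by-block match of the two sides is exactly the content of (24).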

###### Lemma 5.4.

For every X in X_> and any 0 ≤ ξ_− ≤ ξ_+ ≤ ξ_*(X), the passivity LMIs for the systems M_{ξ_−} and M_{ξ_+} are satisfied. Moreover, the solution set of W̃(X, M_{ξ_+}) ≥ 0 is included in the solution set of W̃(X, M_{ξ_−}) ≥ 0.

###### Proof.

The LMIs for two different values ξ_1 ≤ ξ_2 of ξ are related as

 (1 − ξ_2) W̃(X, M_{ξ_2}) = (1 − ξ_1) W̃(X, M_{ξ_1}) − (ξ_2 − ξ_1) diag(X, X, 2I_m).

Since X > 0, we have that diag(X, X, 2I_m) > 0. For 0 ≤ ξ_− < ξ_+ < ξ_*(X), it then follows that

 W̃(X, M) ≥ (1 − ξ_−) W̃(X, M_{ξ_−}) > (1 − ξ_+) W̃(X, M_{ξ_+}) ≥ (1 − ξ_*(X)) W̃(X, M_{ξ_*(X)}) ≥ 0. (25)

The systems M_{ξ_−} and M_{ξ_+} are thus passive, since their associated LMIs have a nonempty solution set. Now consider any X for which W̃(X, M_{ξ_+}) ≥ 0. Since diag(X, X, 2I_m) is strictly positive, it follows from the relation between the two LMIs that (1 − ξ_−) W̃(X, M_{ξ_−}) ≥ (1 − ξ_+) W̃(X, M_{ξ_+}) ≥ 0, and hence W̃(X, M_{ξ_−}) ≥ 0. Hence, the solution set of W̃(X, M_{ξ_+}) ≥ 0 is included in the solution set of W̃(X, M_{ξ_−}) ≥ 0. ∎

Lemma 5.4 implies that, for a given X, the solution sets of the LMIs W̃(X, M_ξ) ≥ 0 shrink with increasing ξ. But we still need to find the matrix X that maximizes ξ_*(X). We can answer this question by relating it to the passivity of the transfer function of the modified system M_ξ,

 Tξ(z):=Cξ(zIn−Aξ)−1Bξ+Dξ,

which is minimal since M was assumed to be minimal. It follows from the discussion of Section 2 that this transfer function corresponds to a strictly passive system if and only if the following conditions are satisfied: (i) the transfer function T_ξ(z) is asymptotically stable, and (ii) the matrix function Φ_ξ(z) := T_ξ^H(z^{-1}) + T_ξ(z) is strictly positive on the unit circle. It has been shown in Section 2 that the zeros of Φ_ξ(z) are the eigenvalues of the symplectic matrix

 S_ξ := [ I_n, B_ξ(D_ξ^H + D_ξ)^{-1}B_ξ^H ; 0, (A_ξ − B_ξ(D_ξ^H + D_ξ)^{-1}C_ξ)^H ]^{-1} [ A_ξ − B_ξ(D_ξ^H + D_ξ)^{-1}C_ξ, 0 ; C_ξ^H(D_ξ^H + D_ξ)^{-1}C_ξ, I_n ], (26)

which are also the finite eigenvalues of the pencil

 z [ 0, −I_n, 0 ; A_ξ^H, 0, 0 ; B_ξ^H, 0, 0 ] + [ 0, A_ξ, B_ξ ; −I_n, 0, C_ξ^H ; 0, C_ξ, D_ξ^H + D_ξ ],

or equivalently, those of the pencil

 z [ 0, (ξ − 1)I_n, 0 ; A^H, 0, 0 ; B^H, 0, 0 ] + [ 0, A, B ; (ξ − 1)I_n, 0, C^H ; 0, C, D^H + D − 2ξI_m ], (27)

and that the realization of T_ξ(z) is minimal. The algebraic conditions corresponding to strict passivity of M_ξ are therefore:

A1. A_ξ has all its eigenvalues inside the unit disc (stability),

A2. the pencil (27) has no eigenvalues on the unit circle (positive realness).
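Conditions A1 and A2 can be tested numerically for a given ξ: A1 via the spectrum of A_ξ, and A2 — equivalently, strict positive-realness — here by sampling Φ_ξ on the unit circle (sampling is a practical proxy; the pencil (27) gives an exact test). The modified model below uses the reconstructed form M_ξ = {A, B, C, D − ξI_m}/(1 − ξ), an assumption consistent with (24) and (27):

```python
import numpy as np

# Hypothetical example system
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2, 0.1]])
D = np.array([[3.0]])
m = 1

xi = 0.05
Axi, Bxi, Cxi = A / (1 - xi), B / (1 - xi), C / (1 - xi)
Dxi = (D - xi * np.eye(m)) / (1 - xi)

# A1: all eigenvalues of A_xi strictly inside the unit disc
rho_Axi = np.max(np.abs(np.linalg.eigvals(Axi)))

# A2 (proxy): Phi_xi(e^{iw}) > 0 on a grid of the unit circle
def Txi(z):
    return Cxi @ np.linalg.solve(z * np.eye(2) - Axi, Bxi) + Dxi

phi_min = min(
    np.linalg.eigvalsh(Txi(np.exp(1j * w)).conj().T + Txi(np.exp(1j * w)))[0]
    for w in np.linspace(0.0, 2.0 * np.pi, 256)
)
```

As ξ grows toward its supremum, either an eigenvalue of A_ξ approaches the unit circle or phi_min approaches zero, which is the limiting behavior discussed next.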

These conditions are phrased in terms of eigenvalues of certain matrices that depend on the parameter ξ. Since eigenvalues are continuous functions of the matrix elements, one can consider limiting cases for the above conditions. As explained in Section 2, passive transfer functions are limiting cases of strictly passive ones. Those limiting cases correspond to the value of ξ where one of the conditions A1. or A2. does not hold anymore.

###### Theorem 5.5.

Let M := {A, B, C, D} be a strictly passive and minimal system. Then there is a bounded supremum