We consider realizations of linear discrete-time dynamical systems for which the associated transfer function is passive. Such transfer functions play a fundamental role in systems and control theory: they represent, e.g., spectral density functions of stochastic processes, show up in spectral factorizations, and are related to discrete-time algebraic Riccati equations. Passive transfer functions can be described using convex sets, and this property has led to the extensive use of convex optimization techniques in this area.
In this paper we show that in the set of possible realizations of a given passive transfer function, there is a subset that maximizes robustness, in the sense that the so-called passivity radius of these realizations is nearly optimal. Related results for continuous-time systems were already obtained in a companion paper. Here we consider the discrete-time system
where , , and
are vector-valued sequences denoting, respectively, the input, state, and output of the system. Denoting real and complex -vectors ( matrices) by , (, ), respectively, the coefficient matrices satisfy , , , and .
We restrict ourselves to systems which are minimal, i.e., the pair is controllable (for all , ), and the pair is observable (i.e., is controllable). Here, the Hermitian (or conjugate) transpose (transpose) of a vector or matrix is denoted by (
) and the identity matrix is denoted by I_n, or simply I if the dimension is clear. We furthermore require that the input and output dimensions are equal to .
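The minimality requirement can be checked numerically via the Kalman rank conditions. The sketch below uses a small hypothetical realization (the matrices are illustrative, not data from this paper) and assumes the standard discrete-time state-space form x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k.

```python
import numpy as np

# Hypothetical minimal realization used only for illustration.
A = np.array([[0.4, 0.1],
              [0.0, 0.3]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[0.8, 0.35]])
D = np.array([[2.0]])
n, m = A.shape[0], B.shape[1]

# Kalman rank tests: (A, B) controllable and (A, C) observable.
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
minimal = (np.linalg.matrix_rank(ctrb) == n) and (np.linalg.matrix_rank(obsv) == n)
```

For a SISO system with n = 2, the controllability matrix is simply [B, AB] and the observability matrix is [C; CA], so both rank tests reduce to checking a 2×2 determinant.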
Passive systems are well studied in the continuous-time case, starting with the works [23, 24]. Here we consider the equivalent definition in the discrete-time case and derive so-called normalized passive realizations that could be considered as “discrete-time port-Hamiltonian systems”. Similar attempts were already made in the literature.
The paper is organized as follows. After going over some preliminaries in Section 2, we characterize in Section 3 what we call normalized passive realizations of a discrete-time passive system. We then show in Section 4
their relevance in estimating the passivity radius of discrete-time passive systems and construct in Section 5 realizations with nearly optimal robustness margin for passivity. In Section 7 we describe an algorithm to compute this robustness margin. In Section 8 we show how to use these ideas to estimate the distance to the set of discrete-time passive systems.
2 Passive systems
Throughout this article we will use the following notation. We denote the set of Hermitian matrices in by . Positive definiteness (semi-definiteness) of is denoted by (). The real and imaginary parts of a complex matrix are written as and , respectively, and is the imaginary unit. We consider functions over , which is a vector space if considered as a real subspace of .
The concept of passivity is well studied. We briefly recall some important properties and refer to the literature for proofs and for a more detailed survey. Consider a discrete-time system (1) with minimal state-space model
and transfer function and define the complex analytic function of :
which coincides with the Hermitian part of on the unit circle:
The transfer function is called strictly positive-real if for all and it is called positive-real if for all ; is called asymptotically stable
if the eigenvalues of are in the open unit disc, and it is called stable if the eigenvalues of are in the closed unit disc, with any eigenvalues occurring on the unit circle being semi-simple. Combining these two properties, is called strictly passive if it is strictly positive-real and asymptotically stable, and it is called passive if it is positive-real and stable.
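Both defining properties can be verified numerically for a concrete realization: asymptotic stability via the spectral radius of the state matrix, and strict positive-realness by sampling the Hermitian part of the transfer function on the unit circle. The matrices below are illustrative assumptions, and the transfer function is taken in the standard form T(z) = C(zI − A)^{-1}B + D.

```python
import numpy as np

# Illustrative realization (hypothetical data, not from the paper).
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.8, 0.35]])
D = np.array([[2.0]])

# Asymptotic stability: all eigenvalues of A in the open unit disc.
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))

# Strict positive-realness: T(z) + T(z)^H > 0 sampled on the unit circle.
def hermitian_part_min(theta):
    z = np.exp(1j * theta)
    T = C @ np.linalg.solve(z * np.eye(2) - A, B) + D
    return np.min(np.linalg.eigvalsh(T + T.conj().T))

min_on_circle = min(hermitian_part_min(t) for t in np.linspace(0.0, 2.0 * np.pi, 400))
strictly_passive = (spectral_radius < 1.0) and (min_on_circle > 0.0)
```

Sampling of course only gives numerical evidence; the LMI characterization discussed below provides an exact certificate.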
The transfer function is the Schur complement of the so-called system pencil
and if the model is minimal, then the finite generalized eigenvalues of are the finite zeros of . The following equivalence transformation, using an arbitrary matrix , leaves the Schur complement, and hence also the transfer function , unchanged
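The Schur-complement relation can be checked directly: assuming the system pencil has the usual block form [[A − zI, B], [C, D]], its Schur complement with respect to the (1,1) block equals D − C(A − zI)^{-1}B = C(zI − A)^{-1}B + D, which is the transfer function. A numerical sketch with hypothetical data:

```python
import numpy as np

# Illustrative data (hypothetical, not from the paper).
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.8, 0.35]])
D = np.array([[2.0]])

z = 0.7 + 0.2j                      # any point that is not an eigenvalue of A
I = np.eye(2)

# Transfer function T(z) = C (zI - A)^{-1} B + D.
T_val = C @ np.linalg.solve(z * I - A, B) + D

# Schur complement of the (1,1) block of the pencil [[A - zI, B], [C, D]].
schur = D - C @ np.linalg.solve(A - z * I, B)

agree = np.allclose(T_val, schur)
```

The equivalence transformation mentioned above acts only on the first block row and column, which is why the Schur complement, and hence the transfer function, is left unchanged.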
Let us define the submatrix of (3), given by
which we will also denote as when the underlying model is obvious from the context. Then it follows by simple algebraic manipulation that
and that is positive real if and only if there exists such that the Linear Matrix Inequality (LMI)
holds. Moreover, is stable if and only if the matrix in this LMI is also positive definite. We will therefore make frequent use of the following sets
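The LMI condition can be tested numerically for a candidate certificate. The block structure used below is the standard discrete-time KYP form and is stated here as an assumption, since the display itself is given above; the data are illustrative.

```python
import numpy as np

# Illustrative realization (hypothetical data).
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.8, 0.35]])
D = np.array([[2.0]])

# Candidate certificate X > 0, chosen so that the LMI holds for this example.
X = np.diag([2.0, 1.0])

# Assumed discrete-time KYP block structure:
#   W(X) = [ X - A^H X A        C^H - A^H X B     ]
#          [ C - B^H X A        D + D^H - B^H X B ]
W = np.block([
    [X - A.T @ X @ A, C.T - A.T @ X @ B],
    [C - B.T @ X @ A, D + D.T - B.T @ X @ B],
])

X_pd = bool(np.all(np.linalg.eigvalsh(X) > 0))
W_psd = bool(np.all(np.linalg.eigvalsh(W) >= -1e-12))
```

In practice an admissible X is computed with a semidefinite-programming solver; here it was constructed by hand for the fixed example, so the eigenvalue checks succeed.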
An important subset of are those solutions to (5) for which the rank of is minimal (i.e., for which ). If is invertible, then the minimum rank solutions in are those for which , which in turn is the case if and only if the Schur complement of in is zero. This Schur complement is associated with the discrete-time algebraic Riccati equation (ARE)
Solutions to (7) produce a spectral factorization of , and each solution corresponds to an invariant subspace, spanned by the columns of , that remains invariant under multiplication with the matrix
i.e., satisfies , where the so-called closed loop matrix is defined as with . Such a subspace is called a Lagrangian invariant subspace, and the matrix has a symplectic structure. Each solution of (7) can also be associated with an extended Lagrangian invariant subspace for the pencil , spanned by the columns of . In particular, satisfies
If is singular, then more complicated constructions are necessary.
In the continuous-time case, the definition of a passive system has its origin in network theory, but its formal definition is associated with the existence of a storage function and a particular dissipation inequality. The equivalent concept for the discrete-time case again follows from the LMI (5). If we define the vector as the stacked vector of the state above the input , and construct the inner product , then we obtain the inequality
Using the quadratic storage function this yields a dissipation inequality
that is similar to the one of the continuous-time formulation. It follows from the continuous-time literature and the bilinear transformation between continuous-time and discrete-time systems that if the system (2) is minimal, then the LMI (5) has a solution if and only if is a passive system. Moreover, the solutions of (5) also satisfy the matrix inequalities
The matrices satisfying the matrix inequalities (10) also form a convex set, which we call . We thus have the following inclusions
which implies that all matrices in the sets and are bounded. Notice also that the block in the LMIs (4), (6) is a discrete-time Lyapunov inequality with . This implies that is asymptotically stable if and stable if . It is also known that if the system is strictly passive, meaning that on the whole unit circle, then .
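The dissipation inequality above can be checked along a simulated trajectory: assuming a quadratic storage function V(x) = x^H X x and the supply rate u^H y + y^H u, every step of a passive realization must satisfy V(x_{k+1}) − V(x_k) ≤ supply. The realization and certificate below are the same illustrative, hand-constructed example as before.

```python
import numpy as np

# Illustrative passive realization and LMI certificate (hypothetical data).
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.8, 0.35]])
D = np.array([[2.0]])
X = np.diag([2.0, 1.0])

rng = np.random.default_rng(0)
x = np.zeros((2, 1))
dissipative = True
for _ in range(100):
    u = rng.standard_normal((1, 1))
    y = C @ x + D @ u
    x_next = A @ x + B @ u
    # Storage difference versus supply rate u^H y + y^H u (real data here).
    dV = float(x_next.T @ X @ x_next) - float(x.T @ X @ x)
    supply = float(u.T @ y + y.T @ u)
    dissipative = dissipative and (dV <= supply + 1e-9)
    x = x_next
```

The per-step inequality is exactly the quadratic form of the LMI evaluated at the stacked vector of state and input, so it holds for every input sequence, not just the random one used here.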
The bilinear transformation between continuous-time and discrete-time systems preserves the solution sets and , as well as the solutions and of the Riccati equation. It was shown, e.g., that the set has a nonempty interior if and only if . Since is a subset of , it also follows that has an empty interior when is singular.
3 Normalized passive realizations
A special class of realizations of discrete-time passive systems are those associated with a normalized storage function .
A normalized passive system has the state-space form (1) where the system matrices satisfy the matrix inequality
We now show that every passive system has an equivalent normalized passive realization. Consider a minimal state-space model of a passive linear time-invariant system and let be a solution of the LMI (5). We then use a (Cholesky-like) factorization , which implies , and define a new realization
which expresses that the transformed realization is now normalized. Notice that the factor is unique up to a unitary factor since . This unitary factor does not affect the normalization constraint, but we can choose it to put in a special coordinate system. Notice that the inequality implies that
is contractive and has a singular value decomposition , where . The additional unitary similarity transformation then yields a new normalized coordinate system where, in addition, , which is a polar decomposition with a positive semidefinite Hermitian factor that is diagonal and satisfies .
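The normalization step can be sketched numerically: factor the certificate as X = L^H L, transform (A, B, C) into (LAL^{-1}, LB, CL^{-1}), and verify that the identity matrix then certifies passivity of the transformed realization. The data, the factorization convention, and the KYP block form are assumptions made for this illustration.

```python
import numpy as np

# Illustrative realization and LMI solution (hypothetical data).
A = np.array([[0.4, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.8, 0.35]])
D = np.array([[2.0]])
X = np.diag([2.0, 1.0])

def kyp(A, B, C, D, X):
    # Assumed discrete-time KYP block matrix.
    return np.block([
        [X - A.T @ X @ A, C.T - A.T @ X @ B],
        [C - B.T @ X @ A, D + D.T - B.T @ X @ B],
    ])

# Cholesky-like factor X = L^T L and the normalized realization.
L = np.linalg.cholesky(X).T            # upper triangular, X = L^T L
Li = np.linalg.inv(L)
An, Bn, Cn = L @ A @ Li, L @ B, C @ Li

# X = I certifies passivity of the normalized realization ...
normalized_ok = bool(np.all(np.linalg.eigvalsh(kyp(An, Bn, Cn, D, np.eye(2))) >= -1e-12))

# ... while the transfer function is unchanged by the state-space transformation.
z = 0.9 + 0.3j
T_old = C @ np.linalg.solve(z * np.eye(2) - A, B) + D
T_new = Cn @ np.linalg.solve(z * np.eye(2) - An, Bn) + D
same_tf = np.allclose(T_old, T_new)
```

The normalized KYP matrix is a congruence transform of the original one, which is why positive semidefiniteness is preserved exactly rather than only approximately.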
Even after normalization, there is typically still a lot of freedom in the representation of the system, since we could have used any matrix from the set to normalize our realization. In the remainder of this paper we focus on normalized passive realizations; the remaining freedom is thus the choice of the matrix from , which, as we will see, can be used to make the representation more robust, i.e., less sensitive to perturbations. We will deal with the question of how to use this freedom in the state-space transformation to determine a ‘good’ or ‘nearly optimal’ normalized realization.
4 The passivity radius
Our goal is to achieve ‘good’ or ‘nearly optimal’ normalized realizations of a passive system. A natural measure for this is a large passivity radius , which is the smallest perturbation (in an appropriate norm) to the coefficients of a model that causes the perturbed system to lose passivity.
Once we have determined a solution to the LMI (5), we can determine the normalized representations as discussed in Section 3. For each such representation we can determine the passivity radius and then choose the solution which is most robust under perturbations of the model parameters . This is a suitable approach for perturbation analysis, since as soon as we fix , we will see that we can solve for the smallest perturbation to our model that makes . To measure the size of the perturbation of a state space model we will use the Frobenius norm or the 2-norm of the matrix defined as
and we also use the notion of the -passivity radius, introduced earlier, which gives a bound for the usual passivity radius.
For the -passivity radius is defined as
Note that in order to compute for the model , we must have a point , since must be positive definite to start with and also should be positive definite to obtain a state-space transformation from it. The following relation between the -passivity radius and the usual passivity radius was already presented in .
The passivity radius for a given model satisfies
We now provide an exact formula for the -passivity radius based on a one parameter optimization problem. For this, we point out that the condition is equivalent to the condition
which is now an LMI in the unknown parameters of (for a fixed ). Setting
and using the matrix in (14), this inequality can be written as the structured LMI
as long as the system is still passive. In order to violate this condition, we need to find the smallest such that the determinant of (17) becomes 0. Since is positive definite, we can then construct its Cholesky factorization . The matrix in (17) will become singular when the matrix
becomes singular. The following theorem, is analogous to results obtained for continuous-time systems [2, 15, 18], and we therefore omit the proof. It gives for this kind of problem the minimum norm perturbation both in Frobenius norm and in 2-norm.
Consider the matrices in (16) and the pointwise positive semidefinite matrix function
in the real parameter . Then the largest eigenvalue is a unimodal function of (i.e., it is first monotonically decreasing and then monotonically increasing with growing ). At the minimizing value , has an eigenvector , i.e.,
where . The minimum norm perturbation is of rank and is given by . It has norm both in 2-norm and in Frobenius norm.
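Since the largest eigenvalue is unimodal in the scalar parameter, the minimization in the theorem can be carried out with a derivative-free bracketing method such as golden-section search. The matrix function below is a stand-in chosen only to exhibit the unimodal one-parameter structure; the actual matrix function of the theorem is built from the model data.

```python
import numpy as np

def largest_eig(t):
    # Stand-in pointwise-PSD matrix function of a real parameter t > 0
    # (hypothetical; the theorem's function depends on the model data).
    M = t * np.diag([2.0, 1.0]) + (1.0 / t) * np.diag([0.5, 3.0])
    return np.linalg.eigvalsh(M)[-1]

def golden_section_min(f, a, b, tol=1e-10):
    # Golden-section search for the minimizer of a unimodal function on [a, b].
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

t_star = golden_section_min(largest_eig, 1e-3, 10.0)
```

For this stand-in, the largest eigenvalue is max(2t + 0.5/t, t + 3/t), whose minimizer is the crossing point t = sqrt(2.5); golden-section search recovers it without derivative information.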
A simple bound for can also be obtained, as was pointed out for the continuous-time case. The proof is essentially the same and is therefore omitted.
Consider the matrices , , and in Theorem 4.3, and define and . Then the norm of is also the norm of , and
This upper bound is attained if and only if the matrices and have a common eigenvector associated with the maximal eigenvalue.
The following theorem is a variant of an earlier result, and constructs a rank-one perturbation which makes the matrix singular and therefore gives an upper bound for .
Let be a given minimal passive discrete-time model and assume that we are given a matrix . Then the -passivity radius is bounded by
where and are normalized dominant singular vector pairs of and , respectively:
Moreover, if and are linearly dependent, then .
The proof is analogous to the continuous-time case. ∎
Finally, we point out here that in order to maximize the passivity radius of a system model , one should maximize the smallest eigenvalue of the scaled matrix . Let and let us scale the inequality (17) with the matrix given by
where now is an isometry. It then follows that in order to have a perturbation of norm that makes (20) singular, we must have
This bound expresses that if we want to maximize over all , we should try to maximize . The following result shows that normalized passive realizations can be expected to have a larger minimal eigenvalue of the matrix than the corresponding minimal eigenvalue of the non-normalized matrix .
Let . Then the trace of the matrix
is minimized by the matrices such that , while the determinant remains invariant
Note that the transformation applied to is a congruence transformation, which preserves the nonnegativity of its eigenvalues, and that the trace of the resulting matrix is , where . It is well known that this is minimized when . The fact that the congruence transformation preserves the determinant identity is obvious. ∎
This lemma suggests that the smallest eigenvalue should increase, since the product of all eigenvalues remains constant while their sum is minimized, but this is of course not guaranteed in general.
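The trace argument rests on the scalar AM-GM fact s + 1/s ≥ 2, with equality only at s = 1. Under the assumption that the congruence produces exactly the trace expression tr(X) + tr(X^{-1}) and leaves the determinant product det(X) det(X^{-1}) = 1 fixed, this can be illustrated numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
traces = []
for _ in range(200):
    M = rng.standard_normal((n, n))
    X = M @ M.T + 0.1 * np.eye(n)          # random positive definite X
    Xi = np.linalg.inv(X)
    traces.append(np.trace(X) + np.trace(Xi))
    # The determinant product is invariant for every X.
    assert abs(np.linalg.det(X) * np.linalg.det(Xi) - 1.0) < 1e-6

# tr(X) + tr(X^{-1}) = sum_i (lambda_i + 1/lambda_i) >= 2n, equality iff X = I.
min_trace = min(traces)
trace_at_identity = 2.0 * n
```

Every random sample stays above the bound 2n attained at the identity, which is the mechanism by which normalization tends to push the smallest eigenvalue up while the eigenvalue product is held fixed.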
5 Maximizing the passivity radius
In this section we discuss another LMI in the matrices with the same domain as , given by
It is clear that is congruent to and since , it has the same solution set as . The LMI for the normalized passive realization corresponding to , can be obtained via a congruence transformation as well
Let us now consider the following constrained LMI
The following theorem gives a bound on how large we can choose in this LMI.
Let be a minimal realization of a discrete-time passive system, and let be any matrix in . Then there is a unique maximal for which the matrix inequality (22) holds, and it is strictly smaller than 1. Moreover,
It follows from (10) that every is positive definite. Therefore it can be factorized as with , and we can consider the normalized system . It is easy to see that the condition (22) is equivalent to the corresponding LMI condition for the transformed system , which is given by
The largest value of for which this holds is clearly equal to
Since is positive semi-definite, its diagonal must be non-negative, and therefore cannot be larger than 1. Moreover, would then imply that , and would be zero. ∎
Note that . From (21) one then obtains the inequality
which shows the relevance of in the maximization of the passivity radius.
The use of the characterization in terms of the LMI (22) is crucial for the rest of this section. We also point out that Theorem 5.1 applies to all points of , and therefore also of , but the two cases can be distinguished as follows.
The maximal value of a matrix for a given model equals 0 if is a boundary point of and is strictly positive if and only if is in .
If is a boundary point of then and also and for those , we thus have . If belongs to , then and . Therefore there exists an such that , and hence . Conversely, if then and which implies that . ∎
In order to maximize , we consider for a given in the matrix
corresponding to the modified model . It turns out that this matrix satisfies the identity
which is crucial for the following lemma.
For every in and any , the passivity LMIs for the systems and are satisfied. Moreover, the solution set of is included in the solution set of .
The LMIs for two different values of are related as
Since , we have that and . For that , it then follows that
The systems and are thus passive, since their associated LMIs have a nonempty solution set. Now consider any for which . Since is strictly positive, so is and hence . It then follows from (25) that . Hence, the solution set of is included in the solution set of . ∎
Lemma 5.4 implies that for a given , the solution sets of are shrinking with increasing . But we still need to find the matrix that maximizes . We can answer this question by relating this to the passivity of the transfer function of the modified system ,
which is minimal since was assumed to be minimal. It follows from the discussion in Section 2 that this transfer function corresponds to a strictly passive system if and only if (i) the transfer function is asymptotically stable, and (ii) the matrix function is strictly positive on the unit circle . It was also shown in Section 2 that the zeros of are the eigenvalues of the symplectic matrix
which are also the finite eigenvalues of the pencil
or equivalently, those of the pencil
and that the realization of is minimal. The algebraic conditions corresponding to strict passivity of are therefore
A1. has all its eigenvalues inside the unit disc (stability),
A2. the pencil (27) has no eigenvalues on the unit circle (positive realness).
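Both conditions reduce to plain eigenvalue computations. The sketch below uses stand-in matrices (hypothetical data, not the actual closed-loop matrix or pencil from the text): A1 is a spectral-radius test, and A2 checks that no finite generalized eigenvalue of a pencil λE − F lies on the unit circle; for a symplectic-type pencil the finite eigenvalues additionally pair up as (λ, 1/λ̄).

```python
import numpy as np
from scipy.linalg import eig

# A1: all eigenvalues of the closed-loop matrix inside the open unit disc.
A_cl = np.array([[0.4, 0.1], [0.0, 0.3]])   # stand-in closed-loop matrix
cond_A1 = np.max(np.abs(np.linalg.eigvals(A_cl))) < 1.0

# A2: no eigenvalue of the pencil lambda*E - F on the unit circle.
# Stand-in pencil with singular E, so one eigenvalue is infinite.
E = np.diag([1.0, 1.0, 0.0])
F = np.diag([0.5, 2.0, 1.0])
w = eig(F, E, right=False)                   # generalized eigenvalues F v = lambda E v
finite = w[np.isfinite(w)]
cond_A2 = bool(np.all(np.abs(np.abs(finite) - 1.0) > 1e-8))

# Symplectic-type pairing (lambda, 1/conj(lambda)) of the finite eigenvalues.
mags = np.sort(np.abs(finite))
paired = bool(np.isclose(mags[0] * mags[-1], 1.0))
```

Because E is singular, one generalized eigenvalue is infinite and must be filtered out before testing the unit-circle condition; this mirrors the situation where the pencil has eigenvalues at infinity.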
These conditions are phrased in terms of eigenvalues of certain matrices that depend on the parameter . Since eigenvalues are continuous functions of the matrix entries, one can consider limiting cases of the above conditions. As explained in Section 2, passive transfer functions are limiting cases of strictly passive ones. These limiting cases correspond to the value of at which one of the conditions A1 or A2 ceases to hold.
Let be a strictly passive and minimal system. Then there is a bounded supremum