I Introduction
This paper is about the asymptotic behavior of the Kalman filter [11]. The Kalman–Bucy filter merges predictions from a trusted model of the dynamics of the system with incoming measurements in order to get an accurate, real–time estimate of the unknown internal state of the system. The estimation relies on the computation of a positive semidefinite matrix $P_t$, the covariance of the estimation error. The difference equation satisfied by $P_t$ is a discrete–time algebraic Riccati equation. Kalman showed that, for a linear time–invariant system, under detectability conditions, the Riccati equation converges to a fixed point, which is unique under certain stabilizability conditions ([10], see also [9]). The classical convergence analysis requires several steps: showing that the error covariance is upper bounded and that, with zero initial value, it is monotone increasing, so that it admits a limit, and then proving that the corresponding filter is stable and that the limit is the same for all initial covariances.
In [4] Bougerol proposed a more geometric convergence analysis by showing that the discrete–time Riccati iteration is a contraction for the Riemannian metric associated to the cone of positive definite matrices. Other authors elaborated along these lines (see e.g. [16, 19, 13, 7]), showing that the Riccati operator is a contraction with respect to other metrics (e.g. Thompson's metric) and providing explicit formulas for the contraction coefficients.
In this paper, we seek to relate the convergence of the Kalman iteration, and, in particular, of the Riccati flow, to the contraction of the (projective) Hilbert metric under the action of a nonlinear map on the space of positive measurable functions (as opposed to the action of the nonlinear Riccati operator on the space of positive definite matrices). The choice of the Hilbert metric seems particularly sensible in this context since, thanks to its invariance under scaling, it allows one to study the convergence of a nonlinear iteration via the analysis of a linear one. To this end, the Kalman iteration is seen as a specialization for Gaussian distributions of filtering algorithms for general hidden Markov models (HMMs), and the observation is made that the underlying iteration of these general filtering algorithms never expands the Hilbert metric. This approach is more general than the analysis of the Riccati iteration, but at the price of a weaker result, since only non–expansiveness in the Hilbert metric can be shown. The gap between non–expansiveness and contraction is certainly a non–trivial one in the infinite dimensional space of probability distributions. Using the Hilbert metric, convergence results have been proved in
[1], [15] (see also [14] for some results concerning HMMs with finite state space), where problems arising from non–compact state spaces or heavy tailed distributions have been considered. We envision that this approach can open the way to a geometric analysis of filtering algorithms on general graphical models, e.g., of arbitrary topology.
The paper is organized as follows. Sections II and III establish common notation by introducing the Hilbert metric and the Kalman filter iteration. In Section IV we show that the nonlinear iteration underlying filtering algorithms for general HMMs does not expand the Hilbert metric on the space of positive measurable functions. In Section V we show that the Kalman iteration can indeed be seen as a particularization for Gaussian distributions of forward filtering algorithms for general HMMs, and as such does not expand the Hilbert metric on the space of positive measurable functions. Section VI discusses convergence. Section VII concludes the paper.
Notation. Throughout the paper, if $\mathcal{K}$ is a cone, we denote by $\mathcal{K}^\circ$ the interior of $\mathcal{K}$. In particular, we will denote by $\overline{\mathcal{P}}$ ($\mathcal{P}$) the cone of positive semidefinite (definite) matrices, while $\overline{\mathcal{C}}$ ($\mathcal{C}$) will be used to denote the cone of nonnegative (positive) measurable functions with respect to a suitable $\sigma$–algebra.
II Hilbert metric
The Hilbert metric was introduced in [8]. Birkhoff [3] (see also [5]) showed that strict positivity of a linear mapping implies contraction in the Hilbert metric, paving the way to many contraction–based results in the literature on positive operators. The Hilbert metric is defined as follows. Let $\mathcal{B}$ be a real Banach space and let $\mathcal{K}$ be a closed solid cone in $\mathcal{B}$, that is, a closed subset with the properties that (i) the interior $\mathcal{K}^\circ$ is non–empty; (ii) $\mathcal{K} + \mathcal{K} \subseteq \mathcal{K}$; (iii) $\mathcal{K} \cap (-\mathcal{K}) = \{0\}$; (iv) $\lambda \mathcal{K} \subseteq \mathcal{K}$ for all $\lambda \geq 0$. Define the partial order
$$x \preceq y \quad :\Longleftrightarrow \quad y - x \in \mathcal{K},$$
and, for $x, y \in \mathcal{K} \setminus \{0\}$, let
$$M(x/y) := \inf \{ \lambda : x \preceq \lambda y \}, \qquad m(x/y) := \sup \{ \lambda : \lambda y \preceq x \}.$$
The Hilbert metric induced by $\mathcal{K}$ is defined by
$$d_H(x, y) := \log \frac{M(x/y)}{m(x/y)}. \qquad (1)$$
For example, if $\mathcal{B} = \mathbb{R}^n$ and the cone $\mathcal{K}$ is the positive orthant $\mathbb{R}^n_+$, then, for $x, y \in (\mathbb{R}^n_+)^\circ$, $M(x/y) = \max_i (x_i / y_i)$ and $m(x/y) = \min_i (x_i / y_i)$, and the Hilbert metric can be expressed as
$$d_H(x, y) = \log \max_{i, j} \frac{x_i \, y_j}{y_i \, x_j}.$$
On the other hand, if $\mathcal{B}$ is the set of symmetric matrices and $\mathcal{K} = \overline{\mathcal{P}}$ is the cone of positive semidefinite matrices, then, for $X, Y \in \mathcal{P}$, $M(X/Y) = \lambda_{\max}(X Y^{-1})$ and $m(X/Y) = \lambda_{\min}(X Y^{-1})$. Hence the Hilbert metric is
$$d_H(X, Y) = \log \frac{\lambda_{\max}(X Y^{-1})}{\lambda_{\min}(X Y^{-1})}.$$
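Both formulas are straightforward to evaluate numerically. The following minimal Python sketch (ours, not from the paper; it assumes numpy and scipy are available) computes the Hilbert metric on the positive orthant and on the positive definite cone, and illustrates the invariance under scaling used throughout the paper.

```python
import numpy as np
from scipy.linalg import eigh

def hilbert_orthant(x, y):
    """Hilbert metric on the positive orthant: log(max_i x_i/y_i / min_i x_i/y_i)."""
    r = x / y
    return np.log(r.max() / r.min())

def hilbert_pd(X, Y):
    """Hilbert metric on the positive definite cone via the pencil X v = lam Y v."""
    lam = eigh(X, Y, eigvals_only=True)   # generalized eigenvalues of (X, Y)
    return np.log(lam.max() / lam.min())

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 2.0, 1.0])
print(hilbert_orthant(x, y))          # log((3/1)/(1/2)) = log 6
print(hilbert_orthant(5.0 * x, y))    # same value: the metric is projective

X = np.array([[2.0, 0.5], [0.5, 1.0]])
print(hilbert_pd(X, np.eye(2)), hilbert_pd(3.0 * X, np.eye(2)))   # equal
```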
In the following, we will be interested in positive operators on finite measures. In this context, the Hilbert metric is defined as follows. Let $E$ be a complete separable metric space and let $\mathcal{E}$ be the $\sigma$–algebra of Borel subsets of $E$. Moreover, let $\mathcal{M}$ be the vector space of finite signed measures on $(E, \mathcal{E})$ and let $\mathcal{M}_+$ be the set of finite nonnegative measures on $(E, \mathcal{E})$. We recall that two elements $\mu, \nu \in \mathcal{M}_+$ are called comparable if $\alpha \nu \leq \mu \leq \beta \nu$ for suitable positive scalars $\alpha, \beta$. The Hilbert metric on $\mathcal{M}_+$ is defined as
$$d_H(\mu, \nu) := \log \frac{\sup \{ \mu(A)/\nu(A) : A \in \mathcal{E}, \, \nu(A) > 0 \}}{\inf \{ \mu(A)/\nu(A) : A \in \mathcal{E}, \, \nu(A) > 0 \}}$$
if $\mu$ and $\nu$ are comparable, and $d_H(\mu, \nu) := \infty$ otherwise. An important property of the Hilbert metric is the following. The Hilbert metric is a projective metric on $\mathcal{M}_+$, i.e. it is nonnegative, symmetric, it satisfies the triangle inequality, and it is such that, for every $\mu, \nu \in \mathcal{M}_+$, $d_H(\mu, \nu) = 0$ if and only if $\mu = \lambda \nu$ for some $\lambda > 0$. It follows easily that $d_H$ is constant on rays, that is,
$$d_H(\lambda \mu, \beta \nu) = d_H(\mu, \nu) \quad \text{for all } \lambda, \beta > 0. \qquad (2)$$
Hilbert metric and positive mappings
In this section, we review contraction properties of positive operators with respect to the Hilbert metric. We recall that a map $\mathcal{A}$ such that $\mathcal{A}(\mathcal{K}) \subseteq \mathcal{K}$ is said to be positive; a map such that $\mathcal{A}(\mathcal{K} \setminus \{0\}) \subseteq \mathcal{K}^\circ$ is said to be strictly positive. If $\mathcal{A}$ is a strictly positive linear map, we denote by
$$\kappa(\mathcal{A}) := \sup \left\{ \frac{d_H(\mathcal{A}x, \mathcal{A}y)}{d_H(x, y)} : x, y \in \mathcal{K}^\circ, \ d_H(x, y) \neq 0 \right\} \qquad (3)$$
the contraction ratio of $\mathcal{A}$, and by
$$\Delta(\mathcal{A}) := \sup \left\{ d_H(\mathcal{A}x, \mathcal{A}y) : x, y \in \mathcal{K} \setminus \{0\} \right\} \qquad (4)$$
its projective diameter. Contraction properties of positive operators with respect to the Hilbert metric are established in the following theorem [3, 5, 12].
Theorem II.1
The following holds:
(i) if $\mathcal{A}$ is a positive linear map on $\mathcal{K}$, then $d_H(\mathcal{A}x, \mathcal{A}y) \leq d_H(x, y)$ for all $x, y \in \mathcal{K} \setminus \{0\}$, i.e. the Hilbert metric contracts weakly under the action of a positive linear transformation;
(ii) [Birkhoff, 1957] if $\mathcal{A}$ is a strictly positive linear map on $\mathcal{K}$, then
$$\kappa(\mathcal{A}) = \tanh \left( \tfrac{1}{4} \Delta(\mathcal{A}) \right). \qquad (5)$$
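Birkhoff's formula (5) is easy to check numerically on the positive orthant, where the projective diameter of an entrywise positive matrix is attained on pairs of its columns (the extreme rays of the image cone). The following sketch (our illustration; the random matrix and test vectors are arbitrary) verifies the resulting contraction bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def d_H(x, y):
    r = x / y
    return np.log(r.max() / r.min())

A = rng.uniform(0.5, 2.0, size=(4, 4))   # strictly positive linear map
# projective diameter: attained on pairs of columns of A
diam = max(d_H(A[:, j], A[:, k]) for j in range(4) for k in range(4))
kappa = np.tanh(diam / 4.0)              # Birkhoff contraction ratio, eq. (5)

for _ in range(5):
    x, y = rng.uniform(0.1, 10.0, size=(2, 4))
    assert d_H(A @ x, A @ y) <= kappa * d_H(x, y) + 1e-12
print("contraction bound (5) verified; ratio =", kappa)
```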
Let $S$ denote the unit sphere in $\mathcal{B}$ and let $(S \cap \mathcal{K}^\circ, d_H)$ denote the metric space obtained by endowing $S \cap \mathcal{K}^\circ$ with the Hilbert metric. Then, by combining Theorem II.1 (ii) with the Banach contraction mapping theorem, the following generalization of the Perron–Frobenius theorem holds: if $\Delta(\mathcal{A}) < \infty$ and if the metric space $(S \cap \mathcal{K}^\circ, d_H)$ is complete, then there exists a unique positive eigenvector of $\mathcal{A}$ in $\mathcal{K}^\circ$.
III Kalman filter and the Riccati operator
In this section, we briefly introduce the Kalman filter iteration, which is analyzed later in Section V, where an alternative derivation is also provided.
Let us consider the linear dynamical system
$$x_{t+1} = A x_t + w_t, \qquad (6a)$$
$$y_t = C x_t + v_t, \qquad (6b)$$
where $\{w_t\}$ and $\{v_t\}$ are mutually uncorrelated white Gaussian noise processes with covariance matrices $Q$ and $R$, respectively, i.e.
$$w_t \sim \mathcal{N}(0, Q), \qquad v_t \sim \mathcal{N}(0, R), \qquad (7)$$
and with initial condition
$$x_0 \sim \mathcal{N}(m_0, P_0) \qquad (8)$$
such that $x_0$ is uncorrelated with the noise processes,
$$\mathbb{E}\left[ x_0 w_t^\top \right] = 0, \qquad \mathbb{E}\left[ x_0 v_t^\top \right] = 0 \quad \text{for all } t. \qquad (9)$$
The Kalman filter recursion consists of the following steps:
Time update ("Predict") step:
$$\hat{x}_{t+1|t} = A \hat{x}_{t|t}, \qquad (10)$$
$$P_{t+1|t} = A P_{t|t} A^\top + Q. \qquad (11)$$
Measurement update ("Correct") step:
$$\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} \left( y_{t+1} - C \hat{x}_{t+1|t} \right), \qquad (12)$$
$$P_{t+1|t+1} = \left( I - K_{t+1} C \right) P_{t+1|t}, \qquad (13)$$
where
$$K_{t+1} = P_{t+1|t} C^\top \left( C P_{t+1|t} C^\top + R \right)^{-1} \qquad (14)$$
is the Kalman gain,
and the recursion is initialized at $\hat{x}_{0|0} = m_0$, $P_{0|0} = P_0$. Equivalently, the following one–step expressions for the a posteriori state estimate and covariance hold:
$$\hat{x}_{t+1|t+1} = A \hat{x}_{t|t} + K_{t+1} \left( y_{t+1} - C A \hat{x}_{t|t} \right), \qquad (15)$$
$$P_{t+1|t+1} = f(P_{t|t}), \qquad (16)$$
where $f$ is the nonlinear map
$$f(P) = \bar{P} - \bar{P} C^\top \left( C \bar{P} C^\top + R \right)^{-1} C \bar{P}, \qquad \bar{P} := A P A^\top + Q. \qquad (17)$$
By the matrix inversion lemma, $f$ in (17) can be written as
$$f(P) = \left( \left( A P A^\top + Q \right)^{-1} + C^\top R^{-1} C \right)^{-1}. \qquad (18)$$
This equation is called the discrete Riccati equation. In the literature, convergence of the Kalman iteration has been studied by proving that the discrete Riccati operator contracts suitable metrics (e.g. the Riemannian metric [4], Thompson's part metric [16]) on the set of positive definite matrices. In the following, we propose to study convergence of the Kalman iteration by directly analyzing an equivalent iteration on the space of positive measurable functions. This equivalent iteration is introduced and discussed in the next section.
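As a concrete illustration (ours, with arbitrary model matrices; it assumes numpy and scipy), the following sketch iterates the Riccati map in its predicted–covariance form and checks convergence to the fixed point of the discrete algebraic Riccati equation, computed here by duality with the control DARE.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)                      # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

P = 10.0 * np.eye(2)                     # predicted covariance, arbitrary PD init
for _ in range(100):
    S = C @ P @ C.T + R                  # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)       # Kalman gain
    P = A @ (P - K @ C @ P) @ A.T + Q    # one-step Riccati map

P_star = solve_discrete_are(A.T, C.T, Q, R)   # filtering DARE via duality
print(np.allclose(P, P_star))            # True: converged to the fixed point
```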
IV Non–expansiveness of the Filtering Recursion in Projective Spaces
In this section, we introduce the filtering algorithm for general hidden Markov models and we show that the map underlying the main iteration does not expand the Hilbert metric on the cone of positive measurable functions. Note that some authors use the term hidden Markov model exclusively for the case where the hidden state takes values in a finite state space. In this paper, following e.g. [6], when referring to a hidden Markov model we intend to include also models with continuous state space; such models are also referred to as state–space models in the literature.
Problem statement
In the broadest sense of the word, a hidden Markov model is a Markov process that is split into two components: an observable component and an unobservable or "hidden" component. That is, a hidden Markov model is a Markov process $(X_t, Y_t)$ on the state space $E \times F$, where we presume that we have a way of observing $Y_t$, but not $X_t$.
In simple cases such as discrete–time, countable state space models, it is common to define hidden Markov models by using the concept of conditional independence. It turns out that conditional independence is mathematically more difficult to define in general settings (in particular, when the state space of the Markov process is not countable, the case we are interested in), so a different route is adopted (see [6] for details). To this end, we define the transition kernel (the parallel of the transition matrix for countable state spaces).
Definition IV.1
(Transition kernel)
A kernel from a measurable space $(E, \mathcal{E})$ to a measurable space $(F, \mathcal{F})$ is a map $K : E \times \mathcal{F} \to [0, \infty]$ such that
(i) for all $x \in E$, the map $B \mapsto K(x, B)$ is a measure on $(F, \mathcal{F})$;
(ii) for all $B \in \mathcal{F}$, the map $x \mapsto K(x, B)$ is measurable.
If $K(x, F) = 1$ for every $x \in E$, then $K$ is called a transition kernel.
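To fix ideas, here is a small sketch (ours, anticipating the Gaussian model of Section V) of a transition kernel on $\mathbb{R}^2$ given by a density, $K(x, dx') = \mathcal{N}(x'; Ax, Q)\,dx'$, so that $K(x, \mathbb{R}^2) = 1$ for every $x$.

```python
import numpy as np
from scipy.stats import multivariate_normal

A = np.array([[0.95, 0.10], [0.00, 0.90]])
Qcov = 0.1 * np.eye(2)

def transition_density(x_next, x):
    """Density of the kernel K(x, dx') = N(x'; Ax, Q) dx' from the system (6a)."""
    return multivariate_normal.pdf(x_next, mean=A @ x, cov=Qcov)

# For each fixed x, the map x' -> transition_density(x', x) integrates to one
# (a probability measure); for each fixed set B, x -> K(x, B) is measurable.
```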
We next consider an $E$–valued stochastic process $(X_t)_{t \geq 0}$, i.e., a collection of $E$–valued random variables on a common underlying probability space $(\Omega, \mathcal{G}, \mathbb{P})$, where $(E, \mathcal{E})$ is some measurable space. The process is Markov if, for every time $t$, there exists a transition kernel $Q_t$ on $E$ such that
$$\mathbb{P} \left( X_{t+1} \in A \mid X_0, \ldots, X_t \right) = Q_t(X_t, A)$$
for every $A \in \mathcal{E}$, $t \geq 0$. If $Q_t = Q$ for every $t$, then the Markov process is called homogeneous. For simplicity of exposition, from now on we will consider homogeneous Markov processes, though the theory we are about to develop does not rely on this assumption. A hidden Markov model is an (only partially observed) Markov process whose transition kernel has a special structure, namely it is such that both the joint process $(X_t, Y_t)$ and the marginal unobservable process $(X_t)$ are Markov. Formally:
Definition IV.2
(Hidden Markov Model) Let $(E, \mathcal{E})$ and $(F, \mathcal{F})$ be two measurable spaces and let $Q$ and $G$ denote a transition kernel on $E$ and a transition kernel from $E$ to $F$, respectively. Consider the transition kernel $P$ on the product space $(E \times F, \mathcal{E} \otimes \mathcal{F})$ defined by
$$P \left( (x, y), C \right) := \iint \mathbb{1}_C(x', y') \, Q(x, dx') \, G(x', dy'),$$
for $(x, y) \in E \times F$ and $C \in \mathcal{E} \otimes \mathcal{F}$. The Markov process $(X_t, Y_t)_{t \geq 0}$ with transition kernel $P$ and initial probability measure $\mu(dx) \, G(x, dy)$ on $(E \times F, \mathcal{E} \otimes \mathcal{F})$ is called a hidden Markov model.
A hidden Markov model is completely determined by the initial measure and its transition kernel (equivalently, by $\mu$, $Q$ and $G$); formally:
Proposition IV.1
Let $(X_t, Y_t)_{t \geq 0}$ be a hidden Markov model on $E \times F$ with transition kernel $Q$, observation kernel $G$, and initial measure $\mu$. Then, for every bounded measurable function $h$,
$$\mathbb{E} \left[ h(X_0, Y_0, \ldots, X_n, Y_n) \right] = \int h(x_0, y_0, \ldots, x_n, y_n) \, \mu(dx_0) \, G(x_0, dy_0) \prod_{t=1}^{n} Q(x_{t-1}, dx_t) \, G(x_t, dy_t). \qquad (19)$$
In the following, we are interested in the filtering problem for HMMs, namely the problem of computing the sequence of conditional distributions of $X_t$ given $Y_0, \ldots, Y_t$. The filtering problem, as well as the related smoothing and prediction problems, has its origin in the work of Wiener, who was interested in stationary processes. In the more general setting of hidden Markov models, early contributions are the works of Stratonovich, Shiryaev, and Baum, Petrie and coworkers [18, 17, 2]; see also [6] for a recent monograph.
Filtering algorithm
Assume that both the kernels $Q$ and $G$ are absolutely continuous with respect to the Lebesgue measure (in the next section we will particularize to the case of Gaussian distributions), with transition density functions $q(x_t, x_{t+1})$ and $g(x_t, y_t)$, respectively. In terms of transition densities, the filtering problem can be solved as follows.
Theorem IV.1 (Forward filtering recursion)
We denote by $\pi_t$ the probability density function of the conditional distribution of $X_t$ given $Y_0 = y_0, \ldots, Y_t = y_t$, and let $\mu$ denote, with a slight abuse of notation, the density of the initial measure. Then $\pi_{t+1}$ can be recursively expressed in terms of $\pi_t$ as follows:
$$\pi_{t+1}(x_{t+1}) = \frac{ g(x_{t+1}, y_{t+1}) \int q(x_t, x_{t+1}) \, \pi_t(x_t) \, dx_t }{ \iint g(x, y_{t+1}) \, q(x_t, x) \, \pi_t(x_t) \, dx_t \, dx }, \qquad (20)$$
with the iteration initialized at
$$\pi_0(x_0) = \frac{ g(x_0, y_0) \, \mu(x_0) }{ \int g(x, y_0) \, \mu(x) \, dx }. \qquad (21)$$
The iteration (20) defines a time–varying dynamical system over the cone of nonnegative measurable functions with respect to the product $\sigma$–algebra. The following equivalent two–step formulation holds.
Remark IV.1
[Two–step formulation of the filtering recursion] The filtering recursion (20) is often split into two steps:
prediction step, in which the one–step–ahead predictive density is computed,
$$\pi_{t+1|t}(x_{t+1}) = \int q(x_t, x_{t+1}) \, \pi_t(x_t) \, dx_t; \qquad (22)$$
update step, in which the observed data from time $t+1$ is absorbed, yielding the filtering density
$$\pi_{t+1}(x_{t+1}) = \frac{ g(x_{t+1}, y_{t+1}) \, \pi_{t+1|t}(x_{t+1}) }{ \int g(x, y_{t+1}) \, \pi_{t+1|t}(x) \, dx }. \qquad (23)$$
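For a scalar state, the two steps (22)–(23) can be implemented directly on a grid. The following minimal sketch (ours; the Gaussian densities anticipate Section V, and the grid discretization and parameter values are our assumptions) makes the recursion concrete.

```python
import numpy as np

a, c, q, r = 0.9, 1.0, 0.1, 0.5              # scalar model parameters
xs = np.linspace(-10, 10, 400)               # discretized state space
dx = xs[1] - xs[0]

def gauss(z, mean, var):
    return np.exp(-0.5 * (z - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

Qk = gauss(xs[None, :], a * xs[:, None], q)  # transition density q(x_t, x_{t+1})
pi = gauss(xs, 0.0, 1.0)                     # filtering density at time t

def filter_step(pi, y):
    pred = (pi @ Qk) * dx                    # prediction step (22)
    upd = gauss(y, c * xs, r) * pred         # update step (23), unnormalized
    return upd / (upd.sum() * dx)            # normalization

pi = filter_step(pi, y=0.3)                  # one pass of the recursion (20)
```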
Non–expansiveness in projective space
First of all, notice that the nonlinear map in (20), say $\mathcal{T}$, is the composition of a linear map (the numerator) and a positive scaling, i.e. we can write
$$\pi_{t+1} = \mathcal{T}(\pi_t) = \frac{ \mathcal{L}(\pi_t) }{ \int \mathcal{L}(\pi_t)(x) \, dx },$$
where $\mathcal{L}$ is the linear map
$$\mathcal{L}(\pi)(x_{t+1}) := g(x_{t+1}, y_{t+1}) \int q(x_t, x_{t+1}) \, \pi(x_t) \, dx_t, \qquad (24)$$
with $q$ and $g$ the transition densities associated to the transition and observation kernels $Q$ and $G$, respectively. The next theorem draws the consequences of the fact that the map $\mathcal{L}$ takes nonnegative measurable functions into nonnegative measurable functions.
Theorem IV.2
The map $\mathcal{L}$ in (24) does not expand the Hilbert metric, i.e.
$$d_H \left( \mathcal{L}(\pi), \mathcal{L}(\pi') \right) \leq d_H(\pi, \pi') \quad \text{for all } \pi, \pi' \in \overline{\mathcal{C}} \setminus \{0\}.$$
Proof: The map $\mathcal{L}$ is the composition of (i) the integral operator $\pi \mapsto \int q(x_t, \cdot) \, \pi(x_t) \, dx_t$ and (ii) the multiplication operator $\rho \mapsto g(\cdot, y_{t+1}) \, \rho$. These maps are positive linear and as such they do not expand the Hilbert metric (see Theorem II.1, (i)). The thesis follows since the composition of nonexpansive operators is nonexpansive.
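Theorem IV.2 can be checked numerically on the discretized filtering map of the sketch following Remark IV.1 (whose definitions of gauss, xs, dx, Qk and the model constants are reused here; the check is our illustration, not part of the paper).

```python
def d_H(f, g_):
    """Hilbert metric between positive functions sampled on the grid."""
    ratio = f / g_
    return np.log(ratio.max() / ratio.min())

def L(pi, y):                                 # linear part of (20): no scaling
    return gauss(y, c * xs, r) * ((pi @ Qk) * dx)

pi1 = gauss(xs, 1.0, 2.0)
pi2 = 0.5 * gauss(xs, -2.0, 0.5) + gauss(xs, 3.0, 1.0)
print(d_H(L(pi1, 0.3), L(pi2, 0.3)) <= d_H(pi1, pi2))   # True: non-expansive
```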
V Kalman filtering as Forward Filtering Recursion
The classical derivation of the Kalman filter relies on an argument based on projections onto spaces spanned by random variables. As an alternative, the Kalman iteration can be seen as a specialization of the filtering algorithm in Theorem IV.1 for Gaussian distributions. This fact by itself is known in the literature (see e.g. [6]). In this section, we first briefly review this alternative derivation of Kalman filtering. This, combined with the (weak) contraction result of Theorem IV.2, lets us conclude that the Kalman iteration does not expand the Hilbert metric. Convergence of the Kalman iteration is discussed in Section VI.
Before getting started, we observe that the linear dynamical system (6)–(9) is indeed equivalent to a hidden Markov model as specified by Proposition IV.1, with initial, transition and emission probability densities, for $t \geq 0$, given by
$$\mu(x_0) = \mathcal{N}(x_0; \, m_0, P_0), \qquad (25)$$
$$q(x_t, x_{t+1}) = \mathcal{N}(x_{t+1}; \, A x_t, Q), \qquad (26)$$
$$g(x_t, y_t) = \mathcal{N}(y_t; \, C x_t, R). \qquad (27)$$
Also, we recall that, given the prior and likelihood
$$p(x) = \mathcal{N}(x; \, m, P), \qquad (28)$$
$$p(y \mid x) = \mathcal{N}(y; \, C x, R), \qquad (29)$$
the posterior and normalization constant are given by
$$p(x \mid y) = \mathcal{N}(x; \, \mu_{\mathrm{post}}, \Sigma), \qquad (30)$$
$$p(y) = \mathcal{N}(y; \, C m, \, C P C^\top + R), \qquad (31)$$
with
$$\mu_{\mathrm{post}} = \Sigma \left( P^{-1} m + C^\top R^{-1} y \right), \qquad (32)$$
$$\Sigma = \left( P^{-1} + C^\top R^{-1} C \right)^{-1}. \qquad (33)$$
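The information–form expressions (32)–(33) and the gain form used by the Kalman filter are related by the matrix inversion lemma; the following numerical sketch (our illustration, with arbitrary dimensions and matrices) confirms that the two forms agree.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
P = np.eye(n) + 0.1 * np.ones((n, n))        # prior covariance
m = rng.normal(size=n)                       # prior mean
C = rng.normal(size=(p, n))
R = 0.5 * np.eye(p)
y = rng.normal(size=p)

# information form, as in (32)-(33)
Sigma = np.linalg.inv(np.linalg.inv(P) + C.T @ np.linalg.inv(R) @ C)
mu_post = Sigma @ (np.linalg.inv(P) @ m + C.T @ np.linalg.inv(R) @ y)

# gain form, via the matrix inversion lemma
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
print(np.allclose(mu_post, m + K @ (y - C @ m)))       # True
print(np.allclose(Sigma, (np.eye(n) - K @ C) @ P))     # True
```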
The next proposition connects the Kalman filter algorithm to the filtering recursion described in Section IV.
Proposition V.1
Let the initial, transition and emission densities be given by (25)–(27). Then the filtering densities produced by the recursion (20) are Gaussian, $\pi_t = \mathcal{N}(\cdot \,; \, \hat{x}_{t|t}, P_{t|t})$, where $\hat{x}_{t|t}$ and $P_{t|t}$ evolve according to the Kalman iteration (10)–(14).
prediction step: By (22) and (26), $\pi_{t+1|t}$ is Gaussian with mean $\hat{x}_{t+1|t} = A \hat{x}_{t|t}$ and covariance $P_{t+1|t} = A P_{t|t} A^\top + Q$, which are the time update equations (10)–(11).
update step: By (23), $\pi_{t+1}$ is given by
$$\pi_{t+1}(x_{t+1}) = \frac{ g(x_{t+1}, y_{t+1}) \, \pi_{t+1|t}(x_{t+1}) }{ \int g(x, y_{t+1}) \, \pi_{t+1|t}(x) \, dx }.$$
Now $\pi_{t+1|t}$ is Gaussian with mean $\hat{x}_{t+1|t}$ and covariance $P_{t+1|t}$, and $\pi_{t+1}$ is also Gaussian. We denote by $\hat{x}_{t+1|t+1}$ and $P_{t+1|t+1}$ its mean and covariance. By virtue of (32)–(33) we get
$$\hat{x}_{t+1|t+1} = P_{t+1|t+1} \left( P_{t+1|t}^{-1} \, \hat{x}_{t+1|t} + C^\top R^{-1} y_{t+1} \right), \qquad (34)$$
$$P_{t+1|t+1} = \left( P_{t+1|t}^{-1} + C^\top R^{-1} C \right)^{-1}, \qquad (35)$$
from which the expressions (12)–(13) for the a posteriori state estimate and covariance can be recovered via the matrix inversion lemma.
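Proposition V.1 can also be observed numerically: running the grid–based recursion of Section IV on the Gaussian model (25)–(27) reproduces the Kalman filter's posterior mean and variance. The sketch below (ours; scalar case, illustrative parameters and observations) makes the comparison.

```python
import numpy as np

a, c, q, r = 0.9, 1.0, 0.1, 0.5
xs = np.linspace(-10, 10, 2000)
dx = xs[1] - xs[0]
gauss = lambda z, mean, var: np.exp(-0.5*(z - mean)**2/var) / np.sqrt(2*np.pi*var)
Qk = gauss(xs[None, :], a * xs[:, None], q)   # transition density on the grid

m, P = 0.0, 1.0                               # Kalman filter state
pi = gauss(xs, m, P)                          # grid filtering density
for y in [0.3, -0.1, 0.8]:
    # Kalman filter: predict (10)-(11), then correct (12)-(14)
    m, P = a * m, a * P * a + q
    K = P * c / (c * P * c + r)
    m, P = m + K * (y - c * m), (1 - K * c) * P
    # grid filter: prediction (22), then update (23)
    pi = gauss(y, c * xs, r) * ((pi @ Qk) * dx)
    pi /= pi.sum() * dx

mean = (xs * pi).sum() * dx
var = (xs**2 * pi).sum() * dx - mean**2
print(np.isclose(mean, m, atol=1e-3), np.isclose(var, P, atol=1e-3))  # True True
```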
VI On strict contractiveness of the Kalman iteration
So far, we have shown that the time–varying nonlinear operator that underlies the Kalman iteration does not expand the Hilbert metric. Proving convergence of the Kalman iteration then amounts to proving that this iteration strictly contracts the Hilbert metric. As observed in Section IV, the map (20) is the composition of a linear positive map and a positive scaling. By the scaling invariance property of the Hilbert metric, it follows that the convergence analysis can concentrate on the linear numerator $\mathcal{L}$ alone. By Theorem II.1 (ii), a sufficient condition for a strictly positive linear operator to be a contraction is that it have finite projective diameter. At this point, one may observe that even the Hilbert distance between two Gaussians with the same variance and different means is infinite (a general discussion that takes into account problems arising from the use of the Hilbert metric with non–compact state spaces and heavy tailed distributions is contained in [1]). Proving strict contraction usually requires exploiting the fact that the map is time–varying, and showing that the map contracts over a uniform time horizon as opposed to at each time instant. For iterations on the finite dimensional space of covariance matrices, this is the place where the observability and controllability conditions enter the analysis. Our hope is that similar conditions apply to more general situations than the one covered by the Kalman filter and that this general approach will find novel applications in the analysis of filtering algorithms on general graphical models.
VII Conclusion
As an attempt to generalize the contraction–based convergence analysis of the Kalman filter, we have interpreted the contraction result of Bougerol in the space of positive definite (covariance) matrices as a specialization of the non–expansiveness of the general filtering recursion for hidden Markov models in the space of positive measurable functions. In spite of the obstacles to showing a finite projective diameter in this infinite dimensional space, we feel that this approach is worth revisiting in the convergence analysis of filtering algorithms on general graphical models (arbitrary topology and/or on different spaces of distributions). This is the topic of ongoing research.
References

[1] R. Atar and O. Zeitouni. Exponential stability for nonlinear filtering. Annales de l'IHP Probabilités et Statistiques, 33(6):697–725, 1997.
[2] L.E. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, pages 164–171, 1970.
[3] G. Birkhoff. Extensions of Jentzsch's theorem. Transactions of the American Mathematical Society, pages 219–227, 1957.
[4] P. Bougerol. Kalman filtering with random coefficients and contractions. SIAM Journal on Control and Optimization, 31(4):942–959, 1993.
[5] P.J. Bushell. Hilbert's metric and positive contraction mappings in a Banach space. Archive for Rational Mechanics and Analysis, 52(4):330–338, 1973.
[6] O. Cappé, E. Moulines, and T. Rydén. Inference in Hidden Markov Models. Springer Verlag, New York, 2005.
[7] S. Gaubert and Z. Qu. The contraction rate in Thompson's part metric of order–preserving flows on a cone – application to generalized Riccati equations. Journal of Differential Equations, 256(8):2902–2948, 2014.
[8] D. Hilbert. Über die gerade Linie als kürzeste Verbindung zweier Punkte. Mathematische Annalen, 46(1):91–96, 1895.
[9] A.H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, 1970.
[10] R.E. Kalman. New methods in Wiener filtering theory. In Proceedings of the First Symposium on Engineering Applications of Random Function Theory and Probability. John Wiley & Sons, New York, 1963.
[11] R.E. Kalman and R.S. Bucy. New results in linear filtering and prediction theory. Journal of Basic Engineering, 83(1):95–108, 1961.
[12] E. Kohlberg and J.W. Pratt. The contraction mapping approach to the Perron–Frobenius theory: Why Hilbert's metric? Mathematics of Operations Research, 7(2):198–210, 1982.
[13] J. Lawson and Y. Lim. A Birkhoff contraction formula with applications to Riccati equations. SIAM Journal on Control and Optimization, 46(3):930–951, 2007.
[14] F. Le Gland and L. Mevel. Exponential forgetting and geometric ergodicity in hidden Markov models. Mathematics of Control, Signals and Systems, 13(1):63–93, 2000.
[15] F. Le Gland and N. Oudjane. Stability and uniform approximation of nonlinear filters using the Hilbert metric and application to particle filters. The Annals of Applied Probability, 14(1):144–187, 2004.
[16] C. Liverani and M.P. Wojtkowski. Generalization of the Hilbert metric to the space of positive definite matrices. Pacific Journal of Mathematics, 166(2):339–355, 1994.
[17] A.N. Shiryaev. On stochastic equations in the theory of conditional Markov processes. Theory of Probability and Its Applications, 11(1):179–184, 1966.
[18] R.L. Stratonovich. Conditional Markov processes. Theory of Probability and Its Applications, 5(2):156–178, 1960.
[19] M.P. Wojtkowski. Geometry of Kalman filters. Journal of Geometry and Symmetry in Physics, 2007.