On the Geometry of Message Passing Algorithms for Gaussian Reciprocal Processes

03/30/2016
by   Francesca Paola Carli, et al.
University of Cambridge

Reciprocal processes are acausal generalizations of Markov processes introduced by Bernstein in 1932. In the literature, a significant amount of attention has been focused on developing dynamical models for reciprocal processes. Recently, probabilistic graphical models for reciprocal processes have been provided. This opens the way to the application of efficient inference algorithms in the machine learning literature to solve the smoothing problem for reciprocal processes. Such algorithms are known to converge if the underlying graph is a tree. This is not the case for a reciprocal process, whose associated graphical model is a single loop network. The contribution of this paper is twofold. First, we introduce belief propagation for Gaussian reciprocal processes. Second, we establish a link between convergence analysis of belief propagation for Gaussian reciprocal processes and stability theory for differentially positive systems.



I Introduction

An $\mathbb{R}^n$–valued discrete-time stochastic process defined over an interval $\mathcal{I}$ is said to be reciprocal if, for any subinterval $[t_0, t_1] \subseteq \mathcal{I}$, the process in the interior of $[t_0, t_1]$ is conditionally independent of the process in $\mathcal{I} \setminus [t_0, t_1]$ given $x(t_0)$ and $x(t_1)$. From the definition we have that the class of reciprocal processes is larger than the class of Markov processes: Markov processes are necessarily reciprocal, but the converse is not true [15]. Moreover, multidimensional Markov random fields reduce in one dimension to reciprocal processes, not to Markov processes.

Reciprocal processes were introduced by Bernstein [1] in 1932, who was influenced by an attempt of Schrödinger [26] at giving a stochastic interpretation of quantum mechanics. After their introduction by Bernstein, reciprocal processes have been studied in detail by Jamison [15, 16, 17], Carmichael, Massé, and Theodorescu [8], and Levy, Krener, and Frezza [20, 21, 19]. For more recent literature on reciprocal processes see [6, 7], [10, 31] and references therein. As observed in [21], the steady-state distribution of the temperature along a heated ring, or of a beam subjected to random loads along its length, can be modeled in terms of reciprocal processes. Relevance for applications is also attested in [11, 28, 23], where applications to tracking of a ship trajectory [11], estimation of arm movements [28], and synthesis of textured images [23] are considered.

Starting with Krener’s work [20], a significant amount of attention has been focused on developing state–space models for reciprocal processes. A second order state–space model for discrete–time Gaussian reciprocal processes has been provided in [21]. Modeling in the finite state space case has been analyzed separately in [10] (see also [9]).

Recently [5], probabilistic graphical models for reciprocal processes have been provided, which are distribution–independent. This opens the way to the application of efficient inference algorithms from the machine learning literature (belief propagation, a.k.a. the sum–product algorithm) to solve the smoothing problem for reciprocal processes. Such algorithms are known to converge if the underlying graph is a tree. This is not the case for a reciprocal process, whose associated graphical model is a single loop network. In [5] it has been shown that, for the case of finite–state reciprocal processes, convergence of the belief propagation iteration boils down to the study of asymptotic stability of a linear time invariant positive system, which can be analyzed via the Hilbert metric. This approach is geometric in nature, in that it applies to general linear positive transformations in an arbitrary linear space which map a quite general cone into itself. In a recent paper [12], a generalization of linear positivity, differential positivity, has been introduced. Differential positivity extends linear positivity to the nonlinear setting and, similarly to the latter, restricts the asymptotic behavior of a nonlinear system, a result that is proved by exploiting contraction properties of differentially positive systems with respect to the Hilbert metric. The contribution of this paper is twofold. First, we introduce belief propagation for Gaussian reciprocal processes. Second, we establish a link between convergence analysis of belief propagation for Gaussian reciprocal processes, whose underlying iteration is nonlinear on the cone of positive definite matrices, and stability theory of differentially positive systems.

The paper is organized as follows. In Section II, the Hilbert metric is introduced. In Section III we briefly touch upon positive and differentially positive systems and on how the property restricts the asymptotic behavior as a consequence of the contraction of the Hilbert metric. Reciprocal processes and the associated graphical model are reviewed in Section IV. In Section V the belief propagation algorithm is introduced as well as its specialization for a hidden reciprocal model. A link between convergence analysis of belief propagation for Gaussian reciprocal processes and stability theory for differentially positive systems is established in Section VI. Section VII ends the paper.

II Hilbert Metric

The Hilbert metric was introduced in [13] and is defined as follows. Let $\mathcal{B}$ be a real Banach space and let $\mathcal{K}$ be a closed solid cone in $\mathcal{B}$, that is, a closed subset with the properties that (i) the interior of $\mathcal{K}$, denoted $\operatorname{int}\mathcal{K}$, is non–empty; (ii) $\mathcal{K} + \mathcal{K} \subseteq \mathcal{K}$; (iii) $\mathcal{K} \cap (-\mathcal{K}) = \{0\}$; (iv) $\lambda \mathcal{K} \subseteq \mathcal{K}$ for all $\lambda \geq 0$. Define the partial order

$$x \preceq y \iff y - x \in \mathcal{K},$$

and, for $x, y \in \mathcal{K} \setminus \{0\}$, let

$$M(x, y) := \inf \{ \lambda \,:\, x \preceq \lambda y \}, \qquad m(x, y) := \sup \{ \lambda \,:\, \lambda y \preceq x \}.$$

The Hilbert metric induced by $\mathcal{K}$ is defined by

$$d_H(x, y) := \log \frac{M(x, y)}{m(x, y)}. \qquad (1)$$

For example, if $\mathcal{B} = \mathbb{R}^n$ and the cone is the positive orthant $\mathcal{K} = \mathbb{R}^n_{+}$, then $M(x, y) = \max_i (x_i / y_i)$ and $m(x, y) = \min_i (x_i / y_i)$, and the Hilbert metric can be expressed as

$$d_H(x, y) = \log \frac{\max_i (x_i / y_i)}{\min_i (x_i / y_i)}.$$

On the other hand, if $\mathcal{B}$ is the set of symmetric matrices and $\mathcal{K}$ is the cone of positive semidefinite matrices, then, for $X, Y \succ 0$, $M(X, Y) = \lambda_{\max}(X Y^{-1})$ and $m(X, Y) = \lambda_{\min}(X Y^{-1})$. Hence the Hilbert metric is

$$d_H(X, Y) = \log \frac{\lambda_{\max}(X Y^{-1})}{\lambda_{\min}(X Y^{-1})}.$$

An important property of the Hilbert metric is the following. The Hilbert metric is a projective metric on $\mathcal{K}$, i.e. it is nonnegative, symmetric, it satisfies the triangle inequality, and it is such that, for every $x, y \in \mathcal{K} \setminus \{0\}$, $d_H(x, y) = 0$ if and only if $x = \lambda y$ for some $\lambda > 0$. It follows easily that $d_H$ is constant on rays, that is,

$$d_H(\lambda x, \mu y) = d_H(x, y) \quad \text{for all } \lambda, \mu > 0. \qquad (2)$$
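The two examples above can be made concrete with a short numerical sketch (NumPy assumed; the function names are ours). The matrix version works with the eigenvalues of $Y^{-1}X$, which has the same spectrum as $XY^{-1}$:

```python
import numpy as np

def hilbert_metric_orthant(x, y):
    # d_H(x, y) = log( max_i(x_i/y_i) / min_i(x_i/y_i) ) on the positive orthant
    r = x / y
    return np.log(r.max() / r.min())

def hilbert_metric_psd(X, Y):
    # d_H(X, Y) = log(lambda_max / lambda_min) over the eigenvalues of Y^{-1} X
    # (same spectrum as X Y^{-1}) for positive definite X, Y
    lam = np.linalg.eigvals(np.linalg.solve(Y, X)).real
    return np.log(lam.max() / lam.min())

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 1.0, 1.5])
d_xy = hilbert_metric_orthant(x, y)
# projective invariance (2): rescaling either argument leaves the distance unchanged
d_scaled = hilbert_metric_orthant(3.0 * x, 7.0 * y)
```

Both functions return 0 exactly when the two arguments lie on the same ray, matching the projective metric property.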

A second relevant property concerns positive operators. In [2] (see also [4]) it has been shown that linear positive operators contract the Hilbert metric. This can be used to provide a geometric proof of Perron–Frobenius theory and, in turn, to prove attractiveness properties of linear positive systems. Such a framework has recently been extended to prove attractiveness properties of a generalization of linear positive systems, namely differentially positive systems [12]. A brief overview of this theory is the object of the next Section.

III Positive and Differentially Positive Systems

A linear operator $A$ is positive if it maps a cone $\mathcal{K}$ into itself, i.e. $A \mathcal{K} \subseteq \mathcal{K}$ [4]. For linear dynamical systems $x_{k+1} = A x_k$, positivity has the natural interpretation of invariance (and contraction, if the positivity is strict) of the cone along the trajectories of the system. Positivity significantly restricts the behavior of a linear system, as established by Perron–Frobenius theory. Under an irreducibility assumption, classical Perron–Frobenius theory guarantees the existence of a dominant (largest) real eigenvalue $\lambda_{\max}$ for $A$ whose associated eigenvector $v_{\max}$, the Perron–Frobenius vector, is the unique eigenvector that belongs to the interior of $\mathcal{K}$. As a consequence, the subspace spanned by $v_{\max}$ is an attractor for the linear system, that is, for any vector $x_0 \in \operatorname{int} \mathcal{K}$,

$$\lim_{k \to \infty} \frac{A^k x_0}{\| A^k x_0 \|} = \frac{v_{\max}}{\| v_{\max} \|}. \qquad (3)$$

A geometric interpretation of the Perron–Frobenius theorem has been provided in [2] (see also [4]), where the existence of a fixed point in the projective space for a strictly positive linear map has been proved as a consequence of contraction properties of the Hilbert metric under the action of a strictly positive linear operator. As such, the Perron–Frobenius theorem can be seen as a special case of the contraction mapping theorem. Positivity is at the core of a number of properties of Markov chains, consensus algorithms, and large-scale control.
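A small numerical sketch of this contraction (the matrix and test vectors are arbitrary illustrative choices; NumPy assumed): a strictly positive matrix pulls any two rays closer in the Hilbert metric, and iterating it drives every positive vector to the Perron–Frobenius direction.

```python
import numpy as np

def hilbert_metric(x, y):
    r = x / y
    return np.log(r.max() / r.min())

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # strictly positive: maps the orthant into its interior
x = np.array([1.0, 5.0])
y = np.array([4.0, 1.0])

d_before = hilbert_metric(x, y)
d_after = hilbert_metric(A @ x, A @ y)  # strictly smaller than d_before

# power iteration: the normalized iterates converge to the Perron-Frobenius
# direction, which for this particular A has slope (1 + sqrt(5)) / 2
v = x.copy()
for _ in range(50):
    v = A @ v
    v /= v.sum()
```

The contraction rate per application of `A` is governed by the ratio of its two eigenvalues, which is why fifty iterations are far more than enough here.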

Differential positivity [12] extends linear positivity to the nonlinear setting. A nonlinear system is differentially positive if its linearization along any given trajectory is positive. By generalizing the above–mentioned geometric interpretation of Perron–Frobenius theory to a differential framework, it has been shown [12] that differential positivity restricts the asymptotic behavior of a system. Once again, this is a consequence of contraction properties of differentially positive mappings with respect to the Hilbert metric. The conceptual picture is that of a cone attached to every point of the state space, defining a cone field. Contraction of the cone field along the flow eventually constrains the behavior to be one–dimensional. The role of the Perron–Frobenius vector in the linear case is played by the Perron–Frobenius vector field, which is an attractor for the linearized dynamics. Differentially positive systems encompass positive and monotone systems as particular cases. In particular, it has been shown in [12] that differentially positive systems reduce to the important class of monotone dynamical systems [27, 14] when the state space is a linear vector space and the cone field is constant. In Section VI we will show that the iteration underlying the belief propagation algorithm for Gaussian reciprocal processes is indeed a monotone system, whose convergence can be studied by leveraging the stability theory of differentially positive systems.

IV Reciprocal Processes

In this section, we briefly review the definition of reciprocal process and its description in terms of probabilistic graphical models. The smoothing problem for a reciprocal process with cyclic boundary conditions is also introduced.

Recall that a stochastic process $x(t)$ defined on a time interval $\mathcal{I}$ is said to be Markov if, for any $t \in \mathcal{I}$, the past and the future (with respect to $t$) are conditionally independent given $x(t)$. A process is said to be reciprocal if, for each interval $[t_0, t_1] \subseteq \mathcal{I}$, the process in the interior of $[t_0, t_1]$ and the process in $\mathcal{I} \setminus [t_0, t_1]$ are conditionally independent given $x(t_0)$ and $x(t_1)$. More formally, an $\mathbb{R}^n$–valued stochastic process $\{x(t)\}$ on the interval $\mathcal{I}$ with underlying probability space $(\Omega, \mathcal{F}, P)$ is reciprocal if

$$P(A \cap B \mid x(t_0), x(t_1)) = P(A \mid x(t_0), x(t_1)) \, P(B \mid x(t_0), x(t_1)) \qquad (4)$$

for all $A \in \sigma\{x(t),\, t \in (t_0, t_1)\}$ and $B \in \sigma\{x(t),\, t \in \mathcal{I} \setminus [t_0, t_1]\}$, where $\sigma\{x(t),\, t \in (t_0, t_1)\}$ is the $\sigma$–field generated by the random variables in the interior of the interval and $\sigma\{x(t),\, t \in \mathcal{I} \setminus [t_0, t_1]\}$ is the $\sigma$–field generated by those outside of it. From the definition it follows that Markov processes are necessarily reciprocal, while the converse is generally not true [15]. Moreover, a multidimensional Markov random field reduces in one dimension to a reciprocal process, not to a Markov process.

In this paper, we consider reciprocal processes defined on the discrete circle $\mathbb{Z}_N$ with $N$ elements (which corresponds to imposing the cyclic boundary condition $x(0) = x(N)$ [21, 24]), so that the reciprocal conditional independence relations hold for arcs of the circle as well.

In [5] it has been shown that a reciprocal process on $\mathbb{Z}_N$ admits a probabilistic graphical model composed of the $N$ nodes arranged in a single loop undirected graph, as shown in Figure 1.

Fig. 1: Probabilistic graphical model for a reciprocal process on $\mathbb{Z}_N$.
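In the Gaussian case, the single–loop structure of Figure 1 is visible in the zero pattern of the precision (inverse covariance) matrix: the only nonzero off–diagonal entries couple each node to its two neighbors on the circle, including the corner entries that close the loop. A minimal sketch (scalar nodes, illustrative values of our choosing; NumPy assumed):

```python
import numpy as np

N = 6
# precision matrix of a Gaussian process on the discrete circle Z_N:
# circulant tridiagonal with corner entries, i.e. a single-loop graph
J = 2.5 * np.eye(N)
for i in range(N):
    J[i, (i + 1) % N] = -1.0    # couples node i to node i+1 (mod N)
    J[(i + 1) % N, i] = -1.0

# the zero pattern encodes conditional independence:
# J[i, j] == 0  <=>  x_i and x_j are conditionally independent given the rest
```

Nodes $0$ and $N-1$ are coupled through the corner entries, which is exactly the edge that turns the chain into a loop; the eigenvalues of this circulant matrix are $2.5 - 2\cos(2\pi k / N) > 0$, so $J$ is a valid precision matrix.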

We now consider a second process $\{y(t)\}$ where, given the state sequence $\{x(t)\}$, the $y(t)$ are independent random variables and, for all $t$, the conditional probability distribution of $y(t)$ depends only on $x(t)$. In applications, $x$ represents a “hidden” process which is not directly observable, while the observable process $y$ represents “noisy observations” of the hidden process. We shall refer to the pair $(x, y)$ as a hidden reciprocal model. The corresponding probabilistic graphical model is illustrated in Figure 2. The (fixed–interval) smoothing problem is to compute, for all $t$, the conditional distribution of $x(t)$ given the observations $\{y(t)\}$. One of the most widespread algorithms for performing inference (solving the smoothing problem) in the graphical models literature is the belief propagation algorithm [22, 18, 3], which will be reviewed in the next Section.

Fig. 2: Hidden reciprocal model on $\mathbb{Z}_N$.

V Smoothing of Reciprocal Processes via Belief Propagation

In this Section, we first review the belief propagation algorithm [22, 18, 3] and specialize it for a hidden reciprocal model. The particular form that the iteration takes for Gaussian reciprocal processes is discussed in Section VI.

V-A Belief Propagation (a.k.a. Sum–Product) Algorithm

Let $\mathcal{G}$ be an undirected graphical model over the variables $x_i$, $i = 1, \dots, N$. From the theory of probabilistic graphical models, we have that the joint distribution associated with $\mathcal{G}$ can be factored as

$$p(x_1, \dots, x_N) = \frac{1}{Z} \prod_{C \in \mathcal{C}} \psi_C(x_C), \qquad (5)$$

where $\mathcal{C}$ denotes the set of maximal cliques in the graph. In the following, we will be interested in pairwise Markov random fields – i.e. Markov random fields in which the joint probability factorizes into a product of bivariate potentials (potentials involving only two variables) – where each unobserved node $x_i$ has an associated observed node $y_i$. Factorization (5) then becomes

$$p(x, y) = \frac{1}{Z} \prod_{(i, j)} \psi_{ij}(x_i, x_j) \prod_{i} \phi_i(x_i, y_i), \qquad (6)$$

where the $\psi_{ij}$'s are often referred to as the edge potentials and the $\phi_i$'s as the node potentials. The problem we are interested in is computing marginals of the type $p(x_i \mid y)$ for some hidden variable $x_i$.

The basic idea behind belief propagation is to exploit the factorization properties of the distribution to allow efficient computation of the marginals. To fix ideas, consider the graph in Figure 3 (a chain of four hidden nodes with their observations) and suppose we want to compute the conditional marginal $p(x_2 \mid y)$. A naive application of the definition would suggest that $p(x_2 \mid y)$ can be obtained by summing the joint distribution over all variables except $x_2$ and then normalizing:

$$p(x_2 \mid y) = \frac{1}{Z} \sum_{x_1} \sum_{x_3} \sum_{x_4} p(x, y). \qquad (7)$$

Notice, however, that the joint distribution can be factored as

$$p(x, y) = \frac{1}{Z} \, \psi_{12}(x_1, x_2) \, \psi_{23}(x_2, x_3) \, \psi_{34}(x_3, x_4) \prod_{i=1}^{4} \phi_i(x_i, y_i). \qquad (8)$$

By plugging factorization (8) into equation (7) and interchanging the order of summations and products, we obtain

$$p(x_2 \mid y) = \frac{1}{Z} \, \phi_2(x_2) \left[ \sum_{x_1} \psi_{12}(x_1, x_2) \, \phi_1(x_1) \right] \left[ \sum_{x_3} \psi_{23}(x_2, x_3) \, \phi_3(x_3) \left( \sum_{x_4} \psi_{34}(x_3, x_4) \, \phi_4(x_4) \right) \right], \qquad (9)$$

where we write $\phi_i(x_i)$ for $\phi_i(x_i, y_i)$ with the observations fixed. This forms the basis for the message–passing algorithm.
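The rearrangement behind (9) — pushing each sum inward until it collapses into a message arriving at the query node — can be checked numerically. A minimal sketch on a three–node chain with random positive potentials (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
psi12 = rng.random((2, 2)) + 0.1   # edge potential between x1 and x2
psi23 = rng.random((2, 2)) + 0.1   # edge potential between x2 and x3
phi = rng.random((3, 2)) + 0.1     # node potentials (observations absorbed)

# brute force: sum the joint over x1 and x3, then normalize
p = np.zeros(2)
for x1 in range(2):
    for x2 in range(2):
        for x3 in range(2):
            p[x2] += (phi[0, x1] * phi[1, x2] * phi[2, x3]
                      * psi12[x1, x2] * psi23[x2, x3])
p /= p.sum()

# message passing: each inner sum collapses into a message arriving at node 2
m_1to2 = psi12.T @ phi[0]          # sum over x1
m_3to2 = psi23 @ phi[2]            # sum over x3
belief = phi[1] * m_1to2 * m_3to2
belief /= belief.sum()             # identical to the brute-force marginal
```

The brute-force loop costs exponentially in the number of variables, while the message computation is linear in the chain length — the efficiency gain the text describes.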

Algorithm V.1 (Belief propagation)

Let $i$ and $j$ be two neighboring nodes in the graph. We denote by $m_{ij}(x_j)$ the message that node $i$ sends to node $j$, by $m_{ji}(x_i)$ the message that $j$ sends to $i$, and by $b_i(x_i)$ the belief at node $i$. The belief propagation algorithm is as follows:

$$m_{ij}(x_j) = \alpha \sum_{x_i} \psi_{ij}(x_i, x_j) \, \phi_i(x_i) \prod_{k \in \mathcal{N}(i) \setminus j} m_{ki}(x_i), \qquad (10a)$$

$$b_i(x_i) = \beta \, \phi_i(x_i) \prod_{k \in \mathcal{N}(i)} m_{ki}(x_i), \qquad (10b)$$

where $\mathcal{N}(i)$ denotes the set of neighbors of node $i$ and $\alpha$ and $\beta$ are normalization constants.

For example, if one considers (9), by applying definition (10a) to the bracketed sums, (9) becomes

$$p(x_2 \mid y) = \beta \, \phi_2(x_2) \, m_{12}(x_2) \, m_{32}(x_2),$$

which is of the form (10b), where the marginal is computed as the product of the messages incoming at node 2.

Fig. 3: An example of a graphical model with four unobserved nodes $x_1, \dots, x_4$ and four observed nodes $y_1, \dots, y_4$.

Observed nodes do not receive messages, and they always transmit the same vector. The normalization of messages in equation (10a) is not theoretically necessary (whether the messages are normalized or not, the beliefs will be identical), but it helps avoid numerical underflow and improves the numerical stability of the algorithm. Finally, notice that equation (10a) does not specify the order in which the messages are updated. In this paper we assume that all nodes simultaneously update their messages in parallel. This naturally leads to loopy belief propagation, where the update rule (10a) is applied to graphs that are not trees (like the single loop network associated with a reciprocal process).

V-B Belief Propagation for General (Not Necessarily Gaussian) Hidden Reciprocal Models

If the considered graph is the single–loop hidden reciprocal model in Figure 2, expressions (10a) and (10b) for the message and belief updates simplify, each node having only two neighbors. Moreover, we can distinguish between two classes of messages: one propagating in the direction of increasing indices (clockwise) and one propagating in the direction of decreasing indices (anticlockwise) in the loop. The overall algorithm with parallel scheduling policy is as follows:

Algorithm V.2

[(Parallel) belief propagation algorithm for a hidden reciprocal model]

  1. Initialize all messages to some initial value (e.g. uniform).

  2. Iteratively apply the updates (indices are taken modulo $N$)

$$m_{i, i+1}^{(k+1)}(x_{i+1}) = \alpha \sum_{x_i} \psi_{i, i+1}(x_i, x_{i+1}) \, \phi_i(x_i) \, m_{i-1, i}^{(k)}(x_i), \qquad (11a)$$

$$m_{i, i-1}^{(k+1)}(x_{i-1}) = \alpha \sum_{x_i} \psi_{i-1, i}(x_{i-1}, x_i) \, \phi_i(x_i) \, m_{i+1, i}^{(k)}(x_i). \qquad (11b)$$

  3. For each $i$, compute the marginals

$$b_i(x_i) = \beta \, \phi_i(x_i) \, m_{i-1, i}(x_i) \, m_{i+1, i}(x_i). \qquad (12)$$

For tree-structured graphs, when the number of iterations $k$ is larger than the diameter of the tree (the length of the longest shortest path between any two vertices of the graph), the algorithm converges to the correct marginals. Convergence analysis of belief propagation for a single–loop network like the one associated with a reciprocal process has been carried out in [29, 30], where the finite state space case and the case of Gaussian distributed random variables have been analyzed separately. For Gaussian distributed random variables it has been shown that the belief propagation algorithm converges to the correct means, and formulas that link the correct covariances and the estimated ones have been provided. Intrigued by the similarities observed in [30] between convergence of finite–state and Gaussian belief propagation on a single loop network (“Although there are many special properties of gaussians, we are struck by the similarity of the analytical results reported here for gaussians and the analytical results for single loop and general distributions reported in [29]”), where in the former case convergence has been shown to be linked to contraction properties of the Hilbert metric [5], in Section VI we revisit convergence analysis for Gaussian belief propagation in the single–loop network and establish a link with stability theory of differentially positive systems, which is also rooted in contraction properties of the Hilbert metric.
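Algorithm V.2 can be sketched for a finite–state loop as follows (an illustrative implementation with random positive potentials of our choosing; NumPy assumed). With strictly positive potentials the normalized messages converge to a fixed point, consistently with the single–loop analysis of [29]:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                # nodes on the discrete circle
psi = rng.random((n, 2, 2)) + 0.1    # psi[i]: edge potential between node i and i+1 (mod n)
phi = rng.random((n, 2)) + 0.1       # node potentials (observations absorbed)

fwd = np.ones((n, 2))                # fwd[i]: message node i sends clockwise to i+1
bwd = np.ones((n, 2))                # bwd[i]: message node i sends anticlockwise to i-1
for _ in range(200):                 # parallel schedule: all messages updated at once
    new_fwd = np.empty_like(fwd)
    new_bwd = np.empty_like(bwd)
    for i in range(n):
        new_fwd[i] = psi[i].T @ (phi[i] * fwd[(i - 1) % n])          # update (11a)
        new_bwd[i] = psi[(i - 1) % n] @ (phi[i] * bwd[(i + 1) % n])  # update (11b)
        new_fwd[i] /= new_fwd[i].sum()
        new_bwd[i] /= new_bwd[i].sum()
    delta = max(np.abs(new_fwd - fwd).max(), np.abs(new_bwd - bwd).max())
    fwd, bwd = new_fwd, new_bwd

# beliefs (12): node potential times the two incoming messages
beliefs = np.empty((n, 2))
for i in range(n):
    b = phi[i] * fwd[(i - 1) % n] * bwd[(i + 1) % n]
    beliefs[i] = b / b.sum()
```

After a full sweep around the loop each message is acted on by a fixed positive linear map, so the normalized iteration is a power method and converges geometrically, which is exactly the contraction picture developed in the next section.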

VI Gaussian Belief Propagation for a Hidden Reciprocal Model

For Gaussian distributed variables, messages and beliefs are Gaussians and the belief propagation updates can be written explicitly in terms of means and covariances. In other words, iterations (11a), (11b) on the infinite dimensional space of nonnegative measurable functions become iterations on the finite dimensional spaces (cones) of nonnegative vectors and positive definite matrices. By showing that the latter defines a nonlinear monotone system, we establish a connection between convergence analysis of belief propagation for Gaussian reciprocal processes and stability theory of differentially positive systems.

To start, notice that, for Gaussian distributed variables, the factorization (6) becomes

$$p(x, y) \propto \prod_{i} \exp\left( -\frac{1}{2} \begin{bmatrix} x_i \\ x_{i+1} \end{bmatrix}^{\top} V_i \begin{bmatrix} x_i \\ x_{i+1} \end{bmatrix} \right) \prod_{i} \exp\left( -\frac{1}{2} x_i^{\top} W_i x_i + w_i^{\top} x_i \right), \qquad (13)$$

where we assume that the $V_i$'s are all positive semidefinite and can be block partitioned as

$$V_i = \begin{bmatrix} Q_i & B_i \\ B_i^{\top} & R_i \end{bmatrix},$$

with $W_i \succeq 0$ the precision matrix and $w_i$ the potential vector of the node potential $\phi_i$.

Denote by $S_{i \to j}$ (resp. $s_{i \to j}$) the precision matrix (resp. potential vector) of the message from $x_i$ to $x_j$, and by $\widehat{S}_i$ (resp. $\widehat{s}_i$) the precision matrix (resp. potential vector) of the belief (estimated marginal posterior) at node $i$. By taking into account the expressions of the node and edge potentials in (13), for Gaussian distributed random variables, messages (11a), traveling clockwise in the loop, become

$$S_{i \to i+1} = R_i - B_i^{\top} \left( Q_i + W_i + S_{i-1 \to i} \right)^{-1} B_i, \qquad (14a)$$

$$s_{i \to i+1} = -B_i^{\top} \left( Q_i + W_i + S_{i-1 \to i} \right)^{-1} \left( w_i + s_{i-1 \to i} \right), \qquad (14b)$$

while messages (11b), traveling anticlockwise in the loop, are given by

$$S_{i \to i-1} = Q_{i-1} - B_{i-1} \left( R_{i-1} + W_i + S_{i+1 \to i} \right)^{-1} B_{i-1}^{\top}, \qquad (15a)$$

$$s_{i \to i-1} = -B_{i-1} \left( R_{i-1} + W_i + S_{i+1 \to i} \right)^{-1} \left( w_i + s_{i+1 \to i} \right). \qquad (15b)$$

The estimated beliefs (estimated posterior precision and potential vector) at node $i$ are

$$\widehat{S}_i = W_i + S_{i-1 \to i} + S_{i+1 \to i}, \qquad (16a)$$

$$\widehat{s}_i = w_i + s_{i-1 \to i} + s_{i+1 \to i}, \qquad (16b)$$

from which the estimated mean vector and covariance matrix associated with the posterior marginals are

$$\widehat{\mu}_i = \widehat{S}_i^{-1} \widehat{s}_i, \qquad \widehat{\Sigma}_i = \widehat{S}_i^{-1}. \qquad (17)$$

Equations (14b), (15b) provide a linear time–varying recursive relation for the computation of the message potential vectors, since they express $s_{i \to i+1}$ (resp. $s_{i \to i-1}$) as a linear function of the message potential on the “previous” (resp. “successive”) link. On the other hand, both the maps (14a), (15a) are of the form

$$S \mapsto F(S) = \Gamma - B^{\top} (S + Q)^{-1} B, \qquad (18)$$

i.e. they provide a nonlinear time–varying recursive relation for the computation of the message precision matrix $S_{i \to i+1}$ (resp. $S_{i \to i-1}$) as a function of the message precision matrix on the “previous” (resp. “successive”) link in the graph.

Theorem VI.1

Suppose that $S$ belongs to the cone of positive definite matrices (in the space of symmetric matrices) and that $S + Q$ is invertible. Then the map (18) is monotone (it describes a monotone dynamical system).

Proof:

The map (18) is the composition of the following transformations defined on the cone of positive definite matrices: (i) $S \mapsto S + Q$, (ii) $S \mapsto S^{-1}$, (iii) $S \mapsto B^{\top} S B$, (iv) $S \mapsto -S$, and (v) $S \mapsto S + \Gamma$. In fact we have

$$F(S) = \Gamma - B^{\top} (S + Q)^{-1} B.$$

The translations $S \mapsto S + Q$ (equiv. $S \mapsto S + \Gamma$) and the congruence transformation $S \mapsto B^{\top} S B$ are order preserving (monotone increasing). The inverse map $S \mapsto S^{-1}$ and the map $S \mapsto -S$ are order reversing (monotone decreasing). Since the composition contains an even number of order reversing factors, the composite map is order preserving [25].
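The order–preserving property is easy to check numerically. The sketch below instantiates a map of the form (18) with placeholder matrices of our choosing and verifies that $S_1 \preceq S_2$ implies $F(S_1) \preceq F(S_2)$ in the positive semidefinite order:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
B = rng.random((n, n))        # placeholder off-diagonal block of an edge precision
Q = np.eye(n)                 # placeholder diagonal block (edge plus node precision)
Gamma = 5.0 * np.eye(n)       # placeholder additive term

def F(S):
    # a map of the form (18): S |-> Gamma - B^T (S + Q)^{-1} B
    return Gamma - B.T @ np.linalg.solve(S + Q, B)

def is_psd(M, tol=1e-10):
    # test positive semidefiniteness of (the symmetric part of) M
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

S1 = np.eye(n)
P = rng.random((n, n))
S2 = S1 + P @ P.T             # S2 - S1 is PSD, hence S1 <= S2 in the matrix order
# monotonicity: F preserves the order, i.e. F(S2) - F(S1) is PSD
```

The check mirrors the proof step by step: adding `Q` preserves the order, inversion reverses it, the congruence by `B` preserves it, and the final negation plus shift by `Gamma` reverses it back.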

We now observe the following. Without loss of generality, consider the message that node $1$ sends to node $2$. By the update equation (11a), the message that node $1$ sends to node $2$ at time $k+1$ depends on the message that node $1$ received from node $N$ at time $k$, so that, in terms of precision matrices of the messages, we can write

$$S_{1 \to 2}^{(k+1)} = f_1\big( S_{N \to 1}^{(k)} \big), \qquad (19)$$

where $f_1$ is the nonlinear transformation (14a). Similarly, the message that node $N$ sends to node $1$ at time $k$ depends on the message that node $N$ received from node $N-1$ at time $k-1$:

$$S_{N \to 1}^{(k)} = f_N\big( S_{N-1 \to N}^{(k-1)} \big). \qquad (20)$$

One can continue expressing each message in terms of the one received from the neighbor until we go back around the loop to node $1$: the message that node $2$ sends to node $3$ at time $k-N+2$ is a function of the message that node $1$ sent to node $2$ at time $k-N+1$:

$$S_{2 \to 3}^{(k-N+2)} = f_2\big( S_{1 \to 2}^{(k-N+1)} \big). \qquad (21)$$

By putting together (19)–(21), one gets that the message that node $1$ sends to node $2$ at a given time step depends on the message that node $1$ sent to node $2$ exactly $N$ time steps earlier. In particular, if we denote by $F$ the map

$$F := f_1 \circ f_N \circ f_{N-1} \circ \cdots \circ f_2, \qquad (22)$$

the precision matrices of the messages that node $1$ sends to node $2$ satisfy the recursion

$$S_{1 \to 2}^{(\ell + 1)} = F\big( S_{1 \to 2}^{(\ell)} \big), \qquad (23)$$

where the maps $f_i$ are as in (14a). The map $F$ links the precision matrix of the message on the link $1 \to 2$ to the precision matrix of the message on the same link one loop ago, and it is time–invariant (it does not vary from the first to the second to the third loop, and so on), where the time to complete a loop has been taken as the time unit in iteration (23). Moreover, such a map is nonlinear and monotone, because it is a composition of monotone maps (by Theorem VI.1). By the discussion in Section III, it follows that convergence analysis of Gaussian belief propagation for a hidden reciprocal model can be carried out by leveraging the stability theory of differentially positive systems. A detailed analysis is the subject of ongoing work.
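As a small illustration of the kind of asymptotic behavior this analysis targets, iterating a map of the form (18) (with placeholder matrices chosen by us so that the positive definite cone is invariant) converges rapidly to a fixed point:

```python
import numpy as np

n = 2
B = 0.5 * np.eye(n)           # placeholder blocks, chosen so F maps PD into PD
Q = np.eye(n)
Gamma = 2.0 * np.eye(n)

def F(S):
    # a map of the form (18): S |-> Gamma - B^T (S + Q)^{-1} B
    return Gamma - B.T @ np.linalg.solve(S + Q, B)

S = np.eye(n)                 # initial message precision
for _ in range(100):
    S = F(S)
residual = np.linalg.norm(F(S) - S)   # ~0: S is (numerically) a fixed point
```

For this scalar-like choice the iteration reduces to $s \mapsto 2 - 0.25/(s+1)$ on each diagonal entry, a strong contraction, so the residual reaches machine precision well within the hundred iterations.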

VII Conclusions

In this paper we have introduced belief propagation for performing inference on Gaussian reciprocal processes. Intrigued by the similarities observed in [30] between convergence results for finite state space and Gaussian belief propagation on a single loop network, where in the finite state space case convergence has been shown to be linked to contraction properties of the Hilbert metric [5], we have revisited convergence analysis for Gaussian belief propagation in the single–loop network, establishing a link with stability theory of differentially positive systems, which is also rooted in contraction properties of the Hilbert metric.

References

  • [1] S Bernstein. Sur les liaisons entre les grandeurs aléatoires. Verh. Internat. Math.-Kongr., Zurich, pages 288–309, 1932.
  • [2] G. Birkhoff. Extensions of Jentzch’s Theorem. Trans. Amer. Math. Soc., 85:219–227, 1957.
  • [3] C. Bishop. Pattern recognition and machine learning. Springer, 2006.
  • [4] P.J. Bushell. Hilbert’s metric and positive contraction mappings in a Banach space. Archive for Rational Mechanics and Analysis, 52(4):330–338, 1973.
  • [5] F. P. Carli. Modeling and estimation of discrete-time reciprocal processes via probabilistic graphical models. arXiv:1603.04419, 2016.
  • [6] F. P. Carli, A. Ferrante, M. Pavon, and G. Picci. A maximum entropy solution of the covariance extension problem for reciprocal processes. IEEE Trans. on Automatic Control, 56(9):1999–2012, 2011.
  • [7] F. P. Carli, A. Ferrante, M. Pavon, and G. Picci. An efficient algorithm for maximum entropy extension of block-circulant covariance matrices. Linear Algebra and its Applications, 439(8):2309–2329, 2013.
  • [8] J-P. Carmichael, J-C. Massé, and R. Theodorescu. Processus gaussiens stationnaires réciproques sur un intervalle. CR Acad. Sci. Paris Sér. I Math, 295(3):291–293, 1982.
  • [9] F. Carravetta. Nearest-neighbor modelling of reciprocal chains. Stochastics: An International Journal of Probability and Stochastic Processes, 80(6):525–584, 2008.
  • [10] F. Carravetta and L. B. White. Modelling and estimation for finite state reciprocal processes. IEEE Transactions on Automatic Control, 57(9):2190–2202, 2012.
  • [11] D. A. Castañon, B.C. Levy, and A.S. Willsky. Algorithms for the incorporation of predictive information in surveillance theory. International journal of systems science, 16(3):367–382, 1985.
  • [12] F. Forni and R. Sepulchre. Differentially positive systems. IEEE Transactions on Automatic Control, 61(2):346–359, 2016.
  • [13] D. Hilbert. Über die gerade linie als kürzeste verbindung zweier punkte. Mathematische Annalen, 46(1):91–96, 1895.
  • [14] M.W. Hirsch and H. Smith. Monotone dynamical systems. Handbook of Differential Equations: Ordinary Differential Equations, 2:239–357, 2005.
  • [15] B. Jamison. Reciprocal Processes: The stationary Gaussian case. The Annals of Mathematical Statistics, 41:1624–1630, 1970.
  • [16] B. Jamison. Reciprocal processes. Probability Theory and Related Fields, 30(1):65–86, 1974.
  • [17] B. Jamison. The Markov processes of Schroedinger. Probability Theory and Related Fields, 32(4):323–331, 1975.
  • [18] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
  • [19] A. J. Krener, R. Frezza, and B. C. Levy. Gaussian reciprocal processes and self-adjoint stochastic differential equations of second order. Stochastics and stochastic reports, 34(1-2):29–56, 1991.
  • [20] A.J. Krener. Reciprocal diffusions and stochastic differential equations of second order. Stochastics, 24(4):393–422, 1988.
  • [21] B.C. Levy, R. Frezza, and A.J. Krener. Modeling and estimation of discrete-time gaussian reciprocal processes. IEEE Transactions on Automatic Control, 35(9):1013–1023, 1990.
  • [22] J. Pearl. Probabilistic reasoning in intelligent systems: Networks of plausible inference. Morgan Kaufmann, 1988.
  • [23] G. Picci and F. Carli. Modelling and simulation of images by reciprocal processes. In Proc. of the Tenth International Conference on Computer Modeling and Simulation, UKSIM 2008, pages 513–518, 2008.
  • [24] J. A. Sand. Reciprocal realizations on the circle. SIAM J. Control and Optimization, 34:507–520, 1996.
  • [25] E. Schechter. Classical and nonclassical logics: an introduction to the mathematics of propositions. Princeton University Press, 2005.
  • [26] E. Schrödinger. Sur la théorie relativiste de l’électron et l’interprétation de la mécanique quantique. Annales de l’institut Henri Poincaré, 2(4):269–310, 1932.
  • [27] H. Smith. Monotone dynamical systems: an introduction to the theory of competitive and cooperative systems, volume 41 of Mathematical Surveys and Monographs. American Mathematical Society, 2008.
  • [28] L. Srinivasan, U.T. Eden, A.S. Willsky, and E.N. Brown. A state-space analysis for reconstruction of goal-directed movements using neural signals. Neural computation, 18(10):2465–2494, 2006.
  • [29] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural computation, 12(1):1–41, 2000.
  • [30] Y. Weiss and W.T. Freeman. Correctness of belief propagation in gaussian graphical models of arbitrary topology. Neural computation, 13(10):2173–2200, 2001.
  • [31] L.B. White and F. Carravetta. Optimal smoothing for finite state hidden reciprocal processes. IEEE Transactions on Automatic Control, 56(9):2156–2161, 2011.