# A Bayesian Approach to Sparse plus Low rank Network Identification

We consider the problem of modeling multivariate time series with parsimonious dynamical models which can be represented as sparse dynamic Bayesian networks with few latent nodes. This structure translates into a sparse plus low rank model. In this paper, we propose a Gaussian regression approach to identify such a model.

## I Introduction

This paper deals with the network identification problem for a multivariate stochastic process of high dimension. Important applications can be found in many fields, for instance in econometrics [1, 5, 8, 21], social network analysis [13], systems biology [20], and so on.

Let y be an m-dimensional stochastic process whose components (or variables) are manifest, i.e. can be directly measured. We define as network of this manifest process a directed graph wherein nodes denote the variables and edges encode conditional Granger causality relations among these variables, [12]. More specifically, there is an edge from node i (i.e. variable yi) to node j (i.e. variable yj) if the past of yi is needed to predict yj conditioned on the past of all the other variables yk, k ≠ i. Sparse graphs (i.e. with few edges) represent a concise and interpretable way to describe the relations among the variables of the manifest process, [7]. However, this modeling assumption does not exploit nor encode the fact that the manifest process may often be thought of as driven by a low dimensional latent process, i.e. a stochastic process whose components cannot be directly measured.

In this paper we formulate a new network identification problem which takes into account the presence of this low dimensional latent process and the fact that the variables of the manifest process Granger cause each other mostly through the latent variables. The corresponding network has a two layer structure: one layer contains the manifest variables and the other one the latent variables. The presence of few latent variables should drastically reduce the edges in the manifest layer, therefore increasing the degree of conciseness and robustness of the model. Finally, it turns out that a model described by this network has a sparse plus low rank (S+L) structure. More precisely, the low rank part depends on the number of latent variables, whereas the sparse one depends on the number of conditional Granger causality relations among the manifest variables.

When it comes to developing an identification algorithm, which maps measured data into an estimated dynamical model, it is fair to say that the prediction error method (PEM) is a consolidated paradigm in system identification, [15, 19]. In the traditional setting, candidate models are described in fixed parametric model structures, e.g. ARMAX, whose complexity is determined using cross validation or information based criteria. Regularization has been recently introduced in the PEM framework, see [16, 17, 6, 18], as an alternative approach to control the complexity of the estimated models. This latter class of methods starts with a large enough (in principle infinite dimensional) model class; the inverse (ill-posed) problem of determining a specific model from a finite set of measured data can be made into a well posed problem using a penalty term, whose duty is to select models with specific features. In the Bayesian view, this is equivalent to the introduction of an a priori probability (i.e. prior) on the model to estimate. For instance, the prior should account for the fact that the model is Bounded Input Bounded Output (BIBO) stable, [17].

The identification algorithm we propose belongs to this latter class of methods, where the predictor impulse response is estimated in the framework of Gaussian regression. The predictor impulse response is modeled as a zero mean Gaussian random vector. Its covariance matrix, referred to as kernel matrix, encodes the a priori information. In this case, the a priori information is that the predictor impulse responses are BIBO stable and that the model has an S+L structure. As we will see, such a priori information can be encoded in the kernel matrix using the maximum entropy principle. Moreover, this kernel matrix is characterized by the decay rate of the predictor impulse responses, by the number of conditional Granger causality relations among the manifest variables, and by the number of latent variables. These features are tuned by the so called hyperparameters vector, which is estimated by minimizing the negative log-likelihood of the measured data. Besides the fact that this problem is nonconvex, the joint estimation of the hyperparameters tuning the sparse and the low rank part is not trivial, because these two parts may be nonidentifiable from the measured data. We propose an algorithm to estimate these hyperparameters which imposes a “hyper-regularizer” on the low rank hyperparameter to partially handle this non-uniqueness. Once the kernel matrix is fixed, a unique estimate of the S+L model is guaranteed through regularization.

We warn the reader that the present paper only reports some preliminary results regarding the Bayesian estimation of S+L models. In particular, all the proofs and most of the technical assumptions needed therein are omitted and will be published elsewhere.

The outline of the paper follows. In Section II, we introduce the S+L model for multivariate stochastic processes. In Section III, we apply the Gaussian regression approach to S+L system identification. In Section IV, we derive the kernel matrix by the maximum entropy principle. In Section V, we present the algorithm to estimate the hyperparameters of the kernel matrix. In Section VI, we provide some numerical examples to show the effectiveness of our method. Finally, conclusions are drawn in Section VII.

### Notation

Throughout the paper, we will use the following notation. S+m denotes the cone of positive definite symmetric matrices of dimension m, and ¯S+m its closure. Given a vector v and a matrix A, vi denotes the i-th entry of v and Aij denotes the entry of A in position (i,j). ∥A∥W denotes the weighted Frobenius norm of A with weight matrix W. Given a transfer matrix G(z) of dimension m×m, with some abuse of terminology, we say that G(z) has rank equal to r, with r ≤ m, if it admits the decomposition G(z) = FH(z) where F is an m×r matrix and H(z) is an r×m transfer matrix. Given a stochastic process y, with some abuse of notation, y(t) will both denote a random vector and its sample value. Finally,

 y−(t) := [y(t−1)⊤ y(t−2)⊤ …]⊤ (1)

denotes the past data vector of y at time t. In a similar way, x−(t) denotes the past data vector of x.

## II S+L Models

Consider two zero mean stationary Gaussian stochastic processes y and x of dimension m and r, respectively. Let y be manifest, i.e. it can be measured, and x latent, i.e. it cannot be measured. We assume that y and x are described by the model

 y(t) = F x(t) + S(z) y(t) + v(t)
 x(t) = H(z) y(t) + w(t) (2)

where F is an m×r matrix, S(z) is a BIBO stable transfer matrix of dimension m×m, H(z) is a BIBO stable transfer matrix of dimension r×m, and v and w are, respectively, m and r dimensional white Gaussian noise (WGN) processes with zero mean and positive definite covariance matrices. Moreover, v and w are independent.

It is possible to describe the structure of model (2) using a directed graph (i.e. network) as in Figure 1, [14]. Each node of this network corresponds to a component of y or x; edges encode conditional Granger causality relations. More precisely, there is a direct link from node yi (xi) to node yj (xj) if and only if the past of yi (xi) is needed to predict yj (xj), conditionally on all the other past information available at time t. In this case, we shall say that yi (xi) conditionally Granger causes yj (xj), [12]. Figure 1 provides an example in which the latent variable conditionally Granger causes all the components of y and conversely, while only few manifest variables conditionally Granger cause each other.

Our main modeling assumption in (2) is that the manifest variables Granger cause each other mostly through few latent variables. Therefore, we have r ≪ m, i.e. the number of latent variables in x is small as compared to the number of manifest variables in y; similarly, S(z) is sparse, i.e. many of its entries are null transfer functions, so that few manifest variables conditionally Granger cause each other. From (2), we obtain the sparse plus low rank (S+L) model for y:

 y(t) = S(z) y(t) + L(z) y(t) + e(t) (3)

where S(z) is a sparse transfer matrix by assumption, L(z) := FH(z) is a low rank transfer matrix because F and H(z)⊤ are tall matrices, and e is WGN with covariance matrix Σ.

Let ^y(t|t−1) be the minimum variance one-step ahead predictor of y(t) based on the observations y−(t), and ^x(t|t−1) be the minimum variance estimator of x(t) based on y−(t). From (2), we have

 ^y(t|t−1) = F ^x(t|t−1) + S(z) y(t)
 ^x(t|t−1) = H(z) y(t) (4)

that is, the predictable part of y(t) is a function of few estimated latent variables and of the pasts of few manifest variables of y. Eq. (4) can be compactly written as

 ^y(t|t−1)=S(z)y(t)+L(z)y(t). (5)

It is worth noting that in the case L(z) = 0, i.e. there is no need of latent variables to characterize the predictor of y, we obtain the sparse model presented in [7]. In the case S(z) = 0, i.e. the predictor of y is completely characterized by the estimators of the latent variables, we obtain a quasi-static factor model where the noise process is white, see for instance [10]. We would like to stress that the decomposition of a transfer matrix into sparse plus low rank may not be unique. As noticed in [5], this degeneracy may occur when L(z) is sparse or S(z) has few null entries. Although this nonidentifiability issue is important, the aim of this paper is to find one S+L decomposition (see Section III) which is not necessarily unique.

## III Gaussian Regression Approach to System Identification

Consider model (2) and assume that the measured data y(1), …, y(N) are extracted from a realization of y. The latent process x cannot be measured, nor is its dimension known. In this Section, we address the problem of estimating S(z) and L(z) from the given data. We draw inspiration from the Gaussian regression approach proposed in [16]. According to this method, y is generated by the model

 y(t)=G(z)y(t)+e(t) (6)

where G(z) is a BIBO stable transfer matrix and e is WGN with covariance matrix Σ. Note that, if we set G(z) = S(z) + L(z), then (6) is equivalent to (3). On the other hand, with (6) we lose the S+L structure we are interested in.

Since G(z) is BIBO stable, and thus its impulse response coefficients decay to zero as a function of the lag index, it is possible to use the approximation

 G(z) = ∑_{k=1}^{T} Gk z^{−k} (7)

where T is sufficiently large. The parameters Gk, k = 1…T, of the truncated transfer matrix are stacked in the vector θ, which is defined as follows

 θ = [(g[11])⊤ … (g[1m])⊤ … (g[m1])⊤ … (g[mm])⊤]⊤ (8)

where

 g[ij] = [[G1]ij [G2]ij … [GT]ij]⊤ (9)

denotes the impulse response coefficients of the transfer function in position (i,j) of the truncated approximation of G(z). Then, we stack the measured data in the vector y as follows

 y = [y1(T+1) … y1(N) … ym(T+1) … ym(N)]⊤. (10)

In a similar way, we define

 e = [e1(T+1) … e1(N) … em(T+1) … em(N)]⊤. (11)

From (6), the vector of the measured data can be expressed in the linear regression form

 y=Φθ+e (12)

where Φ is the regression matrix built from the past data. Note that Φθ is the one-step ahead predictor of y.
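To make the stacking conventions concrete, the following numpy sketch builds a regression matrix consistent with (8)-(10), assuming the data are collected in an N×m array `Y` whose rows are the samples y(t); the function name `regression_matrix` is ours, not from the paper.

```python
import numpy as np

def regression_matrix(Y, T):
    """Build a regression matrix Phi as in (12).

    Y is an (N, m) array whose rows are the samples y(t); T is the
    truncation length of (7). The parameter vector theta is assumed to
    stack the impulse responses g[ij] as in (8)-(9), and the output
    vector y stacks the samples channel by channel as in (10).
    """
    N, m = Y.shape
    # Psi collects the past data: Psi[row, j*T + k] = y_j(t - k - 1)
    # for t = T+1, ..., N (0-based: t = T, ..., N-1)
    Psi = np.zeros((N - T, m * T))
    for row, t in enumerate(range(T, N)):
        for j in range(m):
            Psi[row, j * T:(j + 1) * T] = Y[t - 1::-1][:T, j]
    # every output channel regresses on the same past data, so Phi = I_m ⊗ Psi
    return np.kron(np.eye(m), Psi)
```

With noiseless data generated by a truncated model (7), `Phi @ theta` reproduces the stacked output vector exactly.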

According to the Gaussian regression framework, θ is modeled as a zero mean Gaussian random vector with covariance matrix, or kernel matrix, denoted by K.

Let ^θ be the posterior mean of θ given y. In [16] it has been proved that, under some technical assumptions, ^θ can also be written as the solution to the Tikhonov regularization problem

 ^θ = argmin_θ ∥y − Φθ∥²_{Σ⁻¹⊗IN} + ∥θ∥²_{K⁻¹}. (13)

Moreover, it is not difficult to see that

 ^θ = KΦ⊤(ΦKΦ⊤ + Σ⊗IN)⁻¹ y. (14)
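As a numerical sanity check on the equivalence of (13) and (14), both expressions can be evaluated directly. In this minimal sketch (names ours) the matrix `W` stands for the stacked noise covariance Σ⊗IN:

```python
import numpy as np

def bayes_estimate(Phi, y, K, W):
    """Posterior mean (14): theta_hat = K Phi^T (Phi K Phi^T + W)^{-1} y,
    with W standing for Sigma ⊗ I_N."""
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + W, y)

def tikhonov_estimate(Phi, y, K, W):
    """Minimizer of the Tikhonov problem (13):
    ||y - Phi theta||^2_{W^{-1}} + ||theta||^2_{K^{-1}},
    i.e. theta_hat = (Phi^T W^{-1} Phi + K^{-1})^{-1} Phi^T W^{-1} y."""
    Winv = np.linalg.inv(W)
    return np.linalg.solve(Phi.T @ Winv @ Phi + np.linalg.inv(K),
                           Phi.T @ Winv @ y)
```

By the matrix inversion lemma the two coincide whenever K and W are positive definite; (14) is preferable when the number of parameters exceeds the number of data.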
###### Remark III.1

Using the theory of reproducing kernel Hilbert spaces, [3], it is possible to show that the above results still hold for T = ∞, [16].

###### Remark III.2

Although we assumed Σ ≻ 0, the Bayes estimator (14) also holds for Σ singular. In that case, Problem (13) is well defined provided that y − Φθ belongs to the range of Σ ⊗ IN.

The optimal solution highly depends on the choice of K. A typical assumption is that the transfer functions in G(z) are independent, that is, the g[ij] are independent random vectors. We shall also assume that the g[ij], i,j = 1…m, are identically distributed, so that K = Im² ⊗ ~K, where ~K is the covariance matrix of g[ij], i,j = 1…m. In this paper, ~K is chosen as a filtered version of the tuned/correlated (TC) kernel, see [16] and [6] for more details. It is important to note that this kernel is able to capture the high frequency oscillations which are typical of the predictor impulse responses of low pass processes, and enforces BIBO stability on the posterior mean of the predictor impulse responses.
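For illustration, here is a minimal sketch of the standard (unfiltered) TC kernel; the paper uses a filtered variant, see [16, 6], so this is only indicative. The hyperparameters `c` and `lam` (scale and decay rate) are the usual TC hyperparameters.

```python
import numpy as np

def tc_kernel(T, c=1.0, lam=0.8):
    """Standard TC (tuned/correlated) kernel of size T x T:
    K[i, j] = c * lam**max(i, j), with 1-based lag indices and 0 < lam < 1.
    The geometrically decaying diagonal enforces exponential decay (and
    hence BIBO stability) of the estimated impulse response coefficients."""
    idx = np.arange(1, T + 1)
    return c * lam ** np.maximum.outer(idx, idx)
```

The matrix is symmetric and positive semidefinite, and the prior variance of the k-th impulse response coefficient decays as lam**k.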

### III-A Gaussian Regression Approach to S+L Identification

The idea is to model S(z) and L(z) through Gaussian processes, similarly to what has been done above for G(z). In particular, since S(z) and L(z) are BIBO stable, we can consider their truncated approximations with T sufficiently large. Then, it is not difficult to see that

 y=Φ(θl+θs)+e (15)

where y and e have been defined in (10) and (11), and θs and θl contain the parameters of the truncated approximations of S(z) and L(z), respectively. Then, we model θs and θl as zero mean random vectors with covariance matrices KS and KL, respectively. Moreover, we shall assume that θs and θl are Gaussian and independent. As we will see in Section IV, these assumptions are suggested by the maximum entropy principle.

###### Proposition III.1

Let ^θs and ^θl be, respectively, the posterior means of θs and θl given y. Then, under some technical assumptions, ^θs and ^θl are the solution to the Tikhonov regularization problem

 argmin_{θs,θl} ∥y − Φ(θs + θl)∥²_{Σ⁻¹⊗IN} + ∥θs∥²_{KS⁻¹} + ∥θl∥²_{KL⁻¹}. (16)

Moreover, we have

 ^θs = KS Φ⊤ c,  ^θl = KL Φ⊤ c (17)

where

 c = (Φ(KS + KL)Φ⊤ + Σ⊗IN)⁻¹ y. (18)

In what follows, ^θs and ^θl will be referred to as the posterior means of S(z) and L(z), respectively. In the next Section, we shall show how KS and KL can be chosen so as to enforce BIBO stability on the posterior means of both S(z) and L(z), sparsity on the posterior mean of S(z), as well as low rank on the posterior mean of L(z).
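Proposition III.1 translates directly into code. This sketch (our naming) again uses `W` for Σ⊗IN; note that ^θs + ^θl coincides with the single-kernel estimate (14) computed with K = KS + KL.

```python
import numpy as np

def sl_posterior_means(Phi, y, KS, KL, W):
    """Posterior means of theta_s and theta_l, Eqs. (17)-(18):
    c       = (Phi (KS + KL) Phi^T + W)^{-1} y
    theta_s = KS Phi^T c
    theta_l = KL Phi^T c
    with W standing for Sigma ⊗ I_N."""
    c = np.linalg.solve(Phi @ (KS + KL) @ Phi.T + W, y)
    return KS @ Phi.T @ c, KL @ Phi.T @ c
```

The sum of the two estimates is thus the unstructured estimate; the kernels KS and KL decide how the predictor is split between the sparse and the low rank part.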

## IV Maximum Entropy Kernel Matrix

In this Section we characterize the prior probability density of θs and θl by using the maximum entropy principle. Such a principle states that, among all the prior probability densities satisfying certain desired constraints, the optimal one maximizes the differential entropy.

Our starting assumptions are that θs and θl are absolutely continuous zero mean random vectors. Let p denote the joint probability density of θs and θl, and let E[·] denote expectation with respect to p. Moreover, P denotes the space of probability densities which are Lebesgue integrable. The differential entropy of p is, [9],

 H(p)=−E[log(p(θs,θl))]. (19)

Next, we characterize the constraint on p enforcing BIBO stability and sparsity on the posterior mean of S(z). The transfer function in position (i,j) of S(z) is the null transfer function if and only if s[ij] (defined as in (9)) is the null vector. We consider the constraint

 E[∥s[ij]∥²_{~K⁻¹}] ≤ pij, (20)

where pij ≥ 0. If pij = 0, then s[ij] is zero in mean square and so also is its posterior mean. Moreover, simple algebraic manipulations show that the weighted second moment bound in (20) implies a bound on the variance of the k-th element of s[ij] which decays as the k-th element in the main diagonal of ~K. Therefore, condition (20) enforces BIBO stability and sparsity on the posterior mean of S(z).

Regarding the low rank constraint on L(z), let Al be the random matrix such that

 Al = [L1 L2 … LT]. (21)

Consider the constraint

 E[Al (~K⁻¹ ⊗ Im) Al⊤] ≤ Q. (22)

If Q has m − r singular values equal to zero, then the posterior mean of Al has rank less than or equal to r. Therefore, the latter admits the decomposition

 Al = [FH1 FH2 … FHT], (23)

where F and Hk, k = 1…T, are as in Section II. Equivalently, the posterior mean of L(z) admits the decomposition FH(z). Similarly to the sparse part, the weight matrix ~K enforces BIBO stability on the posterior mean of L(z).

Consider the following maximum entropy problem

 max_{p∈P} H(p) (24)
 s.t. E[∥s[ij]∥²_{~K⁻¹}] ≤ pij, i,j = 1…m
    E[Al (~K⁻¹ ⊗ Im) Al⊤] ≤ Q

where pij ≥ 0 for i,j = 1…m, and Q ∈ ¯S+m.

###### Theorem IV.1

The optimal solution to (24) is such that θs and θl are independent and Gaussian with zero mean and covariance matrices

 KS = Γ ⊗ ~K,  KL = Λ ⊗ Im ⊗ ~K (25)

where

 Γ = diag(γ11 … γmm) ∈ S+m² (26)

and Λ ∈ S+m.

The matrices Γ and Λ are the hyperparameters of the kernel matrices KS and KL, respectively. Clearly, we are interested in the limiting case where γij = 0 for some pairs (i,j) and Λ is low rank. Let ^S(z) and ^L(z) denote the posterior means of S(z) and L(z). Then, it can be shown that the maximum entropy solution can be extended by continuity to this limiting case, where the entry of ^S(z) in position (i,j) is null if and only if γij = 0, and the rank of ^L(z) is less than or equal to r if and only if the rank of Λ is less than or equal to r. Thus, Γ tunes the sparsity of the posterior mean of S(z) and Λ tunes the rank of the posterior mean of L(z). Finally, it is worth noting that the hyperparameters tuning the decay rate of the posterior means of the predictor impulse responses are encoded in ~K, [17].
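The Kronecker structure (25) is easy to materialize numerically. The sketch below (our own naming) also illustrates the two tuning mechanisms: a zero weight in Γ switches off the corresponding entry of S(z), and the rank of Λ bounds the rank of KL.

```python
import numpy as np

def sl_kernels(gamma, Lam, Ktilde):
    """Maximum entropy kernels of Eq. (25):
    KS = Gamma ⊗ Ktilde, with Gamma = diag(gamma) holding one nonnegative
    weight per entry of S(z) (a zero weight switches that entry off);
    KL = Lam ⊗ I_m ⊗ Ktilde, whose rank is bounded by the rank of Lam."""
    m = Lam.shape[0]
    assert gamma.size == m ** 2
    KS = np.kron(np.diag(gamma), Ktilde)
    KL = np.kron(np.kron(Lam, np.eye(m)), Ktilde)
    return KS, KL
```

With ~K positive definite, rank(KL) = rank(Λ)·m·T, so a rank-r Λ caps the number of effective latent variables at r.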

## V Estimation of the Hyperparameters

In order to compute ^θs and ^θl we need to estimate the hyperparameters in ~K and the matrices Γ and Λ. The hyperparameters describing ~K are estimated in a preliminary step by minimizing the negative log-likelihood of model (6), see [16]. Γ and Λ are obtained by minimizing the negative log marginal likelihood of y. Under some technical assumptions, we have, [16],

 ℓ(y, Γ, Λ) = (1/2) log det V + (1/2) y⊤V⁻¹y + const. term (27)

where

 V = Φ(KS + KL)Φ⊤ + Σ⊗IN. (28)
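For completeness, (27)-(28) can be evaluated directly, dropping the constant term; in this sketch (names ours) `W` stands for Σ⊗IN:

```python
import numpy as np

def neg_log_marglik(y, Phi, KS, KL, W):
    """Objective (27) up to the constant term:
    V = Phi (KS + KL) Phi^T + W   (Eq. (28), W standing for Sigma ⊗ I_N)
    l = 0.5 * log det V + 0.5 * y^T V^{-1} y."""
    V = Phi @ (KS + KL) @ Phi.T + W
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * logdet + 0.5 * y @ np.linalg.solve(V, y)
```

Using `slogdet` instead of `det` avoids overflow when the data record is long.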

Since (27) is nonconvex in Γ and Λ, only local minima can be computed. Besides that, the joint minimization over Γ and Λ is not trivial, because the sparse and low rank parts may be nonidentifiable from the measured data. For this reason, we constrain the structure of Λ as follows:

 Λ = α(I − UU⊤) + U diag(β1 … βr) U⊤ (29)

where U is an m×r matrix whose columns are the first r singular vectors of a preliminary estimate of Λ. In this way, the constraints in (29) are decoupled along the “most reliable” singular vectors of this estimate and their orthogonal complement. This is equivalent to fixing the latent variables suggested by the preliminary estimate. Regarding the hyperparameter Γ, in [2] it has been shown that the minimization of (27) leads to sparsity in the main diagonal of Γ. Therefore, we minimize (27) with respect to Γ while α, β1…βr and U are fixed. The complete procedure to estimate Γ, Λ and the remaining hyperparameters is described in Algorithm 1, where the superscript (k) denotes the value of a quantity at the k-th iteration. Finally, to efficiently minimize (27) with respect to Γ we use the scaled gradient projection algorithm developed in [4].
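The structured hyperparameter (29) can be sketched as follows; `leading_singvecs` extracts U from a preliminary estimate (our naming; the estimate actually used in Algorithm 1 may differ).

```python
import numpy as np

def structured_Lambda(alpha, beta, U):
    """Hyperparameter matrix of Eq. (29):
    Lambda = alpha (I - U U^T) + U diag(beta) U^T,
    with U an m x r matrix with orthonormal columns and alpha scaling
    the orthogonal complement of span(U)."""
    m, _ = U.shape
    return alpha * (np.eye(m) - U @ U.T) + U @ np.diag(beta) @ U.T

def leading_singvecs(Mhat, r):
    """First r left singular vectors of a preliminary estimate Mhat."""
    Uf, _, _ = np.linalg.svd(Mhat, full_matrices=False)
    return Uf[:, :r]
```

The eigenvalues of Λ are β1, …, βr on span(U) and α on its orthogonal complement, so driving α to zero caps the rank of Λ (and hence the number of latent variables) at r.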

## VI Simulation results

We consider three Monte Carlo studies of 50 runs, where at each run a model with m manifest variables is randomly generated. For each run in the Monte Carlo experiments an identification data set and a test set, both of size 500, are generated. The noise covariance matrix Σ is always estimated via a preliminary step using a low-bias ARX model, see [11].

In the first experiment, the models have fixed McMillan degree and are perturbed versions of (3) with one latent variable and four non null transfer functions in S(z).

The second experiment is identical to the first one, with the exception of the number of latent variables.

In the third experiment, the models have the same McMillan degree, but no special S+L structure.

We compare the following one-step ahead predictors:

• TRUE: this is the one computed from the true model

• PEM: this is the one computed from the PEM approach, as implemented in the pem.m function of the MATLAB System Identification Toolbox

• TC: this is the one computed with the approach described at the beginning of Section III

• SL: this is our method described in Section III-A.

The following performance indexes are considered:

• average relative complexity of the S+L network of the SL model (in percentage)

 AC = (100/50) ∑_{k=1}^{50} #SLk / (m²T) (30)

where #SLk is the number of parameters of the estimated S+L model at the k-th run, whereas m²T is the number of parameters of an unstructured model

• one-step ahead coefficient of determination (in percentage)

 COD = 100 (1 − [(1/500) ∑_{t=1}^{500} ∥ytest(t) − ^ytest(t|t−1)∥²] / [(1/500) ∑_{t=1}^{500} ∥ytest(t) − ¯ytest∥²])

where ¯ytest denotes the sample mean of the test set data and ^ytest(t|t−1) is the one-step ahead prediction computed using the estimated model

• average impulse response fit (in percentage)

 AIRF = 100 (1 − ∑_{k=1}^{50} ∥Gk − ^Gk∥² / ∑_{k=1}^{50} ∥Gk − ¯G∥²) (31)

with ¯G := (1/50) ∑_{k=1}^{50} Gk.
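Given the definitions above, the COD and AIRF indexes are straightforward to compute; this sketch (names ours) follows the stated normalizations.

```python
import numpy as np

def cod(Ytest, Ypred):
    """One-step-ahead coefficient of determination (in percent):
    COD = 100 (1 - sum_t ||y(t) - yhat(t|t-1)||^2 / sum_t ||y(t) - ybar||^2),
    with ybar the sample mean of the test data.
    Ytest and Ypred are (n, m) arrays of samples and predictions."""
    ybar = Ytest.mean(axis=0)
    return 100.0 * (1.0 - np.sum((Ytest - Ypred) ** 2)
                    / np.sum((Ytest - ybar) ** 2))

def airf(G, Ghat):
    """Average impulse response fit (31) (in percent), with Gbar the mean
    of the true coefficient matrices over the lags.
    G and Ghat are (T, m, m) arrays of impulse response coefficients."""
    Gbar = G.mean(axis=0)
    return 100.0 * (1.0 - np.sum((G - Ghat) ** 2)
                    / np.sum((G - Gbar) ** 2))
```

Both indexes equal 100 for a perfect estimate and decrease (possibly below zero) as the error grows.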

Table I shows the percentage of the average relative complexity of the S+L network. In particular, in the first two experiments our method is able to detect that the underlying model is close to having a simple S+L network. Figure 2 shows the COD in the first two experiments. One can see that SL provides a slightly better performance than TC. On the other hand, SL provides better estimates of the predictor coefficients than TC, see Figure 3.

Finally, Figure 4 shows the COD in the third experiment. The median of SL is slightly worse than that of TC. On the other hand, the bottom whisker of SL is better than that of TC. Indeed, SL simplifies the S+L network, see Table I, increasing the robustness of the estimated predictor impulse response coefficients.

## VII Conclusions

In this paper, we proposed a Gaussian regression approach to identify multivariate stochastic processes admitting a sparse dynamic network with few latent nodes. Simulations show that our approach is able to identify an S+L network without compromising the prediction performance.

## References

• [1] A. Abdelwahab, O. Amor, and T. Abdelwahed. The analysis of the interdependence structure in international financial markets by graphical models. Int. Res. J. Finance Econ., 15:291–306, 2008.
• [2] A. Aravkin, J. Burke, A. Chiuso, and G. Pillonetto. Convex vs non-convex estimators for regression and sparse estimation: the mean squared error properties of ARD and GLasso. Journal of Machine Learning Research, 15:217–252, 2014.
• [3] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.
• [4] S. Bonettini, A. Chiuso, and M. Prato. A scaled gradient projection method for Bayesian learning in dynamical systems. SIAM Journal on Scientific Computing, 37:1297–1318, 2015.
• [5] V. Chandrasekaran, P. Parrilo, and A. Willsky. Latent variable graphical model selection via convex optimization. Annals of Statistics (with discussion), 40(4):1935–2013, Apr. 2010.
• [6] T. Chen, H. Ohlsson, and L. Ljung. On the estimation of transfer functions, regularizations and gaussian processes-revisited. Automatica, 48(8):1525–1535, 2012.
• [7] A. Chiuso and G. Pillonetto. A Bayesian approach to sparse dynamic network identification. Automatica, 48(8):1553–1565, 2012.
• [8] M. Choi, V. Chandrasekaran, and A. Willsky. Gaussian multiresolution models: Exploiting sparse markov and covariance structure. IEEE Transactions on Signal Processing, 58(3):1012–1024, March 2010.
• [9] T. Cover and J. Thomas. Information Theory. Wiley, New York, 1991.
• [10] M. Deistler and C. Zinner. Modelling high-dimensional time series by generalized linear dynamic factor models: An introductory survey. Communications in Information & Systems, 7(2):153–166, 2007.
• [11] G. Goodwin, M. Gevers, and B. Ninness. Quantifying the error in estimated transfer functions with application to model order selection. IEEE Transactions on Automatic Control, 37(7):913–928, Jul 1992.
• [12] C. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3):424–438, Aug. 1969.
• [13] E. Kolaczyk. Statistical Analysis of Network Data: Methods and Models. Springer, 2009.
• [14] S. Lauritzen. Graphical Models. Oxford University Press, Oxford, 1996.
• [15] L. Ljung. System Identification (2Nd Ed.): Theory for the User. Prentice Hall, New Jersey, 1999.
• [16] G. Pillonetto, A. Chiuso, and G. De Nicolao. Prediction error identification of linear systems: A nonparametric gaussian regression approach. Automatica, 47(2):291–305, 2011.
• [17] G. Pillonetto and G. De Nicolao. A new kernel-based approach for linear system identification. Automatica, 46:81–93, 2010.
• [18] G. Pillonetto, F. Dinuzzo, T. Chen, G. De Nicolao, and L. Ljung. Kernel methods in system identification, machine learning and function estimation: A survey. Automatica, 50(3):657–682, 2014.
• [19] T. Söderström and P. Stoica, editors. System Identification. Prentice Hall, New Jersey, 1988.
• [20] A. Werhli, M. Grzegorczyk, and D. Husmeier. Comparative evaluation of reverse engineering gene regulatory networks with relevance networks, graphical gaussian models and Bayesian networks. Bioinformatics, 22(20):2523–2531, 2006.
• [21] M. Zorzi and R. Sepulchre. AR identification of latent-variable graphical models. Conditionally accepted in IEEE Transactions on Automatic Control, 2015.