I Introduction
As time series measurements become increasingly commonplace, developing algorithms that can learn the underlying structure and the relationships between the observed variables becomes necessary. An important class of such algorithms focuses on extracting the linear dependencies among the observed variables; this line of work originated with the pioneering paper [1] proposing Granger causality. Linear temporal models have been used in numerous fields, such as financial forecasting [2], biological network modeling [3], and traditional control systems [4], because they are simple enough to learn with limited data and yet effective in practice for modeling time series. In these problems, the first step is to learn the model parameters; further questions about the system can then be investigated, including prediction of future values, imputation of missing variables, and causal inference.
In many real-world datasets, some of the variables may be unobserved; often, even the existence of such unobserved variables is unknown. It is therefore expedient to consider models that allow for hidden or latent variables and extract relationships not only between the observed variables but also between the latent variables. Inference in models with hidden variables has a long and distinguished history; a particular breakthrough is the work of [5], which proposed the Expectation-Maximization (EM) algorithm for maximizing the likelihood of observations in the presence of latent variables. EM-based algorithms, however, do not guarantee convergence to the global optimum of the likelihood landscape.
In this paper, we unite the two threads by considering the learning of linear temporal relationships with latent variables. Our main contributions are the following.

We propose a new linear model for incorporating temporal latent variables, which captures the effects of temporal memory in the system. Our proposed model has two important features:

For each observed variable, there is a latent component that acts as its memory,

Each observed variable is affected by its memory with a random, time-varying delay.


We provide identifiability results for learning the system parameters from the observations.

We propose an efficient algorithm to learn the system parameters. We demonstrate that the proposed algorithm outperforms the baseline method for linear temporal models on both synthetic and real-world datasets.
To illustrate the first aspect of our proposed model, consider an example where the variables are the disease states of various individuals over time, and we are interested in learning how the disease propagates through the network. In this context, an individual who comes into contact with a diseased individual is unlikely to contract the disease immediately; rather, the contact may increase a susceptibility that can later manifest as the disease. Furthermore, susceptible individuals interact and influence each other during this incubation period. In this case, susceptibility is an unobserved latent state that encodes the temporal memory inherent to the system. We refer the reader to Figure 1 for an illustration of the model in a system with four observed time series.
A second aspect of our model is that we allow the observed state to depend on the unobserved susceptibility with a random lag, where the value of the lag follows a certain distribution; this further enriches the temporal memory. The proposed model is also useful when the inter-sampling times are varying, imprecise, or unknown.
The rest of this paper is organized as follows. In Section II, we review the relevant literature. In Section III, we propose our model with latent variables and random lags. In Section IV, we derive linear equations composed of covariances of the observations and the model parameters. In Section V, we first present a theorem regarding the identifiability of the model parameters and then propose an algorithm for learning them. Section VI provides experimental results on synthetic as well as real-world data, and Section VII concludes the paper.
II Related Work
Sparse recovery from time series data has been developed through a long line of pioneering works (e.g., [6, 7, 8]), which are extensively used for Gaussian graphical model selection [9, 10, 11, 12, 13, 14]. A common theme of these works is that all variables involved in the model are observed. In contrast, if there are latent variables that cannot be directly observed, a naive model that ignores them may result in a dense interaction graph with many spurious edges between the variables [15]. The most common approach to this problem is to develop methods based on Expectation-Maximization (EM) [5, 16]; however, because of nonconvexity, EM can get stuck in local optima and comes with weak theoretical guarantees. In [17], the authors proposed an alternative convex optimization approach to Gaussian graphical model selection with latent factors, assuming that i.i.d. samples are drawn from the distribution of the observed variables and that the number of latent variables is much smaller than the number of observed variables.
A somewhat parallel line of research on time series data is to identify causal relationships between the variables. One of the earliest, and perhaps the most well-known, methods is Granger causality [1]. Formally, one time series Granger-causes another if its past values help predict the future values of the other beyond what could be done with the past values of the other alone. Granger causality has been widely employed, owing to its simplicity, robustness, and extendability, and many variants have been proposed for different applications [18, 19, 20, 21, 22]. In [22], the authors applied Lasso regularization to graphical Granger model selection and showed that it performs well in terms of accuracy and scalability. However, as in the non-temporal case, when there are unseen/unknown time series, the simple temporal model can lead to wrong causal inferences [23].
In this paper, we study a model that combines both features: sparse latent variable recovery and temporal causality. Namely, we consider the problem of parameter estimation in a linear dynamical system with random delays between the latent variables and the observed variables. The proposed model is a generalization of the well-studied temporal Gaussian graphical model with hidden variables [24]. In [25], the state space model is generalized by including inputs. In [26], this model is revised so that the previous observations are fed back into the model as inputs. In [27], the authors considered a first-order vector autoregressive (VAR) model with hidden components, assuming that the number of hidden variables is much smaller than the number of observed variables, that the connections between observed components are sparse, and that each latent variable interacts with many observed components. The dependency structure is then learned by decomposing a matrix as a sum of low-rank and sparse matrices. These models, however, cannot handle our problem, as we consider a setting where the number of hidden variables is equal to the number of observed variables. Recently, in [28], the authors considered a first-order VAR model that can accommodate as many latent variables as observed variables. Under this model, conditions such as non-Gaussianity and independent noise are utilized to learn the parameters; this is similar in spirit to Independent Component Analysis [29]. Our model is particularly tailored to the case where each observation has a corresponding latent component. The proposed algorithm for learning the model parameters exploits the special structure of the model and does not need non-Gaussianity or independence to obtain consistent estimates. Furthermore, our model can incorporate more intricate memory due to random lags, whereas [28] is limited to first-order Markov processes. We demonstrate the practical utility of our model by performing prediction on financial time series and climate datasets.

III Model and Problem Setup
We consider a discrete-time linear dynamical system with two types of variables: observed and latent. The unique feature of the proposed model is that the latent states influence the observed variables with random delays. Let denote the vector of latent state variables and the vector of observed variables. The system dynamics are described as
(1) 
The random vectors and represent the state and observation noises, respectively; they are independent of each other and of the values of and . Both noise sources are temporally white (uncorrelated from time step to time step) and spatially multivariate normally distributed with zero mean and covariance matrices denoted by and , respectively. The matrices , , and are of size . The vector represents the delays incurred at the corresponding coordinates at time . Each element of is an integer-valued random variable, independent and identically distributed in according to some distribution, and independent of everything else in the system. We denote the probability mass function of by , where .

Let denote the dimension of the vector . The model in (1) extends some existing models in the literature. For example, by setting and assuming , the gene expression models introduced in [30, 26] can be recovered. The case where and is considered by, for example, [27]. In [28], a setting where and is studied. In this paper, we consider the setting where each observed variable has a corresponding latent state, i.e., , and each observed variable experiences a time-varying delay with respect to its latent state, implying that the matrix is diagonal (and invertible).
With being diagonal, we can further restrict it to be the identity matrix without loss of generality. To see this, let , where is a nonsingular matrix. The model in (1) can be written as:

(2) 

The matrix is required to be diagonal, because needs to be diagonal according to the model in (1). Note that the diagonality of enables us to write (2), since we have . Moreover, like , is multivariate normally distributed (although its covariance might differ, on which we impose no requirement). As a result, any coordinate transformation of the latent variables as , with a nonsingular diagonal matrix, generates observations identically distributed with those of (1). Thus, without loss of generality, we can take to be , and the dynamical system to be
(3) 
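Because the extraction lost the explicit symbols of (3), the following sketch concretizes the model under one plausible reading: a latent linear update, and an observation that reads its own latent coordinate with a per-coordinate random integer delay plus a diagonal autoregressive term. All names here (`A`, `D`, `pmf`, `simulate`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(A, D, d_max, pmf, T, sigma_w=0.1, sigma_v=0.1):
    """Simulate a latent linear system in which each observed coordinate
    reads its own latent memory with an i.i.d. random integer delay.
    Hypothetical concretization: z_t = A z_{t-1} + w_t and
    x_t = z_{t - d_t} + D x_{t-1} + v_t, with the readout matrix F = I."""
    n = A.shape[0]
    z = np.zeros((T, n))
    x = np.zeros((T, n))
    for t in range(1, T):
        z[t] = A @ z[t - 1] + sigma_w * rng.standard_normal(n)
        # each coordinate i draws its own delay d_i ~ pmf on {0, ..., d_max}
        d = rng.choice(d_max + 1, size=n, p=pmf)
        delayed = z[np.maximum(t - d, 0), np.arange(n)]  # z_{t-d_i, i}
        x[t] = delayed + D @ x[t - 1] + sigma_v * rng.standard_normal(n)
    return z, x

n, d_max, T = 4, 2, 500
A = 0.5 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable latent dynamics
D = np.diag(rng.uniform(0.1, 0.3, size=n))                # diagonal observation memory
pmf = np.array([0.5, 0.3, 0.2])                           # delay distribution
z, x = simulate(A, D, d_max, pmf, T)
print(x.shape)  # (500, 4)
```

The per-coordinate fancy indexing is what distinguishes this model from a fixed-lag VAR: each observed series has its own, freshly drawn delay at every time step.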
Iv System Equations
In this section, we derive equations in terms of covariances of the observations and system parameters.
For simplicity, we assume that the system is initialized at the origin, i.e., and are zero vectors. Thus, according to the model in (3), and are multivariate normally distributed with zero mean. The covariance of is defined as , which reduces to since the variables are zero mean. Let denote the cross-covariance of and . Note that, due to stationarity, we have for any integer . For notational convenience, we write and . Let and be vectors. We define the th element of as .
Since is i.i.d., we drop the subscript of when it does not lead to confusion. Let us denote , where and are vectors of time series and is as defined in Eq. (1). Note that is not random. We use the following two lemmas for deriving the equations and in the proofs.
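The cross-covariances defined above can be estimated directly from data. A minimal sketch for a zero-mean, stationary vector series (the function name `cross_cov` is ours):

```python
import numpy as np

def cross_cov(x, k):
    """Empirical lag-k cross-covariance R_k = E[x_{t+k} x_t^T] of a
    zero-mean, stationary vector time series x of shape (T, n)."""
    T = x.shape[0]
    return x[k:].T @ x[:T - k] / (T - k)

rng = np.random.default_rng(2)
x = rng.standard_normal((1000, 3))   # placeholder data; zero mean by construction
R0 = cross_cov(x, 0)                 # lag-0 covariance (symmetric)
R1 = cross_cov(x, 1)                 # lag-1 cross-covariance
print(R0.shape, R1.shape)  # (3, 3) (3, 3)
```

By stationarity, the negative-lag covariance is the transpose of the positive-lag one, so only nonnegative lags need to be estimated.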
Lemma 1.
Proof.
∎
Let and , where is a matrix. Note that , since each coordinate of is shifted independently at random, and therefore the matrix applies to elements of at different times. The following lemma shows that we still have .
Lemma 2.
Let , and be defined as above. We have .
Proof.
Let us denote . We have
where and denote the th row of and , respectively. Therefore, we have . ∎
The following lemma forms the basis of how we learn the parameters.
Lemma 3.
(4) 
Proof.
We have
(5)  
(6) 
Equation (5) is obtained by multiplying the second equation of the model (3) by and taking the expectation. Since is independent of , we have . Equation (6) is obtained by first multiplying the second equation of the model by and taking the expectation, and then substituting from the first equation, using Lemma 2. To derive equations that are independent of the structure of , we need to find such that is zero. We have
Recall that is independent of and for . Thus, for the first term to be zero, we must have . Since lies in and is i.i.d., we must have . For the second term to be zero, we must have and, thus, . Therefore, for , . Substituting from Eq. (5) into Eq. (6), we obtain Eq. (4). ∎
V Algorithm and Main Results
Our goal is to learn the system matrices , and from a sequence of observations , where is the length of the time series. In the following, we present identifiability results and propose an algorithm for recovering the model parameters in a specific setting.
V-A Identifiability of Model Parameters
We first define the notions of generic and identifiable parameters.
Definition 1.
A collection of parameters is said to be generic if each scalar parameter in the collection is chosen independently from a continuous distribution with a probability density. More formally, a collection of random variables is said to be generic if the probability measure of the collection is the product of probability measures on each scalar parameter and each probability measure is absolutely continuous with respect to the Lebesgue measure.
Definition 2 ([30]).
Consider an observable random variable having probability distribution , where the parameter space is an open subset of a multidimensional Euclidean space. We say that this model is identifiable if the family has the property that if and only if . In this parametric setting, we say that the parameter vector is identifiable.

The identifiability property is important because, without it, different values of the parameters could give rise to identically distributed observables, making the statistical problem of estimating them ill-posed [30].
We assume that all the system parameters, including the matrices , , , , , and the pmf , are generic. Since there are as many latent variables as observed variables, not all entries of , and are learnable, even as grows to infinity. The following theorem states the identifiability of the system parameters in (3) as the number of samples grows.
Theorem 1.
Consider a dynamical system as defined in (3) and let . The following statements hold for all generic parameters of , , , , and .

: We can identify and . Note that for , we have and .

: We can identify , , and the matrix up to a scalar multiple.
Proof.
Proof is provided in Appendix A. ∎
The following corollary is implied by Theorem 1.
Corollary 1.
For all , if only one of the matrices , , or is known to be non-diagonal, then the non-diagonal entries of that matrix can be determined as .
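The "up to a scalar multiple" ambiguity in Theorem 1 mirrors the coordinate-rescaling argument of Section III: rescaling the latent variables by a nonsingular diagonal matrix leaves the observation distribution unchanged. The sketch below checks this numerically under a hypothetical concretization of model (3), reusing the same noise and delay draws in both runs so the outputs are exactly comparable; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 3, 200
A = 0.5 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
D = np.diag([0.2, 0.1, 0.3])
Lam = np.diag([2.0, 0.5, 3.0])           # nonsingular diagonal change of coordinates
W = 0.1 * rng.standard_normal((T, n))    # shared noise draws for both runs
V = 0.1 * rng.standard_normal((T, n))
d = rng.integers(0, 2, size=(T, n))      # per-coordinate random delays in {0, 1}

def run(A, C, Lw):
    # z_t = A z_{t-1} + Lw w_t ;  x_t = C z_{t-d_t} + D x_{t-1} + v_t
    z = np.zeros((T, n)); x = np.zeros((T, n))
    for t in range(1, T):
        z[t] = A @ z[t - 1] + Lw @ W[t]
        zd = z[np.maximum(t - d[t], 0), np.arange(n)]
        x[t] = C @ zd + D @ x[t - 1] + V[t]
    return x

x1 = run(A, np.eye(n), np.eye(n))        # latent state read out through F = I
x2 = run(Lam @ A @ np.linalg.inv(Lam),   # transformed latent coordinates
         np.linalg.inv(Lam), Lam)
print(np.allclose(x1, x2))  # True: the observations are unchanged
```

Diagonality of the rescaling matters: because each coordinate is delayed independently, only a diagonal transform commutes with the random delay operation.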
V-B Learning Algorithm
Theorem 1 states that certain combinations of the system matrices are identifiable as the sample size grows to infinity, but it does not provide an algorithm to estimate these matrices in the finite sample regime. In this section, we consider a specific setting and develop a consistent algorithm, based on a convex optimization framework, that recovers the true system parameters.
In particular, we are interested in the setting where each latent variable interacts with only a few other latent variables, in addition to its corresponding observed variable. To isolate the interactions between latent variables, we assume that each observed variable has a causal relationship only with itself. Thus, the matrix is sparse and the matrices and are diagonal. Moreover, we assume that the matrix has positive diagonal entries, implying that each variable positively affects itself. The matrix is also restricted to nonnegative (possibly zero) diagonal entries. In this setting, we are interested in learning the sign-sparsity pattern of , since it captures the underlying causal graph of the process. Through simulation studies with real-world data, we demonstrate that this setting is powerful enough to capture many practical phenomena.
Let , and . Note that with sparse, and and diagonal, the matrix is diagonal and the matrices and are sparse, with the same sign-sparsity pattern as . We can write Eq. (4) as follows.
(7) 
Equation (7) provides a system of linear equations, from which we can obtain and , and hence the sign-sparsity pattern of . Algorithm 1 gives a detailed description of learning the sign-sparsity pattern of , and Lemma 4 states its consistency. Note that in the algorithm we return the sign-sparsity pattern of , because the norm of is larger than the norm of , and, as a result, its values are more reliable.
In Algorithm 1, we use the sample covariance of the process , calculated as . The norm of a matrix (the sum of the absolute values of its elements) is denoted by and is used as a convex relaxation of the norm (the number of nonzero elements). The Frobenius norm of a matrix, denoted by , is the square root of the sum of the squares of its elements.
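Algorithm 1 solves a convex program built from the linear system (7). While the paper's exact objective is not reproduced here, its ℓ1-regularized least-squares core can be sketched with proximal gradient descent (ISTA) on a vectorized unknown; this is a generic sketch, not the paper's implementation, and `ista` is our name.

```python
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    """Minimize 0.5 * ||Phi g - y||_2^2 + lam * ||g||_1 by proximal
    gradient (ISTA): a gradient step followed by soft thresholding."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    g = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ g - y)
        u = g - grad / L
        g = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return g

rng = np.random.default_rng(3)
Phi = rng.standard_normal((100, 30))          # stand-in for the covariance-based system
g_true = np.zeros(30); g_true[:3] = [2.0, -1.5, 1.0]
y = Phi @ g_true + 0.01 * rng.standard_normal(100)
g_hat = ista(Phi, y, lam=0.5)
print(np.sign(g_hat[:3]))  # recovers the sign pattern [ 1. -1.  1.]
```

A sparsity threshold is then swept on the magnitude of the recovered entries to obtain the sign-sparsity pattern, as the algorithm does for the matrix it returns.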
Lemma 4.
Proof.
Algorithm 1 finds the variables that minimize the error in the linear system of equations (7). Since we have exact estimates of the true covariances as , the objective function attains its minimum (zero) at the arguments derived from the true system parameters. Under the algorithm's setting, is sparse, and and are diagonal (with nonnegative diagonals). Thus, according to Corollary 1, the sign-sparsity pattern of is identifiable as , and hence the algorithm outputs the true sign-sparsity pattern of . ∎
VI Experimental Results
In this section, we study the performance of our method on three types of data: synthetic data, stock return data, and climate data. In all three cases, we compare the performance of Algorithm 1 with the baseline Granger Lasso method [22]. A general form of the Granger Lasso method can be written as follows.
where denotes the norm (the square root of the sum of the squares of the elements) of a vector, and for , the are matrices and the are penalty parameters. The dependency matrix is then obtained as .
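The Granger Lasso baseline regresses each series on the stacked past values of all series. A sketch of the lagged design-matrix construction (an ℓ1 penalty would then be added to each column regression; the helper name `lagged_design` is ours):

```python
import numpy as np

def lagged_design(x, L):
    """Build the regression used by graphical Granger methods: predict x_t
    from the stacked past values [x_{t-1}, ..., x_{t-L}].  Returns the
    design matrix (one row per time point) and the targets."""
    T, n = x.shape
    rows = [np.hstack([x[t - l] for l in range(1, L + 1)]) for t in range(L, T)]
    return np.asarray(rows), x[L:]

rng = np.random.default_rng(4)
x = rng.standard_normal((200, 5))      # placeholder multivariate series
Phi, Y = lagged_design(x, L=3)
print(Phi.shape, Y.shape)  # (197, 15) (197, 5)
# Granger Lasso fits each column of Y on Phi with an l1 penalty; the fitted
# coefficient blocks A_1, ..., A_L then give the dependency matrix.
```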
For synthetic data, we also compare our method with the method of [27], proposed for a first-order vector autoregressive (VAR) model with hidden variables:
(8) 
In this model, it is assumed that , and is sparse and captures the underlying interactions among the variables. The dependency structure is learned by decomposing a matrix as a sum of low-rank and sparse matrices, as follows.
where is the nuclear norm, i.e., the sum of singular values, a convex surrogate for low rank.
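Nuclear-norm terms like this are typically handled through the proximal operator of the nuclear norm, i.e., soft thresholding of the singular values. A minimal sketch of that building block (`svt` is our name, not from [27]):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm tau * ||.||_*, the core step of low-rank + sparse decompositions."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(5)
low = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank-2 part
M = low + 0.01 * rng.standard_normal((20, 20))                     # plus small noise
approx = svt(M, tau=0.5)
print(np.linalg.matrix_rank(approx))  # 2: the small singular values are zeroed out
```

Inside an alternating scheme, this step updates the low-rank component while a soft-threshold on the entries updates the sparse component.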
VI-A Synthetic Data
The synthetic datasets are generated according to the model in (3) to study the performance of Algorithm 1 in recovering the true underlying temporal dependency graph of the latent variables. For the matrix , we generate a sparse matrix, where the signs of the nonzero elements are assigned at random and the absolute values of the nonzero elements are generated uniformly at random. The diagonal elements of are also generated randomly according to a uniform distribution, and we set , which results in . The matrices are properly scaled to ensure that the system is stable. Algorithm 1 thus solves the following system of equations

where and . Note that since is diagonal with positive diagonal elements, the sign-sparsity patterns of and are the same.
For each experiment, we generate random datasets and report the average performance over them. Figure 2 shows the comparisons. For the first experiment, we consider variables, an average of five nonzero elements per row of , and . The results are reported for two different numbers of samples, and . For the second experiment, we consider variables, an average of two nonzero elements per row of , and . The results are reported for two different values of and for .
In both experiments, we varied the sparsity threshold and report the ROC curve, i.e., the true positive rate versus the false positive rate of the sign-sparsity pattern of . Generally, with more samples and a smaller maximum delay, we obtain a better estimate of the sign-sparsity pattern of . Notably, our method significantly outperforms the Granger Lasso method and the method for the first-order VAR model with hidden variables [27], especially at small false positive rates. The results also indicate that these baseline models cannot capture what our proposed dynamical system is able to model.
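The ROC curves are obtained by sweeping a threshold on the magnitude of the estimated matrix and scoring the recovered support against the true one. A sketch of that computation, with hypothetical 2x2 matrices (`roc_points` is our name):

```python
import numpy as np

def roc_points(G_hat, G_true, thresholds):
    """Sweep a sparsity threshold on the estimated matrix and report
    (false positive rate, true positive rate) pairs for recovering the
    support of the true interaction matrix."""
    true_edges = G_true != 0
    pts = []
    for th in thresholds:
        est_edges = np.abs(G_hat) > th
        tp = np.sum(est_edges & true_edges)
        fp = np.sum(est_edges & ~true_edges)
        fpr = fp / max(np.sum(~true_edges), 1)
        tpr = tp / max(np.sum(true_edges), 1)
        pts.append((float(fpr), float(tpr)))
    return pts

G_true = np.array([[1.0, 0.0], [0.0, -2.0]])
G_hat = np.array([[0.9, 0.1], [0.05, -1.8]])
print(roc_points(G_hat, G_true, [0.5, 0.01]))  # [(0.0, 1.0), (1.0, 1.0)]
```

A strict threshold keeps only the true edges here; an overly loose one admits every entry, giving the two extreme corners of the ROC curve.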
Figure 2: Results on synthetic data for our proposed method, the Granger Lasso method, and the vector autoregressive model with hidden variables. (a) Results for , with on average five nonzero elements per row of , and different numbers of samples. (b) Results for , with on average two nonzero elements per row of , and different values of .

VI-B Stock Market Data
We take the end-of-day closing stock prices of companies over years, from the beginning of to the end of (roughly samples per year). The companies are: Apple, HP, IBM, Microsoft, Xerox, AT&T, Verizon, Comcast, Oracle, Target, Walmart, Bank of America, Regions Financial, U.S. Bank, Wells Fargo, American Airlines, Caterpillar, Honeywell, International Paper, and Weyerhaeuser. The data were collected from the Google Finance website [31]. The data are normalized so that each variable has zero mean and unit variance.
Since the ground truth is not known, we follow the standard procedure of using prediction error to evaluate the algorithms. We train the models on the data from the first years and perform prediction for the final year. At each time, prediction is done for the -th sample in the future, where . For example, means that the closing stock price of the third day in the future is forecasted. We use -fold cross-validation to tune the parameters.
For simplicity and due to limited training data, we set , and obtain from cross-validation. Cross-validation results in and for our method, and a maximum delay of for the Granger Lasso method.¹ The results for the maximum delay are sensible, since changes tend to be incorporated into stock prices relatively quickly. We suspect more delays would be used if we considered prices at finer granularities (e.g., hourly or minute level). Figure 3 shows the normalized mean square error of the prediction results. Our method outperforms the Granger Lasso method, especially for predicting samples farther in the future.

¹Note that in our model corresponds to in the Granger Lasso model, since in our model (3) with , depends on .
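The evaluation metric and the multi-step forecasting loop can be sketched as follows. The one-step linear map `A_hat` below is a simplified stand-in for the fitted temporal model, not the paper's predictor; `nmse` is our name for the normalized mean square error.

```python
import numpy as np

def nmse(x_true, x_pred):
    """Normalized mean squared error used to score the h-step forecasts."""
    return np.sum((x_true - x_pred) ** 2) / np.sum(x_true ** 2)

def h_step_forecast(x, A_hat, h):
    """Iterate a fitted one-step linear map A_hat to predict x_{t+h} from x_t."""
    pred = x
    for _ in range(h):
        pred = pred @ A_hat.T
    return pred

rng = np.random.default_rng(6)
A = 0.8 * np.eye(3)                              # stand-in fitted dynamics
x = rng.standard_normal((100, 3))
future = x @ np.linalg.matrix_power(A, 2).T      # exact 2-step-ahead values
err = nmse(future, h_step_forecast(x, A, 2))
print(round(err, 6))  # 0.0: iterating the one-step map reproduces the 2-step map
```

On real data the error grows with the horizon h, which is why the comparison in Figure 3 is reported across several values of the prediction step.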
VI-C Climate Data
The Comprehensive Climate Dataset (CCDS) [32] is a collection of climate records of North America from [33]. It contains monthly observations of climate variables spanning from to . The observations were interpolated on a degree grid with observation locations. The variables include carbon dioxide, methane, carbon monoxide, hydrogen, wet days, cloud cover, vapor, precipitation, frost days, temperature, temperature range, minimum temperature, maximum temperature, solar radiation (global horizontal, direct normal, global extraterrestrial, direct extraterrestrial), and ultraviolet (UV) radiation. In the following analysis, we omitted the UV variable because of significant missing data.

Similar to the stock market data, since the ground truth is not known, we use prediction accuracy as a measure of how well we can learn the underlying model. We train the model on the data from the first years and perform prediction for the final three years. At each time, the prediction is made for the values of the variables in the next month. The data are normalized across the variables and across locations to have zero mean and unit variance. We use -fold cross-validation to tune the parameters.
We set in our model, where is also obtained from cross-validation. Figure 4 shows the normalized mean square error for different values of the maximum delay . Note that the maximum delay corresponds to in our model. Our method outperforms the Granger Lasso method in terms of prediction accuracy. Also, for our method, yields the smallest error, while for the Granger Lasso method, produces the best results. Thus, unlike the Granger model, our model suggests that the climate variables may depend on the values of other variables from several months earlier, which is in accordance with the results of [33].
Moreover, notice that the matrix captures the inter-sample correlation and allows the latent variables to model the change occurring at time . For example, in the stock market data, the daily values are highly correlated, and we obtain ; whereas for the climate data, we obtain , which indicates that the monthly values of the variables change significantly.
VII Conclusion
In this paper, we presented a method for learning the underlying dependency graph of a time series, using a new model for temporal latent variables. We proposed an algorithm for learning the parameters and demonstrated its consistency. This study opens several directions for future work.

While we demonstrated consistency, more analysis is required to obtain sample complexity bounds for learning the structure and parameters of the model. Similarly, optimal predictors and missing-value imputation algorithms need to be studied under this new model, which is especially interesting when is large.

We focused on temporal latent variables, which simplify and capture temporal interactions more succinctly, while several existing works have focused on spatial latent variables, which mainly capture interactions between multiple variables [27, 28]. An interesting direction for future study is to combine these two models into a single general model that can capture both.

While we focused on real-valued time series, binary and categorical time series are of equal practical interest. Developing algorithms under simple canonical models for these settings is of immediate interest.

In this paper, we have worked with linear models; however, the memory of stable linear dynamical systems decays exponentially, with or without latent variables. While the latent variables can encode longer-term linear interactions more succinctly, the model still suffers from eventual memory decay. This problem is well recognized in machine learning, and a class of algorithms based on Long Short-Term Memory (LSTM) [34] has demonstrated remarkable empirical performance. However, these methods lack theoretical backing, and developing this line of work to accommodate non-decaying memory is an interesting direction for future work.
References
 [1] C. W. Granger, “Investigating causal relations by econometric models and cross-spectral methods,” Econometrica: Journal of the Econometric Society, pp. 424–438, 1969.
 [2] J. H. Cochrane, “Time series for macroeconomics and finance,” Manuscript, University of Chicago, 2005.
 [3] N. D. Lawrence, Learning and inference in computational systems biology. MIT press, 2010.
 [4] P. C. Young, Recursive estimation and time-series analysis: an introduction. Springer Science & Business Media, 2012.
 [5] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society, Series B (Methodological), pp. 1–38, 1977.
 [6] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society. Series B (Methodological), pp. 267–288, 1996.
 [7] D. L. Donoho, “Compressed sensing,” Information Theory, IEEE Transactions on, vol. 52, no. 4, pp. 1289–1306, 2006.
 [8] M. J. Wainwright, “Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso),” Information Theory, IEEE Transactions on, vol. 55, no. 5, pp. 2183–2202, 2009.
 [9] N. Meinshausen and P. Bühlmann, “High-dimensional graphs and variable selection with the lasso,” The Annals of Statistics, pp. 1436–1462, 2006.
 [10] P. Ravikumar, M. J. Wainwright, G. Raskutti, B. Yu, et al., “High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence,” Electronic Journal of Statistics, vol. 5, pp. 935–980, 2011.
 [11] J. Friedman, T. Hastie, and R. Tibshirani, “Sparse inverse covariance estimation with the graphical lasso,” Biostatistics, vol. 9, no. 3, pp. 432–441, 2008.
 [12] M. Yuan and Y. Lin, “Model selection and estimation in the Gaussian graphical model,” Biometrika, vol. 94, no. 1, pp. 19–35, 2007.
 [13] O. Banerjee, L. El Ghaoui, and A. d’Aspremont, “Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data,” The Journal of Machine Learning Research, vol. 9, pp. 485–516, 2008.
 [14] P. Danaher, P. Wang, and D. M. Witten, “The joint graphical lasso for inverse covariance estimation across multiple classes,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 76, no. 2, pp. 373–397, 2014.
 [15] J. C. Loehlin, Latent variable models: An introduction to factor, path, and structural analysis. Lawrence Erlbaum Associates Publishers, 1998.
 [16] R. A. Redner and H. F. Walker, “Mixture densities, maximum likelihood and the EM algorithm,” SIAM Review, vol. 26, no. 2, pp. 195–239, 1984.
 [17] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky, “Latent variable graphical model selection via convex optimization,” in Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pp. 1610–1613, IEEE, 2010.
 [18] A. B. Barrett, L. Barnett, and A. K. Seth, “Multivariate Granger causality and generalized variance,” Physical Review E, vol. 81, no. 4, p. 041907, 2010.
 [19] C. Hiemstra and J. D. Jones, “Testing for linear and nonlinear Granger causality in the stock price-volume relation,” The Journal of Finance, vol. 49, no. 5, pp. 1639–1664, 1994.
 [20] A. Brovelli, M. Ding, A. Ledberg, Y. Chen, R. Nakamura, and S. L. Bressler, “Beta oscillations in a large-scale sensorimotor cortical network: directional influences revealed by Granger causality,” Proceedings of the National Academy of Sciences of the United States of America, vol. 101, no. 26, pp. 9849–9854, 2004.
 [21] D. Marinazzo, M. Pellicoro, and S. Stramaglia, “Kernel-Granger causality and the analysis of dynamical networks,” Physical Review E, vol. 77, no. 5, p. 056215, 2008.
 [22] A. Arnold, Y. Liu, and N. Abe, “Temporal causal modeling with graphical Granger methods,” in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 66–75, ACM, 2007.
 [23] P. Spirtes, C. N. Glymour, and R. Scheines, Causation, prediction, and search. MIT press, 2000.

 [24] Z. Ghahramani, “An introduction to hidden Markov models and Bayesian networks,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 15, no. 1, pp. 9–42, 2001.
 [25] Y. Bengio and P. Frasconi, “Input-output HMMs for sequence processing,” Neural Networks, IEEE Transactions on, vol. 7, no. 5, pp. 1231–1249, 1996.
 [26] M. J. Beal, F. Falciani, Z. Ghahramani, C. Rangel, and D. L. Wild, “A Bayesian approach to reconstructing genetic regulatory networks with hidden factors,” Bioinformatics, vol. 21, no. 3, pp. 349–356, 2005.
 [27] A. Jalali and S. Sanghavi, “Learning the dependence graph of time series with latent factors,” arXiv preprint arXiv:1106.1887, 2011.
 [28] P. Geiger, K. Zhang, B. Schoelkopf, M. Gong, and D. Janzing, “Causal inference by identification of vector autoregressive processes with hidden components,” in Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1917–1925, 2015.
 [29] J. Eriksson and V. Koivunen, “Identifiability, separability, and uniqueness of linear ICA models,” Signal Processing Letters, IEEE, vol. 11, no. 7, pp. 601–604, 2004.
 [30] C. Rangel, J. Angus, Z. Ghahramani, and D. L. Wild, “Modeling genetic regulatory networks using gene expression profiling and state-space models,” in Probabilistic Modeling in Bioinformatics and Medical Informatics, pp. 269–293, Springer, 2005.
 [31] “https://www.google.com/finance.”
 [32] “http://www-bcf.usc.edu/~liu32/data/NA19902002Monthly.csv.”
 [33] A. C. Lozano, H. Li, A. Niculescu-Mizil, Y. Liu, C. Perlich, J. Hosking, and N. Abe, “Spatial-temporal causal modeling for climate change attribution,” in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 587–596, ACM, 2009.
 [34] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
 [35] K. B. Petersen et al., “The matrix cookbook.”
Appendix A Proofs of Theorems and Lemmas
We use the following lemma in our analysis.
Lemma 5.
The set of zeros of any nonzero multivariate polynomial of degree $d$ has Lebesgue measure zero on $\mathbb{R}^n$ [28].
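Lemma 5 is what makes "generic parameters" arguments work: parameters drawn from any continuous distribution land on the zero set of a fixed nonzero polynomial with probability zero. A minimal numerical sketch, using an illustrative polynomial that is our own choice, not one from the paper:

```python
import numpy as np

# Illustrative polynomial p(x, y) = x*y - 1; its zero set is a curve in R^2,
# which has Lebesgue measure zero. Continuous random draws avoid it almost surely.
rng = np.random.default_rng(0)

def p(x, y):
    return x * y - 1.0

samples = rng.standard_normal((10_000, 2))
values = p(samples[:, 0], samples[:, 1])
# With probability one, no draw lands exactly on the zero set.
all_nonzero = bool(np.all(values != 0.0))
```

This is exactly the mechanism invoked later in the proofs of Theorems 2 and 3: a determinant that is a nonzero polynomial in the parameters is nonzero for generic parameter values.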
We also use the equality $\mathrm{vec}(ABC) = (C^\top \otimes A)\,\mathrm{vec}(B)$ [35], where $\mathrm{vec}(\cdot)$ is the vector operator which stacks the columns of a matrix into a vector, $\otimes$ is the Kronecker product, and $A$, $B$, and $C$ are square matrices of the same size. We denote by .
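The vectorization identity can be checked numerically; the following sketch verifies $\mathrm{vec}(ABC) = (C^\top \otimes A)\,\mathrm{vec}(B)$ on random matrices:

```python
import numpy as np

# Verify vec(ABC) = (C^T kron A) vec(B) [35] on random square matrices,
# where vec stacks the columns of a matrix into one vector.
rng = np.random.default_rng(1)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

def vec(M):
    return M.flatten(order="F")  # column-major flattening = stack columns

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
match = bool(np.allclose(lhs, rhs))
```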
In the following, we prove Theorem 1.
Lemma 6.
For every , each element of is a multivariate polynomial in the elements of , , , and the pmf .
Proof.
Using the equations of the model (3), we can write
(9)  
(10) 
In Equations 9 and 10, the subscripts on the right-hand sides are greater than the subscripts on the left-hand sides. Moreover, note that, for , we have ; therefore, the subscripts are never positive. As a result, for every , by recursive use of Equations 9 and 10, each element of can be expressed as a multivariate polynomial in the elements of , , , , , and the pmf . Given that and , we conclude that for every , each element of is a multivariate polynomial in the elements of , , , and the pmf . ∎
Lemma 7.
For every , each element of can be expressed as a rational function, where both the numerator and the denominator are multivariate polynomials in the elements of the matrices and , , , and the pmf .
Proof.
We have the following equations
(11) 
where the first four equations are derived from the first equation of the model (3) and the last three equations are derived from the second equation of the model. Note that, for every , . By converting the equations in 11 to vector form, we have
(12) 
According to Lemma 6, each element of can be expressed as a multivariate polynomial in the elements of the system parameters. Also, note that since , is a permutation of . Therefore, Equ. 12 forms a linear system with the same number of equations as unknowns, the unknowns being the elements of and . As a result, each element of can be solved for as a rational function, where both the numerator and the denominator are multivariate polynomials in the elements of the system parameters. Using Equ. 4, can also be expressed as a rational function with both the numerator and the denominator being multivariate polynomials. Therefore, the proof is complete. ∎
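The step of converting covariance equations to vector form and solving a square linear system is the standard Lyapunov-equation trick, which relies on the vectorization identity above. As a hedged sketch (the dynamics matrix and noise covariance below are illustrative, not the paper's system): the stationary covariance $\Sigma$ of $x_{t+1} = A x_t + w_t$ with $\mathrm{Cov}(w_t) = Q$ satisfies $\Sigma = A \Sigma A^\top + Q$, which vectorizes to $\mathrm{vec}(\Sigma) = (I - A \otimes A)^{-1}\mathrm{vec}(Q)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# Illustrative stable dynamics matrix (rescaled so the spectral radius is 0.5).
A = rng.standard_normal((n, n))
A = 0.5 * A / np.max(np.abs(np.linalg.eigvals(A)))
Q = np.eye(n)  # illustrative noise covariance

# Solve Sigma = A Sigma A^T + Q via vec(Sigma) = (I - A kron A)^{-1} vec(Q);
# vec(A Sigma A^T) = (A kron A) vec(Sigma) by the identity vec(ABC) = (C^T kron A) vec(B).
vec_sigma = np.linalg.solve(np.eye(n * n) - np.kron(A, A), Q.flatten(order="F"))
Sigma = vec_sigma.reshape((n, n), order="F")
residual = float(np.max(np.abs(Sigma - (A @ Sigma @ A.T + Q))))
```

Because the solution of a linear system is obtained by Cramer's rule, each entry of the solution is a ratio of determinants, i.e., a rational function of the system parameters, which is the form Lemma 7 asserts.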
Let be a block Toeplitz matrix as follows.
(13) 
We consider two cases of and .
Case 1:
In this case, Equ. 7 can be written as , and therefore
(14) 
Note that for , we have and . Writing Equ. 14 for , we get
(15) 
Lemma 8.
Proof.
Let , , and and . Using the model equations, we obtain , , and . By setting and , the corresponding instance of is full-rank. ∎
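Lemma 8's strategy of exhibiting a single parameter instance at which the matrix is full-rank is how one certifies that a determinant, viewed as a polynomial in the parameters, is not identically zero. A minimal numerical sketch with a hypothetical two-parameter matrix (our own example, not the paper's):

```python
import numpy as np

# M(a, b) = [[a, b], [b, a]] has det = a^2 - b^2, a polynomial in (a, b).
# One instance with nonzero determinant certifies the polynomial is not
# identically zero, so by Lemma 5 the matrix is full-rank for generic (a, b).
def M(a, b):
    return np.array([[a, b], [b, a]])

instance_rank = int(np.linalg.matrix_rank(M(2.0, 1.0)))    # off the zero set
degenerate_rank = int(np.linalg.matrix_rank(M(1.0, 1.0)))  # a = b lies on the zero set
```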
Theorem 2.
Proof.
According to Lemma 7, each element of can be written as a rational function, where both the numerator and the denominator are multivariate polynomials in the elements of the system parameters , , , , and . Therefore, we can conclude that , where and are multivariate polynomials in the elements of the system parameters. According to Lemma 8, these multivariate polynomials are nonzero, because there exists an instance which results in a nonzero value for . Since the Lebesgue measure of the set of roots of a nonzero multivariate polynomial is zero, we conclude that is nonzero for generic parameters. ∎
Lemma 9.
Proof.
According to Equ. 14, can be written in terms of and . Therefore, for , the first row of is a linear function of the other rows, and, as a result, the matrix is not full-rank. ∎
Case 2:
Lemma 10.
Proof.
Let , , , , and be a uniform distribution in . Using the model equations (3), we get
(18) 
Since the variables are independent and identically distributed, all matrices are multiples of the identity. Therefore, we can write , where is the corresponding matrix for one variable, as follows
where denotes the variance. Since , we only need to show that matrix is full-rank. Using Equ. 18, we can write
(19) 
where . By solving the system of equations in 19, we obtain
Also, using the following equation

we solve , and, since , we have . Now, we compute the determinant of by performing column operations. More specifically, we replace with , where is the th column of . The resulting matrix is upper triangular, as follows

Since all diagonal entries are nonzero, the determinant of is nonzero, and, as a result, matrix and also are full-rank. ∎
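The column operation used above, replacing a column with itself plus a multiple of another column, leaves the determinant unchanged; once the matrix is triangularized this way, its determinant is read off as the product of the diagonal entries. A quick numerical illustration on random matrices (our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))

# Adding a multiple of one column to another preserves the determinant.
M2 = M.copy()
M2[:, 0] += 2.0 * M[:, 3]
dets_match = bool(np.isclose(np.linalg.det(M), np.linalg.det(M2)))

# For an upper triangular matrix, det = product of the diagonal entries.
T = np.triu(rng.standard_normal((4, 4)))
tri_ok = bool(np.isclose(np.linalg.det(T), np.prod(np.diag(T))))
```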
Theorem 3.
Proof.
The proof is similar to the proof of Theorem 2. ∎
Lemma 11.
Proof.
According to Equ. 7, can be written in terms of . Therefore, for , the first row of is a linear function of the other rows, and, as a result, the matrix is not full-rank. ∎
For , according to Theorem 2, is full-rank for generic parameters, and according to Lemma 9, is not full-rank for . Also, for , according to Theorem 3, is full-rank for generic parameters, and according to Lemma 11, is not full-rank for . Therefore, we have the following corollary.
Corollary 2.
Let be as defined in Equ. 13. The maximum for which is full-rank is .
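The phenomenon behind Corollary 2, that the rank of a structured matrix built from the process stops growing once linear recurrences among its rows kick in, can be illustrated on a toy analogue (this is our own Hankel-style construction, not the paper's block Toeplitz matrix of Equ. 13):

```python
import numpy as np

# A sequence obeying a depth-2 linear recurrence: h[k+2] = 1.5 h[k+1] - 0.56 h[k].
h = [1.0, 2.0]
for _ in range(20):
    h.append(1.5 * h[-1] - 0.56 * h[-2])

def hankel(L):
    # L x L matrix with entry (i, j) = h[i + j]; row i+2 = 1.5 row i+1 - 0.56 row i.
    return np.array([[h[i + j] for j in range(L)] for i in range(L)])

ranks = [int(np.linalg.matrix_rank(hankel(L))) for L in (1, 2, 3, 4)]
# Rank grows until the recurrence makes further rows linearly dependent.
```

As in the corollary, there is a maximal size up to which the structured matrix is full-rank; beyond it, each new row is a fixed linear combination of earlier rows.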
Now, we are ready to prove Theorem 1. Recall that and . We have the following cases.