Separable correlation and maximum likelihood

05/01/2018 ∙ by Karl Oskar Ekvall, et al. ∙ University of Minnesota

We consider estimation of the covariance matrix of a multivariate normal distribution when the correlation matrix is separable in the sense that it factors as a Kronecker product of two smaller matrices. A computationally convenient coordinate descent-type algorithm is developed for maximum likelihood estimation. Simulations indicate our method often gives smaller estimation error than some common alternatives when correlation is separable, and that correctly sized tests for correlation separability can be obtained using a parametric bootstrap. Using dissolved oxygen data from the Upper Mississippi River, we illustrate how our model can lead to interesting scientific findings that may be missed when using competing models.



1 Introduction

Many statistical applications require the estimation of a covariance matrix Σ ∈ S⁺_d, the set of symmetric, positive definite d × d matrices, for some R^d-valued random vectors Y_1, …, Y_n. When the number of observations is small, meaning n ≤ d if there are no other parameters to be estimated, and the vectors are multivariate normal, a maximum likelihood estimate of Σ does not exist. Two common ways to estimate Σ in such settings are shrinkage estimators, see e.g. Fan et al. (2016), and more parsimonious parameterizations, which may also result from application-specific considerations. In the spatio-temporal literature it is sometimes assumed that the covariance is separable (Cressie and Wikle, 2011, section 6.1.3) in the sense that Σ factors as a Kronecker product of two smaller covariance matrices. In this article we consider estimation of Σ when the corresponding correlation matrix C = D⁻¹ΣD⁻¹ is separable, but Σ need not be; here D is the diagonal matrix of standard deviations defined in Section 2. Mathematically, C = C_2 ⊗ C_1 for some C_1 ∈ C_r and C_2 ∈ C_c, where C_k denotes the set of k × k positive definite correlation matrices, ⊗ denotes the Kronecker product, and d = rc. As we shall see, this is a weaker assumption than covariance separability that still, often substantially, reduces the number of parameters compared to an unrestricted covariance matrix.

Correlation separability has a natural interpretation for two-dimensional, or matrix-valued, data. If Y is an r × c random matrix such that vec(Y) has correlation matrix C_2 ⊗ C_1, where vec is the vectorization operator stacking the columns of its matrix argument, then every row of Y has the same correlation matrix C_2, and every column of Y has the same correlation matrix C_1. In our example with spatio-temporal data, rows of Y correspond to different sampling locations, or areas, and the columns to different points in time. Then separability of C says that the temporal autocorrelation function for observations from any one location, given by C_2, is the same as that for observations from any other location. Similarly, the spatial autocorrelation function, given by C_1, is the same for all time points.
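
To make the interpretation concrete, the following minimal R sketch (R being the language of the paper's accompanying package; all function and variable names here are illustrative) builds a separable correlation matrix with first order autoregressive factors and verifies the row and column statements above.

```r
## Build C = C2 %x% C1 and check that every column of Y has
## correlation matrix C1 and every row has correlation matrix C2.
ar1 <- function(k, rho) rho^abs(outer(1:k, 1:k, "-"))
r <- 5; c <- 3
C1 <- ar1(r, 0.5)        # correlation within a column (locations)
C2 <- ar1(c, 0.5)        # correlation within a row (time points)
C  <- kronecker(C2, C1)  # correlation of vec(Y), columns stacked

## Column 1 of Y occupies positions 1:r of vec(Y); its correlation
## matrix is C1 because the diagonal entries of C2 equal one.
all.equal(C[1:r, 1:r], C1)                 # TRUE
## Row i of Y occupies positions i, i + r, ..., i + (c - 1) * r;
## its correlation matrix is C2, for every i.
i <- 2; idx <- i + r * (0:(c - 1))
all.equal(C[idx, idx], C2)                 # TRUE
```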

To the best of our knowledge, previous research on correlation separability has been concerned with models that also impose other restrictions on Σ, which we do not. For example, both Matsuda and Yajima (2004) and Ren et al. (2013) consider models with separable correlation, but make additional assumptions that, taken together with correlation separability, imply covariance separability. A considerable amount of research has been done on models with separable covariance, which when vec(Y) is multivariate normal is equivalent to assuming that Y has a matrix normal distribution. For example, Dutilleul and Pinel-Alloul (1996) suggest the matrix normal distribution for analyzing spatio-temporal data, Soloveychik and Trushin (2016) give theoretical results for the existence and uniqueness of maximum likelihood estimates of the Kronecker factors, and Ding and Cook (2017) develop envelope methods for regressions with matrix-valued responses.

2 The Separable Correlation Model

We assume that y_1, …, y_n ∈ R^d are realizations of independent, multivariate normal d-vectors Y_1, …, Y_n with means E(Y_i) = β′x_i for some non-stochastic predictors x_i ∈ R^p (i = 1, …, n) and parameter matrix β ∈ R^{p×d}. Since the focus of this article is the structure in Σ, we often take x_i = 1 (p = 1) for simplicity, but it is useful to allow for predictors in many applications, including our data example. Our model is thus a multivariate linear regression model with separable correlation matrix, which we for short call the separable correlation model. We refer the reader to, for example, Rencher and Christensen (2012, chapter 10) for a discussion of the classical multivariate linear regression model with focus on β.

It is easy to show that any Σ ∈ S⁺_d can be decomposed as Σ = DCD, where D is a diagonal matrix with diagonal entries σ_1, …, σ_d equal to the positive square roots of the diagonal entries of Σ, i.e. the standard deviations of the elements of Y_i, and C = D⁻¹ΣD⁻¹ ∈ C_d. We call diagonal matrices with positive entries, like D, standard deviation matrices. From writing Σ in terms of D and C it follows that the assumption of separable correlation can also be written Σ = D(C_2 ⊗ C_1)D. The negative log-likelihood for n observations in our model is thus, up to scaling and additive constants,

g(β, D, C_1, C_2) = log |D(C_2 ⊗ C_1)D| + (1/n) ∑_{i=1}^n (y_i − β′x_i)′ [D(C_2 ⊗ C_1)D]⁻¹ (y_i − β′x_i),   (1)

where the equality defines g and |·| denotes the determinant of its matrix argument.
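
Equation (1) can be evaluated without forming the d × d inverse explicitly; the R sketch below (our own names, not the kpcor API) uses the identity log |D(C_2 ⊗ C_1)D| = 2 ∑_j log σ_j + c log |C_1| + r log |C_2| together with a single linear solve.

```r
## A minimal sketch evaluating (1); E is the n x d matrix of
## residuals y_i - beta' x_i (hypothetical variable names).
neg_loglik <- function(E, sigma, C1, C2) {
  r <- nrow(C1); c <- nrow(C2); n <- nrow(E)
  Es <- sweep(E, 2, sigma, "/")              # scale out D
  S  <- crossprod(Es) / n                    # D^-1 (sample cov) D^-1
  quad <- sum(diag(solve(kronecker(C2, C1), S)))
  2 * sum(log(sigma)) +
    c * as.numeric(determinant(C1)$modulus) +
    r * as.numeric(determinant(C2)$modulus) + quad
}
```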

The multivariate linear regression model with separable covariance, which we call the separable covariance model, or the matrix normal model when x_i = 1 (p = 1), is a special case of our model. Indeed, if the standard deviation matrix is also separable, so that D = D_2 ⊗ D_1 for some r × r and c × c standard deviation matrices D_1 and D_2, then Σ = (D_2C_2D_2) ⊗ (D_1C_1D_1) = Σ_2 ⊗ Σ_1, where Σ_1 and Σ_2 are covariance matrices defined by the last equality. Given the similarities, we view the separable covariance model as the closest competitor to our model and, hence, use it for comparisons throughout. The difference between the models is that in the separable covariance model the variances must satisfy var(Y_{jk}) = d²_{1,j} d²_{2,k}; in our model there are no parametric restrictions on the variance structure. Here, Y_{jk} is the element in row j and column k of Y, d_{1,j} is the jth diagonal entry of D_1, and similarly for d_{2,k}. When the rows of Y index sampling locations and the columns index time points, this restriction rules out the possibility that for one sampling location the variability of measurements is largest in time period one, whereas for another location it is largest in time period two, for example. In our data example we shall see that this may be too restrictive. The flexibility in variance structure offered by the separable correlation model requires (r − 1)(c − 1) parameters more than what is needed in the separable covariance model. The difference results from the fact that there are d = rc standard deviation parameters in the separable correlation model, namely the diagonal entries of D. In the separable covariance model there are r + c variance parameters, namely the diagonal entries of D_1 and D_2, minus one due to an identifiability constraint explained below. The correlation structure is the same for both models and hence requires the same number of parameters. While the difference in total parameter counts may be substantial for some settings, it is notable that, as r, c → ∞, the total number of covariance parameters is of the order r² + c² for both models while it is of the order r²c² for an unstructured covariance matrix. A quick numerical check of the variance restriction is sketched below.
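
One way to see the restriction numerically: under separable covariance, the d = rc standard deviations, arranged as an r × c matrix in the vec ordering, always form a rank-one matrix. A small R sketch of this check (function name is ours), using the evenly spaced standard deviations from the simulations in Section 5:

```r
## Under separable covariance sigma = s2 %x% s1, so matrix(sigma, r, c)
## has rank one; evenly spaced entries do not factor this way.
is_sep_sd <- function(sigma, r, c, tol = 1e-8) {
  d <- svd(matrix(sigma, r, c))$d   # singular values of the sd matrix
  d[2] < tol * d[1]                 # rank one up to tolerance
}
is_sep_sd(kronecker(c(1, 2, 3), c(1, 4, 9, 16, 25)), 5, 3)  # TRUE
is_sep_sd(seq(0.1, 10, length.out = 15), 5, 3)              # FALSE
```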

When Σ = Σ_2 ⊗ Σ_1 for some covariance matrices Σ_1 and Σ_2, then for any constant a > 0, the matrices aΣ_1 and a⁻¹Σ_2 are also covariance matrices, and Σ = (a⁻¹Σ_2) ⊗ (aΣ_1). That is, Σ_1 and Σ_2 are not identified unless additional restrictions are imposed. One possible restriction that ensures identifiability is to fix the scale of one factor, for example by requiring a diagonal entry of Σ_1 to equal one, which leads to the difference in parameter counts stated above. Proposition 1 shows that no additional restrictions are needed to identify the parameters in the separable correlation model. All proofs are in the supplemental material.

Proposition 1

If f_{β,D,C_1,C_2}(y) = f_{β̃,D̃,C̃_1,C̃_2}(y) for Lebesgue-almost every y, where f denotes the joint density of y_1, …, y_n, and the predictor matrix X = [x_1, …, x_n]′ has full column rank, then (β, D, C_1, C_2) = (β̃, D̃, C̃_1, C̃_2).

3 Maximum Likelihood

To obtain maximum likelihood estimates, g is minimized jointly in (β, D, C_1, C_2). The coefficient β can be profiled out of the optimization problem before considering D, C_1, and C_2 in more detail. Differentiating g with respect to β, and twice with respect to β, one gets

∂g/∂β = −(2/n) X′(Y − Xβ)Σ⁻¹,   ∂²g/∂vec(β)∂vec(β)′ = (2/n) Σ⁻¹ ⊗ X′X,

where Y = [y_1, …, y_n]′ and Σ = D(C_2 ⊗ C_1)D. Thus, assuming X′X is invertible, which we do throughout, g is strictly convex in β, and the unique partial minimizer in β is the least squares estimator β̂ = (X′X)⁻¹X′Y. We shall use hats to denote maximum likelihood estimates, and subscripts ρ and K to indicate whether they are from the separable correlation or covariance model, respectively. Since β̂ is the same regardless of the model for Σ, we omit the subscript. The rest of this article is focused on minimizing the objective function g(D, C_1, C_2) = g(β̂, D, C_1, C_2) defined by plugging in β̂, and studying the properties of its minimizers, i.e. the maximum likelihood estimates of the covariance parameters.
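
In R, profiling out the coefficient amounts to one least squares computation before the covariance iterations begin (a sketch with hypothetical names):

```r
## Ordinary least squares coefficient, computed once up front;
## X is the n x p predictor matrix and Y the n x d response matrix.
beta_hat <- function(X, Y) solve(crossprod(X), crossprod(X, Y))
```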

The minimizers of g cannot be expressed in closed form. In the separable covariance model, where the Kronecker factors Σ_1 and Σ_2 are covariance matrices, they can be found using the so-called flip-flop algorithm of Dutilleul (1999), given that n is large enough in comparison to r and c (Soloveychik and Trushin, 2016). The flip-flop algorithm is essentially a coordinate descent algorithm where the iterates are rescaled for identifiability reasons at every iteration. The algorithm we develop here, Algorithm 1, builds on similar ideas. More specifically, the idea is to update, or iteratively optimize, coordinate by coordinate over a strictly larger space than the parameter space, and then use a type of rescaling to force the iterates to lie in the original parameter space. We next walk through the execution of this idea more formally.

When deriving updates for D we consider the diagonal entries individually and write, with some overloading of notation, g(β, σ_1, …, σ_d, C_1, C_2) = g(β, D, C_1, C_2), where D = diag(σ_1, …, σ_d) denotes the diagonal matrix with diagonal entries σ_1, …, σ_d. Treating the diagonal elements of D separately, the function g is formally defined on R^{p×d} × (0, ∞)^d × C_r × C_c. Some steps in the derivation of the algorithm we propose for minimizing g require we extend it to the space R^{p×d} × (0, ∞)^d × S⁺_r × S⁺_c. That is, in some steps we allow the factors to be general covariance matrices Σ_1 and Σ_2. For simplicity, we still denote the extension by g and write g(β, σ_1, …, σ_d, Σ_1, Σ_2), for all (β, σ_1, …, σ_d, Σ_1, Σ_2) ∈ R^{p×d} × (0, ∞)^d × S⁺_r × S⁺_c. We also write g(β, Σ) to indicate that g can be viewed as a function of Σ = D(Σ_2 ⊗ Σ_1)D. Extending g to the larger space makes the parameters, or optimization variables, β, σ_1, …, σ_d, Σ_1, Σ_2, unidentifiable in the sense that we can find two, in fact infinitely many, points such that Σ, and hence g, is the same. However, Proposition 2 shows that we can identify each point in the larger space with one in the parameter space, where each point corresponds to exactly one Σ, i.e. the parameters are identified.

Proposition 2

For every point (σ_1, …, σ_d, Σ_1, Σ_2) ∈ (0, ∞)^d × S⁺_r × S⁺_c, there is a unique point (σ̃_1, …, σ̃_d, C_1, C_2) ∈ (0, ∞)^d × C_r × C_c such that D̃(C_2 ⊗ C_1)D̃ = D(Σ_2 ⊗ Σ_1)D, where D = diag(σ_1, …, σ_d) and D̃ = diag(σ̃_1, …, σ̃_d).

It is clear from the constructive proof of Proposition 2 that if we are given a point in the larger space, the equivalent point in the parameter space not only exists, but is easy to compute in practice. Thus, the proposition gives us a way to update the objective function over the larger space and then find an equivalent point in the parameter space; a sketch of this computation is given below.
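
```r
## A sketch of the rescaling in Proposition 2 (our own function
## name; cov2cor is base R): fold the factor scales into the
## standard deviations so that Sigma is unchanged.
rescale <- function(sigma, Sigma1, Sigma2) {
  s1 <- sqrt(diag(Sigma1)); s2 <- sqrt(diag(Sigma2))
  ## D (Sigma2 %x% Sigma1) D = D* (C2 %x% C1) D*, with
  ## D* = D diag(s2 %x% s1), because diagonal matrices commute.
  list(sigma = sigma * as.numeric(kronecker(s2, s1)),
       C1 = cov2cor(Sigma1), C2 = cov2cor(Sigma2))
}
```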

Let (β̂, σ^(t), C_1^(t), C_2^(t)) be the current iterate and let Σ_1^(t+1) be the solution to ∂g/∂Σ_1 = 0. It is useful for this update to note that g can be equivalently written as

g(β, σ_1, …, σ_d, Σ_1, Σ_2) = 2 ∑_{j=1}^d log σ_j + c log |Σ_1| + r log |Σ_2| + (1/n) ∑_{i=1}^n tr[Σ_1⁻¹ R_i Σ_2⁻¹ R_i′],   (2)

where the R_i are r × c matrices defined by vec(R_i) = D⁻¹(y_i − β′x_i) (i = 1, …, n). By differentiating (2), Σ_1^(t+1) is easily seen to be

Σ_1^(t+1) = (nc)⁻¹ ∑_{i=1}^n R_i^(t) (C_2^(t))⁻¹ (R_i^(t))′,

where R_i^(t) denotes R_i evaluated at the current iterate. When Σ_1^(t+1) has full rank, it is the unique partial minimizer of g, holding the other arguments fixed at their previous iterates. The matrix Σ_1^(t+1) is symmetric, positive semi-definite by construction; we have not been able to prove that

Σ_1^(t+1) is positive definite with probability one in general. In practice

Σ_1^(t+1) is positive definite for all t whenever n is large enough, both in our data example and simulations. From here on, unless otherwise noted, we assume that the iterates are indeed positive definite for all t.

Next let Σ_2^(t+1) be the solution to ∂g/∂Σ_2 = 0, which admits the closed form expression

Σ_2^(t+1) = (nr)⁻¹ ∑_{i=1}^n (R_i^(t))′ (Σ_1^(t+1))⁻¹ R_i^(t).

The properties of Σ_2^(t+1) are analogous to those of Σ_1^(t+1), so we do not discuss them in detail. Most importantly, Σ_2^(t+1) is the unique partial minimizer of g, keeping the other iterates fixed, whenever it has full rank.
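
In R the two closed form updates might look as follows (a sketch over the enlarged space; R_arr is an r × c × n array holding the matrices R_i^(t), and all names are ours):

```r
## Flip-flop-type updates of the Kronecker factors.
update_Sigma1 <- function(R_arr, Sigma2) {
  r <- dim(R_arr)[1]; c <- dim(R_arr)[2]; n <- dim(R_arr)[3]
  S2inv <- solve(Sigma2)
  S <- matrix(0, r, r)
  for (i in 1:n) S <- S + R_arr[, , i] %*% S2inv %*% t(R_arr[, , i])
  S / (n * c)
}
update_Sigma2 <- function(R_arr, Sigma1) {
  r <- dim(R_arr)[1]; c <- dim(R_arr)[2]; n <- dim(R_arr)[3]
  S1inv <- solve(Sigma1)
  S <- matrix(0, c, c)
  for (i in 1:n) S <- S + t(R_arr[, , i]) %*% S1inv %*% R_arr[, , i]
  S / (n * r)
}
```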

The point (σ^(t), Σ_1^(t+1), Σ_2^(t+1)) typically lies outside the parameter space, meaning Σ_1^(t+1) and Σ_2^(t+1) are not correlation matrices, but, by construction, the objective function has not increased. Using the results in Proposition 2, the updates C_1^(t+1) and C_2^(t+1) are then found by computing the (unique) point (σ̃^(t), C_1^(t+1), C_2^(t+1)) such that D̃^(t)(C_2^(t+1) ⊗ C_1^(t+1))D̃^(t) = D^(t)(Σ_2^(t+1) ⊗ Σ_1^(t+1))D^(t), and consequently the value of g is unchanged. Lastly, the update σ_j^(t+1) (j = 1, …, d) is set to the positive solution of ∂g/∂σ_j = 0.

For cleaner notation, we ignore the iteration index when motivating this update. Differentiating g with respect to σ_j gives, with e_i = y_i − β̂′x_i and K = (C_2 ⊗ C_1)⁻¹,

∂g/∂σ_j = 2/σ_j − (2/n) ∑_{i=1}^n e²_{ij} K_{jj}/σ_j³ − (2/n) ∑_{i=1}^n ∑_{k≠j} e_{ij} e_{ik} K_{jk}/(σ_j² σ_k).

Multiplying through the first order condition by σ_j³/2 gives the quadratic equation σ_j² − u σ_j − v = 0, where u = n⁻¹ ∑_{i=1}^n ∑_{k≠j} e_{ij} e_{ik} K_{jk}/σ_k and v = n⁻¹ ∑_{i=1}^n e²_{ij} K_{jj}. Thus, since σ_j has to be positive, σ_j = (u + √(u² + 4v))/2. This root is almost surely positive when the iterates C_1 and C_2 are positive definite, since then K_{jj} > 0 and v > 0 almost surely. By the same argument, the second derivative is positive, and thus the updates are the unique partial minimizers, so the objective does not increase almost surely, for every j.
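
The σ_j update is a one-line root computation; in R (a sketch; E is the n × d residual matrix and Cinv = solve(C2 %x% C1), hypothetical names):

```r
## sigma_j as the positive root of the quadratic derived above.
update_sigma_j <- function(j, sigma, E, Cinv) {
  n <- nrow(E)
  v <- mean(E[, j]^2) * Cinv[j, j]                 # constant term, > 0
  u <- sum(E[, j] * (sweep(E[, -j, drop = FALSE], 2,
           sigma[-j], "/") %*% Cinv[-j, j])) / n   # cross terms
  (u + sqrt(u^2 + 4 * v)) / 2
}
```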

The proposed algorithm by design weakly decreases the objective function at every iteration. Every iteration requires a number of floating point operations that is cubic in r and c, where the cubic terms come from Cholesky decompositions of the iterates of C_1 and C_2, and the remaining cost comes from matrix multiplications and solving linear systems when updating D. There is of course no guarantee the termination point of our algorithm is a global minimizer of g, or that such a minimizer exists in general. Proposition 3 shows a maximum likelihood estimate indeed exists for large enough n.

Proposition 3

When n is large enough, the function g(D, C_1, C_2) has at least one global minimum.

Algorithm 1

Maximum likelihood estimation with separable correlation.

  Initialize σ^(0), C_1^(0), C_2^(0); compute β̂ and the residuals e_i = y_i − β̂′x_i.
  While the decrease in g between iterations exceeds ε:
    Set R_i^(t) by vec(R_i^(t)) = (D^(t))⁻¹ e_i (i = 1, …, n).
    Set Σ_1^(t+1) = (nc)⁻¹ ∑_i R_i^(t) (C_2^(t))⁻¹ (R_i^(t))′.
    Set Σ_2^(t+1) = (nr)⁻¹ ∑_i (R_i^(t))′ (Σ_1^(t+1))⁻¹ R_i^(t).
    Set (σ̃^(t), C_1^(t+1), C_2^(t+1)) to the point equivalent to (σ^(t), Σ_1^(t+1), Σ_2^(t+1)) given by Proposition 2.
    For j = 1, …, d:
      Set σ_j^(t+1) to the positive root (u_j + √(u_j² + 4v_j))/2 of the quadratic above.
  Notation: D^(t) = diag(σ^(t)), and ε is a tolerance parameter.
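
Assembled from the snippets above (beta_hat, update_Sigma1, update_Sigma2, rescale, update_sigma_j, neg_loglik), a compact driver might look like this; it is a sketch under our own naming conventions, not the paper's kpcor implementation.

```r
## End-to-end sketch of Algorithm 1.
fit_sep_cor <- function(Y, X, r, c, eps = 1e-8, maxit = 10000) {
  B <- beta_hat(X, Y)
  E <- Y - X %*% B
  n <- nrow(E)
  sigma <- sqrt(colMeans(E^2))         # initialize D
  C1 <- diag(r); C2 <- diag(c)         # initialize correlations
  g_old <- Inf; g_new <- NA
  for (t in 1:maxit) {
    Es <- sweep(E, 2, sigma, "/")
    R_arr <- array(t(Es), dim = c(r, c, n))  # R_i = vec^-1 of row i
    Sigma1 <- update_Sigma1(R_arr, C2)
    Sigma2 <- update_Sigma2(R_arr, Sigma1)
    st <- rescale(sigma, Sigma1, Sigma2)     # back to the parameter space
    sigma <- st$sigma; C1 <- st$C1; C2 <- st$C2
    Cinv <- solve(kronecker(C2, C1))
    for (j in 1:(r * c)) sigma[j] <- update_sigma_j(j, sigma, E, Cinv)
    g_new <- neg_loglik(E, sigma, C1, C2)
    if (g_old - g_new < eps) break           # converged
    g_old <- g_new
  }
  list(beta = B, sigma = sigma, C1 = C1, C2 = C2, value = g_new)
}
```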

4 Inference

We are primarily interested in two hypothesis tests concerning Σ, namely (a) H₀: Separable correlation v. H₁: Non-separable correlation, and (b) H₀: Separable covariance v. H₁: Separable correlation. For both (a) and (b) we suggest using bootstrapped likelihood ratio tests since, as we shall see in the simulation results, classical likelihood ratio tests for (a) and (b) in general reject far too often under the null. For testing (a) we assume that the unrestricted maximum likelihood estimate of Σ exists, since otherwise the log-likelihood under the alternative is unbounded, implying the test would reject with probability one.

The parametric bootstrap proceeds as follows. Compute β̂ and Σ̂₀, the maximum likelihood estimates under the null hypothesis, using the original dataset. For test (a) Σ̂₀ = Σ̂_ρ and for test (b) Σ̂₀ = Σ̂_K. Generate B independent datasets with n observations from the separable correlation model with β = β̂ and Σ = Σ̂₀. For each of the B thus simulated datasets, compute the maximum likelihood estimates β̂ᵇ and Σ̂ᵇ_A under the alternative hypothesis and the maximum likelihood estimate of the covariance matrix under the null hypothesis, Σ̂ᵇ₀ (b = 1, …, B). For a level α test, reject the null hypothesis if the likelihood ratio obtained using the original data is smaller than the αth empirical quantile of Λ₁, …, Λ_B, where Λ_b = L(β̂ᵇ, Σ̂ᵇ₀)/L(β̂ᵇ, Σ̂ᵇ_A) and L(β, Σ) is the multivariate normal likelihood for y_1, …, y_n with respective means β′x_i (i = 1, …, n) and common covariance matrix Σ.
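
The procedure is easy to script around any pair of fitting routines. The sketch below, for test (b), assumes hypothetical helpers fit_sep_cov (a flip-flop fitter for the separable covariance model), loglik (evaluating the normal log-likelihood at a fitted model), and simulate_from (drawing n new responses from a fitted model); none of these names come from the paper's package.

```r
## Parametric bootstrap likelihood ratio test of separable
## covariance (null) against separable correlation (alternative).
boot_lrt <- function(Y, X, r, c, B = 1000, alpha = 0.05) {
  fit0 <- fit_sep_cov(Y, X, r, c)      # MLE under the null
  fit1 <- fit_sep_cor(Y, X, r, c)      # MLE under the alternative
  lr_obs <- loglik(fit0, Y, X) - loglik(fit1, Y, X)  # log ratio <= 0
  lr_b <- replicate(B, {
    Yb <- simulate_from(fit0, X)       # n new responses under H0
    loglik(fit_sep_cov(Yb, X, r, c), Yb, X) -
      loglik(fit_sep_cor(Yb, X, r, c), Yb, X)
  })
  ## Reject when the observed ratio falls in the lower alpha tail
  ## of the bootstrap distribution of the ratio.
  list(reject = lr_obs < quantile(lr_b, alpha),
       p_value = mean(lr_b <= lr_obs))
}
```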

5 Simulations

The data generating process for the presented simulations is a separable correlation model with zero mean, (C_1)_{jk} = 2^{−|j−k|}, and (C_2)_{jk} = 2^{−|j−k|}. That is, both correlation matrices have a first order autoregressive structure with correlation parameter 1/2. The standard deviation matrix D is either I_d, the identity matrix, so that the separable covariance model is also correct, or the diagonal entries of D are evenly spaced numbers between 0.1 and 10, in which case D is not separable. We also considered other generating processes with, for example, compound symmetric correlation matrices or those drawn randomly from a Wishart distribution (rescaled to have unit diagonal entries), and the results were qualitatively similar. Some additional simulation results can be found in the supplemental material. The maximum number of iterations in the algorithm is set to 10000, and the tolerance parameter ε is set to a small value. One simulated dataset can be generated as in the sketch below.
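
For concreteness, a sketch of the data generation (the mvtnorm package is one way to draw the normal vectors; it is not necessarily what the authors used):

```r
## One simulated dataset: AR(1) correlation factors with parameter
## 1/2 and non-separable standard deviations.
library(mvtnorm)
ar1 <- function(k, rho) rho^abs(outer(1:k, 1:k, "-"))
n <- 20; r <- 5; c <- 5
C <- kronecker(ar1(c, 0.5), ar1(r, 0.5))     # separable correlation
D <- diag(seq(0.1, 10, length.out = r * c))  # non-separable sds
Y <- rmvnorm(n, sigma = D %*% C %*% D)       # zero-mean responses
```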

All estimates in Table 1 are averages over independent simulation runs. If Σ̂ᵐ_ρ is the separable correlation estimator based on the mth simulated dataset, then the reported average spectral norm error is the average of ‖Σ̂ᵐ_ρ − Σ‖ over runs, and similarly for the other estimators, where ‖·‖ denotes the spectral norm of its matrix argument. For all configurations where the separable covariance model is correct, the average spectral norm errors are lower for the estimates using this assumption than those for our model (Table 1). This is what one would expect since the separable covariance model is both correct and has fewer parameters than the separable correlation model. The average error for our model is at worst 60% higher (n = 10, r = 5, c = 15), and at best about 7% higher (n = 320, r = 5, c = 15). Comparisons to the unrestricted maximum likelihood estimator are only made when n > d, i.e. when it exists. The unrestricted estimates have at worst about five times higher error than our estimates on average (n = 320, r = c = 15), and at best about two times higher (n = 160, r = c = 5).
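
The error metric itself is a one-liner (a sketch):

```r
## Spectral norm error, as reported in Table 1: the largest singular
## value of the difference between an estimate and the truth.
spec_err <- function(Sigma_hat, Sigma) max(svd(Sigma_hat - Sigma)$d)
```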

The parametric bootstrap based test has close to nominal size, 0.05, in all settings considered, whether the null hypothesis is separable covariance or separable correlation. On the other hand, the standard likelihood ratio test only has nominal size when n is large in comparison to r and c and the null hypothesis is separable covariance. When the null hypothesis is separable correlation, the usual likelihood ratio test rejects far too often for almost all configurations. Only when n = 320 and r = c = 5 is the size close to nominal, 0.12 compared to 0.05. To assess large sample validity of the test, we also ran a simulation with a larger n for this configuration, and that sample size was large enough to yield near-nominal rejection rates.

When data are generated with separable correlation but non-separable covariance, the separable covariance estimates have lower average spectral norm errors when the sample size is small in comparison to r and c. For larger sample sizes, the separable correlation estimates always have lower average errors. That is, there is a bias–variance trade-off in that fitting the incorrect model may yield lower errors, due to higher precision in the estimates, when the sample size is small. The empirical rejection rates for the bootstrap based test vary between 0.04 and 1.0. Power is low when either the number of observations is small in comparison to r and c, or one of r and c is small, or both. For small n, here n = 10 or n = 20, the power is lower when r = 5 and c = 15 than when r = c = 15, even though n is smaller in comparison to d in the latter case. However, the difference in parameter counts between the two models is (r − 1)(c − 1) = 56 in the former case but 196 in the latter. In some settings, for example when n = 20 and r = c = 5, the separable covariance model seems the better choice if one is interested in estimating the covariance matrix with small error, but statistical testing rejects that model with high probability. The choice between the models depends on the purpose for which they are to be used, not only on which model is correct. When n is large, the bootstrap based test rejected covariance separability in all runs.

Data generated with separable covariance
   n    r   c   Err_ρ   Err_K   Err_U   R_K    R_K,B   R_ρ    R_ρ,B
   10   5   5    2.74    2.20     -     0.29   0.06     -      -
   10   5  15    5.60    3.50     -     0.90   0.02     -      -
   10  15  15    5.52    3.58     -     0.97   0.03     -      -
   20   5   5    1.67    1.49     -     0.12   0.05     -      -
   20   5  15    2.70    2.34     -     0.37   0.06     -      -
   20  15  15    2.92    2.39     -     0.51   0.04     -      -
  160   5   5    0.57    0.52    1.19   0.06   0.06    0.21   0.06
  160   5  15    0.87    0.81    2.44   0.06   0.05    1.00   0.05
  160  15  15    0.92    0.81     -     0.09   0.07     -      -
  320   5   5    0.39    0.36    0.84   0.05   0.05    0.12   0.06
  320   5  15    0.62    0.58    1.70   0.07   0.07    0.97   0.05
  320  15  15    0.65    0.58    3.51   0.05   0.05    1.00   0.04

Data generated with separable correlation
   10   5   5   15.62   13.41     -     0.87   0.50     -      -
   10   5  15   34.73   20.35     -     0.95   0.04     -      -
   10  15  15   33.86   22.43     -     1.00   0.14     -      -
   20   5   5    9.87    9.69     -     0.99   0.97     -      -
   20   5  15   15.95   13.89     -     0.82   0.36     -      -
   20  15  15   18.48   15.23     -     0.94   0.50     -      -
  160   5   5    3.28    5.16    6.53   1.00   1.00    0.23   0.06
  160   5  15    5.31    5.46   13.86   1.00   1.00    1.00   0.05
  160  15  15    6.01    6.37     -     1.00   1.00     -      -
  320   5   5    2.37    4.82    4.63   1.00   1.00    0.12   0.06
  320   5  15    3.68    4.19    9.63   1.00   1.00    0.97   0.05
  320  15  15    4.23    5.17   19.59   1.00   1.00    1.00   0.05

  • Columns labeled Err show average spectral norm errors; the subscripts ρ, K, and U indicate the separable correlation, separable covariance, and unrestricted estimators. Columns labeled R show empirical rejection rates; the subscripts K and ρ indicate the null hypotheses of covariance separability and correlation separability, respectively, and a second subscript B indicates that the parametric bootstrap was used. With one exception, the largest standard errors for entries in columns 4 – 10 are, respectively, 0.04, 0.001, 0.02, 0.02, 0.02, 0.02, and 0.01. The standard error for the one exceptional entry is 0.08.

Table 1: Estimation Error and Test Size

The proposed algorithm can terminate in four ways: it converges, meaning the change between two iterations in the objective function is less than some threshold ε; the maximum number of iterations is reached; an update of Σ_1 is indefinite; or an update of Σ_2 is indefinite. We have performed extensive simulations examining what causes the algorithm to terminate for different values of n, r, and c. A summary of these results is in the supplemental material. In general, the algorithm always converges when n is slightly greater than what is needed for the updates to remain positive definite. For the smallest sample sizes, the algorithm terminates because an iterate of Σ_1 or Σ_2 is indefinite in almost every simulation run. For somewhat larger sample sizes, the algorithm either converges, reaches the maximum number of iterations, or terminates because an update of Σ_1 or Σ_2 is indefinite. For larger n still, all updates are positive definite, but sometimes the algorithm does not converge before the maximum number of iterations (1000) is reached. The pattern is similar for different configurations of our model. If we take reaching the maximum number of iterations as a sign of there not existing a unique maximum of the log-likelihood, these results are qualitatively consistent with theory developed by Soloveychik and Trushin (2016), who show that, with separable covariance, no unique minimum exists if n is below one cutoff, whereas if n is above a second, larger cutoff a unique minimum exists almost surely, and the flip-flop algorithm converges to this unique minimum almost surely from any starting point. For n in the gap between these two cutoffs, a unique minimum exists with positive probability strictly less than one.

6 Data Example

We illustrate our model using data on dissolved oxygen concentration in the Mississippi River and are interested in estimating the covariance matrix for measurements from a number of different areas and time points. Additional details on the modeling choices can be found in the supplemental material. We consider a simple mean structure where the regressors are those resulting from fitting cubic splines in the year index; this is done to remove systematic trends from the data. Since the choice of predictors does not affect the estimation procedure for the covariance parameters, we do not discuss it in detail.

Our data consist of n = 21 years (1994 – 2002, 2004 – 2015) of quarterly measurements from r areas of the Upper Mississippi River. Observations from winter are excluded, since water is typically mostly frozen in the northernmost sampling areas in winter, so c = 3. The areas represent sampling stratum–river reach combinations; the data were collected by the US Army Corps of Engineers’ Upper Mississippi River Restoration Program Long Term Resource Monitoring element (Johnson and Hagerty, 2008).

We treat dissolved oxygen measurements as independent among years. In the raw data there are several measurements for every year, season, and area. For convenience, we model the sample means of these measurements. That is, we let ȳ_{jkt} denote the sample mean of the dissolved oxygen measurements in season j, area k, and year t. For the sample means data, the separable correlation model says that the correlations between mean dissolved oxygen concentrations in spring, summer, and fall are the same for all areas. Additionally, the correlations between mean concentrations from different areas are the same in all seasons. Without imposing additional restrictions, other than independence among years, the covariance matrix would still be of size 3r × 3r in this example, and we only have 21 years of data. Thus, in this example, correlation separability is motivated both as a parsimonious parameterization enabling maximum likelihood estimation, and as a reflection of the spatio-temporal structure of the data. It is also of scientific interest to examine whether, in fact, the separable covariance model can be applied to our data.

We ran the parametric bootstrap test of separable covariance v. separable correlation as described in Section 4. In none of the bootstrap datasets did we observe a likelihood ratio test statistic larger than what was observed in the real data, suggesting a p-value less than one over the number of bootstrap datasets. The parametric bootstrap of course relies on the model being a good approximation for the data generating process. It is outside the scope of this article to examine this rigorously, but as a quick check we plotted quantiles of the standardized residuals against those of a normal distribution. This revealed a possible thick left tail but otherwise good agreement between theoretical and empirical quantiles; the plot is excluded for brevity.

The null hypothesis of correlation separability cannot be formally tested with a likelihood ratio test since the unrestricted maximum likelihood estimate does not exist here. However, inspecting the residual covariance matrix and comparing it to the estimate from the separable correlation model may still give some informal indications of how well the model fits. In the heatmap of the residual covariance matrix in Fig. 1, there is a larger variation in off-block-diagonal covariances than in the heatmap of the separable correlation estimate. The block-diagonal structure with three main blocks is more pronounced in the plot of the estimate with separable correlation. These differences may indicate either a lack of fit or that the residual covariance matrix is picking up on noise in the data. The three blocks, from lower left to upper right, correspond to measurements from spring, summer, and fall. The lower off-block-diagonal covariances indicate that dependence between observations from different seasons is weak in general. The estimated variance of one response element is particularly large; this can be seen also in Fig. 2. At the sampling location corresponding to this element, the river exhibits an unusual combination of deep water and low flow; the measurement is also taken in fall, when flow is generally lower than in spring and summer (J. Rogala, personal communication, 29 Jan 2018). Together, these characteristics might lead to larger swings in average concentrations of dissolved oxygen among years than for other reach–stratum combinations in fall, thus offering a possible physical explanation for our finding. Assuming it is a real finding, it is an important discovery that is inconsistent with separable covariance, where any season effect on the variance of measurements has to be the same for every sampling location. Plots of the variance estimates in Fig. 2 reveal other elements of the response vector where the difference between the models is even greater. For example, the separable correlation estimates indicate variation is particularly large at sampling location 11 in spring, that is, for element 11 of the response vector. Although outside the scope of this paper, investigating further why variability in measurements is particularly large at this location and season could lead to interesting scientific findings that would not be discovered if assuming separable covariance.

(a) Residual covariance matrix
(b) Estimate with separable correlation
Figure 1: Estimated covariances for dissolved oxygen data. Opacity indicates covariance.
Figure 2: Estimated variances for dissolved oxygen data. Dots and triangles correspond to estimates from separable correlation and separable covariance models, respectively. The three panels are, from left to right, spring, summer, and fall.

7 Discussion

In settings where the data have a two-dimensional structure, the separable correlation model is a natural alternative to consider; it occupies a middle ground between separable and unstructured covariance.

By updating optimization variables in blocks and using rescaling, we circumvent the need for more advanced, constrained optimization methods over sets of correlation matrices. Our algorithm also has closed form updates at every iteration, making it computationally convenient. Lu and Zimmerman (2005) mention that the flip-flop algorithm is between 50 and 5000 times faster than a Newton–Raphson algorithm applied to their problem with separable covariance, and the difference increases with the dimension of the optimization problem. Given the similarities, we expect our algorithm to also outperform second order descent algorithms in terms of wall clock time, even though updating and rescaling D, C_1, and C_2 is more time-consuming in our model than in the separable covariance model. Our algorithm can be extended to handle constraints on the correlation matrices C_1 and C_2 by simply changing the corresponding update, without having to change anything else. Code for an R package (R Core Team, 2018) implementing the proposed algorithm for fitting the separable correlation model is available at Ekvall (2018).

Deriving analytical bounds for when minima exist and are unique in our model, similar to those in Soloveychik and Trushin (2016), is an avenue for future research. Their proof does not immediately carry over to our setting since the negative log-likelihood of our model is not geodesically convex in the way that of the separable covariance model is.

Further years of sampling would permit formal testing of the validity of the assumed separability against an unstructured covariance matrix in our data example. Restrictions on either the spatial or temporal autocorrelation could also be incorporated in such a study, building on the work of Szczepańska-Álvarez et al. (2017), as could the inclusion of habitat predictors and an analysis of how the assumed covariance structure affects inference about the regression coefficient β.

Acknowledgments

The authors thank Aaron Molstad and Daniel Eck for suggestions that led to the improvement of this article. Ekvall gratefully acknowledges support by the American – Scandinavian Foundation and the U.S. Geological Survey. The dissolved oxygen data used in this study are available from the U.S. Army Corps of Engineers’ Upper Mississippi River Restoration Program at https://www.umesc.usgs.gov/data_library/water_quality/water1_query.shtml. Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

Supplementary material

The supplementary material includes proofs of propositions, additional simulation results, more details regarding the data example, and convergence diagnostics for the proposed algorithm.

References

  • Cressie and Wikle (2011) Cressie, N. and Wikle, C. K. (2011). Statistics for Spatio-Temporal Data. Wiley.
  • Ding and Cook (2017) Ding, S. and Cook, R. D. (2017). Matrix variate regressions and envelope models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(2):387–408.
  • Dutilleul (1999) Dutilleul, P. (1999). The MLE algorithm for the matrix normal distribution. Journal of Statistical Computation and Simulation, 64(2):105–123.
  • Dutilleul and Pinel-Alloul (1996) Dutilleul, P. and Pinel-Alloul, B. (1996). A doubly multivariate model for statistical analysis of spatio-temporal environmental data. Environmetrics, 7(6):551–565.
  • Ekvall (2018) Ekvall, K. O. (2018). kpcor. https://github.com/koekvall/kpcor.
  • Fan et al. (2016) Fan, J., Liao, Y., and Liu, H. (2016). An overview of the estimation of large covariance and precision matrices. The Econometrics Journal, 19(1):C1–C32.
  • Johnson and Hagerty (2008) Johnson, B. L. and Hagerty, K. H. (2008). Status and trends of selected resources in the Upper Mississippi River system. Technical report, U.S. Geological Survey, Upper Midwest Environmental Sciences Center. https://pubs.usgs.gov/mis/LTRMP2008-T002/pdf/LTRMP2008-T002_web.pdf (accessed 18 April 2018).
  • Lu and Zimmerman (2005) Lu, N. and Zimmerman, D. L. (2005). The likelihood ratio test for a separable covariance matrix. Statistics & Probability Letters, 73(4):449–457.
  • Matsuda and Yajima (2004) Matsuda, Y. and Yajima, Y. (2004). On testing for separable correlations of multivariate time series. Journal of Time Series Analysis, 25(4):501–528.
  • R Core Team (2018) R Core Team (2018). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
  • Ren et al. (2013) Ren, C., Sun, D., and Sahu, S. K. (2013). Objective Bayesian analysis of spatial models with separable correlation functions. Canadian Journal of Statistics, 41(3):488–507.
  • Rencher and Christensen (2012) Rencher, A. C. and Christensen, W. F. (2012). Methods of Multivariate Analysis. Wiley.
  • Soloveychik and Trushin (2016) Soloveychik, I. and Trushin, D. (2016). Gaussian and robust Kronecker product covariance estimation: Existence and uniqueness. Journal of Multivariate Analysis, 149:92–113.
  • Szczepańska-Álvarez et al. (2017) Szczepańska-Álvarez, A., Hao, C., Liang, Y., and von Rosen, D. (2017). Estimation equations for multivariate linear models with Kronecker structured covariance matrices. Communications in Statistics – Theory and Methods, 46(16):7902–7915.