# On the Relationship between Conditional (CAR) and Simultaneous (SAR) Autoregressive Models

We clarify relationships between conditional (CAR) and simultaneous (SAR) autoregressive models. We review the literature on this topic and find that it is mostly incomplete. Our main result is that a SAR model can be written as a unique CAR model, and while a CAR model can be written as a SAR model, that SAR model is not unique. In fact, we show how any multivariate Gaussian distribution on a finite set of points with a positive-definite covariance matrix can be written as either a CAR or a SAR model. We illustrate how to obtain any number of SAR covariance matrices from a single CAR covariance matrix by using Givens rotation matrices on a simulated example. We also discuss sparseness of the weights matrix in the original CAR construction and in the resulting SAR construction. For a real example, we use crime data in 49 neighborhoods of Columbus, Ohio, and show that a geostatistical model optimizes the likelihood much better than typical first-order CAR models. We then use the weights implied by the geostatistical model to estimate CAR model parameters, which provides the best overall likelihood optimization.


## 1 Introduction

Cressie (1993, p. 8) divides statistical models for data collected at spatial locations into two broad classes: 1) geostatistical models with continuous spatial support, and 2) lattice models, also called areal models (Banerjee et al., 2004), where data occur on a (possibly irregular) grid, or lattice, with a countable set of nodes or locations. The two most common lattice models are the conditional autoregressive (CAR) and simultaneous autoregressive (SAR) models, both notable for sparseness of their precision matrices. These autoregressive models are ubiquitous in many fields, including disease mapping (e.g., Clayton and Kaldor, 1987; Lawson, 2013), agriculture (Cullis and Gleeson, 1991; Besag and Higdon, 1999), econometrics (Anselin, 1988; LeSage and Pace, 2009), ecology (Lichstein et al., 2002; Kissling and Carl, 2008), and image analysis (Besag, 1986; Li, 2009). CAR models form the basis for Gaussian Markov random fields (Rue and Held, 2005) and the popular integrated nested Laplace approximation methods (INLA, Rue et al., 2009), and SAR models are popular in geographic information systems (GIS) with the GeoDa software (Anselin et al., 2006). Hence, both CAR and SAR models serve as the basis for countless scientific conclusions. Because these are the two most common classes of models for lattice data, it is natural to compare and contrast them. There has been sporadic interest in studying the relationships between CAR and SAR models (e.g., Wall, 2004), and how one model might or might not be expressed in terms of the other (Haining, 1990; Cressie, 1993; Martin, 1987; Waller and Gotway, 2004), but there is little clarity in the existing literature on the relationships between these two classes of autoregressive models.

Our goal is to clarify, and add to, the existing literature on the relationships between CAR and SAR covariance matrices, by showing that any positive-definite covariance matrix for a multivariate Gaussian distribution on a finite set of points can be written as either a CAR or a SAR covariance matrix, and hence any valid SAR covariance matrix can be expressed as a valid CAR covariance matrix, and vice versa. This result shows that on a finite dimensional space, both SAR and CAR models are completely general models for spatial covariance, able to capture any positive-definite covariance. While CAR and SAR models are among the most commonly-used spatial statistical models, this correspondence between them, and the generality of both models, has not been fully described before now. These results also shed light on some previous literature.

This paper is organized as follows: In Section 2, we review SAR and CAR models and lay out necessary conditions for these models. In Section 3, we provide theorems that show how to obtain SAR and CAR covariance matrices from any positive definite covariance matrix, which also establishes the relationship between CAR and SAR covariance matrices. In Section 4, we provide examples of obtaining SAR covariance matrices from a CAR covariance matrix on fabricated data, and a real example for obtaining a CAR covariance matrix for a geostatistical covariance matrix. Finally, in Section 5, we conclude with a detailed discussion of the incomplete results of previous literature.

## 2 Review of SAR and CAR models

In what follows, we denote matrices with bold capital letters and their elements with lower-case letters with row and column subscripts; for example, the $(i,j)$th element of $\mathbf{A}$ is $a_{i,j}$. Vectors are denoted as lower-case bold letters. Let $\mathbf{Z} \equiv (Z_1, Z_2, \ldots, Z_n)^{\mathsf T}$ be a vector of $n$ random variables at the nodes of a graph (or junctions of a lattice). The edges in the graph, or connections in the lattice, define neighbors, which are used to model spatial dependency.

### 2.1 SAR Models

Consider the SAR model with mean zero. An explicit autocorrelation structure is imposed,

$$ \mathbf{Z} = B\mathbf{Z} + \boldsymbol{\nu}, \tag{1} $$

where the spatial dependence matrix $B$ relates $\mathbf{Z}$ to itself, and $\mathrm{var}(\boldsymbol{\nu}) = \Omega$, where $\Omega$ is diagonal with positive values. These models are generally attributed to Whittle (1954). Solving for $\mathbf{Z}$, note that sites cannot depend on themselves, so $B$ will have zeros on the diagonal, and that $(I - B)^{-1}$ must exist (Cressie, 1993; Waller and Gotway, 2004), where $I$ is the identity matrix. Then $\mathbf{Z} \sim \mathrm{N}(\mathbf{0}, \Sigma_{\mathrm{SAR}})$, where

$$ \Sigma_{\mathrm{SAR}} = (I - B)^{-1}\,\Omega\,(I - B^{\mathsf T})^{-1}; \tag{2} $$

see, for example, Cressie (1993, p. 409). The spatial dependence in the SAR model is due to the matrix $B$, which causes the simultaneous autoregression of each random variable on its neighbors. Note that $B$ does not have to be symmetric because it does not appear directly in the inverse of the covariance matrix (i.e., the precision matrix). The covariance matrix must be positive definite. For SAR models, it is enough that $(I - B)$ is nonsingular (i.e., that $(I - B)^{-1}$ exists), because the quadratic form $\mathbf{x}^{\mathsf T}\Sigma_{\mathrm{SAR}}\mathbf{x}$, written as $\mathbf{y}^{\mathsf T}\Omega\,\mathbf{y}$ with $\mathbf{y} = (I - B^{\mathsf T})^{-1}\mathbf{x}$, and with $\Omega$ containing positive diagonal values, ensures $\Sigma_{\mathrm{SAR}}$ will be positive definite.

In summary, the following conditions must be met for $\Sigma_{\mathrm{SAR}}$ in (2) to be a valid SAR covariance matrix:

- **S1**: $(I - B)$ is nonsingular,
- **S2**: $\Omega$ is diagonal with positive elements, and
- **S3**: $b_{i,i} = 0$ for all $i$.
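As a concrete check of conditions S1-S3, the following numpy sketch (our own illustration; the matrices are made up, not from the paper) builds $\Sigma_{\mathrm{SAR}}$ via equation (2) and verifies that it is symmetric and positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical spatial dependence matrix B: zeros on the diagonal (condition S3).
B = rng.uniform(-0.2, 0.2, size=(n, n))
np.fill_diagonal(B, 0.0)

# Omega: diagonal with positive elements (condition S2).
Omega = np.diag(rng.uniform(0.5, 2.0, size=n))

I = np.eye(n)
IB_inv = np.linalg.inv(I - B)          # requires I - B nonsingular (condition S1)

# Equation (2): Sigma_SAR = (I - B)^{-1} Omega (I - B^T)^{-1}
Sigma_SAR = IB_inv @ Omega @ IB_inv.T

# Sigma_SAR is symmetric and positive definite whenever S1-S3 hold.
assert np.allclose(Sigma_SAR, Sigma_SAR.T)
assert np.all(np.linalg.eigvalsh(Sigma_SAR) > 0)
```

Note that $B$ here is deliberately asymmetric; symmetry is not required for a valid SAR covariance matrix.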

### 2.2 CAR models

The term "conditional," in the CAR model, is used because each element of the random process is specified conditionally on the values at the neighboring nodes. Let $Z_i$ be a random variable at the $i$th location, again assuming that the expectation of $Z_i$ is zero for simplicity, and let $z_i$ be its realized value. The CAR model is typically specified as

$$ Z_i \mid \mathbf{z}_{-i} \sim \mathrm{N}\!\left(\sum_{\{j:\,c_{i,j} \neq 0\}} c_{i,j} z_j,\; m_{i,i}\right), \tag{3} $$

where $\mathbf{z}_{-i}$ is the vector of all $z_j$ with $j \neq i$, $C$ is the spatial dependence matrix with $c_{i,j}$ as its $(i,j)$th element, $c_{i,i} = 0$, and $M$ is a diagonal matrix with positive diagonal elements $m_{i,i}$. Note that $m_{i,i}$ may depend on the values in the $i$th row of $C$. In this parameterization, the conditional mean of each $Z_i$ is weighted by values at neighboring nodes. The variance component, $m_{i,i}$, often varies with node $i$, and thus is generally nonstationary. In contrast to SAR models, it is not obvious that (3) leads to a full joint distribution for $\mathbf{Z}$. Besag (1974) used Brook's lemma (Brook, 1964) and the Hammersley-Clifford theorem (Hammersley and Clifford, 1971; Clifford, 1990) to show that, when $(I - C)^{-1}M$ is positive definite, $\mathbf{Z} \sim \mathrm{N}(\mathbf{0}, \Sigma_{\mathrm{CAR}})$, with

$$ \Sigma_{\mathrm{CAR}} = (I - C)^{-1} M. \tag{4} $$

$\Sigma_{\mathrm{CAR}}$ must be symmetric, requiring

$$ \frac{c_{i,j}}{m_{i,i}} = \frac{c_{j,i}}{m_{j,j}}, \quad \forall\, i, j. \tag{5} $$

Most authors describe CAR models as the construction (3), with the condition that $(I - C)^{-1}M$ must be positive definite given the symmetry condition (5). However, a more specific statement is possible on the necessary conditions for $(I - C)$, yielding a condition comparable to S1 for SAR models. We provide a novel proof, Proposition 3 in the Appendix, showing that if $M$ is positive definite along with (5) (forcing symmetry on $\Sigma_{\mathrm{CAR}}$), it is only necessary for $(I - C)$ to have positive eigenvalues for $\Sigma_{\mathrm{CAR}}$ to be positive definite.

In summary, the following conditions must be met for $\Sigma_{\mathrm{CAR}}$ in (4) to be a valid CAR covariance matrix:

- **C1**: $(I - C)$ has positive eigenvalues,
- **C2**: $M$ is diagonal with positive elements,
- **C3**: $c_{i,i} = 0$ for all $i$, and
- **C4**: $c_{i,j}/m_{i,i} = c_{j,i}/m_{j,j}$ for all $i, j$.
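Conditions C1-C4 can likewise be checked numerically. The sketch below (our own illustration) uses a hypothetical symmetric 0/1 neighbor matrix on a line graph, with $C = \rho W$ and $M = I$ so that (5) holds automatically, and verifies that (4) gives a symmetric positive-definite covariance matrix:

```python
import numpy as np

# A small symmetric 0/1 neighbor matrix W on a line graph (an illustrative choice).
n = 5
W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

rho = 0.3                      # within 1/omega_[1] < rho < 1/omega_[N] for this W
C = rho * W                    # c_ii = 0 (condition C3)
M = np.eye(n)                  # diagonal, positive (C2); C symmetric, so (5)/C4 holds

I = np.eye(n)
eig_IC = np.linalg.eigvals(I - C).real
assert np.all(eig_IC > 0)      # condition C1

# Equation (4): Sigma_CAR = (I - C)^{-1} M
Sigma_CAR = np.linalg.inv(I - C) @ M
assert np.allclose(Sigma_CAR, Sigma_CAR.T)
assert np.all(np.linalg.eigvalsh(Sigma_CAR) > 0)
```

With non-constant row sums and row-standardized weights, $M$ would instead need to be proportional to $\mathrm{diag}(W\mathbf{1})^{-1}$ to satisfy (5), as in (6) below.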

### 2.3 Weights Matrices

In practice, $C = \rho_c W$ and $B = \rho_b W$ are usually used to construct valid CAR and SAR models, where $W$ is a weights matrix with $w_{i,j} \neq 0$ when locations $i$ and $j$ are neighbors, and $w_{i,j} = 0$ otherwise. Neighbors are typically pre-specified by the modeler. When $i$ and $j$ are neighbors, we often set $w_{i,j} = 1$, or use row standardization so that $\sum_j w_{i,j} = 1$; that is, dividing each row of the unstandardized $W$ by its row sum $w_{i,+} = \sum_j w_{i,j}$ yields an asymmetric row-standardized matrix that we denote $W_+$. For CAR models, define $M_+$ as the diagonal matrix with $m_{i,i} = 1/w_{i,+}$; then (5) is satisfied. The row-standardized CAR model can be written equivalently as

$$ \Sigma_+ = \sigma^2 (I - \rho_c W_+)^{-1} M_+ = \sigma^2 \bigl(\mathrm{diag}(W\mathbf{1}) - \rho_c W\bigr)^{-1}, \tag{6} $$

where $\mathbf{1}$ is a vector of all ones, $\sigma^2$ is an overall variance parameter, and $\mathrm{diag}(\cdot)$ creates a diagonal matrix from a vector. A special case of the CAR model, called the intrinsic autoregressive model (IAR) (Besag and Kooperberg, 1995), occurs when $\rho_c = 1$, but its covariance matrix does not exist, so we do not consider it further.
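The two forms in (6) are algebraically identical, since $(I - \rho_c W_+)^{-1} M_+ = [M_+^{-1}(I - \rho_c W_+)]^{-1} = [\mathrm{diag}(W\mathbf{1}) - \rho_c W]^{-1}$. A minimal numpy sketch (our own, using a hypothetical 2-by-3 rook-neighbor lattice) confirms this:

```python
import numpy as np

# Symmetric 0/1 neighbor matrix on a 2 x 3 grid (rook neighbors), a hypothetical lattice.
W = np.array([[0, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 0, 0, 0, 1],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 0]], dtype=float)

row_sums = W.sum(axis=1)                  # w_{i,+}
W_plus = W / row_sums[:, None]            # row-standardized weights
M_plus = np.diag(1.0 / row_sums)          # m_{i,i} = 1 / w_{i,+}, satisfying (5)

rho_c, sigma2 = 0.5, 2.0
I = np.eye(6)

# Equation (6), both forms:
Sigma_a = sigma2 * np.linalg.inv(I - rho_c * W_plus) @ M_plus
Sigma_b = sigma2 * np.linalg.inv(np.diag(row_sums) - rho_c * W)
assert np.allclose(Sigma_a, Sigma_b)
```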

There can be confusion on how $\rho$ is constrained for SAR and CAR models, which we now clarify. Suppose that $W$ has all real eigenvalues. Let $\{\omega_i\}$ be the set of eigenvalues of $W$, and let $\{\lambda_i\}$ be the set of eigenvalues of $I - \rho W$. Then, in the Appendix (Proposition 4), we show that $\{\lambda_i\} = \{1 - \rho\,\omega_i\}$. First, notice that if $\omega_i = 0$, then $\lambda_i = 1$ for any $\rho$. Hence, $I - \rho W$ will be nonsingular for all $\rho \notin \{1/\omega_i : \omega_i \neq 0\}$, which is sufficient for SAR model condition S1. Note that it is possible for all $\lambda_i$ to be positive even when $W$ has some zero eigenvalues, and thus our result is more general than that of Li et al. (2007), who only consider the case where all $\omega_i \neq 0$. If any $\omega_i \neq 0$, then at least two are nonzero, because $\sum_i \omega_i = \operatorname{trace}(W) = 0$. If at least two eigenvalues are nonzero, then $\omega_{[1]}$, the smallest eigenvalue of $W$, must be less than zero, and $\omega_{[N]}$, the largest eigenvalue of $W$, must be greater than zero. Then $1/\omega_{[1]} < \rho < 1/\omega_{[N]}$ ensures that $I - \rho W$ has positive eigenvalues (Appendix, Proposition 4), satisfying condition C1 for CAR models. For SAR models, if $I - \rho W$ has positive eigenvalues it is also nonsingular, so $1/\omega_{[1]} < \rho < 1/\omega_{[N]}$ provides a sufficient (but not necessary) condition for S1.

In practice, the restriction $1/\omega_{[1]} < \rho < 1/\omega_{[N]}$ is often used for both CAR and SAR models. When considering the row-standardized matrix $W_+$, the restriction becomes $1/\omega_{[1]} < \rho < 1$, where usually $1/\omega_{[1]} < -1$. Wall (2004) shows irregularities for negative values of $\rho$ near the lower bound for both SAR and CAR models, so many modelers simply use $-1 < \rho < 1$. In fact, in many cases only positive autocorrelation is expected, so the further restriction $0 \le \rho < 1$ is used (e.g., Li et al., 2007). For these constructions, models typically have more positive marginal autocorrelation with increasing positive $\rho$, and more negative marginal autocorrelation with decreasing negative $\rho$ (Wall, 2004). There has been little research on the behavior of SAR models for $\rho$ outside of these limits.
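These eigenvalue bounds are easy to check directly. The sketch below (our own illustration, using a hypothetical 4-cycle neighbor matrix) verifies that $I - \rho W$ has positive eigenvalues for all $\rho$ strictly inside $(1/\omega_{[1]},\, 1/\omega_{[N]})$, even though this $W$ has zero eigenvalues:

```python
import numpy as np

# Symmetric 0/1 neighbor matrix for a 4-cycle; eigenvalues are -2, 0, 0, 2.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

omega = np.linalg.eigvalsh(W)                   # real eigenvalues of symmetric W
lo, hi = 1.0 / omega.min(), 1.0 / omega.max()   # 1/omega_[1] < rho < 1/omega_[N]
assert np.isclose(omega.sum(), 0.0)             # trace(W) = 0 forces mixed signs

for rho in np.linspace(lo + 1e-6, hi - 1e-6, 7):
    lam = 1.0 - rho * omega                     # eigenvalues of I - rho W
    assert np.all(lam > 0)                      # valid for CAR (C1) and SAR (S1)
```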

Our goal is to develop relationships that allow a CAR covariance matrix, satisfying conditions C1-C4, to be obtained from a SAR covariance matrix, satisfying conditions S1-S3, and vice versa. We develop these in the next section and, in the Discussion and Conclusions section, we contrast our results with the incomplete results of previous literature.

## 3 Relationships between CAR and SAR models

Assume a covariance matrix $\Sigma_{\mathrm{SAR}}$ for a SAR model as given in (2), and a covariance matrix $\Sigma_{\mathrm{CAR}}$ for a CAR model as given in (4). We show that any zero-mean Gaussian distribution on a finite set of points, $\mathbf{Z} \sim \mathrm{N}(\mathbf{0}, \Sigma)$, can be written with a covariance matrix parameterized either as a CAR model, $\Sigma = \Sigma_{\mathrm{CAR}}$, or as a SAR model, $\Sigma = \Sigma_{\mathrm{SAR}}$. It is straightforward to generalize to the case where the mean is nonzero so, for simplicity of notation, we use the zero-mean case. A corollary is that any CAR covariance matrix can be written as a SAR covariance matrix, and vice versa. Before proving the theorems, some preliminary results are useful.

###### Proposition 1.

If $D$ is a square diagonal matrix, and $P$ is a square matrix with zeros on the diagonal, then, provided the matrices are conformable, both $DP$ and $PD$ have zeros on the diagonal.

###### Proof.

We omit the proof because it is apparent from the algebra of matrix products. ∎

###### Proposition 2.

Let $A$, $B$, and $C$ be square matrices. If $A = BC$, and $B$ and $C$ have inverses, then $A$ has a unique inverse.

###### Proof.

Because $B$ has an inverse, $B^{-1}A = C$, and because $C$ has an inverse, $C^{-1}B^{-1}A = I$, so $A^{-1} = C^{-1}B^{-1}$ exists. It is unique because $A$ is square and full rank (e.g., Harville, 1997, p. 80). ∎

We now prove that both SAR and CAR covariance matrices are sufficiently general to represent any finite-dimensional positive-definite covariance matrix.

###### Theorem 1.

Any positive-definite covariance matrix $\Sigma$ can be expressed as the covariance matrix of a SAR model, $\Sigma = \Sigma_{\mathrm{SAR}}$ in (2), for a (non-unique) pair of matrices $B$ and $\Omega$.

###### Proof.

We give a constructive proof, showing that the constructed matrices $B$ and $\Omega$ satisfy conditions S1-S3.

1. Write $\Sigma^{-1} = LL^{\mathsf T}$, which is possible because $\Sigma^{-1}$ is full rank with positive eigenvalues. Note that $L$ is not unique. A Cholesky decomposition could be used, where $L$ is lower triangular, or a spectral (eigen) decomposition could be used, where $\Sigma = V\Lambda V^{\mathsf T}$, with $V$ containing orthonormal eigenvectors and $\Lambda$ containing eigenvalues on the diagonal and zeros elsewhere. Then $L = V\Lambda^{-1/2}$, where the diagonal matrix $\Lambda^{-1/2}$ contains reciprocals of square roots of the eigenvalues in $\Lambda$.

2. Decompose $L$ into $L = G - P$, where $G$ is diagonal and $P$ has zeros on the diagonal. Then $p_{i,i} = 0$ by construction.

3. Then set

$$ \Omega^{-1} = GG \quad\text{and}\quad B^{\mathsf T} = PG^{-1}. \tag{7} $$

Note that because $\Sigma^{-1}$ has positive eigenvalues, $L$ is nonsingular, and because $G$ is diagonal with $g_{i,i} = \ell_{i,i} \neq 0$ (guaranteed, for example, by the Cholesky decomposition, where $\ell_{i,i} > 0$), $G^{-1}$ exists.

Then $\Sigma^{-1} = LL^{\mathsf T} = (G - P)(G - P)^{\mathsf T} = (I - B^{\mathsf T})\,\Omega^{-1}(I - B)$, so $\Sigma = \Sigma_{\mathrm{SAR}}$ is expressed in SAR form (2). The matrices $B$ and $\Omega$ satisfy S1-S3, as follows.

1. Note that $I - B^{\mathsf T} = (G - P)G^{-1} = LG^{-1}$, so $(I - B^{\mathsf T})$ and $(I - B) = G^{-1}L^{\mathsf T}$ are each products of two nonsingular matrices. Then, by Proposition 2, $(I - B)^{-1}$ and $(I - B^{\mathsf T})^{-1}$ exist, satisfying S1.

2. Because $G$ is diagonal, $\Omega = (GG)^{-1}$ is diagonal with $\omega_{i,i} = 1/g_{i,i}^{2} > 0$, satisfying S2.

3. By Proposition 1, $b_{i,i} = 0$ because $B^{\mathsf T} = PG^{-1}$ with $p_{i,i} = 0$, satisfying S3. ∎
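The constructive steps of Theorem 1 translate directly into code. The following numpy sketch (our own illustration, using a random positive-definite $\Sigma$) implements equation (7) from a Cholesky root and verifies S2, S3, and that (2) reproduces $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Any positive-definite Sigma (here a random one, for illustration).
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)

# Step 1: a root of the precision matrix, Sigma^{-1} = L L^T (Cholesky here).
L = np.linalg.cholesky(np.linalg.inv(Sigma))

# Step 2: L = G - P with G diagonal, P zero on the diagonal.
G = np.diag(np.diag(L))
P = G - L

# Step 3, equation (7): Omega^{-1} = G G and B^T = P G^{-1}.
B = (P @ np.linalg.inv(G)).T
Omega = np.linalg.inv(G @ G)

# Verify: B has zero diagonal (S3), Omega diagonal positive (S2),
# and equation (2) reproduces Sigma.
I = np.eye(n)
Sigma_SAR = np.linalg.inv(I - B) @ Omega @ np.linalg.inv(I - B.T)
assert np.allclose(np.diag(B), 0.0)
assert np.all(np.diag(Omega) > 0)
assert np.allclose(Sigma_SAR, Sigma)
```

Replacing the Cholesky root with the spectral root $L = V\Lambda^{-1/2}$ gives a different $B$ with the same covariance matrix, illustrating the non-uniqueness.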

###### Theorem 2.

Any positive-definite covariance matrix $\Sigma$ can be expressed as the covariance matrix of a CAR model, $\Sigma = \Sigma_{\mathrm{CAR}}$ in (4), for a unique pair of matrices $C$ and $M$ (Cressie, 1993, p. 434).

###### Proof.

We add an explicit, constructive proof of the result given by Cressie (1993, p. 434) by showing that the matrices $C$ and $M$ are unique and satisfy conditions C1-C4.

1. Let $Q = \Sigma^{-1}$ and decompose it into $Q = D - R$, where $D$ is diagonal with elements $d_{i,i} = q_{i,i}$ (the diagonal elements of the precision matrix $Q$), and $R$ has zeros on the diagonal ($r_{i,i} = 0$) and off-diagonal elements $r_{i,j} = -q_{i,j}$.

2. Set

$$ C = D^{-1}R \quad\text{and}\quad M = D^{-1}. \tag{8} $$

Then $\Sigma^{-1} = D - R = D(I - D^{-1}R) = M^{-1}(I - C)$, so $\Sigma = (I - C)^{-1}M$ is expressed in CAR form (4). The matrices $C$ and $M$ from (8) are uniquely determined by $\Sigma$ because $\Sigma$ and $D$ have unique inverses, and they satisfy C1-C4, as follows.

1. $D$ is diagonal with positive values, so $D$ and $M = D^{-1}$ are positive definite. By hypothesis, $\Sigma$, and hence $\Sigma^{-1}$, are positive definite. Then $\Sigma^{-1} = M^{-1}(I - C)$, so by Proposition 3 in the Appendix, $(I - C)$ has positive eigenvalues, satisfying C1.

2. $M = D^{-1}$, and because $\Sigma^{-1}$ is positive definite, we have that $q_{i,i} > 0$. Thus, each $m_{i,i} = 1/q_{i,i} > 0$. By construction, $m_{i,j} = 0$ for $i \neq j$, satisfying C2.

3. By Proposition 1, $c_{i,i} = 0$ because $C = D^{-1}R$ with $r_{i,i} = 0$, satisfying C3.

4. For $C = D^{-1}R$, we have that $c_{i,j} = d_{i,i}^{-1}\,r_{i,j}$. As $m_{i,i} = d_{i,i}^{-1}$, we have that

$$ \frac{c_{i,j}}{m_{i,i}} = \frac{d_{i,i}^{-1}\,r_{i,j}}{d_{i,i}^{-1}} = r_{i,j} = -q_{i,j}. $$

Because $Q$ is symmetric, $q_{i,j} = q_{j,i}$, and so $c_{i,j}/m_{i,i} = c_{j,i}/m_{j,j}$, satisfying C4. ∎
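Theorem 2 is even shorter in code. This numpy sketch (our own illustration) extracts the unique $C$ and $M$ of equation (8) from a random positive-definite $\Sigma$ and verifies C2-C4 and that (4) reproduces $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# Any positive-definite Sigma.
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)

# Step 1: Q = Sigma^{-1} = D - R, with D = diag(Q) and R = D - Q (zero diagonal).
Q = np.linalg.inv(Sigma)
D = np.diag(np.diag(Q))
R = D - Q

# Step 2, equation (8): C = D^{-1} R and M = D^{-1}.
D_inv = np.linalg.inv(D)
C, M = D_inv @ R, D_inv

# Verify: (4) reproduces Sigma, c_ii = 0 (C3), M diagonal positive (C2),
# and the symmetry condition (5): c_ij / m_ii = c_ji / m_jj (C4).
I = np.eye(n)
assert np.allclose(np.linalg.inv(I - C) @ M, Sigma)
assert np.allclose(np.diag(C), 0.0)
assert np.all(np.diag(M) > 0)
assert np.allclose(C / np.diag(M)[:, None], (C / np.diag(M)[:, None]).T)
```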

Having shown that any positive definite matrix can be expressed as either the covariance matrix of a CAR model or the covariance matrix of a SAR model, we have the following corollary.

###### Corollary 1.

Any SAR model can be written as a unique CAR model, and any CAR model can be written as a non-unique SAR model.

###### Proof.

The proof follows directly by first noting that a SAR model yields a positive-definite covariance matrix, and applying Theorem 2, and then noting that a CAR model yields a positive-definite covariance matrix, and applying Theorem 1. ∎

The following corollary gives more details on the non-unique nature of the SAR models.

###### Corollary 2.

Any positive-definite covariance matrix $\Sigma$ can be expressed using any of an infinite number of $B$ matrices that define the SAR covariance matrix in (2).

###### Proof.

Write $\Sigma^{-1} = LL^{\mathsf T}$ as in Theorem 1. Let $G(\theta)$ be a Givens rotation matrix (Golub and Van Loan, 2013), which is a sparse orthonormal matrix that rotates by angle $\theta$ through the plane spanned by the $i$th and $j$th axes. The elements of $G(\theta)$ are as follows: $g_{k,k} = 1$ for $k \neq i, j$; $g_{i,i} = g_{j,j} = \cos(\theta)$; and $g_{i,j} = \sin(\theta)$ and $g_{j,i} = -\sin(\theta)$. All other entries of $G(\theta)$ are equal to zero. Notice that $\Sigma^{-1} = LG(\theta)\bigl(LG(\theta)\bigr)^{\mathsf T}$, because $G(\theta)G(\theta)^{\mathsf T} = I$. A SAR covariance matrix can be developed as readily for $LG(\theta)$ as for $L$ in the proof of Theorem 1. Any of the infinite values of $\theta \in (0, 2\pi)$ will result in a distinct $LG(\theta)$, leading to a different decomposition into $G - P$, and a different $B$ matrix in (7), but yielding the same positive-definite covariance matrix $\Sigma$. ∎
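The following numpy sketch (our own illustration; the rotation plane and angle are arbitrary choices) applies the Theorem 1 construction to two rotated roots of the same precision matrix, producing different $B$ matrices with an identical SAR covariance matrix:

```python
import numpy as np

def givens(n, i, j, theta):
    # Givens rotation: identity except in rows/columns i and j.
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G

def sar_from_root(L):
    # Theorem 1 construction, equation (7): returns B and Omega.
    G = np.diag(np.diag(L))
    P = G - L
    return (P @ np.linalg.inv(G)).T, np.linalg.inv(G @ G)

rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)
L = np.linalg.cholesky(np.linalg.inv(Sigma))
I = np.eye(n)

# Two different roots L G(theta) of the same precision matrix ...
B0, Om0 = sar_from_root(L)                        # theta = 0
B1, Om1 = sar_from_root(L @ givens(n, 0, 2, 0.3))

# ... give different SAR weights but the identical covariance matrix.
S0 = np.linalg.inv(I - B0) @ Om0 @ np.linalg.inv(I - B0.T)
S1 = np.linalg.inv(I - B1) @ Om1 @ np.linalg.inv(I - B1.T)
assert not np.allclose(B0, B1)
assert np.allclose(S0, Sigma) and np.allclose(S1, Sigma)
```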

### 3.1 Implications of Theorems and Corollaries

Note that additional $B$ matrices defining a fixed positive-definite covariance matrix $\Sigma$ in Corollary 2 could also be obtained by repeated Givens rotations. For example, let $G(\theta_1, \theta_2) = G(\theta_1)G(\theta_2)$ for angles $\theta_1$ and $\theta_2$. Then a new $B$ can be developed from $LG(\theta_1, \theta_2)$ just as readily as those in the proof of Corollary 2. We use this idea extensively in the examples.

Theorem 1 helps clarify the use of $\Omega$. Authors often write the SAR model as $\mathbf{Z} = B\mathbf{Z} + \boldsymbol{\nu}$ with $\mathrm{var}(\boldsymbol{\nu}) = \sigma^2 I$, assuming $\Omega = \sigma^2 I$ in (2). In the proofs of Theorem 1 and Corollary 2, this requires finding a root $L$ of $\Sigma^{-1}$ whose diagonal elements are all equal, so that $\Omega^{-1} = GG \propto I$. It is interesting to consider whether one can always find such an $L$, which would justify the practice of using the simpler form $\Omega = \sigma^2 I$ for SAR models. We leave that as an open question.

In Section 2.3, we discussed how most CAR and SAR models are constructed by constraining $\rho$ in $I - \rho W$. Consider Theorem 1, where $L$ is a lower-triangular Cholesky decomposition. Then $P$ has a zero diagonal and is strictly lower triangular, and so $B^{\mathsf T} = PG^{-1}$ is strictly lower triangular. In this construction, all of the eigenvalues of $B$ are zero. Thus, for SAR models, there are unexplored classes of models that do not depend on the typical construction $B = \rho_b W$.

Most CAR and SAR models are developed such that $C$ and $B$ are sparse matrices, containing mostly zeros, with nonzero elements whose weights depend locally on neighbors. Although we demonstrated how to obtain a CAR covariance matrix from a SAR covariance matrix, and vice versa, there is no guarantee that using a sparse $C$ in a CAR model will yield a sparse $B$ in a SAR model, or vice versa. We explore this idea further in the following examples.

## 4 Examples

We provide two examples: one that primarily illustrates Theorem 1, and a second that uses Theorem 2. In the first, we fabricated a simple neighborhood structure and created a positive-definite covariance matrix by a CAR construction. Using Givens rotation matrices, we then obtained various non-unique SAR covariance matrices from the CAR covariance matrix. We also explore sparseness in the $B$ matrix for SAR models when it is obtained from a sparse $C$ matrix for CAR models.

For the second example, we used real data on neighborhood crimes in Columbus, Ohio. We model the data with the two most common CAR models, using a first-order neighborhood structure with both unstandardized and row-standardized weights. Then, from a positive-definite covariance matrix obtained from a geostatistical model, we obtain the equivalent and unique CAR covariance matrix. We use the weights obtained from the geostatistical covariance matrix to allow further CAR modeling, finding a better likelihood optimization than both the unstandardized and row-standardized first-order CAR models.

Consider the graph in Figure 1a, which shows an example of neighbors for a CAR model. Using one to indicate a neighbor, and zeros elsewhere, the matrix $W$ was used to create the row-standardized matrix $W_+$ in (6). Values of the resulting CAR covariance matrix $\Sigma_+$ in (6) are shown graphically in Figure 1b. For this covariance matrix, the Cholesky decomposition was used to create $L$ as in Theorem 1. Using (7) in Theorem 1, the weights matrix $B$ created from this $L$ is shown in Figure 1c. For the same covariance matrix $\Sigma_+$, we also used the spectral decomposition to create $L$ as in Theorem 1. The weights matrix $B$ created from this $L$, using (7) in Theorem 1, is shown in Figure 1d. Note that the $B$ matrix in Figure 1d is less sparse than in Figure 1c, although they both yield exactly the same covariance matrix by the SAR construction (2), which we verified numerically. Figure 1c also verifies our comments in Section 3.1: there exists some $B$ where all eigenvalues are zero (because $B$ is triangular with all diagonal elements zero).

We also sought to transform the $B$ matrix in Figure 1d to a sparser form using the proof of Corollary 2 and Givens rotations. For a vector $\mathbf{x}$ of length $n$, an index of sparseness (Hoyer, 2004) is

$$ \mathrm{sparseness}(\mathbf{x}) = \frac{\sqrt{n} - \bigl(\sum_i |x_i|\bigr)\big/\sqrt{\sum_i x_i^2}}{\sqrt{n} - 1}, $$

which ranges from zero to one. Ignoring the dimensions of a matrix, we create the matrix function

$$ f(B) = \frac{\sum_{i,j} |b_{i,j}|}{\sqrt{\sum_{i,j} b_{i,j}^2}}, $$

which is a measure of the fullness of a matrix. We propose an iterative algorithm that minimizes $f(B)$ over orthonormal Givens rotations as explained in Corollary 2. Let $L^{*} = L\,G_{i,j}(\theta)$, where $L$ is obtained from the spectral decomposition of $\Sigma_+$ as in the proof of Theorem 1, and $G_{i,j}(\theta)$ is a Givens rotation matrix as in the proof of Corollary 2. Denote by $\theta^{*}$ the value of $\theta$ that minimizes $f(B)$ when $B$ is created from $L^{*}$ by decomposing it into $G - P$ as in Theorem 1. Then set $L_1 = L\,G_{i,j}(\theta^{*})$, where the subscript denotes the first iteration. For the second iteration, let $\theta^{*}$ be the value that minimizes $f(B)$ for $B$ created from $L_2 = L_1 G_{i',j'}(\theta^{*})$, and so on. We cycled through the $(i, j)$ pairs, one pair per iteration, in a coordinate-descent minimization of $f(B)$, passing through all pairs eight times. The value of $f(B)$ at each iteration is plotted in Figure 1e, and the final $B$ matrix is given in Figure 1f. Although we did not achieve the sparsity of Figure 1c, we were able to increase sparseness from the starting $B$ matrix in Figure 1d. Note that the $B$ matrix depicted in Figure 1f yields exactly the same covariance matrix as the $B$ matrices shown in Figures 1c,d. There are undoubtedly better ways to minimize $f(B)$, such as simulated annealing (Kirkpatrick et al., 1983), and there may be alternative optimization criteria; we do not pursue these here. Our goal was to show that it is possible to explore many configurations of matrix weights in SAR models, all producing an equivalent covariance matrix, by using orthonormal Givens rotations of the $L$ matrix.
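The coordinate-descent idea can be sketched as follows (our own minimal implementation on a random positive-definite $\Sigma$, with a grid search over $\theta$ rather than a continuous optimizer, and a guard against rotations that zero out a diagonal element of the root):

```python
import numpy as np

def fullness(B):
    # f(B) = sum(|b_ij|) / sqrt(sum(b_ij^2)); smaller values are sparser.
    return np.abs(B).sum() / np.sqrt((B ** 2).sum())

def givens(n, i, j, theta):
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G

def B_from_root(L):
    # Theorem 1: decompose L = G - P and set B^T = P G^{-1}.
    G = np.diag(np.diag(L))
    return ((G - L) @ np.linalg.inv(G)).T

rng = np.random.default_rng(4)
n = 5
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)

# Spectral root of Sigma^{-1}, which tends to give a full B matrix.
lam, V = np.linalg.eigh(Sigma)
L = V @ np.diag(1.0 / np.sqrt(lam))

f_start = fullness(B_from_root(L))
thetas = np.linspace(0.0, 2 * np.pi, 181)     # includes theta = 0 (no change)
for _ in range(8):                            # eight sweeps over all (i, j) pairs
    for i in range(n - 1):
        for j in range(i + 1, n):
            cands = [L @ givens(n, i, j, t) for t in thetas]
            # keep rotations that leave the diagonal of the root nonzero
            cands = [Lc for Lc in cands if np.abs(np.diag(Lc)).min() > 1e-8]
            if cands:
                L = min(cands, key=lambda Lc: fullness(B_from_root(Lc)))

assert fullness(B_from_root(L)) <= f_start + 1e-12

# The rotated root still reproduces the same covariance matrix.
B = B_from_root(L)
Omega = np.linalg.inv(np.diag(np.diag(L)) ** 2)
I = np.eye(n)
assert np.allclose(np.linalg.inv(I - B) @ Omega @ np.linalg.inv(I - B.T), Sigma)
```

Because $\theta = 0$ is always a candidate, $f(B)$ is non-increasing across iterations, mirroring the coordinate-descent behavior described above.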

### 4.1 Columbus Crime Data

The Columbus data are found in the spdep package (Bivand et al., 2013; Bivand and Piras, 2015) for R (R Core Team, 2016). Figure 2 shows 49 neighborhoods in Columbus, Ohio. We used residential burglaries and vehicle thefts per thousand households in each neighborhood (Anselin, 1988, Table 12.1, p. 189) as the response variable. The spatial pattern among neighborhoods appeared autocorrelated (Figure 2), with higher crime rates in the more central neighborhoods. When analyzing rate data, it is customary to account for population size (e.g., Clayton and Kaldor, 1987), which affects the variance of the rates. However, for illustrative purposes, we used raw rates. A histogram of the data appeared approximately bell-shaped, so we assumed a Gaussian distribution with a covariance matrix containing autocorrelation among locations.

First-order neighbors were also taken from the spdep package, and are shown by white lines in Figure 2. Using a one to indicate a neighbor, and zero otherwise, we denote the matrix of weights as $W$, and the CAR model has $C = \rho W$ and $M = \sigma^2 I$ in (4). Using the eigenvalues of $W$, the bounds for $\rho$ were $-0.335 < \rho < 0.167$. We added a constant independent diagonal component, $\eta^2 I$ (also called the nugget effect in geostatistics), so the covariance matrix was $\Sigma_1 = \sigma^2(I - \rho W)^{-1} + \eta^2 I$. Denote the crime rates as $\mathbf{z}$. We assumed a constant mean, $\mathrm{E}(\mathbf{Z}) = \mu\mathbf{1}$, where $\mathbf{1}$ is a vector of all ones. Let $-2\ell(\boldsymbol{\theta}; \mathbf{z})$ denote minus two times the restricted maximum likelihood (REML, Patterson and Thompson, 1971, 1974) for the crime data, where the set of covariance parameters is $\boldsymbol{\theta} = (\rho, \sigma^2, \eta^2)$. We optimized the likelihood using REML and obtained $-2\ell = 388.83$. Recall that CAR models have nonstationary variances and covariances (e.g., Wall, 2004). The marginal variances of the estimated model are shown in Figure 3a, and the marginal correlations are shown in Figure 4a.

We also optimized the likelihood using the row-standardized weights matrix $W_+$ in (6). In this case, the CAR model has $C = \rho W_+$ and $M = \sigma^2\,\mathrm{diag}(W\mathbf{1})^{-1}$ in (4). Again we added a nugget effect, so $\Sigma_+ = \sigma^2(\mathrm{diag}(W\mathbf{1}) - \rho W)^{-1} + \eta^2 I$. For the set of covariance parameters $\boldsymbol{\theta} = (\rho, \sigma^2, \eta^2)$, we obtained $-2\ell = 397.25$. This shows that the unstandardized weights matrix provides a substantially better likelihood optimization than the row-standardized $W_+$. The marginal variances of the row-standardized model are shown in Figure 3b, and the marginal correlations are shown in Figure 4b. The difference between 388.83 and 397.25 indicates that the choice of weights matrix has a substantial effect for these data.

We next optimized the likelihood using a geostatistical model with a spherical autocorrelation function. Denote the geostatistical correlation matrix as $S$, where

$$ s_{i,j} = \left[1 - 1.5(e_{i,j}/\alpha) + 0.5(e_{i,j}/\alpha)^3\right]\mathcal{I}(e_{i,j} < \alpha), $$

$\mathcal{I}(\cdot)$ is the indicator function, equal to one if its argument is true and zero otherwise, $e_{i,j}$ is the Euclidean distance between the centroids of the $i$th and $j$th polygons in Figure 2, and $\alpha$ is the range parameter. We included a nugget effect, so $\Sigma_s = \sigma^2 S + \eta^2 I$. For the set of covariance parameters $\boldsymbol{\theta} = (\alpha, \sigma^2, \eta^2)$, we obtained $-2\ell = 374.61$. The geostatistical model provides a substantially better optimized likelihood than either the unstandardized or row-standardized CAR model. The marginal variances of geostatistical models are stationary (Figure 3c). The estimated range parameter, $\hat{\alpha}$, is shown by the lower bar in Figure 2; any locations separated by a distance greater than that shown by the bar have zero correlation (Figure 4c).
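The spherical correlation function is straightforward to compute. The sketch below (our own illustration, with made-up coordinates, not the Columbus centroids) builds $S$ and checks its basic properties:

```python
import numpy as np

def spherical_corr(coords, alpha):
    """Spherical correlation matrix from 2-D coordinates and range alpha."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (1 - 1.5 * (d / alpha) + 0.5 * (d / alpha) ** 3) * (d < alpha)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 5.0]])
S = spherical_corr(coords, alpha=3.0)

assert np.allclose(np.diag(S), 1.0)          # unit correlation at distance zero
assert S[0, 3] == 0.0                        # zero beyond the range alpha
assert np.all(np.linalg.eigvalsh(S) > -1e-10)  # valid (nonnegative-definite) in 2-D
```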

It appears that the geostatistical model provides a much better optimized likelihood than the two most commonly-used CAR models. Is it possible to find a CAR model that competes with the geostatistical model? Using Theorem 2, we created $C_s$ and $M_s$ as in (8) from the positive-definite fitted covariance matrix of the geostatistical model, $\hat{\Sigma}_s$. Here, we have a CAR representation that is equivalent to the fitted spherical geostatistical model. Letting $\Sigma = \sigma^2(I - \rho C_s)^{-1}M_s + \eta^2 I$, we optimized for $\boldsymbol{\theta} = (\rho, \sigma^2, \eta^2)$. For $\Sigma$ to be positive definite, the eigenvalues of $C_s$ bound $\rho$ to $-1.104 < \rho < 1.013$. Because $\boldsymbol{\theta} = (1, 1, 0)$ is in the parameter space and recovers $\hat{\Sigma}_s$ exactly, we can do no worse than the spherical geostatistical model. In fact, upon optimizing, we obtained $-2\ell = 373.95$, with $\hat{\rho} = 0.941$, $\hat{\sigma}^2 = 1.01$, and $\hat{\eta}^2 = 0$, a slightly better optimization than the spherical geostatistical model. The marginal variances for this geostatistical-assisted CAR model are shown in Figure 3d, and the marginal correlations are shown in Figure 4d. Note the rather large changes from Figure 3c to Figure 3d, and from Figure 4c to Figure 4d, with seemingly minor changes in $\rho$, from 1 to 0.941, and in $\sigma^2$, from 1 to 1.01. Others have documented rapid changes in CAR model behavior near the parameter boundaries, especially as $\rho$ approaches its upper bound (Besag and Kooperberg, 1995; Wall, 2004).

## 5 Discussion and Conclusions

Haining (1990, p. 89) provided the most comprehensive comparison of the mathematical relationships between CAR and SAR models. He provided several results that we restate using notation from Sections 2.1 and 2.2, and show that some are incorrect or incomplete.

In an attempt to create a CAR covariance matrix from a SAR covariance matrix, assume that $\Sigma_{\mathrm{SAR}}$ is a SAR covariance matrix satisfying conditions S1-S3 with $\Omega = I$ in (2). Let $M = I$ and let $C$ be symmetric in (4) [which omits the important case (6)]. Then, setting the SAR and CAR covariance matrices equal to each other,

$$ (I - C)^{-1} = \bigl[(I - B)(I - B^{\mathsf T})\bigr]^{-1} = (I - B - B^{\mathsf T} + BB^{\mathsf T})^{-1}, \tag{9} $$

and Haining (1990) claims that $C$ can be obtained from $B$ by setting

$$ C = B + B^{\mathsf T} - BB^{\mathsf T}, \tag{10} $$

which is repeated in texts by Waller and Gotway (2004, p. 372) and Schabenberger and Gotway (2005, p. 339), and in the literature (e.g., Dormann et al., 2007). However, aside from the lack of generality due to the assumptions $\Omega = I$, $M = I$, and symmetric $C$, we note that (10) is incomplete and too limited to be useful, as given in the following remark.

###### Remark 1.

Condition C3 in Section 2.2 is not satisfied for $C$ in (10) except when $B$ contains all zeros.

###### Proof.

Because $B$ has zeros on the diagonal, $B + B^{\mathsf T}$ will have zeros on the diagonal. Denote $\mathbf{b}_i^{\mathsf T}$ as the $i$th row of $B$. Then the $i$th diagonal element of $BB^{\mathsf T}$ will be the dot product $\mathbf{b}_i^{\mathsf T}\mathbf{b}_i$, which will be zero only if all elements of $\mathbf{b}_i$ are zero. Hence, $C$ will have zeros on the diagonal only if $B$ contains all zeros. ∎
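Remark 1 is easy to verify numerically. In this sketch (our own, with a random zero-diagonal $B$), the diagonal of the $C$ in (10) equals $-\|\mathbf{b}_i\|^2$, so C3 fails whenever any row of $B$ is nonzero:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5

# A hypothetical SAR weights matrix B with zero diagonal (condition S3).
B = rng.uniform(-0.3, 0.3, size=(n, n))
np.fill_diagonal(B, 0.0)

# Haining's proposed conversion (10): C = B + B^T - B B^T.
C = B + B.T - B @ B.T

# diag(B + B^T) = 0, but diag(B B^T)_i = ||b_i||^2, so diag(C) = -||b_i||^2.
assert np.allclose(np.diag(C), -np.sum(B ** 2, axis=1))
assert np.any(np.diag(C) != 0.0)   # condition C3 is violated
```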

In an attempt to create a SAR covariance matrix from a CAR covariance matrix, assume the same conditions as for (9), and that $\Sigma_{\mathrm{CAR}}$ is a CAR covariance matrix satisfying conditions C1-C4. Let $(I - C) = LL^{\mathsf T}$, where $L$ is a Cholesky decomposition. Haining (1990) suggested $\Omega = I$ and setting $(I - B)$ equal to $L^{\mathsf T}$. However, this is incomplete because condition S3 in Section 2.1 will be satisfied only if $L$ has all ones on the diagonal, which also has limited use.

For another approach to relating SAR and CAR covariance matrices, Haining (1990) described a model in which the errors of a simultaneous autoregression are themselves given a CAR covariance structure (an idea that appears to originate with Martin (1987)). The constructed model is really a SAR model, except that it violates condition S2 by allowing a non-diagonal error covariance matrix. Alternatively, this can be seen as an attempt to create a SAR model from a CAR model by assuming an inverse CAR covariance matrix for the error structure of the SAR model, which gains nothing. Because these arguments are unconvincing, and other authors argue that one cannot go uniquely from a CAR to a SAR (e.g., Mardia, 1990), we can find no further citations for the arguments of Haining (1990) on obtaining a CAR covariance matrix from a SAR covariance matrix.

Cressie (1993, p. 409-410) provided a demonstration of how a SAR covariance matrix with first-order neighbors in $B$ leads to a CAR covariance matrix with third-order neighbors in $C$, and claimed that, generally, there will be no equivalent SAR covariance matrices for first- and second-order CAR covariance matrices. However, our demonstration in Figure 1c shows that a sparse $B$ may be obtained from a sparse CAR model, although it is asymmetric and may not have the usual neighborhood interpretation.

In Section 2.3, we showed that pre-specified weights $W$ are often scaled by $\rho$, and that $\rho$ is often constrained by the eigenvalues of $W$. However, we also discussed, in Section 3.1 and Figure 1c, that weights can be chosen so that all eigenvalues of the dependence matrix are zero, for either CAR or SAR models. We have little information or guidance for developing models where all eigenvalues are zero, and this provides an interesting topic for future research.

Wall (2004) provided a detailed comparison of the marginal correlations implied by various values of $\rho$ when $C$ or $B$ are parameterized as $\rho_c W$ and $\rho_b W$, respectively, but did not develop mathematical relationships between CAR and SAR models. Lindgren et al. (2011) showed that approximations to point-referenced geostatistical models based on a finite element basis expansion can be expressed as CAR models. In his discussion of the same, Kent (2011) noted that, for a given geostatistical model of the Matérn class, one could construct either a CAR or a SAR model that approximates the Matérn model. This indicates a correspondence between CAR and SAR models when used as approximations to continuous-space processes, but does not address the relationship between CAR and SAR models on a native areal support.

Our literature review and discussion showed that there have been scattered efforts to establish mathematical relationships between CAR and SAR models, and that some of the reported relationships are incomplete regarding the conditions under which they hold. With Theorems 1 and 2 and Corollary 1, we demonstrated that any zero-mean Gaussian distribution on a finite set of points, , with positive-definite covariance matrix , can be written as either a CAR or a SAR model, with the important difference that the CAR model is uniquely determined from but the SAR model is not. This equivalence between CAR and SAR models can also have practical applications. In addition to our examples, the full conditional form of the CAR model allows for easy and efficient Gibbs sampling (Banerjee et al., 2004, p. 163) and fully conditional random effects (Banerjee et al., 2004, p. 86). Spatial econometric models, on the other hand, often employ SAR models (LeSage and Pace, 2009), so easy conversion from SAR to CAR models may offer computational advantages in hierarchical models and provide insight into the role of fully conditional random effects. We expect future research will extend our findings on relationships between CAR and SAR models and explore novel applications.
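The uniqueness asymmetry can be sketched computationally. The construction below is our own numerical illustration, under the standard assumptions used throughout (zero diagonals on the autoregression matrices, diagonal error covariances): from the precision matrix Q of an arbitrary positive-definite covariance, the CAR pair (C, M) is pinned down by the requirement that C have zero diagonal, whereas any orthogonal rotation of a square-root factor of Q, such as the Givens rotations used in our simulated example, yields another valid SAR factorization.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Any positive-definite covariance matrix (random, for illustration).
Arand = rng.normal(size=(n, n))
Sigma = Arand @ Arand.T + n * np.eye(n)
Q = np.linalg.inv(Sigma)

# Unique CAR form: Sigma^{-1} = M^{-1}(I - C), M diagonal, diag(C) = 0.
M = np.diag(1.0 / np.diag(Q))
C = np.eye(n) - M @ Q
assert np.allclose(np.diag(C), 0.0)    # zero diagonal, hence unique

# One SAR form from the Cholesky factor: Q = (I-B)' Lambda^{-1} (I-B).
L = np.linalg.cholesky(Q)              # Q = L @ L.T
S = L.T
D = np.diag(np.diag(S))
B = np.eye(n) - np.linalg.solve(D, S)  # zero diagonal by construction
I_B = np.eye(n) - B
print(np.allclose(I_B.T @ (D @ D) @ I_B, Q))  # -> True

# A different square-root factor with the same covariance, via a Givens
# rotation G: (G S)'(G S) = S'S = Q, so it induces another SAR model.
theta = 0.7
G = np.eye(n)
G[0, 0] = G[1, 1] = np.cos(theta)
G[0, 1], G[1, 0] = -np.sin(theta), np.sin(theta)
S2 = G @ S
print(np.allclose(S2.T @ S2, Q))  # -> True
print(np.allclose(S2, S))         # -> False: a genuinely different factor
```

Rescaling S2 by its diagonal, as done for S above, recovers a second SAR model with zero-diagonal B and the same covariance matrix.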

## Acknowledgments

This research began from a working group on network models at the Statistics and Applied Mathematical Sciences Institute (SAMSI) 2014-15 Program on Mathematical and Statistical Ecology. The project received financial support from the National Marine Fisheries Service, NOAA. The findings and conclusions in the paper are those of the NOAA author(s) and do not necessarily represent the views of the reviewers or the National Marine Fisheries Service, NOAA. Any use of trade, product, or firm names does not imply an endorsement by the U.S. Government.

## References

• Anselin (1988) Anselin, L. (1988), Spatial Econometrics: Methods and Models, Dordrecht, the Netherlands: Kluwer Academic Publishers.
• Anselin et al. (2006) Anselin, L., Syabri, I., and Kho, Y. (2006), “GeoDa: an introduction to spatial data analysis,” Geographical Analysis, 38, 5–22.
• Banerjee et al. (2004) Banerjee, S., Carlin, B. P., and Gelfand, A. E. (2004), Hierarchical Modeling and Analysis for Spatial Data, Boca Raton, FL, USA: Chapman and Hall/CRC.
• Besag (1974) Besag, J. (1974), “Spatial Interaction and the Statistical Analysis of Lattice Systems (with discussion),” Journal of the Royal Statistical Society, Series B, 36, 192–236.
• Besag (1986) — (1986), “On the statistical analysis of dirty pictures,” Journal of the Royal Statistical Society: Series B (Methodological), 48, 259–302.
• Besag and Higdon (1999) Besag, J. and Higdon, D. (1999), “Bayesian analysis of agricultural field experiments,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61, 691–746.
• Besag and Kooperberg (1995) Besag, J. and Kooperberg, C. (1995), “On conditional and intrinsic autoregressions,” Biometrika, 82, 733–746.
• Bivand et al. (2013) Bivand, R., Hauke, J., and Kossowski, T. (2013), “Computing the Jacobian in Gaussian spatial autoregressive models: An illustrated comparison of available methods,” Geographical Analysis, 45, 150–179.
• Bivand and Piras (2015) Bivand, R. and Piras, G. (2015), “Comparing Implementations of Estimation Methods for Spatial Econometrics,” Journal of Statistical Software, 63, 1–36.
• Brook (1964) Brook, D. (1964), “On the distinction between the conditional probability and the joint probability approaches in the specification of nearest-neighbour systems,” Biometrika, 51, 481–483.
• Clayton and Kaldor (1987) Clayton, D. and Kaldor, J. (1987), “Empirical Bayes estimates of age-standardized relative risks for use in disease mapping,” Biometrics, 43, 671–681.
• Clifford (1990) Clifford, P. (1990), “Markov random fields in statistics,” in Disorder in Physical Systems: A Volume in Honour of John M. Hammersley, eds. Grimmett, R. G. and Welsh, D. J. A., New York, NY, USA: Oxford University Press, pp. 19–32.
• Cressie (1993) Cressie, N. A. C. (1993), Statistics for Spatial Data, Revised Edition, New York: John Wiley & Sons.
• Cullis and Gleeson (1991) Cullis, B. and Gleeson, A. (1991), “Spatial analysis of field experiments-an extension to two dimensions,” Biometrics, 1449–1460.
• Dormann et al. (2007) Dormann, C. F., McPherson, J. M., Araújo, M. B., Bivand, R., Bolliger, J., Carl, G., Davies, R. G., Hirzel, A., Jetz, W., Kissling, W. D., Kühn, I., Ohlemüller, R., Peres-Neto, P. R., Reineking, B., Schröder, B., Schurr, F. M., and Wilson, R. (2007), “Methods to account for spatial autocorrelation in the analysis of species distributional data: a review,” Ecography, 30, 609–628.
• Golub and Van Loan (2013) Golub, G. H. and Van Loan, C. F. (2013), Matrix Computations, Fourth Edition, Baltimore: Johns Hopkins University Press.
• Haining (1990) Haining, R. (1990), Spatial Data Analysis in the Social and Environmental Sciences, Cambridge, UK: Cambridge University Press.
• Hammersley and Clifford (1971) Hammersley, J. M. and Clifford, P. (1971), “Markov fields on finite graphs and lattices,” Unpublished Manuscript.
• Harville (1997) Harville, D. A. (1997), Matrix Algebra from a Statistician’s Perspective, New York, NY: Springer.
• Hoyer (2004) Hoyer, P. O. (2004), “Non-negative matrix factorization with sparseness constraints,” Journal of Machine Learning Research, 5, 1457–1469.
• Kent (2011) Kent, J. T. (2011), “Discussion on the paper by Lindgren, Rue and Lindström,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73, 423–498.
• Kirkpatrick et al. (1983) Kirkpatrick, S., Gelatt, C. D., Vecchi, M. P., et al. (1983), “Optimization by simulated annealing,” Science, 220, 671–680.
• Kissling and Carl (2008) Kissling, W. D. and Carl, G. (2008), “Spatial autocorrelation and the selection of simultaneous autoregressive models,” Global Ecology and Biogeography, 17, 59–71.
• Lawson (2013) Lawson, A. B. (2013), Statistical Methods in Spatial Epidemiology, Chichester, UK: John Wiley & Sons.
• LeSage and Pace (2009) LeSage, J. and Pace, R. K. (2009), Introduction to Spatial Econometrics, Boca Raton, FL, USA: Chapman and Hall/CRC.
• Li et al. (2007) Li, H., Calder, C. A., and Cressie, N. (2007), “Beyond Moran’s I: testing for spatial dependence based on the spatial autoregressive model,” Geographical Analysis, 39, 357–375.
• Li (2009) Li, S. Z. (2009), Markov random field modeling in image analysis, Springer Science & Business Media.
• Lichstein et al. (2002) Lichstein, J. W., Simons, T. R., Shriner, S. A., and Franzreb, K. E. (2002), “Spatial autocorrelation and autoregressive models in ecology,” Ecological Monographs, 72, 445–463.
• Lindgren et al. (2011) Lindgren, F., Rue, H., and Lindström, J. (2011), “An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73, 423–498.
• Mardia (1990) Mardia, K. V. (1990), “Maximum likelihood estimation for spatial models,” in Spatial Statistics: Past, Present and Future, ed. Griffith, D., Michigan Document Services, Ann Arbor, MI, USA: Institute of Mathematical Geography, Monograph Series, Monograph #12, pp. 203–253.
• Martin (1987) Martin, R. (1987), “Some comments on correction techniques for boundary effects and missing value techniques,” Geographical Analysis, 19, 273–282.
• Patterson and Thompson (1974) Patterson, H. and Thompson, R. (1974), “Maximum likelihood estimation of components of variance,” in Proceedings of the 8th International Biometric Conference, Biometric Society, Washington, DC, pp. 197–207.
• Patterson and Thompson (1971) Patterson, H. D. and Thompson, R. (1971), “Recovery of inter-block information when block sizes are unequal,” Biometrika, 58, 545–554.
• R Core Team (2016) R Core Team (2016), R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria.
• Rue and Held (2005) Rue, H. and Held, L. (2005), Gaussian Markov Random Fields: Theory and Applications, Boca Raton, FL, USA: Chapman and Hall/CRC.
• Rue et al. (2009) Rue, H., Martino, S., and Chopin, N. (2009), “Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71, 319–392.
• Schabenberger and Gotway (2005) Schabenberger, O. and Gotway, C. A. (2005), Statistical Methods for Spatial Data Analysis, Boca Raton, Florida: Chapman Hall/CRC.
• Wall (2004) Wall, M. M. (2004), “A close look at the spatial structure implied by the CAR and SAR models,” Journal of Statistical Planning and Inference, 121, 311–324.
• Waller and Gotway (2004) Waller, L. A. and Gotway, C. A. (2004), Applied Spatial Statistics for Public Health Data, Hoboken, NJ, USA: John Wiley & Sons.
• Whittle (1954) Whittle, P. (1954), “On stationary processes in the plane,” Biometrika, 41, 434–449.

## APPENDIX: Propositions on Weights Matrices

The following proposition is used to show condition 2.2 for CAR models.

###### Proposition 3.

Let , where , , and are square matrices, is symmetric, exists, and is symmetric and positive definite. Then is positive definite if and only if all of the eigenvalues of are positive real numbers.

###### Proof.

(): Let be the matrix such that , and let be the matrix such that . Now, . Then has the same eigenvalues as because they are similar matrices (Harville, 1997, p. 525). If is positive definite, then is positive definite, so all eigenvalues of are positive real numbers.

(): Let , where the columns of contain orthonormal eigenvectors and is a diagonal matrix of eigenvalues that are all positive and real. Because of symmetry, , so . This shows that both and have columns that contain the eigenvectors for , so each column in has a corresponding column in that is a scalar multiple. Let be a diagonal matrix of those scalar multiples, so that . Hence, , and notice that all diagonal elements of will be positive because is positive definite. Also , so . Because and are diagonal, each with all positive real values, is positive definite. ∎

Condition 2.2 is satisfied by letting in (4) be in Proposition 3, by letting in (3) be in Proposition 3 (note that if has all positive eigenvalues, so too does ), and by letting in (4) be in Proposition 3.

Next, we show the conditions on ρ that ensure that I − ρW has either all nonzero eigenvalues or all positive eigenvalues.

###### Proposition 4.

Consider the square matrix I − ρW, where ρ is a real scalar. Let {λ_1, …, λ_N} be the set of eigenvalues of W, and suppose all eigenvalues are real. Then

1. if ρ ≠ 1/λ_i for all nonzero λ_i, then I − ρW is nonsingular, and

2. assume at least two eigenvalues of W are not zero, and let λ_[1] be the smallest eigenvalue of W and λ_[N] be the largest eigenvalue of W. If 1/λ_[1] < ρ < 1/λ_[N], then I − ρW has only positive eigenvalues.

###### Proof.

Let ω be an eigenvalue of I − ρW, so the following holds,

 (I − ρW)x = ωx. (A.1)

Let v_i be the eigenvector of W corresponding to eigenvalue λ_i. Solving for ω in (A.1), let x = v_i. Then,

 v_i − ρWv_i = ω_i v_i ⟹ v_i − ρλ_i v_i = ω_i v_i ⟹ (1 − ρλ_i)v_i = ω_i v_i ⟹ 1 − ρλ_i = ω_i.

Then,

1. I − ρW is nonsingular if all ω_i ≠ 0. If λ_i = 0, then ω_i = 1 ≠ 0 for all ρ; otherwise, ω_i ≠ 0 whenever ρ ≠ 1/λ_i, for all nonzero λ_i.

2. For all i, ω_i = 1 − ρλ_i. If ρ = 0, then ω_i = 1 > 0, and if λ_i = 0, then ω_i = 1 > 0. For all negative λ_i, only ρ > 1/λ_i will ensure ω_i > 0, and for all positive λ_i, only ρ < 1/λ_i will ensure ω_i > 0. Hence 1/λ_[1] < ρ < 1/λ_[N] will guarantee that all eigenvalues of I − ρW are positive. ∎
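Part 2 of the proposition can be checked numerically. The sketch below is our own illustration, assuming a symmetric weights matrix W with zero diagonal, so that its eigenvalues are real and (having zero sum) include both negative and positive values; every ρ strictly inside the stated interval yields a positive-definite I − ρW.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# A symmetric weights matrix with zero diagonal: real eigenvalues
# summing to zero, so the smallest is negative and the largest positive.
W = rng.normal(size=(n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

lam = np.linalg.eigvalsh(W)           # eigenvalues in ascending order
lo, hi = 1.0 / lam[0], 1.0 / lam[-1]  # 1/lambda_[1] < 0 < 1/lambda_[N]

# Any rho strictly inside (1/lambda_[1], 1/lambda_[N]) gives I - rho*W
# with all-positive eigenvalues, as in part 2 of the proposition.
for rho in np.linspace(lo + 1e-6, hi - 1e-6, 7):
    assert np.all(np.linalg.eigvalsh(np.eye(n) - rho * W) > 0)
print("all eigenvalues positive inside the interval")
```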