A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees

07/20/2013 ∙ by Kshitij Khare, et al.

Sparse high dimensional graphical model selection is a topic of much interest in modern day statistics. A popular approach is to apply ℓ1-penalties to either (1) parametric likelihoods, or (2) regularized regression/pseudo-likelihoods, with the latter having the distinct advantage that they do not explicitly assume Gaussianity. Since none of the popular methods proposed for solving pseudo-likelihood based objective functions has provable convergence guarantees, it is not clear whether the corresponding estimators exist or are even computable, or whether they actually yield correct partial correlation graphs. This paper proposes a new pseudo-likelihood based graphical model selection method that aims to overcome some of the shortcomings of current methods while retaining all of their respective strengths. In particular, we introduce a novel framework that leads to a convex formulation of the partial covariance regression graph problem, resulting in an objective function composed of quadratic forms. The objective is then optimized via a coordinate-wise approach. The specific functional form of the objective function facilitates rigorous convergence analysis leading to convergence guarantees, an important property that cannot be established using standard results when the dimension is larger than the sample size, as is often the case in high dimensional applications. These convergence guarantees ensure that the estimators are well-defined under very general conditions and are always computable. In addition, the approach yields estimators that have good large sample properties and that respect symmetry. Furthermore, applications to simulated and real data, timing comparisons and numerical convergence are demonstrated. We also present a novel unifying framework that casts all graphical pseudo-likelihood methods as special cases of a more general formulation, leading to important insights.

1 Introduction

One of the hallmarks of modern day statistics is the advent of high-dimensional datasets arising particularly from applications in the biological sciences, environmental sciences and finance. A central quantity of interest in such applications is the covariance matrix Σ of high dimensional random vectors. It is well known that the sample covariance matrix S can be a poor estimator of Σ, especially when p is large, where n is the sample size and p is the number of variables in the dataset. Hence S is not a useful estimator of Σ for high-dimensional datasets, where often either p is much larger than n ("large p, small n") or p is comparable to n and both are large ("large p, large n"). The basic problem here is that the number of parameters in Σ is of the order p². Hence, in the settings mentioned above, the sample size n is often not large enough to obtain a good estimator.
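For reference, with observation vectors X_1, ..., X_n in R^p (notation introduced here only for illustration), the sample covariance estimator in question is the usual one,

S = \frac{1}{n}\sum_{k=1}^{n}\big(X_k-\bar{X}\big)\big(X_k-\bar{X}\big)^{\top}, \qquad \bar{X} = \frac{1}{n}\sum_{k=1}^{n}X_k,

so that estimating the p(p+1)/2 free parameters of Σ from n observations becomes difficult precisely when p is comparable to, or larger than, n; this is the counting argument behind the remark above.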

For many real life applications, the quantity of interest is the inverse covariance (partial covariance) matrix Ω = Σ^{-1}. In such situations, it is often reasonable to assume that there are only a few significant partial correlations and that the other partial correlations are negligible in comparison. In mathematical terms, this amounts to assuming that the inverse covariance matrix Ω is sparse, i.e., that many entries in Ω are zero. Note that ω_ij = 0 is equivalent to saying that the partial correlation between the i-th and j-th variables is zero (under Gaussianity, this reduces to the statement that the i-th and j-th variables are conditionally independent given the other variables). The zeros in Ω can be conveniently represented by partial correlation graphs. The assumption of a sparse graph is often deemed very reasonable in applications. For example, as Peng et al. (2009) point out, among 26 examples of published networks compiled by Newman (2003), 24 networks had edge density less than 4%.
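In standard notation (introduced here for illustration, with ρ^{ij} denoting the partial correlation between variables i and j), the equivalence used above is

\rho^{ij} \;=\; -\,\frac{\omega_{ij}}{\sqrt{\omega_{ii}\,\omega_{jj}}}, \qquad\text{so that}\qquad \omega_{ij}=0 \;\Longleftrightarrow\; \rho^{ij}=0,

and, under Gaussianity, either condition is equivalent to conditional independence of the two variables given the remaining ones.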

A number of methods have been proposed for identifying sparse partial correlation graphs in the penalized likelihood and penalized regression based framework (Meinshausen and Bühlmann, 2006, Friedman et al., 2008, Peng et al., 2009, Friedman et al., 2010). The main focus here is estimation of the sparsity pattern. Many of these methods do not necessarily yield positive definite estimates of Ω. However, once a sparsity pattern is established, a positive definite estimate can be easily obtained using efficient methods (see Hastie et al. (2009), Speed and Kiiveri (1986)).

The penalized likelihood approach induces sparsity by minimizing the (negative) log-likelihood function with an ℓ1-penalty on the elements of Ω. In the Gaussian setup, this approach was pursued by Banerjee et al. (2008) and others. Friedman et al. (2008) proposed the graphical lasso ("Glasso") algorithm for the above minimization problem, which is substantially faster than earlier methods. In recent years, many interesting and useful methods have been proposed for speeding up the graphical lasso algorithm (see Mazumder and Hastie (2012), for instance). It is worth noting that for these methods to provide substantial improvements over the graphical lasso, certain assumptions are required on the number and size of the connected components of the graph implied by the zeros in the minimizer.
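For reference, the penalized-likelihood problem that Glasso solves is usually written in the following standard form (this display is supplied as background rather than quoted from this paper; the treatment of the diagonal penalty varies across implementations):

\hat{\Omega} \;=\; \operatorname*{arg\,min}_{\Omega \succ 0}\;\Big\{\, \operatorname{tr}(S\Omega) \;-\; \log\det\Omega \;+\; \lambda \sum_{i\neq j} |\omega_{ij}| \,\Big\}.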

Another useful approach, introduced by Meinshausen and Bühlmann (2006), estimates the zeros in Ω by fitting separate lasso regressions of each variable on the other variables. These individual lasso fits give neighborhoods that link each variable to other variables.

Peng et al. (2009) improve this neighborhood selection (NS) method by taking the natural symmetry in the problem into account (i.e., the fact that the partial correlation between variables i and j is the same as that between j and i), as not doing so can result in loss of efficiency and contradictory neighborhoods.

In particular, the SPACE (Sparse PArtial Correlation Estimation) method was proposed by Peng et al. (2009) as an effective alternative to existing methods for sparse estimation of Ω. The SPACE procedure iterates between (1) updating the partial correlations by a joint lasso regression and (2) separately updating the partial variances. As indicated above, it also accounts for the symmetry in Ω and is computationally efficient. Peng et al. (2009) show that under suitable regularity conditions, SPACE yields consistent estimators in high dimensional settings. All the above properties make SPACE an attractive regression based approach for estimating sparse partial correlation graphs. In the examples presented in Peng et al. (2009), the authors find that empirically the SPACE algorithm seems to converge very quickly. It is, however, not clear if SPACE will converge in general. Convergence is of course critical so that the corresponding estimator is always guaranteed to exist and is therefore meaningful, both computationally and statistically. In fact, as we illustrate in Section 2, the SPACE algorithm can fail to converge in simple cases, for both of the standard choices of weights suggested in Peng et al. (2009). Motivated by SPACE, Friedman et al. (2010) present a coordinate-wise descent approach (the "Symmetric lasso"), which may be considered a symmetrized version of the approach in Meinshausen and Bühlmann (2006). As we show in Section 2.3, it is also not clear if the Symmetric lasso will converge.

In this paper, we present a new method called the CONvex CORrelation selection methoD (CONCORD) algorithm for sparse estimation of Ω. The algorithm obtains estimates of Ω by minimizing an objective function that is jointly convex and, more importantly, composed of quadratic forms in the entries of Ω. The subsequent minimization is performed via coordinate-wise descent. The convexity is strict when n ≥ p, in which case standard results guarantee the convergence of the coordinate-wise descent algorithm to the unique global minimum. If n < p, the objective function may not be strictly convex. As a result, a unique global minimum may not exist, and existing theory does not guarantee convergence of the sequence of iterates of the coordinate-wise descent algorithm to a global minimum. In Section 4, by exploiting the quadratic forms present in the objective, it is rigorously demonstrated that the sequence of iterates does indeed converge to a global minimum of the objective function, regardless of the dimension of the problem. Furthermore, it is shown in Section 6 that the CONCORD estimators are asymptotically consistent in high dimensional settings under regularity assumptions identical to those in Peng et al. (2009). Hence, our method preserves all the attractive properties of SPACE while also providing a theoretical guarantee of convergence to a global minimum. In the process, CONCORD yields an estimator that is well-defined and always computable. The strengths of CONCORD are further illustrated in the simulations and real data analysis presented in Section 5. A comparison of the relevant properties of different estimators proposed in the literature is provided in Table 1 (Neighborhood selection (NS) by Meinshausen and Bühlmann (2006), SPACE by Peng et al. (2009), Symmetric lasso (SYMLASSO) by Friedman et al. (2010), SPLICE by Rocha et al. (2008), and CONCORD). The table shows that the CONCORD algorithm preserves all the attractive properties of existing algorithms while also providing rigorous convergence guarantees. Another major contribution of the paper is the development of a unifying framework that renders the different pseudo-likelihood based graphical model selection procedures special cases of a single general formulation. This general formulation facilitates a direct comparison between the above pseudo-likelihood based methods and gives deep insights into their respective strengths and weaknesses.

                                           METHOD
Property                              NS     SPACE   SYMLASSO   SPLICE   CONCORD
Symmetry                                     +       +          +        +
Convergence guarantee (fixed n)       N/A                                +
Asymptotic consistency (n, p → ∞)     +      +                           +
Table 1: Comparison of regression based graphical model selection methods. A "+" indicates that a specified method has the given property. A blank space indicates the absence of a property. "N/A" stands for not applicable.

The remainder of the paper is organized as follows. Section 2 briefly describes the SPACE algorithm and presents examples where it fails to converge. This section motivates our work and also analyzes other regression-based or pseudo-likelihood methods that have been proposed. Section 3 introduces the CONCORD method and presents a general framework that unifies recently proposed pseudo-likelihood methods. Section 4 establishes convergence of CONCORD to a global minimum, even when n < p. Section 5 illustrates the performance of the CONCORD procedure on simulated and real data; comparisons to SPACE and Glasso are provided. When applied to gene expression data, the results given by CONCORD are validated in a significant way by a recent extensive breast cancer study. Section 6 establishes large sample properties of the CONCORD approach. Concluding remarks are given in Section 7. The supplemental document contains proofs of some of the results in the paper.

2 The SPACE algorithm and convergence properties

Let the rows of the n × p data matrix denote i.i.d. observations from a multivariate distribution with mean vector 0 and covariance matrix Σ. Let Ω = Σ^{-1} = ((ω_ij))_{1 ≤ i, j ≤ p} denote the inverse covariance matrix, and let ρ^{ij} = -ω_ij/√(ω_ii ω_jj) denote the partial correlation between the i-th and j-th variables, for i ≠ j. Note that ρ^{ij} = ρ^{ji}. Denote the sample covariance matrix by S = ((s_ij))_{1 ≤ i, j ≤ p}, and the n-vector of observations on the i-th variable (the i-th column of the data matrix) by Y_i.
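The regression identity underlying the neighborhood selection and SPACE formulations is the following standard fact (the scalars y_1, ..., y_p, the coefficients β_ij and the errors ε_i are notation introduced here for illustration): for a mean-zero random vector (y_1, ..., y_p) with nonsingular covariance Σ and Ω = Σ^{-1},

y_i \;=\; \sum_{j \neq i} \beta_{ij}\, y_j \;+\; \varepsilon_i, \qquad \beta_{ij} \;=\; -\frac{\omega_{ij}}{\omega_{ii}} \;=\; \rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}, \qquad \operatorname{Var}(\varepsilon_i) \;=\; \frac{1}{\omega_{ii}},

so that sparsity in the regression coefficients of the i-th variable on the rest is exactly sparsity in the i-th row of Ω. This is what the separate lasso fits of Meinshausen and Bühlmann (2006) and the joint formulation of SPACE exploit.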

2.1 The SPACE algorithm

Peng et al. (2009) propose the following novel iterative algorithm to estimate the partial correlations and the partial variances corresponding to Ω (see Algorithm 1).

2.2 Convergence Properties of SPACE

From empirical studies, Peng et al. (2009) find that the SPACE algorithm converges quickly. As mentioned in the introduction, it is not immediately clear if convergence can be established theoretically. In an effort to understand such properties, we now place the SPACE algorithm in a useful optimization framework.

Lemma 1.

For the choice of partial variance weights (w_i = ω_ii), the SPACE algorithm corresponds to an iterative partial minimization procedure (IPM) for the following objective function:

(1)

Input: Standardize the data to have mean zero and standard deviation one
Input: Fix the maximum number of iterations
Input: Fix an initial estimate (as suggested by Peng et al. (2009))
Input: Choose weights w_i: either uniform weights (w_i = 1 for all i) or partial variance weights (w_i = ω_ii), the two natural choices suggested by Peng et al. (2009)
Set the iteration counter to zero
repeat
     ## Update partial correlations
     Update the partial correlations by minimizing (with the current estimates of the partial variances held fixed)
(2)
     ## Update conditional variances
     Update the conditional (partial) variances by computing (with the partial correlations held fixed)
(3)
     for i = 1, ..., p.
     Update the weights w_i
     Increment the iteration counter
until the maximum number of iterations is reached
Return the final estimates
Algorithm 1 (SPACE pseudocode)

Proof: Note that when the diagonal entries are held fixed, the minimization in (2) in the SPACE algorithm (with weights w_i = ω_ii) corresponds to minimizing the objective function in (1) with respect to the partial correlations. Now, fix the partial correlations and minimize (1) with respect to a given diagonal entry (holding the remaining diagonal entries fixed). Then, it follows that

(4)

The result follows by comparing (4) with the updates in (3).

Although Lemma 1 identifies SPACE as an IPM, existing theory for iterative partial minimization (see, for example, Zangwill (1969), Jensen et al. (1991), Lauritzen (1996)) only guarantees that every accumulation point of the sequence of iterates is a stationary point of the objective function in (1). To establish convergence, one needs to prove that every contour of the objective function contains only finitely many stationary points. It is not clear if this latter condition holds for the function in (1). Moreover, for the choice of uniform weights (w_i = 1), the SPACE algorithm does not appear to have an iterative partial minimization interpretation.

To improve our understanding of the convergence properties of SPACE, we started by testing the algorithm on simple examples. On some examples SPACE converges very quickly; however, examples can be found where SPACE does not converge for either of the two suggested choices of weights: partial variance weights and uniform weights. We now give an example of this lack of convergence.

Example 1: Consider the following population covariance and inverse covariance matrices:

(5)

A sample of i.i.d. vectors was generated from the corresponding multivariate normal distribution. The data were standardized, and the SPACE algorithm was run with partial variance weights and a fixed value of the penalty parameter. After the first few iterations, successive SPACE iterates alternate between the following two matrices:

(6)

thereby establishing non-convergence of the SPACE algorithm in this example (see also Figure 1(a)). Note that the two matrices in (6) have different sparsity patterns. A similar example of non-convergence of SPACE with uniform weights is provided in Supplemental Section N.

A natural question to ask is whether the non-convergence of SPACE is pathological or whether it is widespread in settings of interest. To this end, the following simulation study was undertaken.

Example 2: We created a sparse matrix Ω with edge density 4% and a condition number of 100. A total of 100 multivariate Gaussian datasets having mean vector zero and covariance matrix Ω^{-1} were generated. Table 2 summarizes the number of times (out of 100) that SPACE1 (SPACE with uniform weights) and SPACE2 (SPACE with partial variance weights) do not converge within 1500 iterations. When they do converge, the mean number of iterations is 22.3 for SPACE1 and 14.1 for SPACE2 (note that since the original implementation of SPACE by Peng et al. (2009) was programmed to stop after three iterations, we modified the implementation to allow for more iterations in order to check for convergence of the parameter estimates). It is clear from Table 2 that both variations of SPACE, using uniform weights as well as partial variance weights, exhibit extensive non-convergence behavior. Our simulations suggest that the convergence problem is exacerbated as the condition number of Ω increases.

          SPACE1 (uniform weights)          SPACE2 (partial variance weights)
λ         NZ        NC                      λ         NZ        NC
0.026     60.9%     92                      0.085     79.8%     100
0.099     19.7%     100                     0.160     28.3%     0
0.163     7.6%      100                     0.220     10.7%     0
0.228     2.9%      100                     0.280     4.8%      0
0.614     0.4%      0                       0.730     0.5%      97
Table 2: Number of simulations (out of 100) that do not converge within 1500 iterations (NC), for selected values of the penalty parameter λ. The average percentage of non-zero entries (NZ) in the corresponding estimate is also shown.
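For readers who wish to reproduce the flavor of Example 2, the following sketch generates a sparse symmetric positive definite matrix with a prescribed off-diagonal density and condition number and then simulates Gaussian data from it (the function names, the entry distribution and the seed are illustrative choices, not the exact recipe used for Table 2):

import numpy as np

def sparse_precision(p, density=0.04, cond=100.0, seed=0):
    """Sparse symmetric matrix with given off-diagonal density, shifted so it is
    positive definite with condition number `cond` (assumes at least one nonzero
    off-diagonal entry is drawn)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((p, p))
    iu = np.triu_indices(p, k=1)
    keep = rng.random(iu[0].size) < density
    vals = rng.uniform(0.5, 1.0, keep.sum()) * rng.choice([-1.0, 1.0], keep.sum())
    A[iu[0][keep], iu[1][keep]] = vals
    A = A + A.T
    eig = np.linalg.eigvalsh(A)
    # shift the spectrum so that (max + s) / (min + s) equals cond
    s = (eig.max() - cond * eig.min()) / (cond - 1.0)
    return A + s * np.eye(p)

def simulate_gaussian(Omega, n, seed=0):
    """Draw n i.i.d. rows from a zero-mean Gaussian with covariance Omega^{-1}."""
    rng = np.random.default_rng(seed)
    Sigma = np.linalg.inv(Omega)
    return rng.multivariate_normal(np.zeros(Omega.shape[0]), Sigma, size=n)

Non-convergence can then be checked by tracking the maximum absolute change between successive SPACE iterates over, say, 1500 iterations, as in Table 2.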

2.3 Symmetric lasso

The Symmetric lasso algorithm was proposed as a useful alternative to SPACE in a recent work by Friedman et al. (2010). Symmetric lasso minimizes the following (negative) pseudo-likelihood:

(7)

where the parameterization is in terms of the vector of diagonal entries of Ω and the matrix obtained from Ω by setting its diagonal entries to zero. A comparison of (1) and (7) shows a deep connection between the SPACE (with partial variance weights) and Symmetric lasso objective functions. In particular, the objective function in (7) is a reparameterization of (1): the only difference is that the ℓ1-penalty on the partial correlations in (1) is replaced by an ℓ1-penalty on the partial covariances in (7). The minimization of the objective function in (7) is performed by coordinate-wise descent on this parameterization. Symmetric lasso is indeed a useful and computationally efficient procedure. However, theoretical properties such as convergence and asymptotic consistency have not yet been established. The following lemma investigates the properties of the objective function used in Symmetric lasso.

Lemma 2.

The Symmetric lasso objective function in (7) is a non-convex function of its parameters (in the parameterization given above).

The proof of Lemma 2 is given in Supplemental Section A. The arguments in the proof of Lemma 2 demonstrate that the objective function used in Symmetric lasso is not convex, or even bi-convex, in the parameterization used above. However, it can be shown that the SYMLASSO objective function is jointly convex in the elements of Ω (see Lee and Hastie (2014) and Supplemental Section L). It is straightforward to check that the coordinate-wise descent algorithms for both parameterizations are exactly the same. However, unless a function is strictly convex, there are no general theoretical guarantees of convergence for the corresponding coordinate-wise descent algorithm. Indeed, when n < p, the SYMLASSO objective function is not strictly convex. Therefore, it is not clear if the coordinate descent algorithm converges in general. We conclude this section by remarking that both SPACE and Symmetric lasso are useful additions to the graphical model selection literature, especially because they both respect symmetry and give computationally fast procedures.

2.4 The SPLICE algorithm

The SPLICE algorithm (Sparse Pseudo-Likelihood Inverse Covariance Estimates) was proposed by Rocha et al. (2008) as an alternative means of estimating Ω. In particular, the SPLICE formulation uses an ℓ1-penalized regression based pseudo-likelihood objective function parameterized by a diagonal matrix and a (generally asymmetric) coefficient matrix. The diagonal matrix contains the residual variances, and the columns of the coefficient matrix are the vectors of regression coefficients that arise when regressing each variable on the remaining variables. A constraint is imposed so that the regression of each variable is performed without including that variable itself as a predictor (i.e., the corresponding diagonal coefficients are set to zero). Based on the above properties, the ℓ1-penalized pseudo-likelihood objective function of the SPLICE algorithm (without the constant term) is given by

(8)

In order to optimize (8) with respect to the two blocks of parameters, Rocha et al. (2008) also propose an iterative algorithm that alternates between optimizing over the regression coefficients with the residual variances held fixed, and optimizing over the residual variances with the regression coefficients held fixed. As with other regression-based graphical model selection algorithms, a proof of convergence of SPLICE is not available. The following lemma gives the convexity properties of the SPLICE objective function.

Lemma 3.

i) The SPLICE objective function is not jointly convex in its parameters.

ii) Under an appropriate reparameterization, the SPLICE objective function is bi-convex.

The proof of Lemma 3 is given in Supplemental Section B. The convergence properties of the SPLICE algorithm are not immediately clear, since its objective function is non-convex. Furthermore, it is not clear whether the SPLICE solution yields a global optimum.

3 CONCORD: A convex pseudo-likelihood framework for sparse partial covariance estimation

The two pseudo-likelihood based approaches, SPACE and Symmetric lasso, have several attractive properties, such as computational efficiency, simplicity and use of symmetry. They also do not directly depend on the more restrictive Gaussian assumption. Additionally, Peng et al. (2009) establish (under suitable regularity assumptions) consistency of SPACE estimators for distributions with sub-Gaussian tails. However, none of the existing pseudo-likelihood based approaches yields a method that is provably convergent. In Section 2.2, we showed that there are instances where SPACE does not converge. As explained earlier, convergence is critical, as this property guarantees well defined estimators which always exist and are computable regardless of the data at hand. An important research objective, therefore, is the development of a pseudo-likelihood framework which preserves all the attractive properties of SPACE and SYMLASSO and, at the same time, leads to theoretical guarantees of convergence. It is not immediately clear how to achieve this goal. A natural approach is to develop a convex formulation of the problem. Such an approach can yield many advantages, including 1) a guarantee of the existence of a global minimum, 2) a better chance of convergence using convex optimization algorithms, and 3) deeper theoretical analysis of the properties of the solution and the corresponding algorithm. As we have shown, the SPACE objective function is not jointly convex in the elements of Ω (or in any natural reparameterization). Hence, one is not in a position to leverage tools from convex optimization theory for understanding its behavior. The SYMLASSO objective function is jointly convex in the elements of Ω. However, unless a function is strictly convex, there are no general guarantees of convergence for the corresponding coordinate-wise descent algorithm. Indeed, when n < p, the SYMLASSO objective function is not strictly convex, and it is not clear if the corresponding coordinate-wise descent algorithm converges.

In this section, we introduce a new approach for estimating Ω, called the CONvex CORrelation selection methoD (CONCORD), that aims to achieve the above objective. The CONCORD algorithm constructs sparse estimators of Ω by minimizing an objective function that is jointly convex in the entries of Ω. We start by introducing the objective function for the CONCORD method and then proceed to derive the details of the corresponding coordinate-wise descent updates. Convergence is not obvious, as the objective function may not be strictly convex when n < p. It is proved in Section 4 that the corresponding coordinate-wise descent algorithm does indeed converge to a global minimum. Computational complexity and running time comparisons for CONCORD are given in Sections 3.3 and 5.1, respectively. Subsequently, large sample properties of the resulting estimator are established in Section 6, in order to provide asymptotic guarantees in the regime where both the dimension and the sample size tend to infinity. Thereafter, the performance of CONCORD on simulated data, and on real data from biomedical and financial applications, is demonstrated. This analysis serves to establish that CONCORD preserves all the attractive properties of existing pseudo-likelihood methods and additionally provides the crucial theoretical guarantee of convergence and existence of a well-defined solution.

3.1 The CONCORD objective function

In order to develop a convex formulation of the pseudo-likelihood graphical model selection problem, let us first revisit the formulation of the SPACE objective function in (1) with arbitrary weights w_i instead of the specific choices considered above:

(9)

Now note that the above objective is not jointly convex in the elements of Ω since 1) the regression term with either of the choices w_i = 1 or w_i = ω_ii is not a jointly convex function of the elements of Ω, and 2) the penalty term is on the partial correlations and is hence not a jointly convex function of the elements of Ω.

Now note the following for the regression term. The choice of weights w_i = ω_ii² yields

(10)

The expression in (10) is a quadratic form (and hence jointly convex) in the elements of Ω. Placing the ℓ1-penalty on the partial covariances, instead of on the partial correlations, then yields the following jointly convex objective function:

(11)
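For concreteness, a commonly stated form of the CONCORD objective consistent with the description above (log terms in the diagonal entries, a quadratic form in the entries of Ω, and an ℓ1 penalty on the partial covariances) is the following; the label Q_con and the exact constants are stated here as an assumption, for illustration only:

Q_{con}(\Omega) \;=\; -\,n \sum_{i=1}^{p} \log \omega_{ii} \;+\; \frac{1}{2} \sum_{i=1}^{p} \Big\| \omega_{ii}\, Y_i + \sum_{j \neq i} \omega_{ij}\, Y_j \Big\|_2^2 \;+\; \lambda \sum_{1 \le i < j \le p} |\omega_{ij}|,

where Y_i denotes the n-vector of (standardized) observations on the i-th variable. With S = Y'Y/n for the standardized data matrix Y, the middle term equals (n/2) tr(Ω S Ω), a positive semi-definite quadratic form in the entries of Ω.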

The function in (11) can be regarded as a pseudo-likelihood function in the spirit of Besag (1975). Since the negative logarithm and the absolute value are convex functions, and the middle term is a positive semi-definite quadratic form in the entries of Ω, it follows that the objective in (11) is a jointly convex function of Ω (but not necessarily strictly convex). As we shall see later, this particular formulation helps us establish theoretical guarantees of convergence (see Section 4) and, consequently, yields a regression based graphical model estimator that is well defined and always computable. Note also that the diagonal (log) terms in (9) appear in a modified form in (11); this point is elaborated further in Remark 4. We now proceed to derive the details of the coordinate-wise descent algorithm for minimizing the CONCORD objective function.

3.2 A coordinate-wise minimization algorithm for the CONCORD objective function

Let the parameter space be the set of p × p real symmetric matrices whose diagonal entries are all strictly positive. Note that, as in other regression based approaches (see Peng et al. (2009)), we have deliberately not restricted Ω to be positive definite, as the main goal is to estimate the sparsity pattern in Ω. As mentioned in the introduction, a positive definite estimator can be obtained by using standard methods (Hastie et al. (2009), Xu et al. (2011)) once a partial correlation graph has been determined.

Let us now proceed to optimizing the CONCORD objective function. For 1 ≤ i ≤ j ≤ p, define the map given in

(12)

For each (i, j), this map returns the matrix in which all the elements of Ω are left as they are, except the (i, j)-th element, which is replaced by the value that minimizes the objective with respect to ω_ij while holding all other variables constant. We now proceed to evaluate this map explicitly.

Lemma 4.

The map defined in (12) can be computed in closed form. In particular, for the diagonal entries (i = j),

(13)

For the off-diagonal entries (i ≠ j),

(14)

where s_kl denotes the (k, l)-th entry of the sample covariance matrix S.

The proof is given in Supplemental Section C. An important contribution of Lemma 4 is that it provides the necessary ingredients for designing a coordinate descent approach to minimizing the CONCORD objective function. More specifically, (13) can be used to update the partial variance terms, and (14) can be used to update the partial covariance terms. The coordinate-wise descent algorithm for CONCORD is summarized in Algorithm 2. The zeros in the estimated partial covariance matrix can then be used to construct a partial covariance or partial correlation graph.
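Under the objective form sketched after (11) in Section 3.1 (an assumption about the exact constants), the single-coordinate minimizers described in Lemma 4 work out to the following, with S_τ(x) = sign(x) max(|x| − τ, 0) denoting the soft-thresholding operator (notation introduced here for illustration):

\omega_{ii} \;\leftarrow\; \frac{-\sum_{j \neq i} s_{ij}\,\omega_{ij} \;+\; \sqrt{\big(\sum_{j \neq i} s_{ij}\,\omega_{ij}\big)^{2} + 4\,s_{ii}}}{2\,s_{ii}},
\qquad
\omega_{ij} \;\leftarrow\; \frac{S_{\lambda/n}\!\Big(-\sum_{l \neq j} s_{jl}\,\omega_{il} \;-\; \sum_{l \neq i} s_{il}\,\omega_{jl}\Big)}{s_{ii} + s_{jj}} \quad (i \neq j).

Each update touches one entry of Ω at a time (together with its symmetric counterpart), which is exactly the structure exploited in Algorithm 2 and in the complexity analysis of Section 3.3.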

Input: Standardize the data to have mean zero and standard deviation one
Input: Fix the maximum number of iterations
Input: Fix an initial estimate of Ω
Input: Fix a convergence threshold
Set the iteration counter to zero
Set converged = FALSE
repeat
     ## Updates to partial covariances
     for i = 1, ..., p - 1 do
          for j = i + 1, ..., p do
(15)
          end for
     end for
     ## Updates to partial variances
     for i = 1, ..., p do
(16)
     end for
     ## Convergence checking
     if the maximum absolute change between successive estimates is below the convergence threshold then
          converged = TRUE
     else
          increment the iteration counter
     end if
until converged = TRUE or the maximum number of iterations is reached
Return the final estimate of Ω
Algorithm 2 (CONCORD pseudocode)
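As an illustration of Algorithm 2, here is a minimal NumPy sketch of the coordinate-wise updates, assuming the objective form and closed-form updates sketched above; the function name concord_cd, the convergence criterion and the scaling of the penalty are illustrative choices, not the reference implementation (the authors' implementation is the R package gconcord mentioned in Section 5.1.1):

import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator S_t(x) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def concord_cd(X, lam, max_iter=100, tol=1e-5):
    """Coordinate-wise descent sketch for a CONCORD-type objective.

    X   : (n, p) data matrix, assumed standardized (mean 0, sd 1 per column)
    lam : l1 penalty on the off-diagonal entries of Omega
    """
    n, p = X.shape
    S = X.T @ X / n                  # sample covariance of standardized data
    Omega = np.eye(p)                # initial estimate
    for _ in range(max_iter):
        Omega_old = Omega.copy()
        # off-diagonal (partial covariance) updates
        for i in range(p):
            for j in range(i + 1, p):
                a = S[j] @ Omega[i] - S[j, j] * Omega[i, j] \
                    + S[i] @ Omega[j] - S[i, i] * Omega[j, i]
                new = soft_threshold(-a, lam / n) / (S[i, i] + S[j, j])
                Omega[i, j] = Omega[j, i] = new
        # diagonal (partial variance) updates
        for i in range(p):
            b = S[i] @ Omega[i] - S[i, i] * Omega[i, i]
            Omega[i, i] = (-b + np.sqrt(b * b + 4.0 * S[i, i])) / (2.0 * S[i, i])
        if np.max(np.abs(Omega - Omega_old)) < tol:
            break
    return Omega

With standardized data, each of the O(p²) coordinate updates above costs O(p) operations, matching the complexity discussion of Section 3.3 for the case n ≥ p.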

The following procedure can be used to select the penalty parameter λ. For each variable i, define a residual sum of squares (RSS) based on the fitted CONCORD regression of that variable on the others, and from it define an i-th component of a BIC-type score. The penalty parameter λ can then be chosen to minimize the sum of these BIC-type scores over i = 1, ..., p.
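One plausible concrete instantiation of this rule, patterned on the BIC commonly used for neighborhood-style penalized regressions (the definitions below are an assumption for illustration, not a quotation of the original displays), is

\mathrm{RSS}_i(\lambda) \;=\; \Big\| Y_i + \sum_{j \neq i} \frac{\hat\omega_{ij}(\lambda)}{\hat\omega_{ii}(\lambda)}\, Y_j \Big\|_2^2,
\qquad
\mathrm{BIC}_i(\lambda) \;=\; n \log \mathrm{RSS}_i(\lambda) \;+\; \log n \cdot \#\{\, j \neq i : \hat\omega_{ij}(\lambda) \neq 0 \,\},

with λ chosen to minimize the sum of the BIC_i(λ) over i on a grid of candidate values.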

3.3 Computational complexity

We now proceed to show that the computational cost of each iteration (one full sweep) of CONCORD is O(min(n, p) p²), so that the CONCORD algorithm is competitive with other proposed methods. The updates in (15) and (16) are implemented differently depending on whether n ≥ p or n < p.

Case 1 (n ≥ p): Let us first consider the case n ≥ p. Note that both sums in (14) are inner products between a row of S and a row of Ω. Clearly, computing these sums requires O(p) operations each. Similarly, the update in (13) requires O(p) operations. Since there are O(p²) entries in Ω, one complete sweep of updates over all entries of Ω requires O(p³) operations.

Case 2 (n < p): Let us now consider the case n < p. We show below that each update can be performed in O(n) operations. The main idea is that the coordinate-wise calculations at each iteration, which involve an inner product of two p-dimensional vectors, can be reduced to an inner product calculation involving auxiliary variables (residual variables, to be more specific) of dimension n. The following lemmas are essential ingredients in calculating the computational complexity in this setting. In particular, Lemma 5 expresses the inner product calculations in (13) and (14) in terms of residual vectors.

Lemma 5.

For 1 ≤ i ≤ p, the inner products appearing in (13) and (14) can be written in terms of the columns of the data matrix and the corresponding residual vectors, where the i-th residual vector is the n-vector of residuals obtained from regressing the i-th variable on the remaining variables (using the current entries of Ω).

The following lemma now quantifies the computational cost of updating the residual vectors during each iteration of the CONCORD algorithm.

Lemma 6.

Define the residual vector for the i-th variable, 1 ≤ i ≤ p, as in

(17)

Then,

  1. For k ∉ {i, j}, the residual vector for the k-th variable is functionally independent of ω_ij. (The term ω_ij appears only in the expressions for the residual vectors of the i-th and j-th variables.)

  2. Fix all the elements of Ω except ω_ij, and suppose ω_ij is changed. Then, updating the residual vectors of the i-th and j-th variables requires O(n) operations. (Hence, updating these residual vectors after each update in (15) requires O(n) operations.)

  3. For k ≠ i, the residual vector for the k-th variable is functionally independent of ω_ii. (The term ω_ii appears only in the expression for the residual vector of the i-th variable.)

  4. Fix all elements of Ω except ω_ii, and suppose ω_ii is changed. Then, updating the residual vector of the i-th variable requires O(n) operations. (Hence, updating it after each update in (16) requires O(n) operations.)

The proofs of Lemmas 5 and 6 are straightforward and are given in Supplemental Sections D and E. Note that the inner product between two n-vectors takes O(n) operations. Hence, by Lemma 5, the updates in (15) and (16) require O(n) operations. Also, after each update in (15) and (16), the residual vectors need to be appropriately modified. By Lemma 6, this modification can also be achieved in O(n) operations. As a result, one complete sweep of updates over all entries of Ω can be performed in O(np²) operations.

Hence, we conclude that the computational complexity of the CONCORD algorithm is competitive with the SPACE and Symmetric lasso algorithms, which have the same per-iteration order of complexity.
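To make the bookkeeping concrete, here is a small sketch in the same illustrative notation as the earlier code; the residual-type quantities maintained here are taken to be q_i = X @ Omega[:, i], one natural reading of (17), and the diagonal updates are omitted for brevity (all of this is an assumption for illustration, not the paper's exact construction):

import numpy as np

def sweep_high_dim(X, Omega, lam):
    """One sweep of off-diagonal CONCORD-type updates using O(n) bookkeeping
    (sketch for the n < p regime; X assumed standardized)."""
    n, p = X.shape
    d = np.einsum('ki,ki->i', X, X) / n      # d[i] = s_ii
    Q = X @ Omega                            # Q[:, i] = X @ Omega[:, i], computed once
    for i in range(p):
        for j in range(i + 1, p):
            # a = sum_{l != j} s_jl w_il + sum_{l != i} s_il w_jl, via O(n) inner products
            a = (X[:, j] @ Q[:, i] + X[:, i] @ Q[:, j]) / n \
                - d[j] * Omega[i, j] - d[i] * Omega[j, i]
            new = np.sign(-a) * max(abs(a) - lam / n, 0.0) / (d[i] + d[j])
            delta = new - Omega[i, j]
            Omega[i, j] = Omega[j, i] = new
            # only the i-th and j-th maintained vectors depend on omega_ij (cf. Lemma 6)
            Q[:, i] += delta * X[:, j]
            Q[:, j] += delta * X[:, i]
    return Omega

Each coordinate update above costs O(n) for the inner products plus O(n) to refresh the two affected vectors, so a full sweep costs O(np²), as in the discussion preceding this sketch.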

3.4 A unifying framework for pseudo-likelihood based graphical model selection

In this section, we provide a unifying framework which formally connects the five pseudo-likelihood formulations considered in this paper, namely, SPACE1, SPACE2, SYMLASSO, SPLICE and CONCORD (counting the two choices of weights in the SPACE algorithm as two different formulations). Recall that the rows of the data matrix are i.i.d. observations from a multivariate distribution with mean vector 0 and covariance matrix Σ, that the precision matrix is given by Ω = Σ^{-1}, and that S denotes the sample covariance matrix. Let Ω_D denote the diagonal matrix whose i-th diagonal entry is ω_ii. Lemma 7 below formally identifies the relationship between all five of the regression-based pseudo-likelihood methods.

Lemma 7.

i) The (negative) pseudo-likelihood functions of the CONCORD, SPACE1, SPACE2, SYMLASSO and SPLICE formulations can each be expressed, up to reparameterization, both in regression form and in an equivalent matrix form.

ii) All five pseudo-likelihoods above correspond to a unified, generalized form of the Gaussian log-likelihood function, in which the arguments of the log-determinant term and of the trace term are replaced by suitable matrix-valued functions of Ω. The particular pair of functions characterizing each of the CONCORD, SPACE1, SPACE2, SYMLASSO and SPLICE formulations differs across the methods.

The proof of Lemma 7 is given in Supplemental Section F. The above lemma gives various useful insights into the different pseudo-likelihoods that have been proposed for the inverse covariance estimation problem. The following remarks discuss these insights.

Remark 1.

Note that when both matrix-valued functions in the generalized form are taken to be Ω itself, the expression corresponds to the standard (negative) Gaussian log-likelihood function.

Remark 2.

Note that rescaling Ω so that all of its diagonal elements equal one leaves the sparsity pattern unchanged. In this sense, the SPACE2, SYMLASSO and SPLICE algorithms make the same approximation to the Gaussian likelihood, with the log-determinant term replaced by a function of the diagonal entries of Ω alone, and with the trace term approximated accordingly. Moreover, if Ω is sparse, then the rescaled matrix is close to the identity matrix, so that the trace term in the Gaussian likelihood is perturbed only by an off-diagonal matrix.

Remark 3.

Conceptually, the sole source of difference between the regularized versions of the objective functions of the SPACE2, SYMLASSO and SPLICE algorithms is the way in which the ℓ1-penalties are specified: SPACE2 applies the penalty to the partial correlations, SYMLASSO to the partial covariances, and SPLICE to the symmetrized regression coefficients.

Remark 4.

Note that the CONCORD method approximates the Gaussian likelihood by replacing the log-determinant term with a function of the diagonal entries of Ω alone and the trace term with a quadratic form in Ω. Hence, the CONCORD algorithm can be considered a reparameterization of the Gaussian likelihood in terms of the concentration matrix (together with an approximation to the log-determinant term). This also justifies the form of the diagonal terms appearing in the CONCORD objective in (11), as compared to those in (9). In Supplemental Section G, we illustrate the usefulness of this correction based on the insight from our unification framework, and show that it leads to better estimates of Ω.
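To make Remark 4 concrete under the objective form sketched after (11) (so the constants below inherit that assumption), write Ω_D for the diagonal part of Ω as above and take S = Y'Y/n for the standardized data matrix Y; then

-\,n\sum_{i=1}^{p}\log\omega_{ii} \;+\; \frac{1}{2}\sum_{i=1}^{p}\Big\|\omega_{ii}\, Y_i+\sum_{j\neq i}\omega_{ij}\, Y_j\Big\|_2^2
\;=\; \frac{n}{2}\Big[-\log\det\big(\Omega_D^{2}\big) \;+\; \operatorname{tr}\big(S\,\Omega^{2}\big)\Big],

which matches the Gaussian expression (n/2)[ -log det Ω + tr(SΩ) ] with Ω replaced by Ω_D² inside the log-determinant and by Ω² inside the trace.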

4 Convergence of CONCORD

We now proceed to consider the convergence properties of the CONCORD algorithm. Note that the CONCORD objective function is not differentiable. Also, if n < p, then it is not necessarily strictly convex. Hence, the global minimum may not be unique and, as discussed below, the convergence of the coordinate-wise minimization algorithm to a global minimum does not follow from existing theory. Although the objective is not differentiable, it can be expressed as a sum of a smooth function of Ω and a separable function of Ω (namely, the ℓ1-penalty term). Tseng (1988, 2001) proves that under certain conditions, every cluster point of the sequence of iterates of the coordinate-wise minimization algorithm for such an objective function is a stationary point of the objective function. However, if the function is not strictly convex, there is no general guarantee that the sequence of iterates has a unique cluster point, i.e., there is no theoretical guarantee that the sequence of iterates converges. The following theorem shows that the cyclic coordinate-wise minimization algorithm applied to the CONCORD objective function converges to a global minimum. A proof of this result can be found in Supplemental Section H.

Theorem 1.

If the diagonal entries of the sample covariance matrix are strictly positive (i.e., s_ii > 0 for every 1 ≤ i ≤ p), then the sequence of iterates obtained by Algorithm 2 converges to a global minimum of the CONCORD objective function. More specifically, the iterates converge to a limit as the number of iterations tends to infinity, and the objective function value at this limit is no larger than its value at any other point of the parameter space.

Remark 5.

If none of the underlying marginal distributions (corresponding to the p-variate distribution of the data vectors) is degenerate, then the diagonal entries of the sample covariance matrix are strictly positive with probability one.

With this theory in hand, we now numerically illustrate the convergence properties established above. When CONCORD is applied to the dataset in Example 1, convergence is achieved (see Figure 1(b)), whereas SPACE does not converge (see Figure 1(a)).

(a) SPACE algorithm (partial variance weights) applied to the dataset in Example 1.
(b) CONCORD algorithm applied to the dataset in Example 1.
Figure 1: Illustration of the non-convergence of SPACE and the convergence of CONCORD. The y-axes are log scaled. For SPACE, the log absolute difference between entries of successive estimates becomes constant (thus indicating non-convergence).

5 Applications

5.1 Simulated Data

5.1.1 Timing Comparison

We now proceed to compare the timing performance of CONCORD with Glasso and the two different versions of SPACE. The acronyms SPACE1 and SPACE2 denote SPACE estimates using uniform weights and partial variance weights, respectively. We first consider the first simulation setting in Table 3. For the purposes of this simulation study, a positive definite matrix Ω with condition number 10 was used. Thereafter, 50 independent datasets were generated, each consisting of i.i.d. samples from the corresponding zero-mean multivariate normal distribution. For each dataset, the four algorithms were run until convergence for a range of penalty parameter values. We note that the default number of iterations for SPACE in the R function by Peng et al. (2009) is three. However, given the convergence issues for SPACE, we ran SPACE until convergence or until 50 iterations (whichever came first). The timing results (averaged over the datasets) in the top part of Table 3 below show wall clock times until convergence (in seconds) for Glasso, CONCORD, SPACE1 and SPACE2.

One can see that in this setting CONCORD is uniformly faster than its competitors. Note that the low penalty parameter cases correspond to high dimensional settings where the estimated covariance matrix is typically poorly conditioned and the log-likelihood surface is very flat. The results in Table 3 indicate that in such settings CONCORD is faster than its competitors by orders of magnitude (even though Glasso is implemented in Fortran). Both SPACE1 and SPACE2 are much slower than CONCORD and Glasso in this setting. The wall clock time for an iterative algorithm can be thought of as a function of the number of iterations until convergence, the order of computations for a single iteration, and implementation details (such as the choice of software and the efficiency of the code). Note that the order of computations for a single iteration is the same for SPACE and CONCORD, and lower than that of Glasso when n < p. It is likely that the significant increase in the wall clock time for SPACE is due to implementation details and the larger number of iterations required for convergence (or non-convergence, since we stop SPACE if the algorithm does not satisfy the convergence criterion within 50 iterations).

We further compare the timing performance of CONCORD and Glasso in two higher-dimensional settings (SPACE is not considered here because of the timing issues mentioned above, which are amplified in this more demanding regime). A sparse positive definite matrix Ω was used in each case. Thereafter, 50 independent datasets were generated per setting, each consisting of i.i.d. samples from the corresponding zero-mean multivariate normal distribution. The timing results (averaged over the datasets) in the bottom part of Table 3 below show wall clock times until convergence (in seconds) for Glasso and CONCORD for various penalty parameter values. It can be seen that in both settings CONCORD was roughly four to eight times faster than Glasso.

In conclusion, the simulation results in this subsection illustrate that CONCORD is much faster than SPACE and Glasso, especially in very high dimensional settings. We also note that a downloadable implementation of the CONCORD algorithm has been developed in R and is freely available at http://cran.r-project.org/web/packages/gconcord.

First setting
        Glasso                    CONCORD                   SPACE1 (uniform weights)   SPACE2 (partial variance weights)
λ       NZ        Time            λ       NZ       Time     λ       NZ       Time      λ       NZ         Time
0.14    4.77%     87.60           0.12    4.23%    6.12     0.10    4.49%    101.78    0.16    100.00%    19206.55
0.19    0.87%     71.47           0.17    0.98%    5.10     0.17    0.64%    99.20     0.21    1.76%      222.00
0.28    0.17%     5.41            0.28    0.15%    5.37     0.28    0.14%    138.01    0.30    0.17%      94.59
0.39    0.08%     5.30            0.39    0.07%    4.00     0.39    0.07%    75.55     0.40    0.08%      108.61
0.51    0.04%     6.38            0.51    0.04%    4.76     0.51    0.04%    49.59     0.51    0.04%      132.34

Higher-dimensional settings (two configurations)
        Glasso                    CONCORD                   Glasso                     CONCORD
λ       NZ        Time            λ       NZ       Time     λ       NZ       Time      λ       NZ       Time
0.09    2.71%     1842.74         0.09    2.10%    266.69   0.09    0.70%    1389.96   0.09    0.64%    298.21
0.10    1.97%     1835.32         0.10    1.59%    235.49   0.10    0.44%    1395.42   0.10    0.41%    298.00
0.10    1.43%     1419.41         0.10    1.19%    232.67   0.10    0.27%    1334.78   0.10    0.26%    302.15
Table 3: Timing comparison (wall clock time until convergence, in seconds) for the settings described in the text and varying penalty parameter values. SPACE is run until convergence or 50 iterations (whichever is smaller). Note that SPACE1 and SPACE2 are much slower than CONCORD and Glasso in wall time for the first setting; hence, for the higher-dimensional settings, only Glasso and CONCORD are compared. Here, λ denotes the value of the penalty parameter for the respective algorithms (on comparable scales for CONCORD and SPACE). NZ denotes the percentage of non-zero entries in the corresponding estimator.

5.1.2 Model selection comparison

In this section, we perform a simulation study in which we compare the model selection performance of CONCORD and Glasso when the underlying data are drawn from a multivariate-t distribution (the reasons for not considering SPACE are provided in a remark at the end of this section). The data are drawn from a multivariate-t distribution to illustrate the potential benefit of using penalized regression methods such as CONCORD outside the Gaussian setting.

For the purposes of this study, using a similar approach to that in Peng et al. (2009), a sparse positive definite matrix Ω with a fixed condition number is chosen. Using this Ω, for each of three sample sizes, 50 datasets of i.i.d. multivariate-t observations with mean zero and covariance matrix Ω^{-1} are generated. We compare the model selection performance of Glasso and CONCORD in this heavy-tailed setting with receiver operating characteristic (ROC) curves, which plot true positive rates (TPR) against false positive rates (FPR). Each ROC curve is traced out by varying the penalty parameter over a grid of possible values.

We use the area under the curve (AUC) as a means to compare model selection performance. This measure is frequently used to compare ROC curves (Fawcett, 2006, Friedman et al., 2010). The AUC of a full ROC curve resulting from perfect recovery of the zero/non-zero structure in Ω would be 1. In typical real applications, the FPR is controlled to be sufficiently low. We therefore compare model selection performance when the FPR is less than 15% (or 0.15). When the FPR is controlled to be at most 0.15, a perfect method will yield an AUC of 0.15. Table 4 provides the median of the AUCs (divided by 0.15 to normalize to 1), as well as the interquartile ranges (IQR), over the 50 datasets at each of the three sample sizes.

            Sample size 1          Sample size 2          Sample size 3
Solver      Median     IQR         Median     IQR         Median     IQR
Glasso      0.745      0.032       0.819      0.030       0.885      0.029
CONCORD     0.811      0.011       0.887      0.012       0.933      0.013
Table 4: Median and IQR of the area under the curve (AUC) over the 50 simulations at each of the three sample sizes. Each simulation yields an ROC curve from which the AUC is computed for FPR in the interval [0, 0.15] and normalized to 1.

Table 4 above shows that CONCORD has much better model selection performance than Glasso. Moreover, it turns out that CONCORD has a higher AUC than Glasso for every single one of the datasets (50 at each of the three sample sizes). We note that CONCORD not only recovers the sparsity structure more accurately in general, it also exhibits much less variation.
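A small sketch of how the truncated-AUC comparison described above can be computed from a path of estimates follows; the helper names and the trapezoidal integration are illustrative choices, and a full reproduction would also require the multivariate-t data generation and both solvers:

import numpy as np

def fpr_tpr(Omega_hat, Omega_true, tol=1e-8):
    """False/true positive rates of the off-diagonal support of Omega_hat."""
    p = Omega_true.shape[0]
    off = ~np.eye(p, dtype=bool)
    truth = np.abs(Omega_true[off]) > tol
    est = np.abs(Omega_hat[off]) > tol
    fpr = np.mean(est[~truth]) if (~truth).any() else 0.0
    tpr = np.mean(est[truth]) if truth.any() else 0.0
    return fpr, tpr

def truncated_auc(estimates, Omega_true, fpr_max=0.15):
    """AUC of the ROC curve restricted to FPR in [0, fpr_max], normalized to 1."""
    pts = sorted(fpr_tpr(Om, Omega_true) for Om in estimates)
    fpr = np.array([0.0] + [f for f, _ in pts])
    tpr = np.array([0.0] + [t for _, t in pts])
    # restrict to the [0, fpr_max] window, carrying the last TPR to the boundary
    inside = fpr <= fpr_max
    f = np.append(fpr[inside], fpr_max)
    t = np.append(tpr[inside], tpr[inside][-1])
    return np.trapz(t, f) / fpr_max

Here `estimates` is the list of estimated precision matrices obtained along the penalty parameter grid for a given dataset, and the normalization by fpr_max mirrors the "divided by 0.15" convention used in Table 4.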

Remark: Note that we need to simulate 50 datasets for each of the above three sample sizes. For each of these datasets, an algorithm has to be run for 50 different penalty parameter values. In total, this amounts to running the algorithm 7500 times. As we demonstrated in the simulations in Section 5.1.1, when SPACE is run until convergence (or terminated after 50 iterations), SPACE's computational burden makes it infeasible to run it 7500 times. As an alternative, one could follow the approach of Peng et al. (2009) and stop SPACE after three iterations. However, given the possible non-convergence issues associated with SPACE, it is not clear if the resulting estimate is meaningful. Even so, if we follow this approach of stopping SPACE after three iterations, we find that CONCORD outperforms SPACE1 and SPACE2. For example, if we consider the