I. Introduction
In recent years, there has been a great deal of interest in developing computationally efficient methods to analyze large-scale and high-dimensional data. The data collected in practice are often overwhelmingly large, so designing simple yet informative models for describing the underlying structure of the data is of significant importance. As a result, sparsity-promoting techniques have become an essential part of inference and learning methods. These techniques have been widely used in data mining [1, 2], functional connectivity of the human brain [3], distributed controller design [4, 5], transportation systems [6], and compressive sensing [7]. On the other hand, in many applications, the number of available data samples is much smaller than the dimension of the data. This implies that most statistical learning techniques, which are proven to be consistent with the true structure of the data, fail dramatically in practice: most convergence results for inference methods are contingent upon the availability of a sufficient number of samples, which may not be the case in practice. In an effort to overcome this issue, sparsity-inducing penalty functions are often used to arrive at a parsimonious graphical model for the available data. Graphical Lasso (GL) [8, 9] is one of the most widely used methods for sparse estimation of inverse covariance matrices via the augmentation of a Lasso-type penalty function. It is known that the GL can be computationally prohibitive for large-scale problems, which limits its applicability in practice. Recently, it has been shown in various applications, such as brain connectivity networks, electrical circuits, and transportation networks, that the thresholding technique and the GL lead to the same sparsity structure [10, 6]. In particular, [10] shows that, under some conditions, a simple thresholding of the sample covariance matrix results in the same sparsity pattern as the optimal solution of the GL. These conditions have been modified in [6] to depend only on the sample covariance matrix (and not the optimal solution of the GL). Based on this equivalence, [6] introduces a closed-form solution for the GL, the exactness of which depends on the sparsity structure of the thresholded sample covariance matrix.
In another line of work, [11] and [12] consider the disjoint components of the thresholded sample covariance matrix and show that the GL can be solved independently for each component. Although this result does not require additional conditions on the structure, its applicability is limited, since it does not reveal any information about the connectivity of the sparsity graph corresponding to the optimal solution of the GL.

I-A. Problem Formulation
Consider a random vector $x = (x_1, x_2, \ldots, x_d)$ with an underlying multivariate Gaussian distribution. Let $\Sigma_*$ denote the covariance matrix of this random vector. Without loss of generality, we assume that $x$ has zero mean. The goal is to estimate the entries of $\Sigma_*^{-1}$ based on independent samples of $x$. The sparsity pattern of $\Sigma_*^{-1}$ determines which random variables in $x$ are conditionally independent. In particular, if the $(i,j)$ entry of $\Sigma_*^{-1}$ is zero, it means that $x_i$ and $x_j$ are conditionally independent given the remaining entries of $x$ (the value of this entry is proportional to the partial correlation between $x_i$ and $x_j$). In this paper, we assume that $\Sigma_*^{-1}$ is sparse and nonsingular. Studying the conditional independence of different entries of $x$ is hard in practice because the true covariance matrix is rarely known a priori. Therefore, the sample covariance matrix must instead be used to estimate the true covariance matrix. Let $\Sigma$ denote the sample covariance matrix. To estimate $\Sigma_*^{-1}$, consider the optimization problem

$$\min_{S \succ 0} \; -\log\det(S) + \operatorname{trace}(\Sigma S). \tag{1}$$

The optimal solution of the above problem is equal to $\Sigma^{-1}$. However, in many applications the number of available samples is smaller than the dimension of $x$. This makes $\Sigma$ ill-conditioned or even singular, which would lead to large or unbounded entries in the optimal solution of (1). Furthermore, although $\Sigma_*^{-1}$ is assumed to be sparse, a small difference between $\Sigma$ and $\Sigma_*$ would potentially make $\Sigma^{-1}$ highly dense. To address these issues, consider the regularized version of (1):
$$\min_{S \succ 0} \; -\log\det(S) + \operatorname{trace}(\Sigma S) + \lambda \|S\|_{1,\mathrm{off}}, \tag{2}$$
where $\lambda > 0$ is a regularization coefficient. Let the optimal solution of (2) be denoted by $S^{\mathrm{opt}}$. This problem is known as Graphical Lasso (GL). The term $\|S\|_{1,\mathrm{off}}$ in the objective function is defined as the summation of the absolute values of the off-diagonal entries of $S$. This penalty acts as a surrogate for promoting sparsity in the off-diagonal elements of $S$, while ensuring that the problem is well-defined even with a singular input $\Sigma$.
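For concreteness, the objective in (2) is cheap to evaluate directly. The sketch below (plain NumPy, with names of my own choosing) also checks that, with $\lambda = 0$ and an invertible $\Sigma$, the unregularized problem (1) is indeed minimized at $S = \Sigma^{-1}$, where the objective equals $\log\det(\Sigma) + d$:

```python
import numpy as np

def gl_objective(S, Sigma, lam):
    """Objective of (2): -log det(S) + trace(Sigma @ S)
    plus lam times the sum of absolute off-diagonal entries of S."""
    sign, logdet = np.linalg.slogdet(S)
    assert sign > 0, "S must be positive definite"
    off_l1 = np.abs(S).sum() - np.trace(np.abs(S))
    return -logdet + np.trace(Sigma @ S) + lam * off_l1

# With lam = 0, problem (1) is minimized at S = Sigma^{-1}.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
f_star = gl_objective(np.linalg.inv(Sigma), Sigma, 0.0)
# Any perturbation of the minimizer increases the (strictly convex) objective.
f_pert = gl_objective(np.linalg.inv(Sigma) + 0.1 * np.eye(2), Sigma, 0.0)
```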
It is well-known that the GL is computationally prohibitive for large-scale problems. One way to circumvent solving the highly complex GL is to simply threshold the sample covariance matrix in order to obtain a candidate structure for the sparsity pattern of the optimal solution to the GL. It has been shown in several real-world problems, including brain connectivity networks and topology identification of electrical circuits [3, 10], that the thresholding method can correctly identify the nonzero pattern of $S^{\mathrm{opt}}$. Recently, we have shown that the sparsity structure of the thresholding method coincides with that of the GL [6] under some conditions on the sample covariance matrix. Although these conditions are not easy to verify, it is shown that they are generically satisfied when a sparse solution for the GL is sought, or equivalently, when the regularization parameter $\lambda$ in (2) is large. Based on this observation, [6] derives a closed-form solution for the GL problem that is globally or near-globally optimal, depending on the structure of the sample covariance matrix.
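The thresholding step itself is trivial to implement; a minimal sketch (hard-thresholding the off-diagonal entries of the sample covariance matrix at a level $\lambda$, with the diagonal always retained):

```python
import numpy as np

def threshold_pattern(Sigma, lam):
    """Binary sparsity pattern obtained by zeroing every off-diagonal
    entry of Sigma whose magnitude is at most lam (diagonal is kept)."""
    E = (np.abs(Sigma) > lam).astype(int)
    np.fill_diagonal(E, 1)
    return E

Sigma = np.array([[1.0, 0.6, 0.1],
                  [0.6, 1.0, 0.4],
                  [0.1, 0.4, 1.0]])
E = threshold_pattern(Sigma, 0.3)   # keeps the 0.6 and 0.4 entries only
```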
In this paper, we generalize the results of [6] to the case where the thresholded sample covariance matrix has a chordal structure. A matrix has a chordal structure if every cycle of length at least 4 in its support graph has a chord. Clearly, this class of sparsity structures includes acyclic graphs. First, we revisit the conditions introduced in [6] for the equivalence of the sparsity structures found by the GL and by the simple method of thresholding. We show that the conditions for this equivalence can be significantly simplified when the support graph of the thresholded covariance matrix is chordal. Furthermore, we show that under some mild assumptions, these conditions are automatically satisfied as the size of the sample covariance matrix grows, provided that it possesses a sparse and chordal structure. In the second part of the paper, we generalize the closed-form solution of the GL for acyclic thresholded sample covariance matrices to those with chordal structures. More specifically, we show that $S^{\mathrm{opt}}$ can be obtained via a closed-form recursive formula when the thresholded sample covariance matrix has a chordal structure. As pointed out in [11], most numerical algorithms for solving the GL have a worst-case complexity of at least $O(d^3)$. We show that the recursive formula requires a number of iterations growing linearly in the size of the problem, and that the complexity of each iteration depends on the size of the maximum clique in the sparsity graph. Therefore, given the thresholded sample covariance matrix (which can be obtained while constructing the sample covariance matrix), the complexity of solving the GL for sparse and chordal structures reduces from $O(d^3)$ to $O(d)$. In fact, we demonstrate the graceful scalability of the proposed recursive method in large-scale problems.
Specifically, we show that, on average, the proposed method outperforms the best known algorithm for solving the GL by at least a factor of 16 for sample correlation matrices with dimensions ranging from 1220 to 29902 (see Table I).
The Graphical Lasso technique is commonly used for estimating the inverse covariance matrices of Gaussian distributions. However, a similar learning method can be employed for data samples with more general underlying distributions. More precisely, the GL corresponds to the minimization of the regularized log-determinant Bregman divergence, which is a widely used metric for measuring the distance between the true and estimated parameters of a problem [13]. Therefore, the theoretical results developed in this paper are applicable to more general inference problems.
Notations: $\mathbb{R}^{d}$, $\mathbb{S}^{d}$, $\mathbb{S}^{d}_{+}$, and $\mathbb{S}^{d}_{++}$ are used to denote the sets of real $d$-dimensional vectors, $d \times d$ symmetric matrices, positive-semidefinite matrices, and positive-definite matrices, respectively. The symbols $\operatorname{trace}(M)$ and $\log\det(M)$ refer to the trace and the logarithm of the determinant of the matrix $M$, respectively. The $i$-th and $(i,j)$-th entries of the vector $v$ and the matrix $M$ are denoted by $v_{i}$ and $M_{ij}$, respectively. $I_{d}$ refers to the $d \times d$ identity matrix. The sign of a scalar $\alpha$ is shown by $\operatorname{sign}(\alpha)$. For a set $\mathcal{D}$, $|\mathcal{D}|$ refers to its cardinality. The inequality $M \succ 0$ ($M \succeq 0$) means that $M$ is positive (semi)definite. For a graph $G$, $\mathcal{N}_{G}(k)$ denotes the set of neighbors of node $k$. Given a vector $v$ and a matrix $M$, define

$$\|v\|_{2} = \Big( \sum_{i} v_{i}^{2} \Big)^{1/2}, \tag{3}$$

$$\|M\|_{\max} = \max_{i \neq j} |M_{ij}|, \tag{4}$$

$$\|M\|_{1,\mathrm{off}} = \sum_{i \neq j} |M_{ij}|. \tag{5}$$
An index set $\mathcal{I}$ is a sorted subset of the integers $\{1, 2, \ldots, n\}$. The number of elements in the index set $\mathcal{I}$ is denoted by $|\mathcal{I}|$. Given index sets $\mathcal{I}$ and $\mathcal{J}$, we define $M_{\mathcal{I}\mathcal{J}}$ as the submatrix of $M$ obtained by selecting the rows indexed by $\mathcal{I}$ and the columns indexed by $\mathcal{J}$.
A sparsity pattern $E$ is a symmetric binary matrix. A matrix $M$ (not necessarily symmetric) is said to have sparsity pattern $E$ if $M_{ij} = 0$ whenever $E_{ij} = 0$; the set $\mathbb{R}^{n \times n}_{E}$ (resp. $\mathbb{S}^{n}_{E}$) refers to the matrices (resp. symmetric matrices) with sparsity pattern $E$. The Euclidean projection onto the sparsity pattern $E$ is denoted by $P_{E}$: the $(i,j)$ element of $P_{E}(M)$ is zero if $E_{ij} = 0$, and equal to $M_{ij}$ if $E_{ij} = 1$.
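The projection $P_E$ used throughout the paper is a simple entrywise mask; a one-line sketch:

```python
import numpy as np

def project(M, E):
    """Euclidean projection P_E(M): keep M_ij where E_ij = 1, zero elsewhere."""
    return np.where(E == 1, M, 0.0)

E = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
M = np.arange(9.0).reshape(3, 3)
P = project(M, E)
```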
II. Preliminaries
In this section, we review the properties of sparse and chordal matrices and their connection to the max-det matrix completion problem.
II-A. Sparse Cholesky Factorization
Consider solving a symmetric positive definite linear system $Ax = b$ by Gaussian elimination. The standard procedure comprises a factorization step, where $A$ is decomposed into the (unique) Cholesky factors $A = LDL^{T}$, in which $D$ is diagonal and $L$ is lower-triangular with a unit diagonal, and a substitution step, where the two triangular systems of linear equations $Ly = b$ and $L^{T}x = D^{-1}y$ are solved to yield $x$.
In the case where $A$ is sparse, the Cholesky factor $L$ is often also sparse. It is common to store the sparsity pattern of $L$ in the compressed column storage format: a set of index sets $\mathcal{I}_{1}, \ldots, \mathcal{I}_{n}$ in which

$$\mathcal{I}_{j} = \{ i : i > j, \; L_{ij} \neq 0 \} \tag{6}$$

encodes the locations of the off-diagonal nonzeros in the $j$-th column of $L$. (The diagonal elements are not included because the matrix has a unit diagonal by definition.)
After storage has been allocated and the sparsity structure determined, the numerical values of $L$ and $D$ are computed using a sparse Cholesky factorization algorithm. This requires the associated elimination tree $T$, which is a rooted tree (or forest) on $n$ vertices, with edges defined to connect each vertex $j$ to its parent at the vertex $p(j)$ (except root nodes, which have "0" as their parent), as in

$$p(j) = \min \mathcal{I}_{j}, \tag{7}$$

in which $\min \mathcal{I}_{j}$ indicates the (numerically) smallest index in the index set $\mathcal{I}_{j}$ [14]. The elimination tree encodes the dependency information between different columns of $L$, thereby allowing information to be passed without explicitly forming the matrix.
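Computing (7) from the index sets of (6) is a one-pass operation; a sketch with 1-indexed columns to match the convention above that roots get parent "0":

```python
def elimination_tree(index_sets):
    """Parent map of the elimination tree: for column j (1-indexed),
    parent[j] = min I_j, or 0 if I_j is empty (j is a root)."""
    parent = {}
    for j, Ij in enumerate(index_sets, start=1):
        parent[j] = min(Ij) if Ij else 0
    return parent

# A lower-bidiagonal factor (tridiagonal pattern): I_j = {j + 1}, last column empty.
tree = elimination_tree([{2}, {3}, {4}, set()])
```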
II-B. Chordal Sparsity Patterns
The support or sparsity graph of $E$, denoted by $G$, is defined as a graph with the vertex set $\{1, \ldots, n\}$ and an edge set in which $(i,j)$ is an edge if and only if $i \neq j$ and $E_{ij} = 1$. The pattern $E$ is said to be chordal if its graph $G$ does not contain an induced cycle with length greater than three. If $E$ is not chordal, then we may add nonzeros to it until it becomes chordal; the resulting pattern $F \supseteq E$ is called a chordal completion (or chordal embedding, or triangulation) of $E$. Any chordal completion with $O(n)$ nonzeros is a sparse chordal completion of $E$.
A sparsity pattern $E$ is said to factor without fill if every positive definite matrix $A$ with sparsity pattern $E$ can be factored into $A = LDL^{T}$ such that $L + L^{T}$ also has sparsity pattern $E$. If a sparsity pattern factors without fill, then it is chordal. Conversely, if a sparsity pattern $E$ is chordal, then there exists a permutation matrix $P$ such that the permuted pattern of $PAP^{T}$ factors without fill [15]. This permutation is called a perfect elimination ordering of the chordal sparsity pattern $E$.
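The fill-free property can be checked numerically: factor a matrix with the given pattern and count nonzeros of the factor outside the pattern. A sketch contrasting a tridiagonal pattern (chordal) with a 4-cycle (the shortest non-chordal cycle):

```python
import numpy as np

def fill_in(A, tol=1e-12):
    """Number of positions where the Cholesky factor of A is nonzero
    even though the corresponding entry of A is zero (i.e. fill-in)."""
    L = np.linalg.cholesky(A)
    return int(np.sum((np.abs(L) > tol) & (np.abs(np.tril(A)) <= tol)))

# Tridiagonal pattern: chordal, so it factors without fill.
T = 2.0 * np.eye(4) + np.diag([-1.0] * 3, 1) + np.diag([-1.0] * 3, -1)

# 4-cycle pattern on vertices 1-2-3-4-1: non-chordal, so fill appears.
C = 2.0 * np.eye(4)
for i, j in [(0, 1), (1, 2), (2, 3), (0, 3)]:
    C[i, j] = C[j, i] = -0.5      # diagonally dominant, hence positive definite
```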
II-C. Recursive Solution of the Max-Det Problem
An important application of chordal sparsity is the efficient solution of the maximum determinant (max-det) matrix completion problem, written as

$$\max_{W \succ 0} \; \log\det(W) \tag{8}$$

subject to $P_{E}(W) = P_{E}(M)$,

for a given large-and-sparse matrix $M$ with sparsity pattern $E$. The optimal solution of the above optimization (when it exists) is called the max-det matrix completion of $M$, and is unique. The Lagrangian dual of this problem is the following:
$$\min_{S} \; \operatorname{trace}(MS) - \log\det(S) - n \tag{9}$$

subject to $S \in \mathbb{S}^{n}_{E}$,

with first-order optimality condition

$$P_{E}(S^{-1}) = P_{E}(M). \tag{10}$$

Strong duality gives a straightforward relation back to the primal:

$$W = S^{-1}. \tag{11}$$
Note that while $W$ is (in general) a dense matrix, $S$ is always sparse. Instead of attempting to solve the primal problem (8) for a dense matrix, we may opt to solve the dual problem (9) for a sparse matrix satisfying the optimality condition (10). In the case that the sparsity pattern $E$ factors without fill, [16] showed that (10) is actually a linear system of equations over the Cholesky factors $L$ and $D$ of the solution $S = LDL^{T}$; their numerical values can be explicitly computed using a recursive formula.
Algorithm 1
([16], Algorithm 4.2)
Input. A matrix $M$ that has a positive definite completion.
Output. The Cholesky factors $L$ and $D$ of the matrix $S = LDL^{T}$ that satisfies $P_{E}(S^{-1}) = P_{E}(M)$.
Algorithm. Iterate over the columns $j$ in reverse, i.e. starting from $j = n$ and ending at $j = 1$. For each $j$, compute $D_{jj}$ and the $j$-th column of $L$ from the corresponding entries of $M$ and the stored update matrices, and then compute the update matrices for each $i$ satisfying $p(i) = j$, i.e. each child of $j$ in the elimination tree.
Of course, if the sparsity pattern $E$ is chordal, then we may find a perfect elimination ordering $P$ in linear time [17], and apply the above algorithm to the permuted matrix $PMP^{T}$, whose sparsity pattern does indeed factor without fill.
The algorithm takes $n$ steps, and the $j$-th step requires a linear solve and a vector-vector product of size on the order of $|\mathcal{I}_{j}|$. The treewidth $\tau$ of the sparsity graph $G$ is defined as $\tau = \max_{j} |\mathcal{I}_{j}|$ and has the interpretation of the size of the largest clique in $G$ minus one. Combined, the algorithm has time complexity $O(\tau^{3} n)$. This means that the matrix completion algorithm is linear-time if the treewidth of $G$ is of order $O(1)$.
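As a toy illustration of the primal-dual relations (8)-(11): for a $3 \times 3$ matrix specified on the tridiagonal pattern (chordal, treewidth 1), the max-det completion fills the missing corner with $W_{13} = M_{12}M_{23}/M_{22}$ -- exactly the value that zeroes the $(1,3)$ entry of $W^{-1}$, i.e. keeps the dual variable $S = W^{-1}$ inside the pattern. A sketch with assumed numerical values:

```python
import numpy as np

# Partial matrix with unit diagonal; the (1,3)/(3,1) entries are unspecified.
M12, M23 = 0.4, 0.3
W = np.array([[1.0, M12, M12 * M23],   # completed corner: M12 * M23 / M22, M22 = 1
              [M12, 1.0, M23],
              [M12 * M23, M23, 1.0]])
S = np.linalg.inv(W)                    # dual variable; should be tridiagonal

def det_with_corner(w):
    """Determinant of the completion as a function of the free corner entry."""
    V = W.copy()
    V[0, 2] = V[2, 0] = w
    return np.linalg.det(V)
```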
III. Main Results
To streamline the presentation, and with no loss of generality, we assume that the matrix $\Sigma$ used in (2) is the sample correlation matrix. This means that the diagonal elements of $\Sigma$ are normalized to 1 and the off-diagonal elements lie between $-1$ and $1$. The results of this paper readily generalize to an arbitrary sample covariance matrix after appropriate rescaling. The following definitions are borrowed from [6].
Definition 1
A matrix $M \in \mathbb{S}^{d}_{++}$ is called inverse-consistent if there exists a matrix $N$ with zero diagonal elements such that

$$M + N \succ 0, \tag{12a}$$

$$N_{ij} = 0 \quad \text{if } (i,j) \in \operatorname{supp}(M), \tag{12b}$$

$$\big( (M + N)^{-1} \big)_{ij} = 0 \quad \text{if } (i,j) \in \operatorname{supp}(M)^{(c)}, \tag{12c}$$

where $\operatorname{supp}(M)^{(c)}$ is the complement of the support of $M$ (the set of zero off-diagonal positions). The matrix $N$ is called an inverse-consistent complement of $M$ and is denoted by $M^{(c)}$.
Moreover, $M$ is called sign-consistent if the $(i,j)$ entries of $M$ and $(M + M^{(c)})^{-1}$ are nonzero and have opposite signs for every $(i,j) \in \operatorname{supp}(M)$.
Example 1
Consider a matrix $M$ of the form
(13)
We show that $M$ is both inverse- and sign-consistent. Consider the matrix $N$ defined as
(14)
Then $(M + N)^{-1}$ can be written as
(15)
Note that:
- $M + N$ is positive definite;
- the sparsity graph of $N$ is the complement of that of $M$;
- the sparsity graphs of $(M + N)^{-1}$ and $M$ are equivalent;
- the nonzero off-diagonal entries of $M$ and $(M + N)^{-1}$ have opposite signs.
Therefore, it can be inferred that $M$ is both inverse- and sign-consistent, and that $N$ is its inverse-consistent complement.
In [6], it has been shown that every positive definite matrix has a unique inverseconsistent complement.
Definition 2
Given a graph $G$ and a scalar $\alpha$, define $\beta(G, \alpha)$ as the maximum of $\|M^{(c)}\|_{\max}$ over all inverse-consistent positive-definite matrices $M$ with diagonal entries equal to 1 such that $\operatorname{supp}(M) \subseteq G$ and $\|M\|_{\max} \leq \alpha$.
Without loss of generality, and due to the nonsingularity of $\Sigma$, one can assume that all off-diagonal elements of $\Sigma$ are nonzero. Let $\sigma_{1}, \sigma_{2}, \ldots, \sigma_{d(d-1)/2}$ denote the absolute values of the upper-triangular entries of $\Sigma$, sorted such that

$$\sigma_{1} \geq \sigma_{2} \geq \cdots \geq \sigma_{d(d-1)/2}. \tag{16}$$
Definition 3
Define the residue of $\Sigma$ at level $k$ relative to $\lambda$ as the matrix $\Sigma^{\mathrm{res}}(k, \lambda)$ whose $(i,j)$ entry is equal to $\Sigma_{ij} - \lambda \cdot \operatorname{sign}(\Sigma_{ij})$ if $i \neq j$ and $|\Sigma_{ij}| > \sigma_{k}$, and equal to 0 otherwise.
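A minimal sketch of the soft-thresholding operation underlying this residue (the treatment of the diagonal here is an assumption of mine; only the off-diagonal shrinkage matters for the sparsity pattern):

```python
import numpy as np

def soft_threshold(Sigma, lam):
    """Shrink each off-diagonal entry of Sigma toward zero by lam,
    zeroing those with magnitude at most lam; diagonal left unchanged."""
    R = np.sign(Sigma) * np.maximum(np.abs(Sigma) - lam, 0.0)
    np.fill_diagonal(R, np.diag(Sigma))
    return R

Sigma = np.array([[1.0, 0.6, 0.1],
                  [0.6, 1.0, -0.4],
                  [0.1, -0.4, 1.0]])
R = soft_threshold(Sigma, 0.3)
```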
Notice that, for an appropriate choice of the level $k$, $\Sigma^{\mathrm{res}}(k, \lambda)$ is the soft-thresholded sample correlation matrix with threshold $\lambda$. For simplicity of notation, we omit the arguments $k$ and $\lambda$ in $\Sigma^{\mathrm{res}}(k, \lambda)$ whenever they are implied by the context. The following theorem is borrowed from [6].
Theorem 1
The thresholding method and the GL result in the same sparsity pattern if the following conditions are satisfied:

Condition 1-i: $\Sigma^{\mathrm{res}}$ is positive definite.

Condition 1-ii: $\Sigma^{\mathrm{res}}$ is sign-consistent.

Condition 1-iii: The relation
(17)
holds.
In [6], it is pointed out that the conditions in Theorem 1 are expected to hold whenever a sparse solution of the GL is sought. However, efficient verification of these conditions has yet to be addressed in practice. It has been observed that the last condition plays the most important role in certifying the optimality of the sparsity pattern of the thresholded sample correlation matrix.
III-A. Upper Bound for $\beta(G, \alpha)$ in Chordal Graphs
In what follows, we derive an upper bound on $\beta(G, \alpha)$ for chordal graphs and show that, under some mild assumptions, Condition 1-iii is satisfied as the size of the sample correlation matrix grows.
Theorem 2
Suppose that the following conditions hold:

Condition 2-i: $\operatorname{supp}(\Sigma^{\mathrm{res}})$ is chordal.

Condition 2-ii: .

Condition 2-iii:

Then, we have
(18)
Proof 1
The proof is provided in the Appendix.
Conditions 2-ii and 2-iii are guaranteed to be satisfied for small values of $\lambda$. In such circumstances, a chordal structure for $\operatorname{supp}(\Sigma^{\mathrm{res}})$ is enough to verify the validity of (18). Based on this theorem, the next corollary shows that Condition 1-iii in Theorem 1 is guaranteed to be satisfied when the support graph of the sample correlation matrix is sparse and large enough.
Corollary 1
Suppose that the following conditions hold for some $\alpha$ and $d$:
(19a)
(19b)
(19c)
(19d)
Then, there exists a $d_{0}$ such that, for every $d \geq d_{0}$, Condition 1-iii in Theorem 1 is satisfied.
Proof 2
The proof is provided in the Appendix.
Corollary 1 implies that, if the sample correlation matrix is large enough, and the rate of decrease in $\lambda$ as a function of the dimension of the data is not much smaller than that of the retained correlation entries, then Condition 1-iii is automatically satisfied. For instance, if the quantity in (19a) is fixed, Corollary 1 implies that Condition 1-iii is satisfied for sufficiently large dimensions, provided that the remaining conditions (19b)-(19d) hold.
Remark 1
Although (18) only holds for chordal graphs, one can generalize Theorem 2 to non-chordal graphs under some additional assumptions. In particular, suppose that $\tilde{G}$ is a chordal completion of a non-chordal graph $G$. Then, one can verify that (18) holds for $\tilde{G}$ under conditions analogous to those of Theorem 2. Indeed, we can show that the monotonic behavior of $\beta(G, \alpha)$ is maintained under fairly general conditions. Due to space restrictions, this generalization is not included in this paper.
III-B. Max-Det Matrix Completion for Graphical Lasso
In this subsection, we show that if the equivalence between the thresholding method and the GL holds, the optimal solution of the GL can be obtained using Algorithm 1.
Theorem 3
Assume that the thresholded sample correlation matrix $\Sigma^{\mathrm{res}}$ has the same sparsity pattern as the optimal solution $S^{\mathrm{opt}}$ of the GL, and that $\Sigma^{\mathrm{res}} \succ 0$. Then,
1. $(S^{\mathrm{opt}})^{-1}$ is the unique max-det completion of $\Sigma^{\mathrm{res}}$;
2. Algorithm 1 can be used to find $S^{\mathrm{opt}}$ if $\operatorname{supp}(\Sigma^{\mathrm{res}})$ is chordal.
Proof 3
The proof is provided in the Appendix.
Recall that the main goal of the GL is to promote sparsity in the inverse correlation matrix. In order to obtain a sparse solution, the regularization coefficient $\lambda$ should be large relative to the absolute values of the off-diagonal elements of the sample correlation matrix. Under such circumstances, the conditions delineated in Theorem 1 are satisfied, and the sparsity pattern of the simple thresholding technique corresponds to that of the GL. Theorem 3 uses this result to show that, for large values of $\lambda$, the thresholded sample correlation matrix can be exploited not merely to identify the sparsity structure of the solution, but to find the optimal solution of the GL itself, by solving the corresponding max-det matrix completion problem. Note that the first part of Theorem 3 is independent of the structure of the thresholded sample correlation matrix. The second part, however, suggests that solving the corresponding max-det matrix completion problem can be much easier than the GL problem, and can be performed in linear time using Algorithm 1 when the thresholded sample correlation matrix has a sparse and chordal structure.
While the focus of this paper is on thresholded sample correlation matrices with chordal structures, the presented method may be extended to matrices with non-chordal sparsity patterns. Note that for non-chordal structures, the provided recursive formula does not necessarily result in the optimal solution. However, it has been shown in [18] that efficient implementations of Newton and conjugate gradient methods for the max-det matrix completion problem are possible when the sparsity structure of the problem has a sparse chordal completion. The detailed analysis of this extension is left as future work. Furthermore, as pointed out in [11] and [12], the disjoint components of the sparsity graph induced by thresholding the sample correlation matrix can be treated independently, since the GL decomposes into multiple smaller problems over these components. Therefore, the proposed method can be applied to every chordal component even if the overall sparsity graph of the thresholded sample correlation matrix is not chordal.
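The component-wise decomposition of [11, 12] is cheap to exploit in practice; a sketch using SciPy's connected-components routine on the thresholded pattern (function and variable names are mine):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def gl_components(Sigma, lam):
    """Index sets of the connected components of the thresholded pattern;
    the GL decouples into one independent subproblem per component."""
    E = (np.abs(Sigma) > lam).astype(int)
    np.fill_diagonal(E, 0)
    n_comp, labels = connected_components(csr_matrix(E), directed=False)
    return [np.flatnonzero(labels == c) for c in range(n_comp)]

# Two blocks, {0, 1} and {2, 3}, that decouple at threshold 0.3.
Sigma = np.array([[1.0, 0.5, 0.1, 0.0],
                  [0.5, 1.0, 0.0, 0.1],
                  [0.1, 0.0, 1.0, 0.6],
                  [0.0, 0.1, 0.6, 1.0]])
comps = gl_components(Sigma, 0.3)
```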
IV. Numerical Results
Table I: Performance of the proposed recursive method, QUIC, and GLASSO. Running times are in seconds; speedup is relative to the recursive method; "—" indicates that the solver did not terminate within the 2-hour time limit.

| test case | matrix size | max clique size | recursive run. time | QUIC run. time | QUIC speedup | GLASSO run. time | GLASSO speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| fpgadcop01 | 1220 | 10 | 0.17 | 0.46 | 2.71 | 45.93 | 99.85 |
| west1505 | 1505 | 338 | 0.89 | 9.56 | 10.74 | 137.59 | 154.60 |
| netscience | 1589 | 20 | 0.21 | 0.91 | 4.33 | 108.04 | 528.53 |
| lung1 | 1650 | 4 | 0.24 | 1.70 | 7.08 | 93.63 | 368.50 |
| cryg2500 | 2500 | 75 | 0.51 | 6.39 | 12.53 | 446.23 | 892.47 |
| freeFlyingRobot7 | 3918 | 35 | 0.86 | 18.13 | 21.08 | 2066.80 | 2319.98 |
| freeFlyingRobot14 | 5985 | 35 | 1.53 | 40.54 | 26.50 | — | — |
| CASE13659PEGASE | 13659 | 35 | 5.34 | 260.04 | 48.70 | — | — |
| OPF6000 | 29902 | 52 | 78.97 | — | — | — | — |
Using the method proposed in this paper, we solve the GL problem on various large-scale problems whose thresholded sample correlation matrices have chordal structures. All test cases are collected from the SuiteSparse Matrix Collection [19] and the MATPOWER package [20, 21]. These are publicly available and widely used datasets of large-and-sparse matrices from real-world problems. The simulations are run on a laptop computer with an Intel Core i7 quad-core 2.50 GHz CPU and 16 GB RAM. The results reported in this section are for a serial implementation in MATLAB.
IV-A. Data Generation
For each test case, we take the following steps to design the sample correlation matrix.
1. First, the nonzero structure of the matrix for the given test case is extracted, and a symbolic Cholesky factorization is performed to arrive at a chordal embedding of the given structure [22]. In other words, we augment the sparsity graph corresponding to the considered test case with additional edges to obtain a sparse chordal completion of the graph.
2. The elements of the sample correlation matrix corresponding to the edges in the extended graph are chosen randomly from a symmetric union of two intervals bounded away from zero; the remaining elements are chosen randomly from a small interval around zero.
3. The diagonal elements of the sample correlation matrix are elevated according to the off-diagonal elements in order to make the sample correlation matrix positive semidefinite. The resulting matrix is normalized, if necessary.
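The three steps above can be sketched as follows; the sampling intervals here are placeholders of my own choosing (the exact intervals used in the experiments are not reproduced), and positive definiteness is forced via diagonal dominance:

```python
import numpy as np

def make_correlation(E, lo=0.3, hi=0.6, seed=0):
    """Random sample correlation matrix with (assumed chordal) pattern E:
    random signed off-diagonal entries on the pattern, diagonal elevated
    for positive definiteness, then rescaled to unit diagonal."""
    rng = np.random.default_rng(seed)
    n = E.shape[0]
    A = np.zeros((n, n))
    r, c = np.triu_indices(n, 1)
    keep = E[r, c] == 1
    vals = rng.uniform(lo, hi, keep.sum()) * rng.choice([-1.0, 1.0], keep.sum())
    A[r[keep], c[keep]] = vals
    A = A + A.T
    np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1.0)  # diagonal dominance
    d = np.sqrt(np.diag(A))
    return A / np.outer(d, d)                         # normalize to unit diagonal

E = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
Sigma = make_correlation(E)
```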
IV-B. Discussion
We consider test cases corresponding to various real-world problems in materials science, power networks, circuit simulation, optimal control, fluid dynamics, social networks, and chemical process simulation. The size of the variable matrix in the GL problem ranges from 1220 × 1220 (with approximately 700 thousand variable elements) to 29902 × 29902 (with approximately 447 million variable elements). We compare the running time and objective value of our proposed method with two state-of-the-art algorithms, namely the GLASSO [8] and QUIC [23] algorithms (downloaded from http://statweb.stanford.edu/tibs/glasso/ and http://bigdata.ices.utexas.edu/software/1035/, respectively). The GLASSO algorithm is the most widely used algorithm for the GL, while the QUIC algorithm is commonly regarded as the fastest solver for the GL. Define the relative optimality gap as the normalized difference between the objective value of the proposed solution and the solutions obtained by the other two methods. We impose a 2-hour time limit on the solvers.
Table I shows the results of our simulations. It can be observed that the proposed recursive method significantly outperforms the GLASSO and QUIC algorithms in terms of running time, while achieving a negligible relative optimality gap in most of the test cases. More specifically, for the first 8 cases, the proposed recursive method is 2.71 to 48.70 times faster than QUIC. For the largest test case, QUIC does not find the optimal solution within the 2-hour time limit, while the proposed recursive formula obtains the optimal solution in less than 2 minutes. Furthermore, the recursive method is, on average, 726 times faster than the GLASSO algorithm for the first 6 test cases; GLASSO does not find the optimal solution for the 3 largest test cases within the 2-hour time limit.
V. Conclusions
In many graphical learning problems, the goal is to obtain a sparsity graph that describes the conditional independence of different elements in the available dataset via sparse inverse covariance estimation. The Graphical Lasso (GL) is one of the most commonly used methods for addressing this problem. It is known that, in high-dimensional settings, the GL is computationally prohibitive. A cheap alternative for finding the sparsity pattern of the inverse covariance matrix is a simple thresholding method applied to the sample covariance matrix of the data. Recently, we have provided sufficient conditions under which thresholding is equivalent to the GL in terms of the sparsity pattern of the graphical model. Based on this result, we have shown that the GL has a closed-form solution when the thresholded sample covariance matrix is acyclic. In this paper, this result is generalized to problems where thresholding results in a chordal structure. It is shown that the sufficient conditions for the equivalence of thresholding and the GL can be significantly simplified for chordal structures and are expected to hold as the dimension of the data increases. Furthermore, it is shown that the GL reduces to a maximum determinant matrix completion problem when thresholding is equivalent to the GL, and that for chordal structures, the corresponding matrix completion problem admits a simple recursive formula. The performance of the derived recursive formula is compared with other commonly used methods, and it is shown that, for large-scale GL problems, the proposed method significantly outperforms the alternatives in terms of running time.
References
[1] J. Garcke, M. Griebel, and M. Thess, "Data mining with sparse grids," Computing, vol. 67, no. 3, pp. 225–253, 2001.
[2] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan, "Sparse representation for computer vision and pattern recognition," Proceedings of the IEEE, vol. 98, no. 6, pp. 1031–1044, 2010.
[3] S. Sojoudi and J. Doyle, "Study of the brain functional network using synthetic data," 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 350–357, 2014.
[4] M. Fardad, F. Lin, and M. R. Jovanović, "Sparsity-promoting optimal control for a class of distributed systems," American Control Conference, pp. 2050–2055, 2011.
[5] S. Fattahi and J. Lavaei, "On the convexity of optimal decentralized control problem and sparsity path," American Control Conference, 2017.
[6] S. Fattahi and S. Sojoudi, "Graphical lasso and thresholding: Equivalence and closed-form solutions," https://arxiv.org/abs/1708.09479, 2017.
[7] E. Candès and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Problems, vol. 23, no. 3, pp. 969–985, 2007.
[8] J. Friedman, T. Hastie, and R. Tibshirani, "Sparse inverse covariance estimation with the graphical lasso," Biostatistics, vol. 9, no. 3, pp. 432–441, 2008.
[9] O. Banerjee, L. El Ghaoui, and A. d'Aspremont, "Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data," Journal of Machine Learning Research, vol. 9, pp. 485–516, 2008.
[10] S. Sojoudi, "Equivalence of graphical lasso and thresholding for sparse graphs," Journal of Machine Learning Research, vol. 17, no. 115, pp. 1–21, 2016.
[11] R. Mazumder and T. Hastie, "Exact covariance thresholding into connected components for large-scale graphical lasso," Journal of Machine Learning Research, vol. 13, pp. 781–794, 2012.
[12] D. M. Witten, J. H. Friedman, and N. Simon, "New insights and faster computations for the graphical lasso," Journal of Computational and Graphical Statistics, vol. 20, no. 4, pp. 892–900, 2011.
[13] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu, "High-dimensional covariance estimation by minimizing penalized log-determinant divergence," Electronic Journal of Statistics, vol. 5, pp. 935–980, 2011.
[14] J. W. Liu, "The role of elimination trees in sparse factorization," SIAM Journal on Matrix Analysis and Applications, vol. 11, no. 1, pp. 134–172, 1990.
[15] D. Fulkerson and O. Gross, "Incidence matrices and interval graphs," Pacific Journal of Mathematics, vol. 15, no. 3, pp. 835–855, 1965.
[16] M. S. Andersen, J. Dahl, and L. Vandenberghe, "Logarithmic barriers for sparse matrix cones," Optimization Methods and Software, vol. 28, no. 3, pp. 396–423, 2013.
[17] L. Vandenberghe and M. S. Andersen, "Chordal graphs and semidefinite optimization," Foundations and Trends in Optimization, vol. 1, no. 4, pp. 241–433, 2015.
[18] J. Dahl, L. Vandenberghe, and V. Roychowdhury, "Covariance selection for non-chordal graphs via chordal embedding," Optimization Methods and Software, vol. 23, no. 4, pp. 501–520, 2008.
[19] T. A. Davis and Y. Hu, "The University of Florida sparse matrix collection," ACM Transactions on Mathematical Software (TOMS), vol. 38, no. 1, p. 1, 2011.
[20] R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. Thomas, "MATPOWER: Steady-state operations, planning, and analysis tools for power systems research and education," IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12–19, 2011.
[21] C. Coffrin, D. Gordon, and P. Scott, "NESTA, the NICTA energy system test case archive," arXiv preprint arXiv:1411.0359, 2014.
[22] A. Agrawal, P. Klein, and R. Ravi, "Cutting down on fill using nested dissection: Provably good elimination orderings," Graph Theory and Sparse Matrix Computation, Springer, pp. 31–55, 1993.
[23] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar, "QUIC: Quadratic approximation for sparse inverse covariance estimation," Journal of Machine Learning Research, vol. 15, no. 1, pp. 2911–2947, 2014.
[24] M. Fukuda, M. Kojima, K. Murota, and K. Nakata, "Exploiting sparsity in semidefinite programming via matrix completion I: General framework," SIAM Journal on Optimization, vol. 11, no. 3, pp. 647–674, 2001.