Estimating Time-Varying Graphical Models

04/11/2018 · Jilei Yang et al., University of California, Davis

In this paper, we study time-varying graphical models based on data measured over a temporal grid. Such models are motivated by the need to describe and understand evolving interacting relationships among a set of random variables in many real applications, for instance the study of how stocks interact with each other and how such interactions change over time. We propose a new model, LOcal Group Graphical Lasso Estimation (loggle), under the assumption that the graph topology changes gradually over time. Specifically, loggle uses a novel local group-lasso type penalty to efficiently incorporate information from neighboring time points and to impose structural smoothness of the graphs. We implement an ADMM based algorithm to fit the loggle model. This algorithm utilizes blockwise fast computation and pseudo-likelihood approximation to improve computational efficiency. An R package loggle has also been developed. We evaluate the performance of loggle by simulation experiments. We also apply loggle to S&P 500 stock price data and demonstrate that loggle is able to reveal the interacting relationships among stocks and among industrial sectors in a time period that covers the recent global financial crisis.




1 Introduction

In recent years, many problems have arisen in which the study of interacting relationships among a large number of variables is of interest. One popular approach is to characterize interactions as conditional dependencies: two variables are interacting with each other if they are conditionally dependent given the rest of the variables. An advantage of using conditional dependency instead of marginal dependency (e.g., through correlation) is that we are aiming for more direct interactions after taking out the effects of the rest of the variables. Moreover, if the random variables follow a multivariate normal distribution, then the elements of the inverse covariance matrix

(a.k.a. precision matrix) indicate the presence/absence of such interactions. This is because under normality, two variables are conditionally dependent given the rest of the variables if and only if the corresponding element of the precision matrix is nonzero. Furthermore, we can represent such interactions by a graph G = (V, E), where the node set V represents the random variables of interest and the edge set E consists of the pairs (i, j) for which the (i, j)th element of the precision matrix is nonzero. Such models are referred to as Gaussian graphical models (GGM).
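To make the correspondence between the precision matrix and the graph concrete, here is a small numerical sketch (the matrix values are illustrative, not from the paper): the edge set is read off the nonzero off-diagonal entries of the precision matrix, while the covariance matrix is typically dense even when the graph is sparse.

```python
import numpy as np

# Hypothetical 4-variable Gaussian graphical model: the edges are the
# nonzero off-diagonal entries of the precision matrix Omega.
Omega = np.array([
    [2.0, 0.6, 0.0, 0.0],
    [0.6, 2.0, 0.5, 0.0],
    [0.0, 0.5, 2.0, 0.4],
    [0.0, 0.0, 0.4, 2.0],
])
Sigma = np.linalg.inv(Omega)  # covariance matrix of the model

# Read the edge set off the sparsity pattern of Omega.
p = Omega.shape[0]
edges = {(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(Omega[i, j]) > 1e-10}
print(edges)  # chain graph: {(0, 1), (1, 2), (2, 3)}

# Variables 0 and 2 are conditionally independent (Omega[0, 2] == 0),
# yet marginally correlated (Sigma[0, 2] != 0): conditional dependency
# captures more direct interactions than marginal correlation.
print(abs(Sigma[0, 2]) > 1e-10)  # True
```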

Many methods have been proposed to learn GGMs when the number of variables is large (relative to the sample size), including Meinshausen & Bühlmann (2006), Yuan & Lin (2007), Friedman et al. (2008), Banerjee et al. (2008), Rothman et al. (2008), Peng et al. (2009), Lam & Fan (2009), Ravikumar et al. (2011), Cai et al. (2011). These methods rely on the sparsity assumption, i.e., only a small subset of elements in the precision matrix is nonzero, to deal with the challenges posed by the high-dimension-low-sample-size setting.

The aforementioned methods learn a single graph based on the observed data. However, when data are observed over a temporal or spatial grid, the underlying graphs might change over time/space. For example, the relationships among stocks could evolve over time as illustrated by Figure 2. If we had described them by a single graph, the results would be misleading. This necessitates the study of time-varying graphical models.

When the graphs/covariance matrices change over time, the observations are no longer identically distributed. To deal with this challenge, one approach is to assume the covariance matrices change smoothly over time. For example, in Zhou et al. (2010), Song et al. (2009), Kolar et al. (2010), Kolar & Xing (2012), Wang & Kolar (2014), Monti et al. (2014), Gibberd & Nelson (2014), Gibberd & Nelson (2017), kernel estimates of the covariance matrices are used in the objective functions. However, smoothness of the covariance matrix alone does not tell us how the graph topology evolves over time, even though in practice this is often of more interest than estimating the covariance matrices themselves. Moreover, imposing a suitable assumption on how the graph topology changes over time can greatly facilitate interpretation and consequently provide insights into the interacting relationships.

One class of time-varying graphical models utilizes fused-lasso type penalties (Ahmed & Xing, 2009; Kolar et al., 2010; Kolar & Xing, 2012; Monti et al., 2014; Gibberd & Nelson, 2014; Wit & Abbruzzo, 2015; Gibberd & Nelson, 2017; Hallac et al., 2017), such that the estimated graph topology is piecewise constant. This is particularly convenient when we are primarily interested in detecting jump points and abrupt changes.

In this paper, we consider the structural smoothness assumption, which assumes that the graph topology is gradually changing over time. For this purpose, we propose LOcal Group Graphical Lasso Estimation (loggle), a novel time-varying graphical model that imposes structural smoothness through a local group-lasso type penalty. The loggle method is able to efficiently utilize neighborhood information and is also able to adapt to the local degree of smoothness in a data driven fashion. Consequently, the loggle method is flexible and effective for a wide range of scenarios including models with both time-varying and time-invariant graphs. Moreover, we implement an ADMM based algorithm that utilizes blockwise fast computation and pseudo-likelihood approximation to achieve computational efficiency and we use cross-validation to select tuning parameters. We demonstrate the performance of loggle through a simulation study. Finally, we apply loggle to the S&P 500 stock price data to reveal how interactions among stocks and among industrial sectors evolve during the recent global financial crisis. An R package loggle has also been developed.

The rest of the paper is organized as follows. In Section 2, we introduce the loggle model, model fitting algorithms and strategies for model tuning. In Section 3, we present simulation results to demonstrate the performance of loggle and compare it with two existing methods. We report the application on S&P 500 stock price data in Section 4, followed by conclusions in Section 5. Technical details are in an Appendix. Additional details are deferred to a Supplementary Material.

2 Methods

2.1 Local Group Graphical Lasso Estimation

In this section, we introduce loggle (LOcal Group Graphical Lasso Estimation) for time-varying graphical models.

Let X(t) be a p-dimensional Gaussian random vector indexed by t ∈ [0, 1]. We assume the X(t)'s are independent across t. We also assume the mean function and the covariance function Σ(t) are smooth in t. We denote the observations by x_1, …, x_N, where x_k is a realization of X(t_k) (k = 1, …, N) and 0 ≤ t_1 < ⋯ < t_N ≤ 1. For simplicity, we assume that the observations are centered so that x_k is drawn from N(0, Σ(t_k)). In practice, we can achieve this by subtracting the estimated mean from x_k. See Section S.1.3 of the Supplementary Material for details.

Our goal is to estimate the precision matrix Ω(t_k) = Σ(t_k)^{-1} based on the observed data and then construct the edge set (equiv. the graph topology) based on the sparsity pattern of the estimated precision matrix. We further assume that the edge set (equiv. the graph topology) changes gradually over time.

To estimate the precision matrix at the kth observed time point t_k, we propose to minimize a locally weighted negative log-likelihood function with a local group-lasso penalty:


where N_k denotes the indices of the time points centered around t_k with neighborhood width d, and |N_k| denotes the number of elements in N_k; {Ω(t_l): l ∈ N_k} denotes the set of precision matrices within this neighborhood and Ω_ij(t_l) denotes the (i, j)-th element of Ω(t_l); Σ̂(t_l) is the kernel estimate of the covariance matrix at time t_l with weights proportional to K((t_k − t_l)/h), where K(·) is a symmetric nonnegative kernel function and h is the bandwidth.
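The kernel-smoothed covariance estimate can be sketched as follows (a minimal illustration, not the package implementation; the function names and the choice of the Epanechnikov kernel from Section 3 are assumptions for the sketch):

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, K(u) = 0.75 * (1 - u^2) on |u| <= 1."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def kernel_cov(X, times, t, h):
    """Kernel-smoothed covariance estimate at time t.

    X: (N, p) array of centered observations, row k made at times[k].
    The weights w_k are proportional to K((t_k - t)/h) and normalized
    to sum to one, so the estimate is sum_k w_k x_k x_k^T.
    """
    w = epanechnikov((times - t) / h)
    w = w / w.sum()
    return (X * w[:, None]).T @ X

# Toy usage on simulated data.
rng = np.random.default_rng(0)
N, p = 200, 3
times = np.linspace(0, 1, N)
X = rng.standard_normal((N, p))
S = kernel_cov(X, times, t=0.5, h=0.2)
print(S.shape)  # (3, 3)
```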

We obtain the minimizers {Ω̂(t_l): l ∈ N_k} and set Ω̂(t_k) as the estimated precision matrix at time t_k. Since the group-lasso type penalty is likely to over-shrink the elements in the precision matrix, we further perform model refitting by maximizing the weighted log-likelihood function under the constraint of the estimated edge set (equiv. sparsity pattern). We denote the refitted estimate by Ω̃(t_k). Note that the precision matrix may be estimated at any time point t ∈ [0, 1], not only at the observed time points, by defining the neighborhood around t accordingly. For simplicity of exposition, throughout we describe the loggle fits at observed time points.

The use of the kernel estimate is justified by the assumption that the covariance matrix is smooth in . This allows us to borrow information from neighboring time points. In practice, we often replace the kernel smoothed covariance matrices by kernel smoothed correlation matrices which amounts to data standardization.

The penalty term is a group-lasso type sparse regularizer (Yuan & Lin, 2006; Danaher et al., 2014) that makes the graph topology change smoothly over time. The degree of such smoothness is controlled by the tuning parameter d: the larger the neighborhood width d, the more gradually the graph topology changes. The overall sparsity of the graphs is controlled by the tuning parameter λ: the larger λ is, the sparser the graphs tend to be. The scaling factor in equation (1) makes λ comparable across different neighborhood widths d.
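The effect of a group-lasso penalty on an edge trajectory can be illustrated by its proximal operator, the group soft-threshold (a standard building block of group-lasso solvers, shown here as a sketch rather than the loggle update itself): an entire group, here one (i, j) entry across neighboring time points, is set to zero at once, which is what forces the graph topology to be shared across the neighborhood.

```python
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_2.

    Returns (1 - tau / ||v||_2)_+ * v: the whole group is shrunk toward
    zero and removed entirely when its l2 norm is below tau.
    """
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1 - tau / norm) * v

# A weak "edge trajectory" across 5 neighboring time points is zeroed
# out entirely, giving the same absent edge at every time point in the
# neighborhood; a strong trajectory is merely shrunk.
weak = np.array([0.05, -0.03, 0.04, 0.02, -0.01])
strong = np.array([0.8, 0.9, 0.7, 0.85, 0.75])
print(group_soft_threshold(weak, tau=0.2))    # all zeros
print(group_soft_threshold(strong, tau=0.2))  # shrunk but nonzero
```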

The loggle model includes two existing time-varying graphical models as special cases. Specifically, in Zhou et al. (2010), Ω(t_k) is estimated by minimizing a weighted negative log-likelihood function with a lasso penalty:

which is the special case of loggle with neighborhood width d = 0. This method utilizes the smoothness of the covariance matrix by introducing the kernel estimate in the likelihood function. However, it ignores potential structural smoothness of the graphs and thus might not utilize the data most efficiently. Hereafter, we refer to this method as kernel.

On the other hand, Wang & Kolar (2014) propose to use a (global) group-lasso penalty to estimate the Ω(t_k)'s simultaneously:

This is another special case of loggle, obtained by setting the neighborhood width d large enough to cover the entire time interval (e.g., d = 1). The (global) group-lasso penalty makes the estimated precision matrices have the same sparsity pattern (equiv. same graph topology) across the entire time domain. This could be too restrictive for many applications where the graph topology is expected to change over time. Hereafter, we refer to this method as invar.

2.2 Model Fitting

Minimizing the objective function in equation (1) with respect to {Ω(t_l): l ∈ N_k} is a convex optimization problem. This can be solved by an ADMM (alternating direction method of multipliers) algorithm (see details in Section S.1.1 of the Supplementary Material). ADMM algorithms converge to the global optimum for convex optimization problems under very mild conditions; a comprehensive introduction can be found in Boyd et al. (2011). However, this ADMM algorithm involves eigen-decompositions of |N_k| matrices (each corresponding to a time point in the neighborhood) in every iteration, which is computationally very expensive when the dimension p is large. In the following, we propose a fast blockwise algorithm as well as a pseudo-likelihood type approximation of the objective function to speed up the computation.

Fast blockwise algorithm

If the solution is block diagonal (after suitable permutation of the variables), then we can apply the ADMM algorithm to each block separately, and consequently reduce the computational complexity from O(p^3) to the sum of O(p_b^3) over blocks, where the p_b's are the block sizes and they sum to p.

We establish the following theorems for the case of two blocks. These results parallel those in Witten et al. (2011) and Danaher et al. (2014) and can be easily extended to an arbitrary number of blocks.

Theorem 1.

Suppose the solution of minimizing (1) with respect to {Ω(t_l): l ∈ N_k} has the following block-diagonal form (after appropriate variable permutation):

where the blocks have the same dimensions across all time points in the neighborhood. Then the two diagonal blocks can be obtained by minimizing (1) on the respective sets of variables separately.

Theorem 2.

Let {C_1, C_2} be a non-overlapping partition of the variables. A necessary and sufficient condition for the variables in C_1 to be completely disconnected from those in C_2 in all estimated precision matrices obtained through minimizing (1) is:

The proof of Theorem 1 is straightforward through inspecting the Karush-Kuhn-Tucker (KKT) condition of the optimization problem of minimizing (1). The proof of Theorem 2 is given in Appendix A.1.

Based on Theorem 2, we propose the following fast blockwise ADMM algorithm:

  1. Create a p × p adjacency matrix E. For i ≠ j, set the off-diagonal element E_ij = 1 if the condition in Theorem 2 fails for the pair (i, j), and E_ij = 0 otherwise.

  2. Identify the connected components C_1, …, C_B given the adjacency matrix E. Denote their sizes by p_b (b = 1, …, B).

  3. For b = 1, …, B: if p_b = 1, i.e., C_b contains only one variable, say the jth variable, then set Ω̂_jj(t_l) = 1/Σ̂_jj(t_l) for l ∈ N_k; if p_b > 1, then apply the ADMM algorithm to the variables in C_b to obtain the corresponding blocks of the estimated precision matrices.
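The blockwise screening step can be sketched as follows (an illustrative sketch, not the package code: the screening rule is simplified to thresholding the maximum absolute off-diagonal covariance over the neighborhood, standing in for the exact condition of Theorem 2):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def screen_blocks(S_list, lam):
    """Split the variables into blocks before running ADMM.

    S_list: kernel covariance estimates for the time points in the
    neighborhood; lam: sparsity parameter. In the spirit of the
    screening results of Witten et al. (2011) and Danaher et al.
    (2014), connect i and j whenever the (simplified) screening
    condition fails, then take connected components.
    """
    p = S_list[0].shape[0]
    A = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(i + 1, p):
            if max(abs(S[i, j]) for S in S_list) > lam:
                A[i, j] = A[j, i] = 1
    n_blocks, labels = connected_components(csr_matrix(A), directed=False)
    return [np.flatnonzero(labels == b) for b in range(n_blocks)]

# Two independent pairs of variables yield two separate blocks, each of
# which can be handed to the ADMM solver on its own.
S = np.eye(4)
S[0, 1] = S[1, 0] = 0.9
S[2, 3] = S[3, 2] = 0.9
print(screen_blocks([S], lam=0.5))  # [array([0, 1]), array([2, 3])]
```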

Pseudo-likelihood approximation

Even with the fast blockwise algorithm, the computational cost could still be high due to the eigen-decompositions. In the following, we propose a pseudo-likelihood approximation to speed up step 3 of this algorithm. In practice, this approximation has been able to reduce computational cost by as much as 90%. For simplicity of exposition, the description is based on the entire set of variables.

The proposed approximation is based on the following well-known fact that relates the elements of the precision matrix to the coefficients of regressing one variable on the rest of the variables (Meinshausen & Bühlmann, 2006; Peng et al., 2009). Suppose a random vector X has mean zero and covariance matrix Σ. Denote the precision matrix by Ω = Σ^{-1} with elements ω_ij. If we write X_j as a linear combination of the other variables plus a residual that is uncorrelated with them, then the coefficient of X_i equals −ω_ij/ω_jj. Note that this coefficient is zero if and only if ω_ij is zero. Therefore identifying the sparsity pattern of the precision matrix is equivalent to identifying the sparsity pattern of the regression coefficients.
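This regression identity can be checked numerically (a small sketch on a random positive definite precision matrix; the variable names are illustrative):

```python
import numpy as np

# Verify beta_ij = -omega_ij / omega_jj numerically: the best linear
# predictor of X_j from the remaining variables has coefficients that
# can be read off the j-th column of the precision matrix.
rng = np.random.default_rng(1)
p = 5
A = rng.standard_normal((p, p))
Omega = A @ A.T + p * np.eye(p)      # precision matrix (positive definite)
Sigma = np.linalg.inv(Omega)         # covariance matrix

j = 2
rest = [i for i in range(p) if i != j]
# Population regression coefficients of X_j on the other variables:
beta = np.linalg.solve(Sigma[np.ix_(rest, rest)], Sigma[rest, j])
# The precision-matrix formula for the same coefficients:
beta_from_omega = -Omega[rest, j] / Omega[j, j]
print(np.allclose(beta, beta_from_omega))  # True
```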

We consider minimizing the following local group-lasso penalized weighted loss function for estimating the regression coefficients:


where the penalized groups collect the regression coefficients within the neighborhood centered around t_k with neighborhood width d; y_j is the sequence of the jth variable across the observations and the weight matrix carries the kernel weights. Once the estimated coefficients are obtained through minimizing (2), we can derive the estimated edge set at t_k from their nonzero pattern.

The objective function (2) may be viewed as an approximation of the likelihood based objective function (1) through the aforementioned regression connection, obtained by ignoring the correlation among the residuals. We refer to this approximation as the pseudo-likelihood approximation. However, minimizing (2) cannot guarantee symmetry of edge selection, i.e., the (i, j) and (j, i) coefficients being simultaneously zero or nonzero. To achieve this, we modify (2) by using a paired group-lasso penalty (Friedman et al., 2010):


The paired group-lasso penalty guarantees simultaneous selection of the (i, j) and (j, i) coefficients.

The objective function (3) can be rewritten as:


where is an vector with being an vector (); is an matrix, with being an vector, where is in the th block (); and is a vector.

We implement an ADMM algorithm to minimize (4), which does not involve eigen-decomposition and thus is much faster than the ADMM algorithm for minimizing the original likelihood based objective function (1). This is because the loss used in (3) and (4) is quadratic in the parameters, as opposed to the negative log-likelihood loss used in (1), which has a log-determinant term. Moreover, the design matrix is actually block diagonal, so computations can be done in a blockwise fashion and can potentially be parallelized. The detailed algorithm is given in Appendix A.2.

2.3 Model Tuning

In the loggle model, there are three tuning parameters: h, the kernel bandwidth (for the kernel covariance estimates); d, the neighborhood width; and λ, the sparsity parameter. In the following, we describe K-fold cross-validation (CV) to choose these parameters.

Recall that observations are made on a temporal grid, so we create the validation sets by including every Kth data point; the corresponding training set consists of the rest of the data points. E.g., for K = 5, the 1st validation set would include the observations at t_1, t_6, t_11, …, the 2nd validation set would include those at t_2, t_7, t_12, …, etc. In the following, let V_f denote the indices of the time points in the fth validation set and T_f denote those in the fth training set (f = 1, …, K).
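The interleaved fold construction can be sketched directly (a minimal illustration; the function name is an assumption, and indices here are zero-based):

```python
def cv_folds(N, K):
    """Interleaved K-fold split on a temporal grid of N points.

    The f-th validation set takes every K-th time point starting at
    index f, so each fold spans the whole time range; the training set
    is the complement. Returns a list of (train, valid) index lists.
    """
    folds = []
    for f in range(K):
        valid = list(range(f, N, K))
        train = [i for i in range(N) if i % K != f]
        folds.append((train, valid))
    return folds

# With K = 5, the 1st validation set contains t_1, t_6, t_11, ...
# (indices 0, 5, 10, ... with zero-based indexing).
train, valid = cv_folds(N=20, K=5)[0]
print(valid)  # [0, 5, 10, 15]
```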

Let the tuning grids from which h, d and λ are chosen be given; see Section 3 for an example. We recommend choosing d and λ separately for each time point t_k, as the degrees of sparsity and smoothness of the graph topology may vary over time. On the other hand, we recommend choosing a common h for all time points.

Given time t_k and a pair (d, λ), for f = 1, …, K, we obtain the refitted estimate by applying loggle to the fth training set. As mentioned in Section 2.1, this can be done even if t_k is not in the training set. We then derive the validation score on the fth validation set:

where the covariance matrix is kernel-estimated from the fth validation set. Here, the bandwidth is rescaled to reflect the difference in sample sizes between the validation and training sets. Finally, the K-fold cross-validation score at time t_k is defined as the average of the K validation scores. The "optimal" tuning parameters (d, λ) at t_k, given h, form the pair that minimizes the CV score. Finally, the "optimal" h is chosen by minimizing the sum of the CV scores over those time points where a loggle model is fitted.

We also adopt the procedure proposed in Peng et al. (2010), which has been shown to significantly reduce the false discovery rate while sacrificing only a modest amount of power. Specifically, given the CV selected tuning parameters, we examine the fitted model on each training set and only retain those edges that appear in at least T% of these models. In practice, we recommend 80% as the cutoff value for edge retention.

Moreover, we implement efficient grid search strategies including early stopping and coarse search followed by refined search to further speed up the computation. Details can be found in Section S.2 of the Supplementary Material.

3 Simulation

In this section, we evaluate the performance of loggle and compare it with kernel and invar by simulation experiments.

3.1 Setting

We consider models with both time-varying graphs and time-invariant graphs:

  • Time-varying graphs: (i) Generate four lower triangular matrices with elements independently drawn from a common distribution. (ii) Smoothly interpolate these matrices over time to obtain a time-varying matrix at each t. (iii) Form the corresponding symmetric matrix and "soft-threshold" its off-diagonal elements, shrinking them toward zero and setting small entries exactly to zero. (iv) Add a positive constant to the diagonal elements to ensure positive definiteness.

  • Time-invariant graphs: (i) Generate an Erdős–Rényi graph (Erdős & Rényi, 1959) where each pair of nodes is connected independently with a fixed probability. Denote the edge set of this graph by E. (ii) For off-diagonal elements (i ≠ j): if (i, j) is not in E, set the element to zero for all t; if (i, j) is in E, set it to a smooth function of t with a random offset. (iii) For diagonal elements, set each to a smooth function of t with a random offset.
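Steps (iii) and (iv) of the time-varying construction above can be sketched as follows (an illustrative sketch with hypothetical threshold and diagonal-inflation values, not the exact simulation settings of the paper):

```python
import numpy as np

def soft_threshold_precision(M, tau, eps=0.1):
    """Soft-threshold the off-diagonal entries of a symmetric matrix,
    then inflate the diagonal until the matrix is positive definite.

    tau and eps are illustrative values: tau controls how many
    off-diagonal entries (edges) survive, eps is the minimum eigenvalue
    enforced in the positive-definiteness step.
    """
    Omega = np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
    np.fill_diagonal(Omega, np.diag(M))          # keep diagonal intact
    Omega = (Omega + Omega.T) / 2                # enforce symmetry
    lam_min = np.linalg.eigvalsh(Omega).min()
    if lam_min <= 0:
        Omega += (eps - lam_min) * np.eye(M.shape[0])
    return Omega

rng = np.random.default_rng(2)
M = rng.uniform(-1, 1, size=(6, 6))
M = (M + M.T) / 2
Omega = soft_threshold_precision(M, tau=0.5)
print(np.linalg.eigvalsh(Omega).min() > 0)  # True: positive definite
```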

We construct three models following the above descriptions. Specifically, two models have time-varying graphs, one with a smaller and one with a larger number of nodes. In these two models, the graphs change smoothly over time, with the average number of edges being 51.6 (standard deviation 6.0) for the smaller model and 203.0 (standard deviation 66.8) for the larger model. Plots of the number of edges vs. time are given in Figure S.1 of the Supplementary Material. In the third model, the graphs are time-invariant (even though the precision matrices change over time), with 93 fixed edges.

For each model, we generate independent observations from the corresponding time-varying Gaussian distributions. We use the Epanechnikov kernel to obtain smoothed estimates of the correlation matrices. In the following, we conduct model fitting on an equally spaced grid of time points.

We use 5-fold cross-validation to select the tuning parameters from the grids h ∈ {0.1, 0.15, …, 0.3}, d ∈ {0, 0.001, 0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2, 0.25, 0.3, 1} and λ ∈ {0.15, 0.17, …, 0.35}.

The metrics used for performance evaluation include the false discovery rate (FDR) and power for edge detection (averaged over the time points where graphs are estimated), where E(t_k) and Ê(t_k) are the true edge set and the estimated edge set at time point t_k, respectively. We also consider the F1 score as an overall metric for model selection performance, which strikes a balance between FDR and power: the larger the F1 score, the better a method performs in terms of edge selection. In addition, we calculate the Kullback–Leibler (K-L) divergence (relative entropy) between the true models and the estimated models.
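Assuming the combined metric is the F1 score, it can be computed from FDR and power as the harmonic mean of precision (1 − FDR) and recall (power); note that the table entries are averaged over time points, so they need not match a single-point computation exactly:

```python
def f1_score(fdr, power):
    """F1 score from FDR and power: the harmonic mean of precision
    (1 - FDR) and recall (power)."""
    precision = 1.0 - fdr
    if precision + power == 0:
        return 0.0
    return 2 * precision * power / (precision + power)

# Plugging in the loggle entry of the first simulation table
# (FDR 0.196, power 0.702) gives roughly 0.75, close to the reported
# time-averaged value of 0.747.
print(round(f1_score(0.196, 0.702), 2))
```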

3.2 Results

Time-varying graphs model
Method   FDR     Power   F1       K-L
loggle   0.196   0.702   0.747    2.284
kernel   0.063   0.571   0.703    2.690
invar    0.583   0.678   0.514    2.565

Time-varying graphs model
Method   FDR     Power   F1       K-L
loggle   0.215   0.613   0.678    9.564
kernel   0.035   0.399   0.561   11.818
invar    0.590   0.597   0.478   10.608

Time-invariant graphs model
Method   FDR     Power   F1       K-L
loggle   0.000   0.978   0.988    1.559
kernel   0.042   0.509   0.598    3.168
invar    0.000   1.000   1.000    1.531

Table 1: Simulation results (FDR, power, F1 score and K-L divergence).

Table 1 shows that under the time-varying graphs models, loggle outperforms kernel according to the F1 score and the K-L divergence. Not surprisingly, invar performs very poorly for time-varying graphs models. On the other hand, under the time-invariant graphs model, loggle performs similarly to invar, whereas kernel performs very poorly. These results demonstrate that loggle can adapt to different degrees of smoothness of the graph topology in a data driven fashion and has generally good performance across a wide range of scenarios including both time-varying and time-invariant graphs.

The loggle procedure is more computationally intensive than kernel and invar as it fits many more models. For the time-varying graphs model, loggle took 3750 seconds using 25 cores on a Linux server with 72 cores, 256GB RAM and two Intel Xeon E5-2699 v3 @ 2.30GHz processors. In comparison, kernel took 226 seconds and invar took 777 seconds. On average, each loggle model fit took 23.2 milliseconds (ms), each kernel model fit took 16.8 ms and each invar model fit took 2825.5 ms. The additional computational cost of loggle is justified by its superior performance and should become less of a burden with the fast growth of computational power.

4 S&P 500 Stock Price

In this section, we apply loggle to an S&P 500 stock price dataset obtained via the R package quantmod. We focus on 283 stocks from 5 Global Industry Classification Standard (GICS) sectors: 58 stocks from Information Technology, 72 stocks from Consumer Discretionary, 32 stocks from Consumer Staples, 59 stocks from Financials, and 62 stocks from Industrials. We are interested in elucidating how direct interactions (characterized by conditional dependencies) among these stocks evolve over time, and particularly how such interactions are affected by the recent global financial crisis.

For this purpose, we consider a 4-year time period from January 1st, 2007 to January 1st, 2011, which covers the recent global financial crisis: according to the U.S. National Bureau of Economic Research, the recession, as experienced in that country, began in December 2007 and ended in June 2009, thus extending over 19 months; the Great Recession was related to the financial crisis of 2007–2008 and the U.S. subprime mortgage crisis of 2007–2009 (source: Wikipedia). Each stock has 1008 closing prices during this period. We use the logarithm of the ratio between two adjacent prices, i.e., the log return, for the subsequent analysis. We also rescale the time points onto [0, 1]. By examining the autocorrelations (Figure S.2 of the Supplementary Material), the independence assumption appears to hold reasonably well.
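The preprocessing described above, log returns from adjacent closing prices and a rescaled time index, can be sketched as follows (function names are illustrative, and the price series here is toy data):

```python
import numpy as np

def log_returns(prices):
    """Log returns log(p_{k+1} / p_k) from a series of closing prices."""
    prices = np.asarray(prices, dtype=float)
    return np.log(prices[1:] / prices[:-1])

def rescale_times(n):
    """Map n observation indices onto the unit interval [0, 1]."""
    return np.arange(n) / (n - 1)

# Toy price series: 4 closing prices give 3 log returns, observed on a
# grid rescaled to [0, 1].
prices = [100.0, 102.0, 101.0, 103.0]
x = log_returns(prices)
t = rescale_times(len(x))
print(x.shape, t[0], t[-1])  # (3,) 0.0 1.0
```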

We use the Epanechnikov kernel to obtain the kernel estimates of the correlation matrices. We then fit three models, namely loggle, kernel and invar, on a grid of time points, using 5-fold cross-validation for model tuning. We use the tuning grid d ∈ {0.001, 0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2, 0.25, 0.3, 1} together with grids for λ, where the bandwidth h is pre-selected using the coarse search described in Section S.2 of the Supplementary Material. Table 2 reports the average number of edges across the fitted graphs (standard deviations in parentheses) as well as the CV scores. We can see that loggle has a significantly smaller CV score than those of kernel and invar. Moreover, on average, loggle and invar models have similar numbers of edges, whereas kernel models have more edges.

Method Average edge # (s.d.) CV score
loggle 819.4 (331.0) 123.06
kernel 1103.5 (487.1) 160.14
invar 811.0 (0.0) 130.68
Table 2: Stock price: Number of edges and CV score
(a) (b)
Figure 1: Stock Price. (a) Number of edges vs. time (b) Proportion of within-sector edges vs. time

Figure 1(a) shows the number of edges in the fitted graphs over time. The invar fitted graphs have an identical topology, which is unable to reflect the evolving relationships among these stocks. On the other hand, both loggle and kernel are able to capture the changing relationships by fitting graphs with time-varying topologies. More specifically, both methods detect an increased amount of interaction (characterized by a larger number of edges) during the financial crisis. The amount of interaction peaked around the end of 2008 and then went down to a level still higher than that of the pre-crisis period. As can be seen from the figure, the kernel graphs show rather drastic changes, whereas the loggle graphs change more gradually. The loggle method in addition detects a period with increased interaction in the early stage of the financial crisis, indicated by the smaller peak around October 2007 in Figure 1(a). This is likely due to the subprime mortgage crisis, which acted as a precursor of the financial crisis (Amadeo, 2017). In the period after the financial crisis, the loggle fits are similar to those of invar, with a nearly constant graph topology after March 2010, indicating that the relationships among the stocks had stabilized. In contrast, the kernel fits show a small bump in edge number around the middle of 2010 and a decreasing amount of interaction afterwards.

Figure 1(b) displays the proportion of within-sector edges among the total number of detected edges. During the entire time period, the loggle fitted graphs consistently have a higher proportion of within-sector edges than the kernel fitted graphs. For both methods, this proportion decreased during the financial crisis due to an increased amount of cross-sector interaction. For loggle, the within-sector edge proportion eventually increased and stabilized after March 2010, although at a level lower than that of the pre-crisis period. In contrast, for kernel, the within-sector proportion took a downturn again after October 2009. In summary, the loggle fitted graphs are easier to interpret in terms of describing the evolving interacting relationships among the stocks and identifying the underlying sector structure of the stocks. Hereafter, we focus our discussion on the loggle fitted graphs.

(a) (b) (c)
(d) (e) (f)
Figure 2: Stock Price. (a)-(e) loggle fitted graphs at 5 time points. (f) The sector-wise percentage of presence of within-sector edges and the percentage of presence of cross-sector edges of loggle fitted graphs vs. time.

Figure 2(a)-(e) show the loggle fitted graphs at 5 different time points, namely, before, at the early stage of, around the peak of, towards the end of and after the financial crisis. These graphs show clearly evolving interaction patterns among the stocks. The amount of interaction increased as the crisis deepened, and decreased and eventually stabilized with the passing of the crisis. Moreover, the stocks have more interactions after the crisis compared to the pre-crisis era, indicating a fundamental change of the financial landscape. In addition, these graphs show clear sector-wise clusters (nodes with the same color corresponding to stocks from the same sector).

Figure 2(f) shows the sector-wise percentage of presence of within-sector edges, defined as the ratio between the number of detected within-sector edges and the total number of possible within-sector edges for a given sector; and the percentage of presence of cross-sector edges, defined as the ratio between the number of detected cross-sector edges and the total number of possible cross-sector edges. As can be seen from this figure, the within-sector percentages are much higher than the cross-sector percentage, reaffirming the observation that loggle is able to identify the underlying sector structure. Moreover, the within-Financials sector percentage is among the highest across the entire time period, indicating that the stocks in this sector have been consistently highly interacting with each other. Finally, all percentages increased after the financial crisis began and leveled off afterwards, reflecting the increased amount of interaction during the financial crisis.

Figure 3: Stock Price. Cross-sector interaction plots at 5 different time points based on loggle fitted graphs. Each node represents a sector and the edge width is proportional to the percentage of presence of the corresponding cross-sector edges.

In Figure 3, the graphs describe cross-sector interactions among the 5 GICS sectors at five different time points (before, at the early stage of, around the peak of, at the late stage of and after the financial crisis). In these graphs, each node represents a sector and edge width is proportional to the respective percentage of presence of cross-sector edges (defined as the detected number of edges between two sectors divided by the total number of possible edges between these two sectors). Moreover, edges with cross-sector percentage below a threshold are not displayed. We can see that there are more cross-sector interactions during the financial crisis, indicating a higher degree of dependency among different sectors in that period. There are also some interesting observations with regard to how these sectors interact with one another and how such interactions change over time. For example, strong cross-sector interactions between the Financials sector and the Consumer Staples sector arose during the financial crisis despite their weak relationship before and after the crisis. This is probably due to the strong influence of the financial industry on the entire economy during a financial crisis. Take the Consumer Discretionary sector and the Industrials sector as another example. These two sectors maintained a persistent relationship throughout the four years, indicating intrinsic connections between them irrespective of the financial landscape.

5 Conclusion

In this paper, we propose LOcal Group Graphical Lasso Estimation (loggle), a novel model for estimating a sequence of time-varying graphs based on temporal observations. By using a local group-lasso type penalty, loggle imposes structural smoothness on the estimated graphs and consequently leads to more efficient use of the data as well as more interpretable graphs. Moreover, loggle can adapt to the local degrees of smoothness and sparsity of the underlying graphs in a data driven fashion and thus is effective under a wide range of scenarios. We develop a computationally efficient algorithm for loggle that utilizes the block-diagonal structure and a pseudo-likelihood approximation. The effectiveness of loggle is demonstrated through simulation experiments. Moreover, by applying loggle to the S&P 500 stock price data, we obtain interpretable and insightful graphs about the dynamic interacting relationships among the stocks, particularly on how such relationships change in response to the recent global financial crisis. An R package loggle has been developed and will be made publicly available.


A.1 Proof of Theorem 2

By the KKT conditions, a necessary and sufficient set of conditions for a solution to be the minimizer of the objective in (1) is:


where , and is a subgradient of :

If for , , where and consist of the variables in and respectively, then satisfies (A.1) iff , , satisfying such that

This is equivalent to

A.2 ADMM algorithm under pseudo-likelihood approximation

To solve the optimization problem in (4) using the ADMM algorithm, we notice that the problem can be written as

where , and . Note and are vectors.

The scaled augmented Lagrangian is

where are dual variables ().

The ADMM algorithm is as follows. We first initialize the primal and dual variables. We also need to specify the penalty parameter, which in practice is recommended to be set following Wahlberg et al. (2012). For each step until convergence:

  1. For ,

    The solution sets the derivative of the objective function to 0:

    It is easy to see that

    where is the kernel estimate of the covariance matrix as in Section 2.1. That is, is a block diagonal matrix with blocks, where the th block, the matrix , is the matrix with the th row and the th column deleted.


    That is, is a column vector consisting of sub-vectors, where the th sub-vector is the th column of with the th element (i.e., the diagonal element) deleted.

    Since and