A Projection Based Conditional Dependence Measure with Applications to High-dimensional Undirected Graphical Models

01/07/2015 · Jianqing Fan et al. · Princeton University and Columbia University

Measuring conditional dependence is an important topic in statistics with broad applications, including graphical models. Under a factor model setting, a new conditional dependence measure based on projection is proposed. The corresponding conditional independence test is developed, and its asymptotic null distribution is unveiled in a regime where the number of factors can be high-dimensional. It is also shown that the new test controls the asymptotic significance level and can be computed efficiently. A generic method for building dependency graphs without the Gaussian assumption using the new test is elaborated. Numerical results and real data analysis show the superiority of the new method.


1 Introduction

The undirected graphical model is an important tool for capturing dependence among random variables and has drawn tremendous attention in various fields, including signal processing, bioinformatics and network modeling (Wainwright and Jordan, 2008). Let $\mathbf{X} = (X_1, \ldots, X_p)^T$ be a $p$-dimensional random vector. We denote the undirected graph corresponding to $\mathbf{X}$ by $G = (V, E)$, where the vertices in $V$ correspond to the components of $\mathbf{X}$ and the edges in $E$ indicate whether nodes $i$ and $j$ are conditionally independent given the remaining nodes. In particular, the edge $(i, j)$ is absent if and only if $X_i \perp X_j \mid \mathbf{X}_{-\{i,j\}}$. When $\mathbf{X}$ follows a multivariate Gaussian distribution with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$, the precision matrix $\boldsymbol{\Omega} = \boldsymbol{\Sigma}^{-1}$ captures exactly this relationship; that is, $\Omega_{ij} = 0$ if and only if the edge $(i, j)$ is absent (Lauritzen, 1996; Edwards, 2000). Therefore, under the Gaussian assumption, the problem reduces to the estimation of the precision matrix, for which there is a rich literature on model selection and parameter estimation in both low-dimensional and high-dimensional settings, including Dempster (1972), Drton and Perlman (2004), Meinshausen and Bühlmann (2006), Friedman et al. (2008), Fan et al. (2009), and Cai et al. (2011). While the Gaussian graphical model (GGM) can be useful, the stringent normality requirement is not always satisfied in real applications, where the observed data often have fat tails or are skewed (Xue and Zou, 2012).

To relax the Gaussian assumption, Liu et al. (2009) proposed the nonparanormal model, in which one finds transformations that marginally gaussianize the data and then works within the Gaussian graphical model framework to estimate the network structure. Under the nonparanormal model, Xue and Zou (2012) proposed rank-based estimators to approximate the precision matrix. The nonparanormal model, although flexible, still assumes that the transformed data follow a multivariate Gaussian distribution, which can also be restrictive at times. Instead of using these nonparametric methods to find transformations and then working under the GGM, we propose a more natural way of constructing graphs. That is, we work directly on the conditional dependence structure by introducing a measure of conditional dependence between nodes $i$ and $j$ given the remaining nodes. Then, we introduce a hypothesis testing procedure to decide whether the edge $(i, j)$ is present or not.

It is worth noting that, based on hypothesis testing, we could indeed build a general conditional dependency graph, where the presence of an edge between nodes $i$ and $j$ represents that the two nodes are dependent conditional on some factors $\mathbf{Z}$. The graphical model is one type of such graphs, where $\mathbf{Z}$ is chosen to be $\mathbf{X}_{-\{i,j\}}$; in such cases, we call $\mathbf{Z}$ internal factors. More generally, $\mathbf{Z}$ could contain covariates we observe or latent factors that we do not observe, which are not necessarily part of $\mathbf{X}$, and we call them external factors. As an example, in the Fama-French three-factor model, the return of each stock can be considered as one node and $\mathbf{Z}$ consists of the three chosen factors. This example will be further elaborated in Section 5. Another interesting application is discussed in Stock and Watson (2002), where the external factors are aggregated macroeconomic variables and the nodes are disaggregated macroeconomic variables.

Returning to hypothesis testing, there is an abundant literature in economics on conditional independence tests. Linton and Gozalo (1997) proposed two nonparametric tests of conditional independence based on a generalization of the empirical distribution function; however, a complicated bootstrap procedure is needed to calculate the critical values of the test, which limits its practical value. Su and White (2007, 2008, 2014) proposed conditional independence tests based on the Hellinger distance, the conditional characteristic function and empirical likelihood, respectively. However, all of those tests either involve tuning parameters or are computationally expensive.

As an attempt to remedy the above issues, we consider the following setup. In particular, suppose $(\mathbf{X}_i, \mathbf{Y}_i, \mathbf{Z}_i)$, $i = 1, \ldots, n$, are i.i.d. realizations of $(\mathbf{X}, \mathbf{Y}, \mathbf{Z})$, which are generated from the following model:

$$\mathbf{X} = f(\mathbf{Z}) + \boldsymbol{\varepsilon}, \qquad \mathbf{Y} = g(\mathbf{Z}) + \boldsymbol{\delta}, \qquad (1)$$

where $\mathbf{Z}$ is the $K$-dimensional vector of common factors, and $f$ and $g$ are general mappings from $\mathbb{R}^K$ to $\mathbb{R}^p$ and $\mathbb{R}^q$, respectively. Note that we only observe $(\mathbf{X}, \mathbf{Y}, \mathbf{Z})$, not the error vectors $(\boldsymbol{\varepsilon}, \boldsymbol{\delta})$. Here, we assume independence between $(\boldsymbol{\varepsilon}, \boldsymbol{\delta})$ and $\mathbf{Z}$. In addition, the dimensions $p$ and $q$ are assumed to be fixed, while the number of factors $K$ could diverge to infinity.

Our goal is to test whether $\mathbf{X}$ and $\mathbf{Y}$ are independent given $\mathbf{Z}$, i.e.,

$$H_0: \mathbf{X} \perp \mathbf{Y} \mid \mathbf{Z}. \qquad (2)$$

Under model (1), the testing problem is equivalent to testing whether $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$ are independent, i.e.,

$$H_0: \boldsymbol{\varepsilon} \perp \boldsymbol{\delta}. \qquad (3)$$

Note that, in the case of building graphical models without the Gaussian assumption, we could use (1) by setting $\mathbf{X}$ and $\mathbf{Y}$ to be the random variables associated with a pair of nodes in the graph, while $\mathbf{Z}$ represents the rest of the nodes. Then, the method to be developed can be used to construct a high-dimensional undirected graph by conducting the test in (2) for each pair of nodes. This gives a graphical summary of the conditional dependence structure.

Since $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$ are hidden, a natural idea is to estimate them by the residuals after a projection of $\mathbf{X}$ and $\mathbf{Y}$ onto $\mathbf{Z}$. Asymptotically, a fully nonparametric projection onto $\mathbf{Z}$ (e.g., local polynomial regression) would consistently recover the random errors when $K$ is fixed, under certain smoothness assumptions on $f$ and $g$. However, this becomes challenging when $K$ diverges, due to the curse of dimensionality, if no structural assumptions are made on $f$ and $g$. As a result, we will study the case where $f$ and $g$ are linear functions (factor models) in Section 2.2 and the case where $f$ and $g$ are additive functions in Section 2.5 when $K$ diverges.

To complete our proposal, we need a suitable measure of dependence between random variables or vectors. In this regard, many different measures of dependence have been proposed. Some of them rely heavily on Gaussian assumptions, such as the Pearson correlation, which measures linear dependence and for which uncorrelatedness is equivalent to independence only when the joint distribution is Gaussian, or Wilks' Lambda (Wilks, 1935), where normality is adopted to calculate the likelihood ratio. To deal with nonlinear dependence and non-Gaussian distributions, statisticians have proposed rank-based correlation measures, including Spearman's $\rho$ and Kendall's $\tau$, which are more robust than the Pearson correlation against deviations from normality. However, these correlation measures are usually only effective for monotone types of dependence. In addition, under the null hypothesis that two variables are independent, no general statistical distribution of the coefficients associated with these measures has been derived. Other related works include Hoeffding (1948), Blomqvist (1950), Blum et al. (1961), and some methods described in Hollander et al. (2013) and Anderson (1962). Taking these considerations into account, the distance covariance (Székely et al., 2007) was introduced to address all these deficiencies. Its major benefits are the following: first, zero distance covariance implies independence, and hence it is a true dependence measure; second, distance covariance can measure the dependence between any two random vectors, potentially of different dimensions. Due to these advantages, we adopt distance covariance as our measure of dependence in this paper.
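As a quick illustration of this last point, consider the following small R experiment (our own example, using the dcov.test function from the energy package, which implements the distance covariance permutation test of Székely et al. (2007)): the Pearson correlation misses a non-monotone dependence that the distance covariance detects.

```r
library(energy)

# y depends on x through x^2: uncorrelated but clearly dependent.
set.seed(1)
x <- rnorm(500)
y <- x^2 + 0.1 * rnorm(500)

cor.test(x, y)$p.value            # Pearson: large p-value, misses the dependence
dcov.test(x, y, R = 199)$p.value  # distance covariance: rejects independence
```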

The main contribution of this paper is two-fold. First, under the factor model assumption, we propose a computationally efficient conditional independence test. The response vectors and the common factors can be of different dimensions, and the number of factors may grow to infinity with the sample size. Second, we apply this test to build conditional dependency graphs and covariate-adjusted dependency graphs, as generalizations of the Gaussian graphical model.

The rest of this paper is organized as follows. In Section 2, we present our new procedure for testing conditional independence via projected distance covariance (P-DCov) and describe how to construct conditional dependency graphs based on the proposed test. Section 3 gives theoretical properties, including the asymptotic distribution of the test statistic under the null hypothesis as well as the type I error guarantee. Section 4 contains extensive numerical studies, and Section 5 demonstrates the performance of P-DCov on two real data sets. We conclude the paper with a short discussion in Section 6. Several technical lemmas and all proofs are relegated to the appendix.

2 Methods

First, we introduce some notation. For a vector $\mathbf{x}$, $\|\mathbf{x}\|$ and $\|\mathbf{x}\|_1$ represent its Euclidean norm and $\ell_1$ norm, respectively. A collection of $n$ i.i.d. observations of a random vector $\mathbf{x}$ is denoted by $\{\mathbf{x}_i\}_{i=1}^{n}$, where $\mathbf{x}_i$ represents the $i$-th observation. For any matrix $\mathbf{M}$, $\|\mathbf{M}\|_F$, $\|\mathbf{M}\|_2$ and $\|\mathbf{M}\|_{\max}$ denote its Frobenius norm, operator norm and max norm, respectively. $\|\mathbf{M}\|_{2,1}$ is the norm defined as the $\ell_1$ norm of the vector consisting of the column-wise Euclidean norms of $\mathbf{M}$.

2.1 A brief review of distance covariance

As an important tool for this work, the distance covariance is briefly reviewed in this section, with further details available in Székely et al. (2007). We introduce several definitions as follows.

Definition 1

($w$-weighted $L_2$ norm) Let $c_d = \pi^{(1+d)/2} / \Gamma((1+d)/2)$ for any positive integer $d$, where $\Gamma(\cdot)$ is the Gamma function. Then for a function $\gamma$ defined on $\mathbb{R}^{p} \times \mathbb{R}^{q}$, the $w$-weighted $L_2$ norm of $\gamma$ is defined by

$$\|\gamma(t, s)\|_w^2 = \int_{\mathbb{R}^{p+q}} \frac{|\gamma(t, s)|^2}{c_p c_q |t|_p^{1+p} |s|_q^{1+q}} \, dt \, ds,$$

where $|\cdot|_d$ denotes the Euclidean norm on $\mathbb{R}^d$.

Definition 2

(Distance covariance) The distance covariance between random vectors $\mathbf{u} \in \mathbb{R}^p$ and $\mathbf{v} \in \mathbb{R}^q$ with finite first moments is the nonnegative number $\mathcal{V}(\mathbf{u}, \mathbf{v})$ defined by

$$\mathcal{V}^2(\mathbf{u}, \mathbf{v}) = \big\| f_{\mathbf{u}, \mathbf{v}}(t, s) - f_{\mathbf{u}}(t) f_{\mathbf{v}}(s) \big\|_w^2,$$

where $f_{\mathbf{u}}$, $f_{\mathbf{v}}$ and $f_{\mathbf{u}, \mathbf{v}}$ represent the characteristic functions of $\mathbf{u}$, of $\mathbf{v}$, and the joint characteristic function of $\mathbf{u}$ and $\mathbf{v}$, respectively.

Suppose we observe the random sample $\{(\mathbf{u}_i, \mathbf{v}_i) : i = 1, \ldots, n\}$ from the joint distribution of $(\mathbf{u}, \mathbf{v})$. We denote $\mathbf{U} = (\mathbf{u}_1, \ldots, \mathbf{u}_n)$ and $\mathbf{V} = (\mathbf{v}_1, \ldots, \mathbf{v}_n)$.

Definition 3

(Empirical distance covariance) The empirical distance covariance between samples $\mathbf{U}$ and $\mathbf{V}$ is the nonnegative random variable $\mathcal{V}_n(\mathbf{U}, \mathbf{V})$ defined by

$$\mathcal{V}_n^2(\mathbf{U}, \mathbf{V}) = S_1 + S_2 - 2 S_3,$$

where

$$S_1 = \frac{1}{n^2} \sum_{k,l=1}^{n} |\mathbf{u}_k - \mathbf{u}_l|_p |\mathbf{v}_k - \mathbf{v}_l|_q, \qquad S_2 = \frac{1}{n^2} \sum_{k,l=1}^{n} |\mathbf{u}_k - \mathbf{u}_l|_p \cdot \frac{1}{n^2} \sum_{k,l=1}^{n} |\mathbf{v}_k - \mathbf{v}_l|_q,$$

$$S_3 = \frac{1}{n^3} \sum_{k=1}^{n} \sum_{l=1}^{n} \sum_{m=1}^{n} |\mathbf{u}_k - \mathbf{u}_l|_p |\mathbf{v}_k - \mathbf{v}_m|_q.$$
With the above definitions, Lemma 1 depicts the consistency of $\mathcal{V}_n$ as an estimator of $\mathcal{V}$. Lemma 2 shows the asymptotic distribution of $n \mathcal{V}_n^2$ under the null hypothesis that $\mathbf{u}$ and $\mathbf{v}$ are independent. Corollary 1 reveals properties of the normalized test statistic proposed in Székely et al. (2007).

Lemma 1

(Theorem 2 in Székely et al. (2007)) Assume that $E|\mathbf{u}|_p < \infty$ and $E|\mathbf{v}|_q < \infty$; then, almost surely,

$$\lim_{n \to \infty} \mathcal{V}_n(\mathbf{U}, \mathbf{V}) = \mathcal{V}(\mathbf{u}, \mathbf{v}).$$

Lemma 2

(Theorem 5 in Székely et al. (2007)) Assume that $\mathbf{u}$ and $\mathbf{v}$ are independent and that $E(|\mathbf{u}|_p + |\mathbf{v}|_q) < \infty$; then, as $n \to \infty$,

$$n \, \mathcal{V}_n^2(\mathbf{U}, \mathbf{V}) \xrightarrow{\mathcal{D}} \|\zeta(t, s)\|_w^2,$$

where $\xrightarrow{\mathcal{D}}$ represents convergence in distribution and $\zeta(\cdot, \cdot)$ denotes a complex-valued centered Gaussian random process with covariance function

$$R(w, w_0) = \big( f_{\mathbf{u}}(t - t_0) - f_{\mathbf{u}}(t) \overline{f_{\mathbf{u}}(t_0)} \big) \big( f_{\mathbf{v}}(s - s_0) - f_{\mathbf{v}}(s) \overline{f_{\mathbf{v}}(s_0)} \big),$$

in which $w = (t, s)$ and $w_0 = (t_0, s_0)$.

Corollary 1

(Corollary 2 in Székely et al. (2007)) Assume that $E(|\mathbf{u}|_p + |\mathbf{v}|_q) < \infty$.

  1. If $\mathbf{u}$ and $\mathbf{v}$ are independent, then as $n \to \infty$, $n \mathcal{V}_n^2 / S_2 \xrightarrow{\mathcal{D}} Q$ with $Q \overset{\mathcal{D}}{=} \sum_{j=1}^{\infty} \lambda_j Z_j^2$, where the $Z_j$ are i.i.d. standard normal random variables and the $\lambda_j$ are non-negative constants depending on the distribution of $(\mathbf{u}, \mathbf{v})$; $E(Q) = 1$.

  2. If $\mathbf{u}$ and $\mathbf{v}$ are dependent, then as $n \to \infty$, $n \mathcal{V}_n^2 / S_2 \xrightarrow{P} \infty$.
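To make these quantities concrete, here is a minimal R sketch (ours, not from the paper) that computes $\mathcal{V}_n^2 = S_1 + S_2 - 2 S_3$ of Definition 3 together with the normalized statistic $n \mathcal{V}_n^2 / S_2$ of Corollary 1; the commented cross-check assumes the energy package is installed.

```r
# Empirical distance covariance V_n^2 of Definition 3, with rows of X and Y
# as observations; returns V_n^2 and the normalized statistic n * V_n^2 / S2.
dcov_stats <- function(X, Y) {
  A <- as.matrix(dist(X))   # pairwise Euclidean distances |u_k - u_l|_p
  B <- as.matrix(dist(Y))   # pairwise Euclidean distances |v_k - v_l|_q
  n <- nrow(A)
  S1 <- mean(A * B)
  S2 <- mean(A) * mean(B)
  S3 <- mean(A %*% B) / n   # (1/n^3) * sum_{k,l,m} A[k,l] * B[k,m]
  V2 <- S1 + S2 - 2 * S3
  list(V2 = V2, stat = n * V2 / S2)
}

set.seed(1)
X <- matrix(rnorm(100 * 2), 100, 2)
Y <- matrix(rnorm(100 * 3), 100, 3)
out <- dcov_stats(X, Y)
# Cross-check against the energy package (dcov returns V_n, not V_n^2):
# all.equal(out$V2, energy::dcov(X, Y)^2)
```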

2.2 Conditional independence test via projected distance covariance (P-DCov)

Here, we consider the case where $f$ and $g$ in (1) are linear, which leads to the following factor model setup:

$$\mathbf{X} = \mathbf{A} \mathbf{Z} + \boldsymbol{\varepsilon}, \qquad \mathbf{Y} = \mathbf{B} \mathbf{Z} + \boldsymbol{\delta}, \qquad (4)$$

where $\mathbf{A}$ and $\mathbf{B}$ are factor loading matrices of dimension $p \times K$ and $q \times K$, respectively, and $\mathbf{Z}$ is the $K$-dimensional vector of common factors. Here, the number of common factors $K$ could grow to infinity, and the matrices $\mathbf{A}$ and $\mathbf{B}$ are assumed to be sparse to reflect that $\mathbf{X}$ and $\mathbf{Y}$ depend only on a few important factors. As a result, we will impose regularization on the estimation of $\mathbf{A}$ and $\mathbf{B}$. We are now in a position to propose a test for problem (2). We first estimate the idiosyncratic components $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$, and then calculate the distance covariance between the estimates. More generally, we project $\mathbf{X}$ and $\mathbf{Y}$ onto the space orthogonal to the linear space spanned by $\mathbf{Z}$ and evaluate the dependence between the projected vectors. The conditional independence test is summarized in the following steps.

Step 1: Estimate the factor loading matrices $\mathbf{A}$ and $\mathbf{B}$ by the penalized least squares (PLS) estimators $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ defined as follows:

$$\hat{\mathbf{A}} = \arg\min_{\mathbf{A}} \Big\{ \frac{1}{2n} \sum_{i=1}^{n} \|\mathbf{X}_i - \mathbf{A} \mathbf{Z}_i\|^2 + \sum_{j=1}^{p} \sum_{k=1}^{K} p_{\lambda}(|A_{jk}|) \Big\}, \qquad (5)$$

$$\hat{\mathbf{B}} = \arg\min_{\mathbf{B}} \Big\{ \frac{1}{2n} \sum_{i=1}^{n} \|\mathbf{Y}_i - \mathbf{B} \mathbf{Z}_i\|^2 + \sum_{j=1}^{q} \sum_{k=1}^{K} p_{\lambda}(|B_{jk}|) \Big\}, \qquad (6)$$

where $A_{jk}$ and $B_{jk}$ denote the elements of $\mathbf{A}$ and $\mathbf{B}$, and $p_{\lambda}(\cdot)$ is a penalty function with penalty level $\lambda$.

Step 2: Estimate the error vectors $\boldsymbol{\varepsilon}_i$ and $\boldsymbol{\delta}_i$ by

$$\hat{\boldsymbol{\varepsilon}}_i = \mathbf{X}_i - \hat{\mathbf{A}} \mathbf{Z}_i, \qquad \hat{\boldsymbol{\delta}}_i = \mathbf{Y}_i - \hat{\mathbf{B}} \mathbf{Z}_i, \qquad i = 1, \ldots, n.$$

Step 3: Define the estimated error matrices $\hat{\mathbf{E}} = (\hat{\boldsymbol{\varepsilon}}_1, \ldots, \hat{\boldsymbol{\varepsilon}}_n)$ and $\hat{\mathbf{D}} = (\hat{\boldsymbol{\delta}}_1, \ldots, \hat{\boldsymbol{\delta}}_n)$. Calculate the empirical distance covariance $\mathcal{V}_n^2(\hat{\mathbf{E}}, \hat{\mathbf{D}})$ between $\hat{\mathbf{E}}$ and $\hat{\mathbf{D}}$ as in Definition 3.

Step 4: Define the P-DCov test statistic as $\mathcal{T}_n = n \, \mathcal{V}_n^2(\hat{\mathbf{E}}, \hat{\mathbf{D}}) / S_2(\hat{\mathbf{E}}, \hat{\mathbf{D}})$.

Step 5: With a predetermined significance level $\alpha$, we reject the null hypothesis when $\mathcal{T}_n \ge (\Phi^{-1}(1 - \alpha/2))^2$, where $\Phi(\cdot)$ is the standard normal distribution function.

Theoretical properties of the proposed conditional independence test will be studied in Section 3. In the above method, we implicitly assume that the number of variables is large, so that penalized least squares methods are used. When the number of variables is small, we can take $\lambda = 0$ so that no penalization is imposed.
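To fix ideas, the following R sketch implements Steps 1-5. It is our illustration rather than the authors' released code: it assumes the glmnet package as the penalized least squares solver, selects $\lambda$ by the default 10-fold cross-validation, and refits ordinary least squares on the selected factors, mirroring the implementation choices described in Section 4.

```r
library(glmnet)

# Row-wise projection: residuals of each column of M after a lasso regression
# on the factors Z, with an OLS refit on the selected factors (Steps 1-2).
project_out <- function(M, Z) {
  res <- matrix(0, nrow(M), ncol(M))
  for (j in seq_len(ncol(M))) {
    fit <- cv.glmnet(Z, M[, j])                       # 10-fold CV by default
    b <- as.numeric(coef(fit, s = "lambda.min"))[-1]  # drop the intercept
    sel <- which(b != 0)
    res[, j] <- if (length(sel) > 0) {
      residuals(lm(M[, j] ~ Z[, sel, drop = FALSE]))  # OLS refit on selection
    } else {
      M[, j] - mean(M[, j])                           # nothing selected: center
    }
  }
  res
}

# P-DCov test of H0: X independent of Y given Z (Steps 3-5).
pdcov_test <- function(X, Y, Z, alpha = 0.05) {
  E <- project_out(X, Z)
  D <- project_out(Y, Z)
  A <- as.matrix(dist(E)); B <- as.matrix(dist(D)); n <- nrow(A)
  V2 <- mean(A * B) + mean(A) * mean(B) - 2 * mean(A %*% B) / n
  Tn <- n * V2 / (mean(A) * mean(B))                  # Step 4: n * V_n^2 / S2
  list(stat = Tn, reject = Tn >= qnorm(1 - alpha / 2)^2)  # Step 5, cf. (11)
}
```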

2.3 Building graphs via conditional independence test

Now we explore a specific application of our conditional independence test to graphical models. To identify the conditional independence relationship in a graphical model, i.e., whether $X_i \perp X_j \mid \mathbf{X}_{-\{i,j\}}$, we assume

$$X_i = \boldsymbol{\beta}_i^T \mathbf{X}_{-\{i,j\}} + \varepsilon_i, \qquad X_j = \boldsymbol{\beta}_j^T \mathbf{X}_{-\{i,j\}} + \varepsilon_j, \qquad (7)$$

where $\mathbf{X}_{-\{i,j\}}$ represents all coordinates of $\mathbf{X}$ other than $X_i$ and $X_j$, and $\boldsymbol{\beta}_i$ and $\boldsymbol{\beta}_j$ are $(p-2)$-dimensional regression coefficients. Under model (7), we decide whether the edge $(i, j)$ will be drawn by directly testing whether $X_i$ and $X_j$ are independent given $\mathcal{S}_{ij}$, where $\mathcal{S}_{ij}$ is the linear space spanned by $\mathbf{X}_{-\{i,j\}}$.

More specifically, for each node pair $(i, j)$, we compute the test statistic using the same steps as in Section 2.2 for the current null hypothesis:

$$H_0^{(i,j)}: X_i \perp X_j \mid \mathbf{X}_{-\{i,j\}}. \qquad (8)$$

We then summarize the testing results by a graph in which nodes represent the variables in $\mathbf{X}$, and the edge between node $i$ and node $j$ is drawn only when $H_0^{(i,j)}$ is rejected at level $\alpha$.

In (7), the factors are created internally from the observations on the remaining nodes, $\mathbf{X}_{-\{i,j\}}$. In financial applications, it is often desirable to build graphs conditioning on external factors. In such cases, it is straightforward to replace the factors in (7) with external factors.

We will demonstrate the two different types of conditional dependency graphs via examples in Sections 4 and 5.
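For concreteness, the graph-building loop can be sketched in R as follows (our illustration; it reuses the hypothetical pdcov_test helper sketched in Section 2.2 and uses internal factors as in (7)).

```r
# Build the conditional dependency graph: test every node pair (i, j) with
# the remaining nodes serving as internal factors, as in model (7).
build_graph <- function(X, alpha = 0.05) {
  p <- ncol(X)
  adj <- matrix(FALSE, p, p)
  for (i in seq_len(p - 1)) {
    for (j in (i + 1):p) {
      out <- pdcov_test(X[, i, drop = FALSE],  # node i
                        X[, j, drop = FALSE],  # node j
                        X[, -c(i, j)])         # internal factors X_{-{i,j}}
      adj[i, j] <- adj[j, i] <- out$reject
    }
  }
  adj  # adjacency matrix of the estimated graph
}
```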

2.4 Graph estimation with FDR control

Through the graph-building process described in Section 2.3, we carry out $m = p(p-1)/2$ P-DCov tests simultaneously, and we wish to control the false discovery rate (FDR) at a pre-specified level $\gamma$. Let $V$ and $R$ be the number of falsely rejected hypotheses and the number of total rejections, respectively. The false discovery proportion (FDP) is defined as $V / \max(R, 1)$, and the FDR is the expectation of the FDP.

In the literature, various procedures have been proposed for conducting large-scale multiple hypothesis testing with FDR control. In this work, we follow the most commonly used Benjamini-Hochberg (BH) procedure, developed in the seminal work of Benjamini and Hochberg (1995), which compares the P-values of all marginal tests. More specifically, let $P_{(1)} \le P_{(2)} \le \cdots \le P_{(m)}$ be the ordered P-values of the $m$ hypotheses given in (8). Let $k = \max\{ j : P_{(j)} \le \gamma j / m \}$, and reject the $k$ hypotheses with the smallest P-values. We will demonstrate the performance of this strategy via real data examples.
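In R, the step-up rule can be coded directly or obtained from the built-in p.adjust function; the following is our illustrative sketch.

```r
# Benjamini-Hochberg step-up rule over the m = p(p-1)/2 pairwise P-values.
bh_reject <- function(pvals, gamma = 0.1) {
  m <- length(pvals)
  ord <- order(pvals)
  k <- max(c(0, which(pvals[ord] <= gamma * seq_len(m) / m)))
  rejected <- rep(FALSE, m)
  if (k > 0) rejected[ord[seq_len(k)]] <- TRUE
  rejected
}

# Equivalent one-liner using base R:
# rejected <- p.adjust(pvals, method = "BH") <= gamma
```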

2.5 Extension to functional projection

In the P-DCov procedure described in Section 2.2, we assume the conditional dependence of $\mathbf{X}$ and $\mathbf{Y}$ given the factors $\mathbf{Z}$ is expressed via a linear form of $\mathbf{Z}$. In other words, we project $\mathbf{X}$ and $\mathbf{Y}$ onto the space orthogonal to $\mathbf{Z}$ and evaluate the dependence between the projected vectors. Although this linear projection assumption makes the theoretical development easier and delivers the main idea of this work, a natural extension is to consider a nonlinear projection. In particular, we consider the following additive generalization (Stone, 1985) of the factor model setup:

$$\mathbf{X} = \sum_{k=1}^{K} f_k(Z_k) + \boldsymbol{\varepsilon}, \qquad \mathbf{Y} = \sum_{k=1}^{K} g_k(Z_k) + \boldsymbol{\delta}, \qquad (9)$$

where the $f_k$ and $g_k$, $k = 1, \ldots, K$, are unknown vector-valued functions we would like to estimate. In (9), we consider the additive space spanned by the factors $\mathbf{Z}$. With this extension, we can identify more general conditional dependency structures between $\mathbf{X}$ and $\mathbf{Y}$ given $\mathbf{Z}$. This is a special case of (1), but it avoids the curse of dimensionality.

In the high-dimensional setup where $K$ is large, we can use a penalized additive model (Ravikumar et al., 2009; Fan et al., 2011) to estimate the unknown functions. The conditional independence test described in Section 2.2 can be modified by replacing the linear regression with the (penalized) additive model regression. We will investigate the P-DCov method coupled with the sparse additive model (Ravikumar et al., 2009) in the numerical studies.
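For moderate $K$, the additive projection step can be sketched with smoothing splines. The helper below is our illustration using mgcv::gam as a stand-in for the sparse additive model fit (a SAM-style penalized estimator would replace it in the high-dimensional case); it returns the residuals that replace the linear ones in Step 2.

```r
library(mgcv)

# Additive projection: residuals of each column of M after fitting an
# additive model in the factors Z (one smooth term per factor).
additive_residuals <- function(M, Z) {
  colnames(Z) <- paste0("z", seq_len(ncol(Z)))
  dat <- as.data.frame(Z)
  rhs <- paste(sprintf("s(%s)", colnames(Z)), collapse = " + ")
  res <- matrix(0, nrow(M), ncol(M))
  for (j in seq_len(ncol(M))) {
    dat$y <- M[, j]
    fit <- gam(as.formula(paste("y ~", rhs)), data = dat)
    res[, j] <- residuals(fit)
  }
  res
}
```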

3 Theoretical Results

In this section, we derive the asymptotic properties of our conditional independence test. First, we introduce several assumptions on $\boldsymbol{\varepsilon}$, $\boldsymbol{\delta}$ and their estimates.

Condition 1

The error vectors have finite first moments: $E|\boldsymbol{\varepsilon}|_p < \infty$ and $E|\boldsymbol{\delta}|_q < \infty$.

Condition 2

We denote by $p_W(\cdot)$ the density function of a random variable $W$. We assume that the densities of $|\boldsymbol{\varepsilon}|_p$ and $|\boldsymbol{\delta}|_q$ are bounded on $[0, c]$ for some positive constant $c$. In other words, there exists a positive constant $C$ such that

$$\max\Big\{ \sup_{x \in [0, c]} p_{|\boldsymbol{\varepsilon}|_p}(x), \ \sup_{x \in [0, c]} p_{|\boldsymbol{\delta}|_q}(x) \Big\} \le C.$$

Conditions 1 and 2 impose mild moment and distributional assumptions on the random errors $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$. To better understand when the proposed method works, we give the following high-level assumptions, whose justifications are discussed below.

Condition 3

There exist a constant $c_0 > 0$ and sequences $a_n \to 0$ and $b_n \to 0$ such that, for any $n$, with probability greater than $1 - b_n$, we have, for any $i \in \{1, \ldots, n\}$,

$$\max\big\{ \|\hat{\boldsymbol{\varepsilon}}_i - \boldsymbol{\varepsilon}_i\|, \ \|\hat{\boldsymbol{\delta}}_i - \boldsymbol{\delta}_i\| \big\} \le c_0 a_n.$$

Condition 4

Let $\mathbf{a}_j^T$ denote the $j$-th row of $\mathbf{A}$, and similarly define $\mathbf{b}_j^T$, $\hat{\mathbf{a}}_j^T$ and $\hat{\mathbf{b}}_j^T$ for $\mathbf{B}$, $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$. We assume, for any fixed $j$,

$$\max\big\{ \|\hat{\mathbf{a}}_j - \mathbf{a}_j\|_1, \ \|\hat{\mathbf{b}}_j - \mathbf{b}_j\|_1 \big\} = O_P(a_n),$$

where the sequences $a_n$ and $b_n$ in Condition 3 satisfy suitable rate conditions so that the projection error is asymptotically negligible for the test statistic.

Remark 1

Conditions 3 and 4 are mild conditions imposed to ensure the quality of the projection and to guarantee the theoretical properties of our conditional independence test. For example, one can directly invoke results on penalized least squares for high-dimensional regression (Bühlmann and Van De Geer, 2011; Hastie et al., 2015) and robust estimation (Belloni et al., 2011; Wang, 2013; Fan et al., 2016b). We now discuss two special examples.

  1. ($K$ is fixed) In this fixed-dimensional case, it is straightforward to verify that the projection based on ordinary least squares satisfies the two conditions.

  2. (Sparse linear projection) Let $p$ and $q$ be fixed; note that the graphical model case corresponds to $p = q = 1$. We apply the popular $\ell_1$-regularized least squares (lasso) to each coordinate of $\mathbf{X}$ and $\mathbf{Y}$, regressing on the factors $\mathbf{Z}$. Here, we further assume the true regression coefficient vector $\mathbf{a}_j$ is sparse for each $j$, with sparsity level $s = \max_j \|\mathbf{a}_j\|_0$, and similarly for $\mathbf{b}_j$. From Theorem 11.1, Example 11.1 and Theorem 11.3 in Hastie et al. (2015), and since the $\mathbf{Z}_i$ are i.i.d., we have, with high probability,

    $$\max_j \|\hat{\mathbf{a}}_j - \mathbf{a}_j\|_1 = O\big( s \sqrt{\log K / n} \big) \quad \text{and} \quad \max_j \|\hat{\mathbf{b}}_j - \mathbf{b}_j\|_1 = O\big( s \sqrt{\log K / n} \big).$$

    Then, with high probability, for each $i$ and $j$,

    $$|\hat{\mathbf{a}}_j^T \mathbf{Z}_i - \mathbf{a}_j^T \mathbf{Z}_i| \le \|\hat{\mathbf{a}}_j - \mathbf{a}_j\|_1 \, \|\mathbf{Z}_i\|_{\infty}, \qquad (10)$$

    and similarly for $\hat{\mathbf{b}}_j$. It is now easy to verify that Conditions 3 and 4 are satisfied even in the ultra-high-dimensional case where $K$ grows very fast with $n$. We omit the details about the specification of the various constants for brevity.

Theorem 1

Under Conditions 1 and 3,

$$\mathcal{V}_n(\hat{\mathbf{E}}, \hat{\mathbf{D}}) \xrightarrow{P} \mathcal{V}(\boldsymbol{\varepsilon}, \boldsymbol{\delta}).$$

In particular, when $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$ are independent, $\mathcal{V}_n(\hat{\mathbf{E}}, \hat{\mathbf{D}}) \xrightarrow{P} 0$.

Theorem 1 shows that the sample distance covariance between the estimated residual vectors converges to the distance covariance between the population error vectors. It enables us to use the distance covariance of the estimated residual vectors to construct the conditional independence test described in Section 2.2.

Theorem 2

Under Conditions 1-4 and the null hypothesis that $\mathbf{X} \perp \mathbf{Y} \mid \mathbf{Z}$ (or, equivalently, $\boldsymbol{\varepsilon} \perp \boldsymbol{\delta}$),

$$n \, \mathcal{V}_n^2(\hat{\mathbf{E}}, \hat{\mathbf{D}}) \xrightarrow{\mathcal{D}} \|\zeta(t, s)\|_w^2,$$

where $\zeta$ is a zero-mean Gaussian process defined analogously to that in Lemma 2, with $(\boldsymbol{\varepsilon}, \boldsymbol{\delta})$ in place of $(\mathbf{u}, \mathbf{v})$.

Theorem 2 provides the asymptotic distribution of the test statistic under the null hypothesis, which is the basis of Theorem 3.

Corollary 2

Under the same conditions as Theorem 2,

$$\mathcal{T}_n = \frac{n \, \mathcal{V}_n^2(\hat{\mathbf{E}}, \hat{\mathbf{D}})}{S_2(\hat{\mathbf{E}}, \hat{\mathbf{D}})} \xrightarrow{\mathcal{D}} Q \overset{\mathcal{D}}{=} \sum_{j=1}^{\infty} \lambda_j Z_j^2,$$

where the $Z_j$ are i.i.d. standard normal random variables and the $\lambda_j$ are non-negative constants depending on the distribution of $(\boldsymbol{\varepsilon}, \boldsymbol{\delta})$; $E(Q) = 1$.

Theorem 3

Consider the test that rejects conditional independence when

$$\mathcal{T}_n \ge \big( \Phi^{-1}(1 - \alpha/2) \big)^2, \qquad (11)$$

where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. Let $\alpha_n(\boldsymbol{\varepsilon}, \boldsymbol{\delta})$ denote its associated type I error. Then, under Conditions 1-4, for all $0 < \alpha \le 0.215$,

  1. $\lim_{n \to \infty} \alpha_n(\boldsymbol{\varepsilon}, \boldsymbol{\delta}) \le \alpha$;

  2. $\sup_{\{(\boldsymbol{\varepsilon}, \boldsymbol{\delta}) \, : \, \mathcal{V}(\boldsymbol{\varepsilon}, \boldsymbol{\delta}) = 0\}} \lim_{n \to \infty} \alpha_n(\boldsymbol{\varepsilon}, \boldsymbol{\delta}) = \alpha$.

Part (i) of Theorem 3 indicates that the proposed test with critical region (11) has an asymptotic significance level of at most $\alpha$. Part (ii) implies that there exists a pair $(\boldsymbol{\varepsilon}, \boldsymbol{\delta})$ for which the pre-specified significance level $\alpha$ is achieved asymptotically. In other words, the asymptotic size of the test is $\alpha$.

Remark 2

When the sample size is small, the theoretical critical value in (11) can be too conservative in practice (Székely et al., 2007). Therefore, we recommend using random permutations to obtain a reference distribution for the test statistic under $H_0$. Random permutation decouples $\hat{\mathbf{E}}$ and $\hat{\mathbf{D}}$, so that the resulting pairs $(\hat{\boldsymbol{\varepsilon}}_i, \hat{\boldsymbol{\delta}}_{\pi(i)})$ follow the null model, where $(\pi(1), \ldots, \pi(n))$ is a random permutation of the indices $(1, \ldots, n)$. Here, we set the number of permutations as in Székely et al. (2007). Consequently, we can also estimate the P-value associated with the conditional independence test from the quantiles of the test statistics over the random permutations.
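In R, the permutation calibration of Remark 2 can be sketched as follows (our illustration; the replicate count $R = \lfloor 200 + 5000/n \rfloor$ is taken here as an assumption matching the recommendation attributed above to Székely et al. (2007)).

```r
# Permutation reference distribution for the P-DCov statistic (Remark 2).
# stat_fn(E, D) should return the normalized statistic n * V_n^2 / S2.
perm_pvalue <- function(E, D, stat_fn, R = floor(200 + 5000 / nrow(E))) {
  t_obs <- stat_fn(E, D)
  # permute the rows of D to decouple the residual pairs under H0
  t_perm <- replicate(R, stat_fn(E, D[sample(nrow(D)), , drop = FALSE]))
  # one-sided P-value with the usual +1 correction for the observed statistic
  (1 + sum(t_perm >= t_obs)) / (1 + R)
}
```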

4 Monte Carlo Experiments

In this section, we investigate the performance of P-DCov in five simulation examples. In Example 4.1, we consider a factor model and test the conditional independence between two vectors $\mathbf{X}$ and $\mathbf{Y}$ given their common factors $\mathbf{Z}$ via P-DCov. In Example 4.2, we investigate the classical Gaussian graphical model. In Example 4.3, we consider a general graphical model without the Gaussian assumption. In Examples 4.4 and 4.5, we consider a factor-based dependency graph and a general graphical model with external factors, respectively.

Example 4.1

[High-dimensional factor model] Let $n$ denote the sample size, $p$ and $q$ the dimensions of $\mathbf{X}$ and $\mathbf{Y}$, and $K$ the number of factors. Assume the rows of $\mathbf{A}$ and the rows of $\mathbf{B}$ are i.i.d. sparse vectors whose first three coordinates are drawn independently from a common distribution and whose remaining coordinates are zero. The factors $\mathbf{Z}_i$ are i.i.d. Gaussian. We generate $n$ copies of the combined error vector $(\boldsymbol{\varepsilon}^T, \boldsymbol{\delta}^T)^T$ from a log-normal distribution whose underlying correlation matrix $\boldsymbol{\Sigma}$ is an equal correlation matrix of size $p + q$ with $\Sigma_{jk} = \rho$ when $j \neq k$ and $\Sigma_{jj} = 1$; $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$ are the centered versions of the first $p$ coordinates and the last $q$ coordinates, respectively. Then, $\mathbf{X}_i$ and $\mathbf{Y}_i$ are generated according to model (4).
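One replication of this design can be sketched in R as follows. This is our illustration: the constants (n = 100, p = q = 5, K = 50, rho = 0.5, the U(1, 2) loading values, and the unit-scale log-normal errors) are assumed placeholders, since the exact values used in the paper are not reproduced here.

```r
set.seed(1)
n <- 100; p <- 5; q <- 5; K <- 50; rho <- 0.5    # assumed placeholder values

# Sparse loadings: only the first three factors matter.
A <- cbind(matrix(runif(p * 3, 1, 2), p, 3), matrix(0, p, K - 3))
B <- cbind(matrix(runif(q * 3, 1, 2), q, 3), matrix(0, q, K - 3))
Z <- matrix(rnorm(n * K), n, K)

# Equal-correlation log-normal errors (rho = 0 gives the null model).
Sigma <- matrix(rho, p + q, p + q); diag(Sigma) <- 1
G <- matrix(rnorm(n * (p + q)), n, p + q) %*% chol(Sigma)  # rows ~ N(0, Sigma)
E <- scale(exp(G), center = TRUE, scale = FALSE)           # centered log-normal

X <- Z %*% t(A) + E[, 1:p]
Y <- Z %*% t(B) + E[, (p + 1):(p + q)]
```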

In Example 4.1, we consider a high-dimensional factor model with a sparsity structure. Note that the errors are generated from a heavy-tailed distribution to demonstrate that the proposed test works beyond Gaussian errors. We assume each coordinate of $\mathbf{X}$ and $\mathbf{Y}$ depends only on the first three factors. We calculate the P-DCov test statistic $\mathcal{T}_n$, together with an oracle version in which $\hat{\boldsymbol{\varepsilon}}$ and $\hat{\boldsymbol{\delta}}$ are replaced by the true $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$, as a benchmark. To obtain reference distributions for the two statistics, we follow the permutation procedure described in Section 3. In this example, we set the significance level to 0.1. We vary the sample size from 100 to 200 with increments of 10 and show the empirical power, based on 1000 repetitions, for both statistics in Figure 1. In the implementation of the penalized least squares in Step 1, we use an R package with its default tuning parameter selection method (10-fold cross-validation) and then perform least squares on the selected variables to reduce the estimation bias.

From Figure 1, it is clear that, as the sample size or $\rho$ increases, the empirical power also increases in general. Also, comparing panels (A) and (B) of Figure 1, we see that when the sample size is small, the P-DCov test has lower power than the oracle test; however, the difference between them becomes negligible as the sample size increases. This is consistent with our theory regarding the asymptotic distribution of the test statistics. When $\rho = 0$, Table 1 reports the empirical type I errors of both P-DCov and the oracle version. It is clear that the type I error of P-DCov is under good control as the sample size increases.

Test based on $\hat{\boldsymbol{\varepsilon}}$ and $\hat{\boldsymbol{\delta}}$ (P-DCov)
n            100   110   120   130   140   150   160   170   180   190   200
type I error 0.129 0.112 0.117 0.095 0.104 0.108 0.098 0.113 0.102 0.117 0.111

Test based on $\boldsymbol{\varepsilon}$ and $\boldsymbol{\delta}$ (oracle)
n            100   110   120   130   140   150   160   170   180   190   200
type I error 0.089 0.092 0.078 0.112 0.104 0.090 0.099 0.113 0.104 0.106 0.096

Table 1: Type I error in Example 4.1 ($\rho = 0$)
Figure 1: Empirical power versus sample size in Example 4.1 (panels (A) and (B))
Example 4.2

[Gaussian graphical model] We consider a standard Gaussian graphical model with precision matrix $\boldsymbol{\Omega} = \boldsymbol{\Sigma}^{-1}$, where $\boldsymbol{\Omega}$ is a tridiagonal matrix of size $p$ and is associated with an autoregressive process of order one. In particular, the $(j, k)$-element of the covariance matrix $\boldsymbol{\Sigma}$ is $\rho^{|j-k|}$ for some $0 < \rho < 1$. In addition, $\mathbf{X} \sim N(\mathbf{0}, \boldsymbol{\Sigma})$.

In this example, we compare the proposed P-DCov with state-of-the-art approaches for recovering Gaussian graphical models. In terms of recovering the structure of $\boldsymbol{\Omega}$, we compare lasso.dcov (projection by LASSO followed by distance covariance), sam.dcov (projection by the sparse additive model followed by distance covariance), lasso.pearson (projection by LASSO followed by Pearson correlation) and sam.pearson (projection by the sparse additive model followed by Pearson correlation) with three popular estimators corresponding to the LASSO, adaptive LASSO and SCAD penalized likelihoods (labeled graphical.lasso, graphical.alasso and graphical.scad in the figures) for the precision matrix (Friedman et al., 2008; Fan et al., 2009). Here, lasso.dcov and sam.dcov are two instances of our P-DCov methods. We use an R package to fit the sparse additive model. To evaluate the performance, we construct receiver operating characteristic (ROC) curves for each method for two different sample sizes. The construction of the ROC curves involves conducting the P-DCov test for each pair of nodes and recording the corresponding P-values. In each ROC curve, true positive rates (TPR) are plotted against false positive rates (FPR) at various thresholds of those P-values ("TP" means the true entry of the precision matrix is nonzero and is estimated as nonzero; "FP" means the true entry of the precision matrix is zero but is estimated as nonzero). We follow the implementation in Fan et al. (2009) for the three penalized likelihood estimators. The average results over 100 replications for the different methods are reported in Figure 2. The associated AUC (area under the curve) for each method is also displayed in the legend of the figure.

Figure 2: ROC curves for Gaussian graphical models (panels (A) and (B) correspond to the two sample sizes)

We observe that lasso.pearson and sam.pearson perform similarly to the penalized likelihood methods, especially for the larger sample size. On the other hand, lasso.dcov and sam.dcov lead to slightly smaller AUC values due to the use of the distance covariance, which is expected for the Gaussian model. This shows that we do not pay a big price for using the more complicated distance covariance and sparse additive model.

Example 4.3

[A general graphical model] We consider a general graphical model combining a multivariate $t$ distribution and a multivariate Gaussian distribution. The dimension of $\mathbf{X}$ is 30. In detail, $\mathbf{X} = (\mathbf{X}_{(1)}^T, \mathbf{X}_{(2)}^T)^T$, where $\mathbf{X}_{(1)}$ follows a 20-dimensional multivariate $t$ distribution with 5 degrees of freedom, location parameter $\mathbf{0}$ and identity scale matrix, and $\mathbf{X}_{(2)}$ follows the same Gaussian graphical model as in Example 4.2, except that the dimension is now 10. In addition, $\mathbf{X}_{(1)}$ and $\mathbf{X}_{(2)}$ are independent.

To generate the multivariate $t$ distribution, we first generate a random vector $\mathbf{w}$ from the standard multivariate Gaussian distribution and an independent random variable $u \sim \chi_5^2$, and then set $\mathbf{X}_{(1)} = \mathbf{w} / \sqrt{u/5}$. One important fact about the multivariate $t$ distribution is that a zero element in the precision matrix does not imply conditional independence, unlike the case of Gaussian graphical models (Finegold and Drton, 2009). Indeed, for $\mathbf{X}_{(1)}$, the components $X_i$ and $X_j$ are dependent given the remaining components for any pair $(i, j)$. Consequently, Gaussian likelihood based methods will falsely claim that all the components of $\mathbf{X}_{(1)}$ are independent.
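The $t$-distributed block is easy to reproduce; a short R sketch (our illustration) follows.

```r
# Draw m samples from a d-dimensional multivariate t with nu degrees of
# freedom, location 0 and identity scale: w / sqrt(u / nu), u ~ chi^2_nu.
rmvt_simple <- function(m, d, nu = 5) {
  w <- matrix(rnorm(m * d), m, d)  # rows are standard Gaussian vectors
  u <- rchisq(m, df = nu)          # one chi-square divisor per row
  # u is recycled down each column, so row i is scaled by sqrt(u[i] / nu)
  w / sqrt(u / nu)
}

X1 <- rmvt_simple(200, 20, nu = 5)  # the t-distributed block of Example 4.3
```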

The average ROC curves are rendered in Figure 3. As expected, by using the new projection based distance covariance method for testing conditional independence, lasso.dcov outperforms all the other methods in terms of AUC, with a more evident advantage for the larger sample size. One interesting observation is that, in the region where the FPR is very low, the likelihood based methods actually outperform the P-DCov methods. One possible reason is that the likelihood based methods are more capable of capturing the conditional dependency structure within $\mathbf{X}_{(2)}$, as it follows a Gaussian graphical model.

Figure 3: ROC curves for a general graphical model (panels (A) and (B))
Example 4.4

[Factor-based dependency graph] We consider a dependency graph with the contribution of external factors. In particular, we generate the error vector $\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \boldsymbol{\Omega}^{-1})$, where $\boldsymbol{\Omega}$ is the same tridiagonal matrix used in Example 4.2, together with factors $\mathbf{Z}$; the observation is then $\mathbf{X} = \boldsymbol{\Gamma} f(\mathbf{Z}) + \boldsymbol{\varepsilon}$, where $\boldsymbol{\Gamma}$ is a sparse coefficient matrix that dictates how each dimension of $\mathbf{X}$ depends on the factors $\mathbf{Z}$. The generation of $\boldsymbol{\Gamma}$ follows the setting in Cai et al. (2013). For each element $\Gamma_{jk}$, we first generate a Bernoulli random variable with success probability 0.2 to determine whether $\Gamma_{jk}$ is 0 or not. If $\Gamma_{jk}$ is not 0, we then generate its value as in Cai et al. (2013). Here we consider two forms of $f$: a linear one and a nonlinear one.

We report the average ROC curves for lasso.pearson, lasso.dcov, sam.pearson and sam.dcov. The results for both forms of $f$ are depicted in Figure 4. Note that we are not building a conditional dependency graph among the components of $\mathbf{X}$, but a dependency graph of $\mathbf{X}$ conditioning on the external factors $\mathbf{Z}$. There are some insightful observations from the figure. First of all, in the linear case, it is clear that lasso.pearson is the best, as it takes advantage of the sparse linear structure paired with the Gaussian distribution of the residuals. It is reassuring that, by using the distance covariance as a dependence measure or the sparse additive model as a projection method, we do not lose much efficiency. Second, in the nonlinear case, we see a substantial advantage of the sparse additive model based methods, as they can capture the nonlinear contribution of the factors to the dependency structure of $\mathbf{X}$.

Figure 4: ROC curves for the factor-based dependency graph (panels (A)-(D): combinations of the two forms of $f$ and the two sample sizes)
Example 4.5

[A general graphical model with external factors] We consider a general conditional dependency graph with the contribution of external factors by combining the ingredients of Examples 4.3 and 4.4. In particular, we generate the error vector $\boldsymbol{\varepsilon}$ according to the distribution of $\mathbf{X}$ in Example 4.3 together with factors $\mathbf{Z}$, and then set $\mathbf{X} = \boldsymbol{\Gamma} f(\mathbf{Z}) + \boldsymbol{\varepsilon}$, where $\boldsymbol{\Gamma}$ is the same as in Example 4.4. We again consider both the linear and the nonlinear forms of $f$.

In this example, we investigate the performance of a two-step projection method. In particular, we first project $\mathbf{X}$ onto the space spanned by the external factors $\mathbf{Z}$ and denote the residual vector by $\hat{\mathbf{E}}$. Then we explore the conditional dependency structure of each pair of components of $\hat{\mathbf{E}}$ by projecting them onto the space orthogonal to the space (linearly or additively) spanned by the remaining components. Here, we compare the performance of methods using the external factors with that of methods ignoring them. The average ROC curves are rendered in Figure 5.

From the figure, we first see that, in the linear case, the methods using external factors outperform their counterparts that ignore this information, with the best method being lasso.dcov (lasso regression based projection coupled with distance covariance). Second, when the factors contribute nonlinearly, using the factors does not necessarily help if we only consider linear projection; for example, the performance of lasso.pearson and lasso.pearson.f in panel (C) illustrates this point. On the other hand, by using the sparse additive model based projection, we achieve a substantial gain over all the remaining methods, especially in the nonlinear case.

Figure 5: ROC curves for a general graphical model with external factors (panels (A)-(D))

5 Real Data Analysis

5.1 Financial Data

In the first empirical example, we consider the Fama-French three-factor model (Fama and French, 1993). We collect daily excess returns of 90 stocks among the S&P 100 index that are available between August 19, 2004 and August 19, 2005. We chose the starting date to be Google's initial public offering date and consider one year of daily excess returns from then on. In particular, we consider the following three-factor model:

$$r_{it} - r_{ft} = \alpha_i + \beta_{i,\mathrm{MKT}} \, \mathrm{MKT}_t + \beta_{i,\mathrm{SMB}} \, \mathrm{SMB}_t + \beta_{i,\mathrm{HML}} \, \mathrm{HML}_t + \varepsilon_{it},$$

for $i = 1, \ldots, 90$ and $t = 1, \ldots, T$. At time $t$, $r_{it}$ represents the return of stock $i$, $r_{ft}$ is the risk-free return rate, and $\mathrm{MKT}_t$, $\mathrm{SMB}_t$ and $\mathrm{HML}_t$ constitute the market, size and value factors, respectively.

We perform the P-DCov test with FDR control on all pairs of stocks and study the dependence between stocks conditional on the Fama-French three factors. At the given significance level, we find that 15.46% of the pairs of stocks are conditionally dependent given the three factors, which implies that the three factors may not be sufficient to explain the dependence among stocks. As a comparison, we also implemented the conditional independence test with the distance covariance based test replaced by a Pearson correlation based test. It turns out that 9.34% of the pairs are significant at the same significance level. This suggests that the P-DCov test is more powerful in discovering significantly conditionally dependent pairs than the Pearson correlation test.

We then investigate the top 5 pairs of stocks corresponding to the largest test statistic values of the P-DCov test. They are (BHI, SLB), (CVX, XOM), (HAL, SLB), (COP, CVX), and (BHI, HAL). Interestingly, all six stocks involved are closely related to the oil industry. This reveals a high level of dependence among oil industry stocks that cannot be well explained by the Fama-French three-factor model. In addition, we examine the stock pairs that are conditionally dependent under the P-DCov test but not under the Pearson correlation test. The two most significant pairs are (C, USB) and (MRK, PFE). The first pair is in the financial industry (Citigroup and U.S. Bancorp) and the second pair are pharmaceutical companies (Merck & Co. and Pfizer). This shows that the proposed P-DCov can recover interesting conditional dependence structures. It is also consistent with the finding that sector correlations are still present even after adjusting for the Fama-French factors and 10 industrial factors (Fan et al., 2016a).

5.2 Breast Cancer Data

In this section, we explore the difference in genetic networks between breast cancer patients who achieve pathologic complete response (pCR) and patients who do not. Achieving pCR is defined as having no invasive and no in situ residuals left in the breast in the surgical specimen. As studied in Kuerer et al. (1999) and Kaufmann et al. (2006), pCR has predicted long-term outcomes in several neoadjuvant studies and hence serves as a potential surrogate marker for survival. In this study, we use the normalized gene expression data of 130 patients with stage I-III breast cancer analyzed by Hess et al. (2006). Among the 130 patients, 33 achieved pCR (class 1), while the other 97 patients did not (class 2). To construct the conditional dependence network for each class, we first perform a two-sample $t$-test between the two groups and select the 100 genes with the smallest P-values. Afterwards, we construct networks over these 100 selected genes for each class using P-DCov with FDR control. Notice that, in this case, $p = 100$, and the corresponding sample sizes in the two groups are $n_1 = 33$ and $n_2 = 97$, respectively.

In a network, the degree of a particular node is the number of edges connected to it, and the average degree serves as a measure of the connectivity of the graph. In Figure 6, we summarize the degree distributions of the genetic networks of class 1 and class 2, respectively. We see that the average degree of the genetic network for class 1 is 9.7, which is much smaller than that for class 2, which is 44.88. To examine the networks more closely, we select 7 genes among the 100 and draw the corresponding networks in Figure 7. We see that for class 1, where pCR is achieved, gene MCCC2 is a hub connected with three other genes. However, in the network for class 2, gene MCCC2 is disconnected from the other six genes. On the other hand, gene MAPT is isolated in the network for class 1, but is connected with two other genes in class 2.

These findings imply that the two classes may have very different conditional dependence structures, and hence are likely to have different precision matrices. As a result, when classification is the target, linear discriminant analysis may be too simple to capture the actual decision boundary.

Figure 6: Degree distributions of the genetic networks (panel (A): class 1; panel (B): class 2)

Figure 7: Genetic networks for the two classes based on 7 selected genes (panel (A): class 1; panel (B): class 2)

6 Discussion

In this work, we proposed a general framework for testing conditional independence via projection and showed a new way to create dependency graphs. The current theoretical results assume that the contribution of the factors is sparse and linear. Extending the theory to the case of sparse additive model projection would be an interesting direction for future work. Another interesting direction is to use the proposed test to create dependency graphs among groups of nodes, which could have applications in genetics.

An R package for implementing the proposed methodology is available on CRAN.

Appendix A Proofs

Lemma 3

Under Condition 3, we have, with probability greater than $1 - b_n$, $\max_{k,l} \big| \|\hat{\boldsymbol{\varepsilon}}_k - \hat{\boldsymbol{\varepsilon}}_l\| - \|\boldsymbol{\varepsilon}_k - \boldsymbol{\varepsilon}_l\| \big| \le 2 c_0 a_n$ and $\max_{k,l} \big| \|\hat{\boldsymbol{\delta}}_k - \hat{\boldsymbol{\delta}}_l\| - \|\boldsymbol{\delta}_k - \boldsymbol{\delta}_l\| \big| \le 2 c_0 a_n$.

From Condition 3, it is obvious that $\max_i \|\hat{\boldsymbol{\varepsilon}}_i - \boldsymbol{\varepsilon}_i\| \le c_0 a_n$ with probability greater than $1 - b_n$. Let $\hat{d}_{kl} = \|\hat{\boldsymbol{\varepsilon}}_k - \hat{\boldsymbol{\varepsilon}}_l\|$ and $d_{kl} = \|\boldsymbol{\varepsilon}_k - \boldsymbol{\varepsilon}_l\|$. Then, by the triangle inequality, we have

$$|\hat{d}_{kl} - d_{kl}| \le \|\hat{\boldsymbol{\varepsilon}}_k - \boldsymbol{\varepsilon}_k\| + \|\hat{\boldsymbol{\varepsilon}}_l - \boldsymbol{\varepsilon}_l\| \le 2 c_0 a_n,$$

and the same argument applies to $\hat{\boldsymbol{\delta}}$. As a result, the lemma is proved.

For the remaining proofs, we apply Taylor expansion to