Sparse Ising Models with Covariates

09/27/2012
by Jie Cheng, et al.

There has been a lot of work fitting Ising models to multivariate binary data in order to understand the conditional dependency relationships between the variables. However, additional covariates are frequently recorded together with the binary data, and may influence the dependence relationships. Motivated by such a dataset on genomic instability collected from tumor samples of several types, we propose a sparse covariate dependent Ising model to study both the conditional dependency within the binary data and its relationship with the additional covariates. This results in subject-specific Ising models, where the subject's covariates influence the strength of association between the genes. As in all exploratory data analysis, interpretability of results is important, and we use L1 penalties to induce sparsity in the fitted graphs and in the number of selected covariates. Two algorithms to fit the model are proposed and compared on a set of simulated data, and asymptotic results are established. The results on the tumor dataset and their biological significance are discussed in detail.


1 Introduction

Markov networks have been applied in a wide range of scientific and engineering problems to infer the local conditional dependency structure among variables. Examples include gene association studies (Peng et al., 2009; Wang et al., 2011), image processing (Hassner & Sklansky, 1980; Woods, 1978), and natural language processing (Manning & Schutze, 1999). A pairwise Markov network can be represented by an undirected graph $G = (V, E)$, where $V$ is the node set representing the collection of random variables, and $E$ is the edge set, in which the presence of an edge is equivalent to conditional dependence between the corresponding pair of variables, given the rest of the variables.

Previous studies have focused on the case where an i.i.d. sample is drawn from an underlying Markov network, and the goal is to recover the graph structure, i.e., the edge set $E$, from the data. Two types of graphical models have been studied extensively: the multivariate Gaussian model for continuous data, and the Ising model (Ising, 1925) for binary data. In the multivariate Gaussian case, the graph structure is completely specified by the off-diagonal elements of the inverse covariance matrix, also known as the precision matrix. Therefore, estimating the edge set is equivalent to identifying the non-zero off-diagonal entries of the precision matrix. Many papers on estimating the inverse covariance matrix have appeared in recent years, with a focus on the high-dimensional framework, for example, Meinshausen & Bühlmann (2006); Yuan & Lin (2007); Rothman et al. (2008); d'Aspremont et al. (2008); Rocha et al. (2008); Ravikumar et al. (2008); Lam & Fan (2009); Peng et al. (2009); Yuan (2010); Cai et al. (2011b). Most of these papers focus on penalized likelihood methods, and many establish asymptotic properties such as consistency and sparsistency. Many have also proposed fast computational algorithms, the most popular of which is perhaps glasso by Friedman et al. (2008), which was recently improved further by Witten et al. (2011) and Mazumder & Hastie (2012).

In the Ising model, the network structure can be identified from the coefficients of the interaction terms in the probability mass function. The problem is, however, considerably more difficult due to the intractable normalizing constant, which makes the penalized likelihood methods popular for the Gaussian case extremely computationally demanding.

Ravikumar et al. (2010) proposed an approach in the spirit of Meinshausen & Bühlmann (2006)'s work for the Gaussian case, fitting a separate $\ell_1$-penalized logistic regression for each node to infer the graph structure. A pseudo-likelihood based algorithm was developed by Höfling & Tibshirani (2009) and analyzed by Guo et al. (2010c).

The existing literature mostly assumes that the data are an i.i.d. sample from one underlying graphical model, although the case of data sampled from several related graphical models on the same nodes has been studied in both the Gaussian and binary cases (Guo et al., 2010b, a). However, in many real-life situations, the structure of the network may further depend on extraneous factors available to us in the form of explanatory variables or covariates, which results in subject-specific graphical models. For example, in genetic studies, deletion of tumor suppressor genes plays a crucial role in tumor initiation and development. Since genes function through complicated regulatory relationships, it is of interest to characterize the associations among various deletion events in tumor samples. In practice, however, we observe not only the deletion events, but also various clinical phenotypes for each subject, such as tumor category, mutation status, and so on. These additional factors may influence the regulatory relationships and thus should be included in the model. Motivated by situations like this, here we propose a model for the conditional distribution of binary network data given covariates, which naturally incorporates covariate information into the Ising model by allowing the strength of each connection to depend on the covariates. With high-dimensional data in mind, we impose sparsity on the model, both in the network structure and in the covariate effects. This allows us to select the important covariates that influence the network structure.

There have been a few recent papers on graphical models that incorporate covariates, but they do so in ways quite different from ours. Yin & Li (2011) and Cai et al. (2011a) proposed to use conditional Gaussian graphical models to fit eQTL (gene expression quantitative trait loci) data, but only the mean is modeled as a function of covariates, and the network remains fixed across different subjects. Liu et al. (2010) proposed a graph-valued regression, which partitions the covariate space and fits separate Gaussian graphical models for each region using glasso. This model does result in different networks for different subjects, but lacks interpretation of the relationship between the covariates and the graphical model. Further, there is a concern about stability, since the graphical models built for nearby regions of the covariate space are not necessarily similar. In our model, covariates are incorporated directly into the conditional Ising model, which leads to straightforward interpretation and "continuity" of the graphs as a function of the covariates, since in our model it is the strength of the edges rather than the edges themselves that changes from subject to subject.

The rest of the paper is organized as follows. In Section 2, we describe the conditional Ising model with covariates, and two estimation procedures for fitting it. Section 3 establishes asymptotic properties of the proposed estimation method. We evaluate the performance of our method on simulated data in Section 4, and apply it to a dataset on genomic instability in breast cancer samples in Section 5. Section 6 concludes with a summary and discussion.

2 Conditional Ising model with covariates

2.1 Model set-up

We start with a brief review of the Ising model, originally proposed in statistical physics by Ising (1925). Let $Y = (Y_1, \dots, Y_q)$ denote a binary random vector, with each $Y_j$ taking values in $\{0, 1\}$. The Ising model specifies the probability mass function $P_\theta(y)$ as

$$P_\theta(y) = \frac{1}{Z(\theta)} \exp\Big( \sum_{j=1}^{q} \theta_{jj} y_j + \sum_{1 \le j < j' \le q} \theta_{jj'} y_j y_{j'} \Big),$$

where $\theta = \{\theta_{jj'} : j \le j'\}$ is a $q(q+1)/2$-dimensional parameter vector and $Z(\theta)$ is the partition function ensuring that the probabilities sum up to 1. Note that from now on we assume $\theta_{j'j}$ equals $\theta_{jj'}$ unless otherwise specified. The Markov property is related to the parameter $\theta$ via the conditional log-odds

$$\log \frac{P(y_j = 1 \mid y_{-j})}{P(y_j = 0 \mid y_{-j})} = \theta_{jj} + \sum_{j' \ne j} \theta_{jj'} y_{j'}, \qquad (1)$$

i.e., $Y_j$ and $Y_{j'}$ are independent given all the other $Y_k$'s if and only if $\theta_{jj'} = 0$.
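To make the model concrete, here is a minimal sketch (not from the paper) that evaluates the Ising probability mass function by brute force for a small $q$ and numerically checks the conditional log-odds identity (1); the matrix Theta and all names are illustrative assumptions.

```python
# Brute-force Ising PMF for small q: illustrates the partition function Z(theta)
# and the conditional log-odds identity (1). Illustrative sketch only.
import itertools
import numpy as np

q = 4
rng = np.random.default_rng(0)
Theta = np.triu(rng.normal(scale=0.5, size=(q, q)))  # theta_jj on diagonal, theta_jj' above

def potential(y, Theta):
    """Unnormalized log-probability: sum_j theta_jj y_j + sum_{j<j'} theta_jj' y_j y_j'."""
    return y @ np.diag(Theta) + y @ np.triu(Theta, 1) @ y

states = np.array(list(itertools.product([0, 1], repeat=q)))
log_pot = np.array([potential(y, Theta) for y in states])
Z = np.exp(log_pot).sum()          # partition function: a sum over all 2^q configurations
pmf = np.exp(log_pot) / Z
print(pmf.sum())                   # 1.0 by construction

# Check identity (1): conditional log-odds of y_j given y_{-j} is linear (Z cancels).
j, y_rest = 0, np.array([1, 0, 1])
y1, y0 = np.insert(y_rest, j, 1), np.insert(y_rest, j, 0)
lhs = potential(y1, Theta) - potential(y0, Theta)
rhs = Theta[j, j] + Theta[j, 1:] @ y_rest
assert np.isclose(lhs, rhs)
```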

Now suppose we have additional covariate information, and the data are an i.i.d. sample of $n$ points $(x_i, y_i)$, $i = 1, \dots, n$, with $x_i \in \mathbb{R}^p$ and $y_i \in \{0, 1\}^q$. We assume that given covariates $x$, the binary response $y$ follows the Ising distribution given by

$$P_\theta(y \mid x) = \frac{1}{Z(\theta(x))} \exp\Big( \sum_{j=1}^{q} \theta_{jj}(x) y_j + \sum_{1 \le j < j' \le q} \theta_{jj'}(x) y_j y_{j'} \Big). \qquad (2)$$

We note that for any covariates $x$, the conditional Ising model is fully specified by the vector $\theta(x) = \{\theta_{jj'}(x) : j \le j'\}$, and by setting $\theta_{j'j}(x) = \theta_{jj'}(x)$ for all $j < j'$, the functions $\theta_{jj'}(x)$ can be connected to conditional log-odds in the following way,

$$\log \frac{P(y_j = 1 \mid x, y_{-j})}{P(y_j = 0 \mid x, y_{-j})} = \theta_{jj}(x) + \sum_{j' \ne j} \theta_{jj'}(x) y_{j'}, \qquad (3)$$

where $y_{-j} = (y_1, \dots, y_{j-1}, y_{j+1}, \dots, y_q)$. Further, conditioning on $y_{-j}$ being 0, we also have

$$\theta_{jj}(x) = \log \frac{P(y_j = 1 \mid x, y_{-j} = 0)}{P(y_j = 0 \mid x, y_{-j} = 0)}.$$

Similarly to (1), this implies that $Y_j$ and $Y_{j'}$ are conditionally independent given the covariates $x$ and all the other $Y_k$'s if and only if $\theta_{jj'}(x) = 0$.

A natural way to model $\theta_{jj'}(x)$ is to parametrize it as a linear function of $x$. Specifically, for $j \le j'$, we let

$$\theta_{jj'}(x) = \theta_{jj'0} + \sum_{k=1}^{p} \theta_{jj'k} x_k.$$

The model can then be expressed in terms of the full parameter vector $\theta = \{\theta_{jj'k} : 1 \le j \le j' \le q,\ 0 \le k \le p\}$ as follows:

$$P_\theta(y \mid x) = \frac{1}{Z(\theta, x)} \exp\bigg( \sum_{j \le j'} \Big( \theta_{jj'0} + \sum_{k=1}^{p} \theta_{jj'k} x_k \Big) y_j y_{j'} \bigg), \qquad (4)$$

where $y_j^2 = y_j$ for binary $y_j$, so the $j = j'$ terms are the main effects. Instead of (3), we now have log-odds that depend on the covariates, through

$$\log \frac{P(y_j = 1 \mid x, y_{-j})}{P(y_j = 0 \mid x, y_{-j})} = \theta_{jj0} + \sum_{k=1}^{p} \theta_{jjk} x_k + \sum_{j' \ne j} \Big( \theta_{jj'0} + \sum_{k=1}^{p} \theta_{jj'k} x_k \Big) y_{j'}. \qquad (5)$$

The choice of a linear parametrization for $\theta_{jj'}(x)$ has several advantages. First, (5) mirrors the logistic regression model when viewing the $y_{j'}$'s, the $x_k$'s, and the interactions $x_k y_{j'}$ ($j' \ne j$, $1 \le k \le p$) as predictors. Thus the model has the same interpretation as the logistic regression model, where each parameter describes the size of the conditional contribution of that particular predictor. Second, this parametrization has a straightforward relationship to the Markov network: one can tell which edges exist and on which covariates they depend by simply looking at $\theta$. Specifically, the vector $(\theta_{jj'0}, \dots, \theta_{jj'p})$ being zero implies that $Y_j$ and $Y_{j'}$ are conditionally independent given any $x$ and the rest of the $Y_k$'s, and $\theta_{jj'k}$ being zero implies that the conditional association between $Y_j$ and $Y_{j'}$ does not depend on $x_k$. Third, the continuity of linear functions ensures the similarity of the conditional models for similar covariate values, which is a desirable property. Finally, the linear formulation guarantees convexity of the negative log-likelihood function, allowing for the efficient fitting algorithms discussed next.
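As a sketch of how (5) maps onto an ordinary logistic regression, the following illustrative code (assumed names, not the authors' implementation) assembles the design matrix for node $j$ from the other responses, the covariates, and their interactions.

```python
# Build the logistic-regression design implied by (5) for node j. Columns:
# intercept (theta_jj0) | x (theta_jjk) | y_{-j} (theta_jj'0) | x*y_{-j} (theta_jj'k).
# A zero coefficient on x_k * y_{j'} means covariate k does not modulate edge (j, j').
import numpy as np

def node_design(Y, X, j):
    """Design matrix for regressing Y[:, j] on the remaining nodes and covariates."""
    n, q = Y.shape
    Y_rest = np.delete(Y, j, axis=1)                           # y_{j'}, j' != j
    inter = np.einsum('ik,il->ikl', X, Y_rest).reshape(n, -1)  # x_k * y_{j'} interactions
    return np.hstack([np.ones((n, 1)), X, Y_rest, inter])
```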

2.2 Fitting the model

The probability model in (4) includes the partition function $Z(\theta, x)$, which requires summation over $2^q$ terms for each data point and makes it intractable to directly maximize the joint conditional likelihood of $y$ given $x$. However, (5) suggests we can use logistic regression to estimate the parameters, an approach in the spirit of Ravikumar et al. (2010). The idea is essentially to maximize the conditional log-likelihood of each $y_j$ given $y_{-j}$ and $x$ rather than the joint log-likelihood of $y$.

Specifically, the negative conditional log-likelihood of the $j$th response over the sample can be written as follows,

$$\ell_j(\theta_j) = -\frac{1}{n} \sum_{i=1}^{n} \Big( y_{ij}\, \eta_j(x_i, y_{i,-j}) - \log\big( 1 + e^{\eta_j(x_i, y_{i,-j})} \big) \Big), \qquad (6)$$

where $\eta_j(x, y_{-j})$ denotes the conditional log-odds on the right-hand side of (5). Note that this conditional log-likelihood involves the parameter vector $\theta$ only through its subvector $\theta_j = \{\theta_{jj'k} : 1 \le j' \le q,\ 0 \le k \le p\}$; thus we sometimes write $\ell_j(\theta_j)$ when the rest of $\theta$ is not relevant.

There are $q(q+1)/2 \times (p+1)$ parameters to be estimated, so even for moderate $q$ and $p$ the dimension of $\theta$ can be large. For example, with $q = 10$ and $p = 10$, the model has 605 parameters. Thus there is a need to regularize $\theta$. Empirical studies of networks, as well as the need for interpretation, suggest that a good estimate of $\theta$ should be sparse. We therefore adopt $\ell_1$ regularization to encourage sparsity, and propose two approaches to minimizing the penalized version of the conditional log-likelihood (6).
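For concreteness, a one-line check of the parameter count (the dimensions $q = 10$, $p = 10$ above are inferred from the quoted total of 605):

```python
# Parameter count q(q+1)/2 * (p+1): each of the q(q+1)/2 functions theta_jj'(x)
# carries an intercept plus p covariate coefficients.
def n_params(q: int, p: int) -> int:
    return q * (q + 1) // 2 * (p + 1)

print(n_params(10, 10))  # 605, matching the example above
print(n_params(10, 20))  # 1155, the setting used in Section 4.1
```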

Separate regularized logistic regressions

The first approach is to estimate each $\theta_j$, $j = 1, \dots, q$, separately using the following criterion,

$$\hat\theta_j = \arg\min_{\theta_j} \; \ell_j(\theta_j) + \lambda \|\theta_j^{-}\|_1,$$

where $\theta_j^{-}$ denotes $\theta_j$ with the intercept term $\theta_{jj0}$ excluded; that is, we do not penalize the intercept.

In this approach, $\theta_{jj'k}$ and $\theta_{j'jk}$ are estimated from the $j$th and the $j'$th regressions, respectively, so symmetry is not guaranteed. To enforce symmetry in the final estimate, we post-process the estimates following Meinshausen & Bühlmann (2006), combining the two initial estimates by comparing their magnitudes. Specifically, let $\hat\theta$ denote the final estimate and $\tilde\theta$ the initial estimate from the separate regularized logistic regressions. Then for any $j < j'$ and any $0 \le k \le p$, we can use one of the two symmetrizing approaches:

separate-max: $\hat\theta_{jj'k} = \hat\theta_{j'jk}$ is set to whichever of $\tilde\theta_{jj'k}$ and $\tilde\theta_{j'jk}$ has the larger absolute value;

separate-min: $\hat\theta_{jj'k} = \hat\theta_{j'jk}$ is set to whichever of $\tilde\theta_{jj'k}$ and $\tilde\theta_{j'jk}$ has the smaller absolute value.

The separate-min approach is always more conservative than separate-max in the sense that the former provides more zero estimates. It turns out that when the sample size is small, the separate-min approach is often too conservative to effectively identify non-zero parameters. More details are given in Section 4.
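A minimal sketch of the two symmetrization rules, assuming the node-wise estimates are stored so that entry $(j, j')$ comes from the $j$th regression (array layout is an assumption, not the authors' code):

```python
# Symmetrize node-wise estimates: theta_hat[j, j'] and theta_hat[j', j] may disagree.
import numpy as np

def symmetrize(theta_hat, rule="max"):
    """separate-max keeps the larger-magnitude estimate; separate-min the smaller."""
    a, b = theta_hat, np.swapaxes(theta_hat, 0, 1)
    if rule == "max":
        return np.where(np.abs(a) >= np.abs(b), a, b)
    # separate-min is more conservative: the result is zero if either estimate is zero
    return np.where(np.abs(a) <= np.abs(b), a, b)
```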

Joint regularized logistic regression

The second approach is to estimate the entire vector $\theta$ simultaneously instead of estimating the $\theta_j$'s separately, using the criterion,

$$\hat\theta = \arg\min_{\theta} \; \sum_{j=1}^{q} \ell_j(\theta_j) + \lambda \|\theta^{-}\|_1,$$

where $\theta^{-}$ denotes $\theta$ with the unpenalized intercept terms $\theta_{jj0}$ excluded. The joint criterion can be written as one large penalized logistic regression by careful rearranging of terms. One obvious benefit of the joint approach is that the estimate is automatically symmetric, since $\theta_{jj'k}$ and $\theta_{j'jk}$ are treated as the same parameter during estimation. The price, however, is that it is computationally much less efficient than the separate approach.

To fit the model using either the separate or the joint approach, we adopt the coordinate shooting algorithm in Fu (1998), where we update one parameter at a time and iterate until convergence. The implementation is similar to the glmnet algorithm of Friedman et al. (2010), and we omit the details here.
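For illustration, here is a compact sketch of a coordinate-wise ("shooting") update for an $\ell_1$-penalized logistic regression in the spirit of Fu (1998) and glmnet. It is a didactic sketch, not the authors' implementation, and recomputes the fitted probabilities at every coordinate for clarity rather than speed.

```python
# Cyclic coordinate descent for l1-penalized logistic regression: each coordinate
# minimizes a one-dimensional quadratic approximation, then soft-thresholds.
import numpy as np

def soft(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def l1_logistic(X, y, lam, n_iter=200, penalize=None):
    n, d = X.shape
    beta = np.zeros(d)
    penalize = np.ones(d) if penalize is None else penalize  # set 0 for intercept column
    for _ in range(n_iter):
        for j in range(d):
            p = 1.0 / (1.0 + np.exp(-X @ beta))       # current fitted probabilities
            w = p * (1.0 - p)                         # logistic curvature weights
            g = X[:, j] @ (p - y) / n                 # coordinate gradient
            h = (w * X[:, j] ** 2).sum() / n + 1e-12  # coordinate second derivative
            beta[j] = soft(h * beta[j] - g, lam * penalize[j]) / h
    return beta
```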

3 Asymptotics: consistency of model selection

In this section we establish the model selection consistency property of the separate regularized logistic regressions. Results for the joint approach can be derived in the same fashion by treating the joint regression as a single large logistic regression. The spirit of the proof is similar to Ravikumar et al. (2010), but since their model does not include covariates, both our assumptions and conclusions are different.

In this analysis, we treat the covariates $x_i$ as random vectors. With a slight change of notation, we now use $\theta$ to denote the parameter vector of a single node-wise regression, dropping the intercept, which is irrelevant for model selection. The true parameter is denoted by $\bar\theta$. Without loss of generality we assume the covariates are centered and scaled.

First, we introduce additional notation to be used throughout this section. Let $Z$ denote the full predictor vector of a node-wise logistic regression (the other responses, the covariates, and their interactions), and let

$$p(z; \theta) = \frac{\exp(z^T \theta)}{1 + \exp(z^T \theta)}, \qquad (7)$$

$$\eta(z; \theta) = p(z; \theta)\,\big( 1 - p(z; \theta) \big), \qquad (8)$$

$$Q = \mathbb{E}\big[ \eta(Z; \bar\theta)\, Z Z^T \big], \qquad (9)$$

where the expectation is taken over the joint distribution of the predictors. Let $S$ denote the index set of the non-zero elements of $\bar\theta$, and let $Q_{SS}$ be the submatrix of $Q$ indexed by $S$. Similarly defined are $Q_{S^c S}$ and $Q_{S^c S^c}$, where $S^c$ is the complement of $S$. Moreover, for any matrix $A$, let $\|A\|_\infty$ be the maximum absolute row-sum matrix norm, and let $\Lambda_{\min}(A)$ and $\Lambda_{\max}(A)$ be the minimum and maximum eigenvalues of $A$, respectively.

For our main results to hold, we make the following two assumptions for all the logistic regressions.

A1

There exists a constant $C_{\min} > 0$, such that $\Lambda_{\min}(Q_{SS}) \ge C_{\min}$.

A2

There exist constants $D_{\max} < \infty$ and $\alpha \in (0, 1]$, such that $\Lambda_{\max}\big( \mathbb{E}[Z Z^T] \big) \le D_{\max}$ and $\big\| Q_{S^c S} (Q_{SS})^{-1} \big\|_\infty \le 1 - \alpha$.

These assumptions bound the correlation among the effective covariates, and the amount of dependence between the group of effective covariates and the rest. Under these assumptions, we have the following result:

Theorem 1

For any $\lambda_n > 0$, let $\hat\theta$ be a solution of the problem

$$\min_{\theta} \; \ell(\theta) + \lambda_n \|\theta\|_1. \qquad (10)$$

Assume A1 and A2 hold for $Q$ and $\mathbb{E}[Z Z^T]$, and further assume that for some constants $c_1, c_2 > 0$ the covariate distribution satisfies the tail condition

$$P\big( \|X\|_\infty > \delta \big) \le c_1 \exp(-c_2 \delta^2) \quad \text{for all } \delta > 0. \qquad (11)$$

Let $d$ denote the number of non-zero elements of $\bar\theta$ and let $C$ be a constant independent of $(n, p, q)$. If the regularization parameter and the dimensions satisfy the scaling conditions (12)-(14), which in particular require $\lambda_n$ to dominate $\sqrt{\log \max(p, q) / n}$ while $p$, $q$, and $d$ grow suitably slowly with $n$, the following hold with probability at least $1 - \delta_0$ ($\delta_0$ is a constant in (0, 1)):

  1. Uniqueness: $\hat\theta$ is the unique optimal solution of (10).

  2. $\ell_2$-consistency: $\|\hat\theta - \bar\theta\|_2 \le C \sqrt{d}\, \lambda_n$.

  3. Sign consistency: $\hat\theta$ correctly identifies all the zeros in $\bar\theta$; moreover, $\hat\theta$ identifies the correct sign of the non-zero elements of $\bar\theta$ whose absolute value is at least $C \sqrt{d}\, \lambda_n$.

Theorem 1 establishes the consistency of model selection while allowing both of the dimensions $p$ and $q$ to grow to infinity with $n$. The extra condition (11), which requires the distribution of the covariates to decay quickly at large values, was not needed in Ravikumar et al. (2010), as that paper does not consider covariates. The new condition is, however, quite general; for example, it is satisfied by the Gaussian distribution and by all categorical covariates. The proof of the theorem can be found in the Appendix.

4 Empirical performance evaluation

In this section, we present three sets of simulation studies designed to test the model selection performance of our methods. We vary different aspects of the model, including sparsity, signal strength, and the proportion of relevant covariates. The results are presented in the form of ROC curves, where the rate of correctly estimated non-zero parameters (sensitivity) is plotted against the rate of falsely estimated non-zero parameters (1 - specificity) across a fine grid of the regularization parameter $\lambda$. Each curve is smoothed over 20 replications.
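For reference, the two ROC coordinates can be computed from an estimated and a true parameter vector as in the following small sketch (the convention that estimates below a tolerance count as zero is an assumption):

```python
# Sensitivity = fraction of true non-zeros recovered;
# 1 - specificity = fraction of true zeros estimated as non-zero.
import numpy as np

def roc_point(theta_hat, theta_true, tol=1e-8):
    nz = np.abs(theta_true) > 0
    est = np.abs(theta_hat) > tol
    sensitivity = est[nz].mean()       # true positive rate
    one_minus_spec = est[~nz].mean()   # false positive rate
    return one_minus_spec, sensitivity
```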

The data generation scheme is as follows. For each simulation, we fix the dimension of the covariates $p$, the dimension of the response $q$, the sample size $n$, and a graph structure in the form of a $q \times q$ adjacency matrix (a randomly generated scale-free network (Barabasi & Albert, 1999)). For each edge $(j, j')$ of the graph, the entries of the parameter vector $(\theta_{jj'1}, \dots, \theta_{jj'p})$ are generated independently from three possible values: $a$ (with probability $p_v/2$), $-a$ (with probability $p_v/2$), and 0 (with probability $1 - p_v$), where $a$ controls the signal size and $p_v$ is the expected proportion of effective covariates per edge. An exception is made for the intercept terms $\theta_{jj'0}$, which are always non-zero for an existing edge. Covariates $x_i$ are generated independently from a multivariate Gaussian distribution. Given each $x_i$ and $\theta$, we use Gibbs sampling to generate $y_i$: we iteratively draw each $y_j$ from a Bernoulli distribution with success probability $P(y_j = 1 \mid x_i, y_{-j})$ implied by (5), and take the last value of the sequence when a stopping criterion is satisfied.
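A sketch of the Gibbs sampler just described (assumed interface: theta_fun(x) returns the symmetric matrix of $\theta_{jj'}(x)$ values; a fixed number of sweeps stands in for the unspecified stopping criterion):

```python
# Gibbs sampling of y given x: each coordinate is resampled from its conditional
# Bernoulli distribution, whose log-odds come from (5). Illustrative sketch only.
import numpy as np

def gibbs_sample(theta_fun, x, q, n_sweeps=500, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    Theta = theta_fun(x)                       # symmetric q x q matrix Theta(x)
    y = rng.integers(0, 2, size=q)
    for _ in range(n_sweeps):
        for j in range(q):
            # log-odds = theta_jj(x) + sum_{j' != j} theta_jj'(x) y_j'
            logit = Theta[j, j] + Theta[j].dot(y) - Theta[j, j] * y[j]
            y[j] = rng.random() < 1.0 / (1.0 + np.exp(-logit))
    return y
```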

We compared three estimation methods: the separate-min method, the separate-max method and the joint method. Our simulation results indicate that performance of the separate-min method is substantially inferior to that of the separate-max method in almost all cases (results omitted for lack of space). Thus we only present results for the separate-max and the joint methods in this section.

4.1 Effect of sparsity

First, we investigate how the selection performance is affected by the sparsity of the true model. The sparsity of $\theta$ can be controlled by two factors: the number of edges in the graph, denoted by $n_e$, and the average proportion of effective covariates for each edge, $p_v$. We fix the dimensions $p = 20$ and $q = 10$ and the sample size $n$, and set the signal size to $a$. Under this setting, the total number of parameters is $q(q+1)/2 \times (p+1) = 1155$. The sparsity parameters $n_e$ and $p_v$ are each varied over a range of values.

Figure 1: ROC curves for varying levels of sparsity, as measured by the number of edges ($n_e$) and the expected proportion of non-zero covariates ($p_v$). The star on each curve corresponds to an optimal value of $\lambda$ selected on an independent validation set.

The resulting ROC curves are shown in Figure 1, with the joint approach in the first row and the separate-max approach in the second. As the true model becomes less sparse, the performance of both the joint and the separate methods deteriorates, since sparse models have the smallest effective number of parameters to estimate and benefit the most from penalization. Note that the model selection performance seems to depend on the total number of non-zero parameters (on the order of $n_e \times p_v \times p$), not just on the number of edges ($n_e$). For example, both approaches perform better in a setting with more edges but fewer non-zero parameters overall than in one with fewer edges but more non-zero parameters, even though the former has a more complicated network structure. Comparing the separate-max method and the joint method, we observe that the two methods are quite comparable, with the joint method being slightly less sensitive to an increasing number of edges.

Note that the star on each curve represents the average sensitivity and (1 - specificity) over the replications at an “optimal” $\lambda$, selected by maximizing the conditional log-likelihood on an independent validation dataset of the same size as the training data.

4.2 Effect of signal size

Second, we assess the effect of signal size. The dimensions are set to be the same as in the previous simulation, that is, $p = 20$, $q = 10$, and the same sample size $n$, and the underlying network is also the same. The expected proportion of effective covariates for each edge, $p_v$, is held fixed. The signal strength parameter $a$ takes values in an increasing set. For each setting, the non-zero entries of the parameter vector are in the same positions with the same signs, differing only in magnitude. The resulting ROC curves are shown in Figure 2.

Figure 2: ROC curves for varying levels of signal strength, as measured by the parameter $a$. The star on each curve corresponds to an optimal value of $\lambda$ selected on an independent validation set.

As the signal strength increases, both the separate and the joint methods show improved selection performance, but the improvement eventually levels off. Both methods achieve almost the same “optimal” sensitivity and specificity (the star point), with the separate-max method performing better overall.

4.3 Effect of noise covariates

In the last set of simulations, we study how the model selection performance is affected by adding extra uninformative covariates. At the same time, we also investigate the effect of the number of relevant covariates $p$ and of the sample size $n$. The dimension of the response is fixed at $q = 10$ and the network structure remains the same as in the previous simulation. We consider several combinations of $p$ and $n$. For each combination, we first fit the model on the original data and then on augmented data with extra uninformative covariates added, which increases the total number of covariates. The non-zero parameters are generated in the same way as before, with the same signal size $a$ and proportion $p_v$. As noise covariates are added, the total number of non-zero parameters remains fixed for each value of $p$, while the total number of zeros increases.

To make the results more comparable across settings, we plot the counts rather than the rates of true positives and false positives. The resulting curves are shown in Figure 3. Generally, performance improves when the sample size grows and deteriorates when the number of noise covariates increases, particularly with a smaller sample size. The separate-max method dominates the joint method under these settings, but the difference is not large.

Figure 3: ROC curves for varying dimension, number of noise covariates, and sample size.

5 Application to tumor suppressor genes study

In breast cancer, deletion of tumor suppressor genes plays a crucial role in tumor initiation and development. Since genes function through complicated regulatory relationships, it is of interest to characterize the associations among various deletion events in tumor samples, and at the same time to investigate how these association patterns may vary across different tumor subtypes or stages.

Our data set includes DNA copy number profiles from cDNA microarray experiments on 143 breast cancer specimens (Bergamaschi et al., 2006). Among them, 88 samples are from a cohort of Norwegian patients with locally advanced (T3/T4 and/or N2) breast cancer, receiving doxorubicin (Doxo) or 5-fluorouracil/mitomycin C (FUMI) neoadjuvant therapy (Geisler et al., 2003); the samples were collected before the therapy. The other 55 are from another cohort of Norwegian patients from a population-based series (Zhao et al., 2004). Each copy number profile reports the DNA amounts of 39,632 probes in the sample. The array data were preprocessed and copy number gain/loss events were inferred as described in Bergamaschi et al. (2006). To reduce the spatial correlation in the data, we bin the probes by cytogenetic bands (cytobands). For each sample, we define the deletion status of a cytoband to be 1 if at least three probes in this cytoband show copy number loss. The 430 cytobands covered by these probes that show deletion frequencies greater than 10% in this group of patients were retained for the subsequent analysis. Our goal is to uncover the associations among these cytoband-deletion events and how the association patterns may change with different clinical characteristics, including TP53 mutation status (a binary variable), estrogen receptor (ER) status (a binary variable), and tumor stage (an ordinal variable).

For our analysis, denote the array data by $y_{ij} \in \{0, 1\}$, where $y_{ij}$ indicates the deletion status of the $j$th cytoband in the $i$th sample, and let $x_i$ denote the covariate vector containing the three clinical phenotypes of the $i$th sample. We first standardize the covariate matrix and then fit our Ising model with covariates using the separate-max fitting method. We then apply stability selection (Meinshausen & Bühlmann, 2010) to infer the stable set of important covariates for each pairwise conditional association. Specifically, we repeatedly fit the model 100 times on subsamples containing half the data, selected randomly without replacement. For each tuning parameter $\lambda$ from a fixed grid of values, we record the frequency with which $\theta_{jj'k}$ is estimated as non-zero for each covariate $k$ and each pair $(j, j')$, and denote it by $f_{jj'k}(\lambda)$. Note that $k = 0$ corresponds to the main effect between a pair of cytobands and does not involve any covariates. Then we use $\max_\lambda f_{jj'k}(\lambda)$ as a measure of importance of covariate $k$ for the edge $(j, j')$. Finally, for each covariate $k$, we rank the edges based on these selection frequencies; at the top of the list are the edges that depend on covariate $k$ most heavily. We are primarily interested in pairs of cytobands belonging to different chromosomes, as the interaction between cytobands located on the same chromosome is more likely explained by strong local dependency. The results are shown in Table 1, which lists the ranked edges for the main effect and for each covariate; in each panel, the first two columns give the cytoband names and the third records the selection frequency.
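The stability selection loop described above can be sketched as follows (fit_separate_max is a hypothetical stand-in for the separate-max fitting routine, assumed to return the $(q, q, p+1)$ array of estimates):

```python
# Stability selection: refit on random half-subsamples, record non-zero frequencies
# per (edge, covariate, lambda), then take the max over the lambda grid.
import numpy as np

def stability_frequencies(Y, X, lambdas, fit_separate_max, B=100, rng=None):
    """Returns importance[j, j', k] = max over lambda of the fraction of B
    half-subsamples in which theta_{jj'k} was estimated as non-zero."""
    rng = np.random.default_rng() if rng is None else rng
    n, q = Y.shape
    p = X.shape[1]
    freq = np.zeros((len(lambdas), q, q, p + 1))
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)  # half-sample, no replacement
        for l, lam in enumerate(lambdas):
            theta = fit_separate_max(Y[idx], X[idx], lam)  # shape (q, q, p+1)
            freq[l] += np.abs(theta) > 1e-8
    freq /= B
    return freq.max(axis=0)
```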

There are 332 inter-chromosome interactions (between cytobands from different chromosomes) with selection frequencies of at least 0.5. Among these, 39 interactions change with TP53 status, 12 change with ER status, and another 12 change with tumor stage (see details in Table 1). These results can be used by biologists to generate hypotheses and design relevant experiments to better understand the molecular mechanism of breast cancer.

The most frequently selected pairwise conditional association is between deletion on cytoband 4q31.3 and deletion on 18q23 (94% selection frequency). Cytoband 4q31.3 harbors the tumor suppressor candidate gene SCFFbw7, which works cooperatively with gene TP53 to restrain cyclin E-associated genome instability (Minella et al., 2007). Previous studies also support the existence of putative tumor suppressor loci at cytoband 18q23 distal to the known tumor suppressor genes SMAD4, SMAD2 and DCC (Huang et al., 1995; Lassus et al., 2001). Thus the association between the deletion events on these two cytobands is intriguing.

Another interesting finding is that the association between deletion on cytoband 9q22.3 and deletion on cytoband 12p13.31 appears to be stronger in the TP53 positive group than in the TP53 negative group. A variety of chromosomal aberrations at 9q22.3 have been found in different malignancies, including breast cancer (Mitelman et al., 1997). This region contains several putative tumor suppressor genes (TSGs), including DNA-damage repair genes like FANCC and XPA. Alterations in these TSGs have been reported to be associated with poor patient survival (Sinha et al., 2008). On the other hand, cytoband 12p13.31 harbors another TSG, namely ING4 (inhibitor of growth family member 4), whose protein binds TP53 and contributes to the TP53-dependent regulatory pathway. A recent study also suggests involvement of ING4 deletion in the pathogenesis of HER2-positive breast cancer. In light of these previous findings, it is interesting that our analysis found both the association between the deletion events on 9q22.3 and 12p13.31 and the changing pattern of this association under different TP53 status. This result suggests potential cooperative roles for multiple tumor suppressor genes in cancer initiation and progression.

Main effect TP53 mutation status ER status
Gene1 Gene2 Freq Gene1 Gene2 Freq Gene1 Gene2 Freq
4q31.3 18q23 0.95 3p22.2 22q13.1 0.79 3q26.1 11p14.3 0.69
2p25.2 15q26.2 0.87 3p12.3 12p13.1 0.72 4q34.3 5q32 0.64
2q36.3 3p26.1 0.84 12q22 15q14 0.7 8p11.22 11p14.2 0.63
7q21.13 8q21.13 0.84 2p12 Xp22.33 0.69 3q24 22q11.23 0.57
6p21.32 16q12.2 0.83 6p21.32 8p11.22 0.68 4p14 11p15.3 0.55
3p21.1 17p13.2 0.81 1p34.2 3p24.1 0.67 1q31.1 Xq27.3 0.54
4q24 12q21.1 0.81 2p21 Xp11.22 0.67 13q33.2 22q11.23 0.54
2q23.3 6p12.1 0.79 2p12 7p21.1 0.66 21q21.1 22q11.21 0.54
8p21.3 21q21.1 0.79 12q15 13q12.12 0.63 5q33.1 17q21.31 0.53
2q34 3q13.31 0.78 4q25 8p11.22 0.62 12q21.32 18q22.3 0.51
6p21.32 9q31.3 0.78 8p11.22 Xq23 0.62 8p11.22 22q11.21 0.5
6p21.32 13q21.1 0.78 9p21.2 16q22.1 0.61 8q21.13 Xp22.11 0.5
6p21.31 11p15.2 0.78 3p21.1 11q14.1 0.58
11p15.1 14q22.2 0.78 3p13 9p24.2 0.58
1p36.11 2p21 0.77 9q22.32 12p13.31 0.57
1p31.1 2q32.2 0.76 7q21.3 22q12.3 0.56 Tumor stage
1q31.1 22q11.21 0.76 3q26.1 11p13 0.55 Gene1 Gene2 Freq
2q32.1 6q14.1 0.76 4q35.2 22q12.3 0.55 16q23.3 17p13.1 0.61
9q21.11 16q21 0.76 15q22.33 17p11.2 0.55 12p11.23 16q12.2 0.59
9q31.3 14q24.3 0.76 3p22.1 6p21.31 0.54 3q13.13 Xq23 0.57
10q25.3 12p13.31 0.76 4q28.2 7q21.13 0.54 7p21.3 12p11.23 0.56
4q35.1 15q22.2 0.75 5q13.1 6q22.33 0.54 9q34.13 15q21.1 0.55
3p21.31 17p11.2 0.74 5q23.2 8p21.2 0.54 11q24.2 13q32.3 0.55
6p21.32 13q31.2 0.74 16q22.1 17q21.31 0.54 8q21.13 13q33.1 0.54
10q11.21 12p13.32 0.74 4q28.3 9p21.3 0.53 2p21 12p13.31 0.53
9q33.1 14q12 0.73 4q35.1 9p21.3 0.53 10q26.3 17p11.2 0.53
12p13.31 17q11.2 0.73 4q35.2 16q22.1 0.53 7p21.3 12p12.1 0.51
1p34.2 3p22.1 0.72 2q31.3 4q13.2 0.52 3q13.13 7p21.3 0.5
5q33.1 11p15.4 0.72 3p26.1 14q13.1 0.52 9q34.13 15q22.1 0.5
6q12 20p12.1 0.72 4p16.1 13q31.1 0.52
12p12.2 Xp11.4 0.72 6p21.31 11q14.2 0.52
4q35.2 9p21.2 0.71 3p25.1 11p15.2 0.51
11p15.2 18q12.1 0.71 5q14.2 Xq27.1 0.51
1p21.1 7q21.12 0.7 5q14.2 Xq27.2 0.51
2p16.1 6p12.3 0.7 8p11.22 15q14 0.51
2q31.2 3p26.2 0.7 10q23.32 21q21.1 0.51
2q36.3 9q22.31 0.7 16q22.1 17p13.2 0.51
3p22.1 15q25.3 0.7 3p22.1 5q33.3 0.5
6p21.32 Xp11.4 0.7 5q14.2 17q21.2 0.5
Table 1: Frequency-based ranked list of covariate-dependent inter-chromosomal interactions

We also searched the network for hubs (highly connected nodes), which often play important roles in genetic regulatory pathways. Since different hubs may be associated with different covariates, we separate them as follows. For each node $j$, covariate $k$, and stability selection subsample $s$, define the “covariate-specific” degree of node $j$ as the number of edges $(j, j')$ whose coefficient $\theta_{jj'k}$ is estimated as non-zero. A ranking of nodes can then be produced for each covariate $k$ and each subsample $s$, with $r_{jk}(s)$ being the corresponding rank of node $j$. Finally, we compute the median rank across all stability selection subsamples and order the nodes by median rank for each covariate. The results are listed in Table 2. Interestingly, cytoband 8p11.22 was ranked close to the top for all three covariates. The 8p11-p12 genomic region plays an important role in breast cancer, as numerous studies have identified this region as the location of multiple oncogenes and tumor suppressor genes (Yang et al., 2006; Adelaide et al., 1998). A high frequency of loss of heterozygosity (LOH) in this region in breast cancer has also been reported (Adelaide et al., 1998). In particular, cytoband 8p11.22 harbors the candidate tumor suppressor gene TACC1 (transforming, acidic coiled-coil containing protein 1), whose alteration is believed to disturb important regulatory processes and participate in breast carcinogenesis (Conte et al., 2002). From Table 1, we can also see that deletion of the cytoband 8p11.22 region is associated with deletion of cytobands 6p21.32 and 11p14.2 with relatively high confidence (selection frequency at least 0.6), and these associations change with both TP53 status and ER status. This finding is interesting because high-frequency LOH at 6q and 11p in breast cancer cells is among the earliest findings that led to the discovery of recessive tumor suppressor genes in breast cancer (Ali et al., 1987; Devilee et al., 1991; Negrini et al., 1994). Moreover, there is evidence that allele loss at the c-Ha-ras locus at 11p14 correlates with paucity of oestrogen receptor protein, as well as with patient survival (MacKay et al., 1988; Garcia et al., 1989). These results, together with the associations we detected, support the likely cooperative roles of multiple tumor suppressor genes involved in breast cancer.
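The covariate-specific degrees and median ranks can be computed as in the following sketch (assumed array layout: estimates from all stability subsamples stacked along the first axis):

```python
# Hub ranking: degree of node j for covariate k in subsample s counts edges (j, j')
# whose theta_{jj'k} is estimated non-zero; nodes are then ranked per (s, k) and
# the median rank is taken over subsamples.
import numpy as np

def hub_median_ranks(thetas):
    """thetas: array (S, q, q, p+1) of estimates over S stability subsamples.
    Returns median_rank[k, j] of node j for covariate k (rank 1 = largest degree)."""
    nz = np.abs(thetas) > 1e-8
    deg = nz.sum(axis=2)                                 # (S, q, p+1): node degrees
    ranks = (-deg).argsort(axis=1).argsort(axis=1) + 1   # ranks within each (s, k)
    return np.median(ranks, axis=0).T                    # (p+1, q)
```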

Main effect TP53 mutation status ER status Tumor stage
Gene Median rank Gene Median rank Gene Median rank Gene Median rank
1p36.11 16.75 8p11.22 12.75 3q26.1 10 16q23.1 19.25
1q31.1 21 1p31.3 14.5 1q31.1 12 10q11.23 22.25
6p21.31 24.25 3p22.2 25.25 3p22.2 13 16q12.2 23.5
6p21.32 37 1q31.1 28.75 8q21.13 14 9q34.13 27.5
2p12 38.5 12q23.1 32 10q22.1 15.25 22q11.23 27.75
2q32.2 43 2p16.2 33.5 8p11.22 19 12p11.23 33
8q21.13 44.5 4q31.1 41.75 3p21.1 20.25 2q33.1 35.25
6p12.3 45.5 9p21.3 42 11q23.3 22 8p11.22 35.75
2q32.3 53.75 7q21.3 44.25 5q13.1 28 10q25.2 36
3p22.2 54.25 3q26.1 44.75 4p16.1 33 11q14.1 40.5
6p12.1 57.5 12q15 45.5 5q13.3 34 10p12.2 41.5
1p31.3 59.25 12p11.22 51.5 9p22.3 36.25 3q13.13 42
21q21.1 60 15q22.1 51.5 8p21.3 41.25 13q13.2 42.75
3q26.1 73.25 15q23 51.75 3p25.1 42.5 16q12.1 47
12p11.22 73.25 8q21.13 54 10q23.2 42.75 6p21.31 50
6q26 74.5 9p21.2 54.5 5q32 47 11q22.2 53
13q32.1 75.75 21q21.1 55.25 1p36.11 47.5 10q26.3 53.5
17p13.2 78 9q34.13 59 Xp22.22 48.75 9q33.1 55.5
11q14.1 80.25 9p24.2 62 21q21.1 49 4q21.1 56
Table 2: Degree-based ranking of nodes

6 Summary and Discussion

We have proposed a novel Ising graphical model that incorporates extraneous factors, in the form of covariates, into the graphical model. Including covariates allows for subject-specific graphical models, where the strength of association between nodes varies smoothly with the values of the covariates. One consequence of this is that if all covariates are continuous, there is probability 0 of the graph structure itself changing with the covariates, and only the strength of the links is affected. With binary covariates, which is the case in our motivating application, this situation does not arise, but in principle it could be seen as a limitation. On the other hand, it is a necessary consequence of continuity: small changes in the covariates resulting in large changes in the graph, as can happen with the approach of Liu et al. (2010), make the model difficult to interpret. Further, our approach has the additional advantage of discovering exactly which covariates affect which edges, which can be more important in terms of scientific insight.

While here we focused on binary network data, the idea can be easily extended to categorical and Gaussian data, and to mixed graphical models involving both discrete and continuous data. Another direction of interest is understanding conditions under which methods based on the neighborhood selection principle of running separate regressions are preferable to pseudo-likelihood type methods, and vice versa. This comparison arises frequently in the literature, and understanding this general principle would have applications far beyond our particular method.

Acknowledgements

E. Levina’s research is partially supported by NSF grants DMS-1106772 and DMS-1159005 and NIH grant 5-R01-AR-056646-03. J. Zhu’s research is partially supported by NSF grant DMS-0748389 and NIH grant R01GM096194.

References

  • Adelaide et al. (1998) Adelaide, J., Chaffanet, M., Imbert, A., Allione, F., Geneix, J., Popovici, C., van Alewijk, D., Trapman, J., Zeillinger, R., Børresen-Dale, A., Lidereau, R., Birnbaum, D. & Pébusque, M. (1998). Chromosome region 8p11-p21: Refined mapping and molecular alterations in breast cancer. Genes, Chromosomes & Cancer 22 186–199.
  • Ali et al. (1987) Ali, I., Lidereau, R., Theillet, C. & Callahan, R. (1987). Reduction to homozygosity of genes on chromosome 11 in human breast neoplasia. Science 238 185–188.
  • Barabasi & Albert (1999) Barabasi, A. L. & Albert, R. (1999). Emergence of scaling in random networks. Science 286 509–512.
  • Bergamaschi et al. (2006) Bergamaschi, A., Kim, Y., Wang, P., Sørlie, T., Hernandez-Boussard, T., Lonning, P., Tibshirani, R., Børresen-Dale, A. & Pollack, J. (2006). Distinct patterns of dna copy number alteration are associated with different clinicopathological features and gene-expression subtypes of breast cancer. Genes, Chromosomes & Cancer 45 1033–1040.
  • Cai et al. (2011a) Cai, T., Li, H., Liu, W. & Xie, J. (2011a). Covariate adjusted precision matrix estimation with an application in genetical genomics. Biometrika 1–19.
  • Cai et al. (2011b) Cai, T., Liu, W. & Luo, X. (2011b). A constrained l1 minimization approach to sparse precision matrix estimation. J. American Statistical Association 106 594–607.
  • Conte et al. (2002) Conte, N., Charafe-Jauffret, E., Adelaide, J., Ginestier, C., Geneix, J., Isnardon, D., Jacquemier, J. & Birnbaum, D. (2002). Carcinogenesis and translational controls: Tacc1 is down-regulated in human cancers and associates with mrna regulators. Oncogene 21 5619–5630.
  • d’Aspremont et al. (2008) d’Aspremont, A., Banerjee, O. & El Ghaoui, L. (2008). First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and its Applications 30 56–66.
  • Devilee et al. (1991) Devilee, P., van Vliet, M., van Sloun, P., Dijkshoorn, N., Hermans, J., Pearson, P. & Cornelisse, C. (1991). Allelotype of human breast carcinoma: a second major site for loss of heterozygosity is on chromosome 6q. Oncogene 6 1705–1711.
  • Friedman et al. (2008) Friedman, J., Hastie, T. & Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432–441.
  • Friedman et al. (2010) Friedman, J., Hastie, T. & Tibshirani, R. (2010). Regularized paths for generalized linear models via coordinate descent. Journal of Statistical Software 33.
  • Fu (1998) Fu, W. (1998). Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical Statistics 7 397–416.
  • Garcia et al. (1989) Garcia, I., Dietrich, P., Aapro, M., Vauthier, G., Vadas, L. & Engel, E. (1989). Genetic alterations of c-myc, c-erbb-2, and c-ha-ras protooncogenes and clinical associations in human breast carcinomas. Cancer Research 49 6675–6679.
  • Geisler et al. (2003) Geisler, S., Børresen-Dale, A., Johnsen, H., Aas, T., Geisler, J., Akslen, L., Anker, G. & Lonning, P. (2003). Tp53 gene mutations predict the response to neoadjuvant treatment with 5-fluorouracil and mitomycin in locally advanced breast cancer. Clinical Cancer Research 9 5582–5588.
  • Guo et al. (2010a) Guo, J., Levina, E., Michailidis, G. & Zhu, J. (2010a). Estimating heterogeneous graphical models for discrete data with an application to roll call voting. Manuscript.
  • Guo et al. (2010b) Guo, J., Levina, E., Michailidis, G. & Zhu, J. (2010b). Joint estimation of multiple graphical models. Biometrika 98 1–15.
  • Guo et al. (2010c) Guo, J., Levina, E., Michailidis, G. & Zhu, J. (2010c). Joint structure estimation for categorical Markov networks. Manuscript.
  • Hassner & Sklansky (1980) Hassner, M. & Sklansky, J. (1980). The use of markov random fields as models of texture. Computer Graphics Image Processing 12 357–370.
  • Höfling & Tibshirani (2009) Höfling, H. & Tibshirani, R. (2009). Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. Journal of Machine Learning Research 10 883–906.
  • Huang et al. (1995) Huang, T., Yeh, P., Martin, M., Straub, R., Gilliam, T., Caldwell, C. & Skibba, J. (1995). Genetic alterations of microsatellites on chromosome 18 in human breast carcinoma. Diagnostic Molecular Pathology 4 66–72.
  • Ising (1925) Ising, E. (1925). Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik 31 253–258.
  • Lam & Fan (2009) Lam, C. & Fan, J. (2009). Sparsistency and rates of convergence in large covariance matrices estimation. Annals of Statistics 37 4254–4278.
  • Lassus et al. (2001) Lassus, H., Salovaara, R., Altonen, L. & Butzow, R. (2001). Allelic analysis of serous ovarian carcinoma reveals two putative tumor suppressor loci at 18q22-q23 distal to smad4, smad2, and dcc. American Journal of Pathology 159 35–42.
  • Liu et al. (2010) Liu, H., Chen, X., Lafferty, J. & Wasserman, L. (2010). Graph-valued regression. Proceedings of Advances in Neural Information Processing Systems (NIPS) 23.
  • MacKay et al. (1988) MacKay, J., Elder, P., Porteous, D., Steel, C., Hawkins, R., Going, J. & Chetty, U. (1988). Partial deletion of chromosome 11p in breast cancer correlates with size of primary tumour and oestrogen receptor level. British Journal of Cancer 6 710–714.
  • Manning & Schutze (1999) Manning, C. & Schutze, H. (1999). Foundations of Statistical Natural Language Processing. MIT press.
  • Mazumder & Hastie (2012) Mazumder, R. & Hastie, T. (2012). Exact covariance thresholding into connected components for large-scale graphical lasso. JMLR To appear.
  • Meinshausen & Bühlmann (2006) Meinshausen, N. & Bühlmann, P. (2006). High dimensional graphs and variable selection with the Lasso. Ann. Statist. 34 1436–1462.
  • Meinshausen & Bühlmann (2010) Meinshausen, N. & Bühlmann, P. (2010). Stability selection. Journal of the Royal Statistical Society, Series B 72 417–473.
  • Minella et al. (2007) Minella, A., Grim, J., Welcker, M. & Clurman, B. (2007). p53 and scffbw7 cooperatively restrain cyclin e-associated genome instability. Oncogene 26 6948–6953.
  • Mitelman et al. (1997) Mitelman, F., Mertens, F. & Johansson, B. (1997). A breakpoint map of recurrent chromosomal rearrangements in human neoplasia. Nature Genetics 15 417–474.
  • Negrini et al. (1994) Negrini, M., Sabbioni, S., Possati, L., Rattan, S., Corallini, A., Barbanti-Brodano, G. & Croce, C. (1994). Suppression of tumorigenicity of breast cancer cells by microcell-mediated chromosome transfer: Studies on chromosomes 6 and 11. Cancer Research 54 1331–1336.
  • Peng et al. (2009) Peng, J., Wang, P., Zhou, N. & Zhu, J. (2009). Partial correlation estimation by joint sparse regression model. Journal of the American Statistics Association 104 735–746.
  • Ravikumar et al. (2010) Ravikumar, P., Wainwright, M. & Lafferty, J. (2010). High-dimensional ising model selection using l1-regularized logistic regression. Annals of Statistics 38 1287–1319.
  • Ravikumar et al. (2008) Ravikumar, P., Wainwright, M., Raskutti, G. & Yu, B. (2008). Model selection in gaussian graphical models: High-dimensional consistency of l1-regularized mle. Advances in Neural Information Processing Systems(NIPS) 21.
  • Rocha et al. (2008) Rocha, G. V., Zhao, P. & Yu, B. (2008). A path following algorithm for sparse pseudo-likelihood inverse covariance estimation (splice). Tech. Rep. 759, Department of Statistics, UC Berkeley.
  • Rothman et al. (2008) Rothman, A. J., Bickel, P. J., Levina, E. & Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electronic Journal of Statistics 2 494–515.
  • Sinha et al. (2008) Sinha, S., Singh, R., Alam, N., Roy, A., Roychoudhury, S. & Panda, C. (2008). Alterations in candidate genes PHF2, FANCC, PTCH1 and XPA at chromosomal 9q22.3 region: Pathological significance in early- and late-onset breast carcinoma. Molecular Cancer.
  • Wang et al. (2011) Wang, P., Chao, D. & Hsu, L. (2011). Learning networks from high dimensional binary data: An application to genomic instability data. Biometrics 67 164–173.
  • Witten et al. (2011) Witten, D. M., Friedman, J. H. & Simon, N. (2011). New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics 20 892–900.
  • Woods (1978) Woods, J. (1978). Markov image modeling. IEEE Transactions on Automatic Control 23 846–850.
  • Yang et al. (2006) Yang, Z., Streicher, K., Ray, M., Abrams, J. & Ethier, S. (2006). Multiple interacting oncogenes on the 8p11-p12 amplicon in human breast cancer. Cancer Research 66 11632–11634.
  • Yin & Li (2011) Yin, J. & Li, H. (2011). A sparse conditional gaussian graphical model for analysis of genetical genomics data. Annals of Applied Statistics 5 2630–2650.
  • Yuan (2010) Yuan, M. (2010). Sparse inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research 11 2261–2286.
  • Yuan & Lin (2007) Yuan, M. & Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19–35.
  • Zhao et al. (2004) Zhao, H., Langerod, A., Ji, Y., Nowels, K., Nesland, J., Tibshirani, R., Bukholm, I., Karesen, R., Botstein, D. & Børresen-Dale, A. (2004). Different gene expression patterns in invasive lobular and ductal carcinomas of the breast. Molecular Biology of the Cell 15 2523–2536.

Appendix: Proof of Theorem 1

For notational convenience, we omit the index $j$ identifying each separate regression. Following the literature, we prove the main theorem in two steps: first, we prove that the result holds when assumptions A1 and A2 are satisfied by $Q^n$ and $M^n$, the sample versions of $Q$ and $\mathbb{E}[Z Z^T]$ defined in (7)-(9) (Proposition 1). Then we show that if A1 and A2 hold for the population versions $Q$ and $\mathbb{E}[Z Z^T]$, they also hold for $Q^n$ and $M^n$ with high probability (Proposition 2). The sample quantities $Q^n$ and $M^n$ are defined as

$$Q^n = \frac{1}{n} \sum_{i=1}^{n} \eta(z_i; \bar\theta)\, z_i z_i^T, \qquad M^n = \frac{1}{n} \sum_{i=1}^{n} z_i z_i^T.$$

Proposition 1

If A1 and A2 are satisfied by $Q^n$ and $M^n$, and moreover the regularization parameter satisfies the conditions of Theorem 1, then with probability at least $1 - \delta_0$, the result of Theorem 1 holds.

Proof of Proposition 1. The proof requires several steps. The uniqueness part follows directly from the following lemma:

Lemma 1

(Shared sparsity and uniqueness, Ravikumar et al. (2010)). For any solution $\theta$ of (10), define its sign vector $s$ to satisfy $s_k = \mathrm{sign}(\theta_k)$ if $\theta_k \ne 0$ and $s_k \in [-1, 1]$ otherwise, so that $s$ is a subgradient of the $\ell_1$ norm at $\theta$. Suppose there exists an optimal solution $\hat\theta$ with sign vector $\hat s$ such that $\|\hat s_{S^c}\|_\infty < 1$. Then any optimal solution $\tilde\theta$ must have $\tilde\theta_{S^c} = 0$. Furthermore, if the Hessian of $\ell$ restricted to $S$ is strictly positive definite, then $\hat\theta$ is the unique solution.

We now proceed to prove the rest of Proposition 1. For $\hat\theta$ to be a solution of (10), the sub-gradient of the objective at $\hat\theta$ must vanish, i.e.,

$$\nabla \ell(\hat\theta) + \lambda \hat s = 0. \qquad (15)$$

Then we can write $\nabla \ell(\hat\theta) = -W + \big( \nabla \ell(\hat\theta) - \nabla \ell(\bar\theta) \big)$, where $W = -\nabla \ell(\bar\theta)$ is the score evaluated at the true parameter. Let $\theta^{(t)}$ denote a point in the line segment connecting $\hat\theta$ and $\bar\theta$. Applying the mean value theorem gives

$$\nabla \ell(\hat\theta) - \nabla \ell(\bar\theta) = \nabla^2 \ell(\bar\theta)\,(\hat\theta - \bar\theta) + R, \qquad (16)$$

where $R = \big[ \nabla^2 \ell(\theta^{(t)}) - \nabla^2 \ell(\bar\theta) \big](\hat\theta - \bar\theta)$ is a remainder term.

Now define $\tilde\theta$ as follows: let $S$ be the index set of true non-zeros in $\bar\theta$, let $\tilde\theta_S$ be the solution of the restricted problem

$$\min_{\theta_S} \; \ell\big( (\theta_S, 0) \big) + \lambda \|\theta_S\|_1, \qquad (17)$$

and let $\tilde\theta_{S^c} = 0$. We will show that this $\tilde\theta$ is the optimal solution and is sign consistent with high probability.

We set the corresponding sign vector $\tilde s$ for $\tilde\theta$, defined as in Lemma 1, with $\tilde s_{S^c}$ obtained from (15). Now we need to show that, with high probability,

$$\mathrm{sign}(\tilde\theta_S) = \mathrm{sign}(\bar\theta_S), \qquad (18)$$

$$\|\tilde s_{S^c}\|_\infty < 1. \qquad (19)$$

The following three lemmas form the proof.
The following three lemmas form the proof.

Lemma 2

(Control of the remainder term $R$). Assume the predictor vector is bounded almost surely. Then $\|R\|_\infty$ is small with high probability, and the exceptional probability goes to 0 as long as the sample size grows fast enough relative to the dimensions.

Proof of Lemma 2. Each coordinate of $R$ can be written as a sum of bounded terms; thus the bound follows from the Azuma-Hoeffding inequality.

Lemma 3

($\ell_2$-consistency of the sub-vector $\tilde\theta_S$). If the conclusion of Lemma 2 holds and $\lambda$ satisfies the conditions of Theorem 1, then $\|\tilde\theta_S - \bar\theta_S\|_2 \le C \sqrt{d}\, \lambda$.

Proof of Lemma 3. Let $G$ be the function $G(u) = \ell\big( (\bar\theta_S + u, 0) \big) - \ell\big( (\bar\theta_S, 0) \big) + \lambda\big( \|\bar\theta_S + u\|_1 - \|\bar\theta_S\|_1 \big)$. It is easy to see that $G$ is convex and achieves its minimum at $u = \tilde\theta_S - \bar\theta_S$. Moreover, $G(0) = 0$. Thus if we can show that $G$ is positive on the sphere $\{u : \|u\|_2 = B \sqrt{d}\, \lambda\}$ for a suitable constant $B$, then we will have $\|\tilde\theta_S - \bar\theta_S\|_2 \le B \sqrt{d}\, \lambda$ due to the convexity of $G$.