1 Introduction
Predicting clinical risk and discovering molecular prognostic signatures are key topics in genomic studies [1, 2]. A large number of genomic datasets (e.g., TCGA [3, 4]) have been rapidly generated for cancer and other complex diseases [3, 5, 6, 7]. These cancer genomic data provide an unprecedented opportunity to predict the development risk of cancer by integrating diverse molecular profiling data [8, 5]. Recently, several prognostic models have been proposed that integrate clinical and gene expression data [9, 10, 11]. Most of these methods are based on the Cox proportional hazards model [12]
and a few are designed based on other machine learning methods. We can also dichotomize the survival time into a binary outcome and obtain a typical classification problem, which can be solved by many supervised machine learning methods.
Logistic regression (LR) is one such classical method and has been widely used for classification [13]. However, the traditional LR model employs all (or most) variables for prediction, leading to a non-sparse solution with limited interpretability. The sparsity principle is an important strategy for interpretable analysis in statistics and machine learning. Recently, a number of studies have focused on developing regularized or penalized LR models that encourage sparse solutions and use a limited number of variables for prediction [14, 15, 16, 17, 18, 19, 20]. Many of these penalized LR models apply the Lasso penalty to induce sparsity. However, the Lasso fails to select strongly correlated variables together and tends to select only one variable from such a group [21]. Thus, many generalizations of the Lasso have been proposed to address these limits [22], including the elastic net, group Lasso, and fused Lasso. In addition, some researchers have proposed refined regression models with network-based penalties [23, 24, 25]. These models are expected to achieve more accurate prediction and better interpretability by integrating prior knowledge.
Motivated by the development of sparse coding and network-regularized norms [23, 26], we address the dual tasks of feature selection and class prediction by using a network-based penalty in the logistic regression (LR) framework. Specifically, we first focus on a traditional network-regularized penalty:
(1) $\lambda \|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}\boldsymbol{\beta}^T L \boldsymbol{\beta},$

where $L$ is the normalized Laplacian matrix encoding a prior network (e.g., a protein interaction network). The first term is an $\ell_1$-norm penalty to induce sparsity. The second term is a quadratic Laplacian-norm penalty that forces the coefficients of variables linked in the network to be smooth. More importantly, it is a convex penalty. Thus, such a regularized LR model and its reduced forms can be solved effectively. However, if two linked variables have estimated coefficients $\beta_i$ and $\beta_j$ with opposite signs, the traditional network-regularized penalty may not perform well. To address this limitation, we focus on a novel network-based penalty [27]:
(2) $\lambda \|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}|\boldsymbol{\beta}|^T L |\boldsymbol{\beta}|,$

where $|\boldsymbol{\beta}| = (|\beta_1|, \dots, |\beta_p|)^T$, which is adopted to eliminate the effect of the signs of the estimated coefficients. However, this novel penalty is not differentiable at zero. Intuitively, it is challenging to solve such a regularized LR model using the conventional gradient descent method. Here we develop two methods to solve it. We first propose to use the following penalty:
$\lambda \|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}(S\boldsymbol{\beta})^T L (S\boldsymbol{\beta})$ to replace it, i.e., approximating $|\boldsymbol{\beta}|$ by $S\boldsymbol{\beta}$ with $S = \mathrm{diag}\big(\mathrm{sgn}(\tilde{\beta}_1), \dots, \mathrm{sgn}(\tilde{\beta}_p)\big)$, where $\tilde{\boldsymbol{\beta}}$ is computed using the maximum likelihood estimate of the classical LR model. The new penalty is a convex function and has been used in a regression model [24]. Similarly, we can effectively solve this regularized LR model. We also try to solve the novel penalty in Eq. (2) directly. Fortunately, Hastie et al. [28] (see page 112) find that $|\boldsymbol{\beta}|^T L |\boldsymbol{\beta}|$ has a good property (a regularity condition [28, 29]), which inspires us to solve it via a cyclic coordinate descent algorithm.
To sum up, our key contributions are twofold. First, we introduce a unified Network-regularized Sparse LR (NSLR) framework, of which the Lasso, elastic net and network-regularized LR models are all special cases. More importantly, this framework can be solved efficiently using a coordinate-wise Newton algorithm. Second, we propose a novel network-regularized LR model to eliminate the sensitivity of the conventional network-regularized penalty to the signs of the estimated coefficients. We adopt an approximate algorithm and a coordinate-wise Newton algorithm, respectively, to solve it. Finally, we apply our methods to Glioblastoma multiforme (GBM) gene expression and cancer clinical data from the TCGA database [3], and a protein interaction network from Pathway Commons [30]
for survival risk prediction and biomarker discovery. Our first key task is to identify gene biomarkers for survival risk prediction. In addition, based on the predicted probability scores of GBM patients, we can divide them into different subtypes related to survival outcome. Furthermore, we apply our methods to a lung cancer dataset with two subtypes, Lung adenocarcinoma (LUAD) and Lung squamous-cell carcinoma (LUSC), to identify subtype-specific biomarkers.
2 Method
In this section, we first briefly review the typical logistic regression model and an iteratively reweighted least squares (IRLS) learning strategy based on a coordinate-wise Newton method. We then introduce the two network-regularized LR models, together with the basic regularized LR models (Lasso and elastic net), in a unified framework in Section 2.3, and suggest a general coordinate-wise Newton algorithm for solving this framework. Lastly, we propose a novel network-regularized sparse LR model in Section 2.5. We can use an approximate strategy (AdaNet.LR in Section 2.3) to solve it. In addition, we also develop an efficient coordinate-wise Newton algorithm to solve the AbsNet.LR model directly (Section 2.5).
2.1 Logistic Regression
Here we consider a binary classification problem. Given $n$ training samples $\{(\boldsymbol{x}_i, y_i)\}_{i=1}^{n}$, where $\boldsymbol{x}_i$ is a $p$-dimensional column vector and the label $y_i \in \{0, 1\}$, we first write the logistic function as $\Pr(y = 1 \mid \boldsymbol{x}) = \exp(b + \boldsymbol{\beta}^T\boldsymbol{x}) / \big(1 + \exp(b + \boldsymbol{\beta}^T\boldsymbol{x})\big)$. For convenience, let $\boldsymbol{\theta} = (b, \boldsymbol{\beta}^T)^T$ and $\boldsymbol{x} \leftarrow (1, \boldsymbol{x}^T)^T$. We can rewrite it as:

(3) $\Pr(y = 1 \mid \boldsymbol{x}; \boldsymbol{\theta}) = \sigma(\boldsymbol{\theta}^T\boldsymbol{x}),$

where $\boldsymbol{\theta}$ is the weight vector of coefficients and $\sigma(z) = 1/(1 + e^{-z})$ is the sigmoid function. Here we assume that the $n$ training examples are generated independently. We thus obtain the following log-likelihood:

(4) $\ell(\boldsymbol{\theta}) = \sum_{i=1}^{n} \big[ y_i \log \sigma(\boldsymbol{\theta}^T\boldsymbol{x}_i) + (1 - y_i) \log\big(1 - \sigma(\boldsymbol{\theta}^T\boldsymbol{x}_i)\big) \big].$
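As a minimal illustration (our sketch, not the authors' code), the sigmoid in Eq. (3) and the log-likelihood in Eq. (4) can be computed as:

```python
import numpy as np

def sigmoid(z):
    # clip to avoid overflow in exp for large |z|
    z = np.clip(z, -500, 500)
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(theta, X, y):
    # X: n x (p+1) design matrix with a leading column of ones,
    # theta: (p+1)-dim coefficient vector (intercept first), y in {0,1}
    p = sigmoid(X @ theta)
    eps = 1e-12  # guard against log(0)
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

At $\boldsymbol{\theta} = \boldsymbol{0}$ every predicted probability is 0.5, so the log-likelihood equals $n \log 0.5$, a handy sanity check.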
2.2 A Learning Algorithm for Logistic Regression
Clearly, the above log-likelihood is a concave function. Thus we can maximize it via its gradient with respect to $\boldsymbol{\theta}$:

(5) $\nabla_{\boldsymbol{\theta}} \ell = \sum_{i=1}^{n} \big(y_i - \sigma(\boldsymbol{\theta}^T\boldsymbol{x}_i)\big)\boldsymbol{x}_i = X^T(\boldsymbol{y} - \boldsymbol{p}),$

where $\boldsymbol{y}$ is an $n$-dimensional column vector, $X$ is an $n \times (p+1)$ matrix, and $\boldsymbol{p} = \big(\sigma(\boldsymbol{\theta}^T\boldsymbol{x}_1), \dots, \sigma(\boldsymbol{\theta}^T\boldsymbol{x}_n)\big)^T$ is also an $n$-dimensional column vector. Here we adopt a Newton (or Newton–Raphson) algorithm to maximize problem (4). To this end, we first compute the Hessian matrix of Eq. (4):

(6) $H = -X^T A X,$

where $A$ is an $n \times n$ diagonal matrix with $A_{ii} = \sigma(\boldsymbol{\theta}^T\boldsymbol{x}_i)\big(1 - \sigma(\boldsymbol{\theta}^T\boldsymbol{x}_i)\big)$. Starting with $\boldsymbol{\theta}^{\mathrm{old}}$, we thus obtain a Newton update rule:

(7) $\boldsymbol{\theta}^{\mathrm{new}} = \boldsymbol{\theta}^{\mathrm{old}} - H^{-1}\nabla_{\boldsymbol{\theta}} \ell.$

Combining Eq. (5), Eq. (6) and Eq. (7), we can rewrite the Newton update step as follows:

(8) $\boldsymbol{\theta}^{\mathrm{new}} = (X^T A X)^{-1} X^T A \boldsymbol{z},$

where $\boldsymbol{z} = X\boldsymbol{\theta}^{\mathrm{old}} + A^{-1}(\boldsymbol{y} - \boldsymbol{p})$. We can see that the Newton update step is equivalent to solving the following linear system with respect to $\boldsymbol{\theta}$:

(9) $X^T A X \boldsymbol{\theta} = X^T A \boldsymbol{z}.$

Note that the diagonal matrix $A$ and the vector $\boldsymbol{z}$ are updated in each iteration using the following two formulas:

(10) $A_{ii} = \sigma\big((\boldsymbol{\theta}^{\mathrm{old}})^T\boldsymbol{x}_i\big)\big(1 - \sigma\big((\boldsymbol{\theta}^{\mathrm{old}})^T\boldsymbol{x}_i\big)\big),$

(11) $z_i = \boldsymbol{x}_i^T\boldsymbol{\theta}^{\mathrm{old}} + \big(y_i - \sigma\big((\boldsymbol{\theta}^{\mathrm{old}})^T\boldsymbol{x}_i\big)\big)/A_{ii}.$

Since $X^T A X$ is a symmetric positive semidefinite matrix, we can obtain the optimal solution of the linear system (9) by minimizing the following Quadratic Programming (QP) problem:

(12) $\min_{\boldsymbol{\theta}} \ \frac{1}{2}(\boldsymbol{z} - X\boldsymbol{\theta})^T A (\boldsymbol{z} - X\boldsymbol{\theta}).$

That is to say, each Newton update step in Eq. (7) is equivalent to minimizing a QP problem. To avoid matrix inversion, we apply a cyclic coordinate descent method [31, 32] to solve these QP problems. After a few Newton update steps, we obtain the optimal solution of Eq. (4). This process is also known as the iteratively reweighted least squares (IRLS) strategy [33, 34].
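The IRLS loop above can be sketched as follows (a minimal illustration under our own naming, not the authors' implementation; for brevity it solves the weighted least-squares system of Eq. (9) directly rather than by coordinate descent):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25, tol=1e-8):
    """Fit unpenalized logistic regression by Newton/IRLS (Eqs. 7-11).
    X: n x (p+1) design matrix with a leading ones column; y in {0,1}."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ theta, -500, 500)))
        a = np.clip(p * (1 - p), 1e-10, None)   # diagonal of A, Eq. (10)
        z = X @ theta + (y - p) / a             # working response, Eq. (11)
        # weighted least-squares step: solve (X^T A X) theta = X^T A z, Eq. (9)
        theta_new = np.linalg.solve(X.T @ (a[:, None] * X), X.T @ (a * z))
        if np.max(np.abs(theta_new - theta)) < tol:
            theta = theta_new
            break
        theta = theta_new
    return theta
```

On well-behaved data a handful of Newton steps suffices, which is why the paper can afford to solve a fresh QP inside every step.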
2.3 A Unified Regularized Logistic Regression Framework
Here we first introduce the Network-regularized Sparse LR (NSLR) models in a unified regularized LR framework as follows:

(13) $\min_{b, \boldsymbol{\beta}} \ -\frac{1}{n}\ell(b, \boldsymbol{\beta}) + P_{\lambda, \eta}(\boldsymbol{\beta}),$

with

(14) $P_{\lambda, \eta}(\boldsymbol{\beta}) = \lambda \|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}\boldsymbol{\beta}^T M \boldsymbol{\beta},$

where $n$ denotes the number of samples and $M$ is a $p \times p$ symmetric positive semidefinite matrix. We can solve this framework via the above IRLS strategy. In other words, each Newton update step of Eq. (13) is equivalent to solving a new QP problem. For convenience, we define the coefficient vector $\boldsymbol{\theta} = (b, \boldsymbol{\beta}^T)^T$ and $\boldsymbol{x} \leftarrow (1, \boldsymbol{x}^T)^T$, where $b$ is the intercept. Then, the QP problem can be defined as follows:

(15) $\min_{\boldsymbol{\theta}} \ \frac{1}{2}(\boldsymbol{z} - X\boldsymbol{\theta})^T A (\boldsymbol{z} - X\boldsymbol{\theta}) + P_{\lambda, \eta}(\boldsymbol{\beta}),$

where the factor $1/n$ is absorbed into $A$ for notational simplicity, and $P_{\lambda, \eta}$ is a generalized network-regularized penalty. With different choices of $M$ in $P_{\lambda, \eta}$, the penalty reduces to different cases, including the Lasso, the elastic net and two network-regularized ones.
Case 1: $M = \boldsymbol{0}$. The penalty function Eq. (14) reduces to $\lambda\|\boldsymbol{\beta}\|_1$, which is the Lasso penalty [35]. We denote this case as Lasso.LR. It has been widely used in bioinformatics [14, 15, 17]. However, the Lasso has some potential limits [21]. For example, it fails to select a group of strongly correlated variables together; it selects only one variable from such a group and ignores the others.
Case 2: $M = I$. The penalty function Eq. (14) reduces to $\lambda\|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}\|\boldsymbol{\beta}\|_2^2$, which is the so-called elastic net penalty [21]. We denote this case as Elastic.LR, which has been adopted in [21]. However, to incorporate known biological networks, a network-regularized penalty is needed in the LR model.
Case 3: $M = L = I - D^{-1/2} W D^{-1/2}$, where $L$ is a symmetric normalized Laplacian matrix and $W = (w_{ij})$ is the adjacency matrix of a given prior network (e.g., a protein interaction network). If vertex $i$ and vertex $j$ are connected, then $w_{ij} = 1$, and $w_{ij} = 0$ otherwise. The degree of vertex $i$ is defined by $d_i = \sum_j w_{ij}$, and $D = \mathrm{diag}(d_1, \dots, d_p)$. Thus, the smoothness term $\boldsymbol{\beta}^T L \boldsymbol{\beta}$ can be rewritten as follows:

(16) $\boldsymbol{\beta}^T L \boldsymbol{\beta} = \sum_{i \sim j} w_{ij}\left(\frac{\beta_i}{\sqrt{d_i}} - \frac{\beta_j}{\sqrt{d_j}}\right)^2,$

where $i \sim j$ denotes that vertices $i$ and $j$ are connected in the prior network.
Li and Li [23] applied such a network-regularized regression model to the analysis of genomic data. Zhang et al. [36] applied this penalty in the LR model for molecular pathway identification. We denote the network-regularized LR model as Network.LR. They solved this model using the CVX package, which was implemented for solving convex optimization problems [37]. In this paper, in contrast to the CVX-based method, we develop a simple coordinate-wise Newton algorithm that avoids matrix inversion. However, the typical network-regularized penalty ignores that pairwise coefficients (linked in the prior network) may have opposite signs.
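The normalized Laplacian of Case 3 is easy to construct from an adjacency matrix; a minimal NumPy sketch (the function name is ours, and isolated nodes are given a zero diagonal, a common convention the paper does not spell out):

```python
import numpy as np

def normalized_laplacian(W):
    """L = I - D^{-1/2} W D^{-1/2} for a symmetric adjacency matrix W.
    Isolated nodes (degree 0) get a zero row/column and L_ii = 0 here."""
    d = W.sum(axis=1)
    # inverse square-root degrees, with 0 for isolated nodes
    inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.where(d > 0, d, 1.0)), 0.0)
    L = -np.outer(inv_sqrt, inv_sqrt) * W
    np.fill_diagonal(L, np.where(d > 0, 1.0, 0.0))
    return L
```

For a small graph one can check numerically that the quadratic form $\boldsymbol{\beta}^T L \boldsymbol{\beta}$ equals the edge-wise sum in Eq. (16).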
Case 4: $M = L^* = S L S$ with $S = \mathrm{diag}\big(\mathrm{sgn}(\tilde{\beta}_1), \dots, \mathrm{sgn}(\tilde{\beta}_p)\big)$. To consider the signs of the estimated vector $\boldsymbol{\beta}$, we can adopt an adaptive network-based penalty:

$\lambda \|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}(S\boldsymbol{\beta})^T L (S\boldsymbol{\beta}),$

where $\tilde{\boldsymbol{\beta}}$ is computed using the maximum likelihood estimate of the classical LR model. The term $S\boldsymbol{\beta}$ can be considered an approximation of $|\boldsymbol{\beta}|$ using the signs of $\tilde{\boldsymbol{\beta}}$. At the same time, $(S\boldsymbol{\beta})^T L (S\boldsymbol{\beta})$ can be rewritten as:

(17) $(S\boldsymbol{\beta})^T L (S\boldsymbol{\beta}) = \sum_{i \sim j} w_{ij}\left(\frac{\mathrm{sgn}(\tilde{\beta}_i)\,\beta_i}{\sqrt{d_i}} - \frac{\mathrm{sgn}(\tilde{\beta}_j)\,\beta_j}{\sqrt{d_j}}\right)^2.$

Here we denote the Adaptive Network-regularized LR model as AdaNet.LR.
In addition to the four cases above, we also consider a novel network-regularized penalty, $\lambda\|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}|\boldsymbol{\beta}|^T L |\boldsymbol{\beta}|$ [27], to account for opposite signs of pairwise variables directly. The new penalty eliminates the sensitivity of the typical one to the signs of correlated features. However, it is non-differentiable at zero, so a general gradient descent method cannot solve the resulting model (AbsNet.LR) directly. In Section 2.5, we introduce a way to solve it. Note that the AdaNet.LR model can be regarded as an approximation of the AbsNet.LR model.
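To see why the absolute-value penalty is insensitive to sign flips, consider two variables linked by a single edge (a small sketch of ours; both nodes have degree 1, so the smoothness term reduces to a single squared difference):

```python
import numpy as np

# adjacency of a 2-node graph with one edge; L = I - D^{-1/2} W D^{-1/2}
W = np.array([[0.0, 1.0], [1.0, 0.0]])
d = W.sum(axis=1)
L = np.eye(2) - W / np.sqrt(np.outer(d, d))

beta = np.array([1.0, -1.0])   # equal magnitude, opposite signs

standard = beta @ L @ beta                   # heavily penalizes the sign flip: 4.0
absolute = np.abs(beta) @ L @ np.abs(beta)   # ignores signs entirely: 0.0
```

The traditional penalty charges the maximum possible smoothness cost to this pair even though their magnitudes agree perfectly, while the absolute-value penalty charges nothing, which is exactly the motivation for AbsNet.LR.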
2.4 A Learning Algorithm for NSLR
Here we apply a coordinate-wise Newton algorithm to solve the NSLR framework. We first use a cyclic coordinate descent algorithm [31, 32] to solve the QP problem in Eq. (15). Without loss of generality, we extend the $p \times p$ matrix $M$ to a $(p+1) \times (p+1)$ matrix $\widetilde{M}$ by adding a zero row and a zero column for the intercept:

$\widetilde{M} = \begin{pmatrix} 0 & \boldsymbol{0}^T \\ \boldsymbol{0} & M \end{pmatrix}.$

Let $\boldsymbol{\theta} = (b, \boldsymbol{\beta}^T)^T$ and $X = (\boldsymbol{x}_1, \dots, \boldsymbol{x}_n)^T$ with $\boldsymbol{x}_i \leftarrow (1, \boldsymbol{x}_i^T)^T$. We can obtain a unified form of Eq. (15):

(18) $f(\boldsymbol{\theta}) = \frac{1}{2}(\boldsymbol{z} - X\boldsymbol{\theta})^T A (\boldsymbol{z} - X\boldsymbol{\theta}) + \lambda\|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}\boldsymbol{\theta}^T \widetilde{M} \boldsymbol{\theta},$

where $\boldsymbol{\theta}^T \widetilde{M} \boldsymbol{\theta} = \boldsymbol{\beta}^T M \boldsymbol{\beta}$. We can easily prove that the objective function (18) is convex. To minimize it, we first obtain the gradient of its smooth part $g(\boldsymbol{\theta}) = \frac{1}{2}(\boldsymbol{z} - X\boldsymbol{\theta})^T A (\boldsymbol{z} - X\boldsymbol{\theta}) + \frac{\eta}{2}\boldsymbol{\theta}^T \widetilde{M}\boldsymbol{\theta}$ with respect to $\beta_j$:

(19) $\frac{\partial g}{\partial \beta_j} = -\boldsymbol{x}_{(j)}^T A (\boldsymbol{z} - X\boldsymbol{\theta}) + \eta(\widetilde{M}\boldsymbol{\theta})_j,$

where $\boldsymbol{x}_{(j)}$ is the $j$-th column vector of $X$. Let $u_j = \boldsymbol{x}_{(j)}^T A \boldsymbol{x}_{(j)} + \eta\widetilde{M}_{jj}$ and $v_j = \boldsymbol{x}_{(j)}^T A \big(\boldsymbol{z} - X\boldsymbol{\theta} + \boldsymbol{x}_{(j)}\beta_j\big) - \eta\sum_{k \neq j}\widetilde{M}_{jk}\theta_k$. Furthermore, we can also obtain the subgradient of $f$ with respect to $\beta_j$:

(20) $\frac{\partial f}{\partial \beta_j} = u_j\beta_j - v_j + \lambda\,\mathrm{sgn}(\beta_j),$

where $\mathrm{sgn}(\beta_j) = 1$ if $\beta_j > 0$, $\mathrm{sgn}(\beta_j) = -1$ if $\beta_j < 0$, and $\mathrm{sgn}(\beta_j) \in [-1, 1]$ otherwise. Setting the subgradient to zero, we have the following update rule for $\beta_j$:

(21) $\beta_j \leftarrow \frac{S(v_j, \lambda)}{u_j},$

where the soft-thresholding function is defined as $S(v, \lambda) = \mathrm{sgn}(v)\max(|v| - \lambda, 0)$.

Similarly, we can also get the update rule for the bias term $b$ (note that $b$ is not penalized). Given $\boldsymbol{\beta}$, the gradient of $f$ with respect to $b$ is:

(22) $\frac{\partial f}{\partial b} = -\boldsymbol{1}^T A (\boldsymbol{z} - X\boldsymbol{\theta}).$

Let $u_0 = \boldsymbol{1}^T A \boldsymbol{1}$ and $v_0 = \boldsymbol{1}^T A (\boldsymbol{z} - X\boldsymbol{\theta} + \boldsymbol{1}b)$. This leads to the update rule for $b$:

(23) $b \leftarrow \frac{v_0}{u_0}.$

Briefly, we employ the iteratively reweighted least squares (IRLS) strategy to solve the unified regularized LR framework. In each iteration, we first update Eq. (10) and Eq. (11) to obtain a new QP problem (18). Then we apply a cyclic coordinate descent algorithm to solve it. This process is repeated until convergence. To summarize, we propose the following Algorithm 1.
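The inner coordinate descent over one penalized QP can be sketched as follows (our own naming, and a generic penalty matrix M standing in for the eta-scaled choices of Section 2.3; M = 0 recovers the Lasso case, and the intercept is left unpenalized):

```python
import numpy as np

def soft_threshold(v, t):
    """S(v, t) = sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cd_penalized_wls(X, z, a, lam, M=None, n_iter=100):
    """Cyclic coordinate descent for one inner QP of the NSLR algorithm:
       min 1/2 (z - X b)^T A (z - X b) + lam * ||b[1:]||_1 + 1/2 b[1:]^T M b[1:],
    where a is the diagonal of A and b[0] is the unpenalized intercept."""
    n, p1 = X.shape
    if M is None:
        M = np.zeros((p1 - 1, p1 - 1))
    b = np.zeros(p1)
    for _ in range(n_iter):
        r = z - X @ b                      # current residual
        for j in range(p1):
            r += X[:, j] * b[j]            # partial residual without coordinate j
            num = X[:, j] @ (a * r)
            den = X[:, j] @ (a * X[:, j])
            if j == 0:                     # intercept: plain weighted average
                b[j] = num / den
            else:                          # off-diagonal penalty coupling
                num -= M[j - 1] @ b[1:] - M[j - 1, j - 1] * b[j]
                b[j] = soft_threshold(num, lam) / (den + M[j - 1, j - 1])
            r -= X[:, j] * b[j]            # restore full residual
    return b
```

With lam = 0 and M = 0 this reduces to ordinary weighted least squares, which gives a convenient correctness check.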
2.5 A Novel Network-regularized LR
In this subsection, we focus on a novel network-regularized logistic regression model with the absolute-value operation (AbsNet.LR) as follows:

(24) $\min_{b, \boldsymbol{\beta}} \ -\frac{1}{n}\ell(b, \boldsymbol{\beta}) + \lambda\|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}|\boldsymbol{\beta}|^T L |\boldsymbol{\beta}|,$

where $|\boldsymbol{\beta}| = (|\beta_1|, \dots, |\beta_p|)^T$ and $L$ is the symmetric normalized Laplacian matrix. As discussed in Section 2.3 (Case 4), the AdaNet.LR model can be considered an approximation of this model: it applies a convex penalty to replace the one in Eq. (24), enabling it to be solved by the NSLR algorithm (Algorithm 1). In this subsection we employ a coordinate-wise Newton algorithm to solve Eq. (24) directly. In each Newton update step, we focus on the following optimization problem:

(25) $\min_{\boldsymbol{\theta}} \ \frac{1}{2}(\boldsymbol{z} - X\boldsymbol{\theta})^T A (\boldsymbol{z} - X\boldsymbol{\theta}) + \lambda\|\boldsymbol{\beta}\|_1 + \frac{\eta}{2}|\boldsymbol{\beta}|^T L |\boldsymbol{\beta}|,$

where $\boldsymbol{\theta} = (b, \boldsymbol{\beta}^T)^T$. The coordinate-wise descent strategy does not work for some penalties (e.g., the fused lasso) [28]. However, Hastie et al. [28] (see page 112) note that the penalty $|\boldsymbol{\beta}|^T L |\boldsymbol{\beta}|$ satisfies a regularity condition, implying that if moving along every coordinate direction fails to decrease the objective function, then the objective has reached its minimum [28, 29]. Thus we can minimize Eq. (25) via a cyclic coordinate descent method. For convenience, let $u_j = \boldsymbol{x}_{(j)}^T A \boldsymbol{x}_{(j)} + \eta L_{jj}$, $v_j = \boldsymbol{x}_{(j)}^T A \big(\boldsymbol{z} - X\boldsymbol{\theta} + \boldsymbol{x}_{(j)}\beta_j\big)$ and $s_j = \sum_{k \neq j} L_{jk}|\beta_k|$; we can rewrite the subproblem with respect to $\beta_j$ as follows:

(26) $\min_{\beta_j} \ \frac{1}{2}u_j\beta_j^2 - v_j\beta_j + (\lambda + \eta s_j)\,|\beta_j|.$

Then we have the subgradient equation of $\beta_j$:

(27) $u_j\beta_j - v_j + (\lambda + \eta s_j)\,\mathrm{sgn}(\beta_j) = 0.$

Let $t_j = \lambda + \eta s_j$. We obtain the following update rule:

(28) $\beta_j \leftarrow \frac{S(v_j, t_j)}{u_j}.$

Similarly, we can also get the update rule for the bias term $b$. The subgradient with respect to $b$ is given by

(29) $\frac{\partial}{\partial b} = -\boldsymbol{1}^T A (\boldsymbol{z} - X\boldsymbol{\theta}),$

where $\boldsymbol{1}$ denotes the all-ones vector. Let $u_0 = \boldsymbol{1}^T A \boldsymbol{1}$ and $v_0 = \boldsymbol{1}^T A (\boldsymbol{z} - X\boldsymbol{\theta} + \boldsymbol{1}b)$. It leads to the update rule:

(30) $b \leftarrow \frac{v_0}{u_0}.$

To summarize, we propose Algorithm 2 to solve the AbsNet.LR model.
Tuning parameter selection. Selecting the regularization parameters of NSLR and AbsNet.LR is an important task. Here these parameters are selected by maximizing the Area Under the ROC Curve (AUC). Suppose there are $m^+$ positive-class samples and $m^-$ negative-class samples in a given dataset. Given a binary classifier, let $s_1^+, \dots, s_{m^+}^+$ be the scores of the positive points and $s_1^-, \dots, s_{m^-}^-$ be the scores of the negative points. The AUC of this classifier is calculated using the following formula:

(31) $\mathrm{AUC} = \frac{1}{m^+ m^-}\sum_{i=1}^{m^+}\sum_{j=1}^{m^-} \mathbb{I}(s_i^+ > s_j^-),$

where $\mathbb{I}(s_i^+ > s_j^-)$ is 1 if $s_i^+ > s_j^-$, and 0 otherwise. However, sometimes parameter selection may be unnecessary in a real application. For example, we may choose suitable parameters to obtain a solution with a desired degree of sparsity.
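Eq. (31) translates directly into a few lines of code (a sketch; ties are counted as zero here, which matches the strict inequality in the formula):

```python
import numpy as np

def pairwise_auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs in which the
    positive sample receives the higher score, per Eq. (31)."""
    pos = np.asarray(pos_scores)[:, None]   # shape (m+, 1)
    neg = np.asarray(neg_scores)[None, :]   # shape (1, m-)
    return np.mean(pos > neg)               # averages over all m+ * m- pairs
```

A perfect ranking yields 1.0, a fully inverted one 0.0, and a random scorer about 0.5.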
3 Synthetic Data Experiments
To compare the performance of different regularized LR models, we first generate a sparse coefficient vector $\boldsymbol{\beta} \in \mathbb{R}^{100}$ and a bias term $b$, where the first 40 elements of $\boldsymbol{\beta}$ are sampled from a standard normal distribution and the remaining 60 entries are zero. To generate an expression matrix $X_0$, we define a covariance matrix $\Sigma$ of the $p = 100$ variables, where $\Sigma_{ij} = 1$ when $i = j$, $\Sigma_{ij} = \rho$ when $i \neq j$ with $i, j \leq 40$, and the other entries are zero. This ensures that the first 40 feature vectors are strongly correlated in $X_0$. We generate $X_0$ using the function "mvrnorm" in the MASS R package with mean $\boldsymbol{0}$ and covariance $\Sigma$. Furthermore, we compute the binary response $\boldsymbol{y}$ based on a Bernoulli distribution with success probability $\sigma(b + \boldsymbol{\beta}^T\boldsymbol{x}_i)$, where $\boldsymbol{x}_i$ denotes the $i$-th sample of $X_0$. Finally, we also generate a prior network in which node pairs among the first 40 nodes are connected with high probability, and the remaining pairs are connected with low probability. Note that we set the observed matrix $X = X_0 + \sigma\varepsilon$, where the elements of $\varepsilon$ are randomly sampled from a standard normal distribution and $\sigma$ is used to control the signal-to-noise ratio. The observed $X$ contains 500 samples with 100 features. We randomly select 300 samples as training samples and the remaining 200 as testing samples. We test all regularized LR models on the synthetic training data using a 5-fold cross-validation strategy to select the optimal parameters. We repeat the simulation 50 times and compute the average AUC over the 50 experiments for classification on the testing synthetic data. To compare performance on variable selection, we also calculate the average sensitivity/specificity scores with respect to the selected variables (nonzero entries of $\hat{\boldsymbol{\beta}}$), and the average number of edges among the selected variables in the prior network.
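The data-generation steps can be sketched as follows (the network-generation step is omitted, and the correlation strength, noise level and intercept below are assumed values for illustration, since the originals are unreadable in this copy):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 100, 500, 40          # features, samples, true nonzero coefficients

# sparse coefficient vector: 40 standard-normal entries followed by 60 zeros
beta = np.concatenate([rng.normal(size=k), np.zeros(p - k)])
b0 = 0.5                        # intercept (assumed value)

# block covariance making the first 40 features strongly correlated
rho = 0.7                       # assumed within-block correlation
S = np.eye(p)
S[:k, :k] = rho
np.fill_diagonal(S, 1.0)

# multivariate normal draw (the counterpart of MASS::mvrnorm)
X0 = rng.multivariate_normal(np.zeros(p), S, size=n)

# Bernoulli responses from the logistic model
prob = 1.0 / (1.0 + np.exp(-(X0 @ beta + b0)))
y = (rng.random(n) < prob).astype(int)

# noisy observed matrix; sigma controls the signal-to-noise ratio
sigma = 0.5                     # assumed noise level
X_obs = X0 + sigma * rng.normal(size=(n, p))
```

The resulting 500 x 100 matrix would then be split 300/200 into training and testing samples as described above.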
Here we define sensitivity as the percentage of true nonzero entries discovered, and specificity as the percentage of true zero entries discovered. The overall results are shown in Table 1. Generally, the network-based regularized LR models (especially AbsNet.LR) are superior to the other algorithms with respect to AUC. Moreover, AbsNet.LR obtains a relatively higher sensitivity in the selected variables, and a relatively larger number of edges among them in the prior network, compared to the others.
Table 1:

methods     AUC    no. of nonzeros  no. of edges  sensitivity  specificity
LR          0.733  100              450.00        1.00         0.00
Lasso.LR    0.792  45.84            134.22        0.67         0.68
Elastic.LR  0.806  52.76            176.62        0.79         0.64
Network.LR  0.828  54.64            234.42        0.97         0.74
AdaNet.LR   0.823  53.20            196.40        0.88         0.70
AbsNet.LR   0.830  63.50            281.78        0.98         0.60
4 Application to GBM data
4.1 The GBM Data
We download the level 3 gene expression data (Broad Institute HT_HG-U133A platform) and clinical data of GBM from the TCGA database [3]. We employ two alternative methods to impute the missing values in the gene expression profiling: (1) the $k$-nearest neighbor (knn) method implemented in the "impute" R package; (2) the mean of the considered genes. We find that the choice between the two methods has little effect on the final results. Thus, we simply adopt the second method to impute the missing data. We obtain gene expression data of 12,042 genes across 525 patients. Furthermore, we standardize the expression of each gene across all samples using the "scale" function in R. We also download a protein interaction network (PPI) from Pathway Commons [30]. Finally, we obtain gene expression data with 10,963 genes and a PPI network with 248,403 edges. Our goal in this application is to predict the survival risk of GBM patients (Fig. 1A). We first dichotomize the survival time of patients into a binary outcome via a designated cutoff. Here we consider one year as the cutoff time to balance the number of positive and negative samples (Fig. 1B and Table 2).

Table 2:

data  #genes  #samples  #alive  #death  #censoring
GBM   10,963  525       236     204     115

We apply the univariate Cox proportional hazards model [12] to assess and filter the 10,963 genes of the GBM gene expression data (Table 2). Finally, we obtain 2001 genes passing a significance threshold for further analysis. Only those genes are used in all the regularized LR models.
4.2 Results of the GBM Data
We first randomly select 2/3 of the samples ($n$ = 350) as training samples and 1/3 ($n$ = 175) as testing samples (Fig. 1B). We consider a grid of values for the two regularization parameters, forming a total of 42 parameter pairs. We first train the different regularized LR methods on the GBM training data using 5-fold cross-validation. Then we test all methods on the GBM testing data. We find that the regularized LR models are superior to the typical LR model (Table 3). Generally, Lasso.LR is inferior to the other regularized methods (Elastic.LR, Network.LR, AdaNet.LR and AbsNet.LR), whose results are relatively consistent. However, AbsNet.LR, AdaNet.LR and Network.LR identify gene sets with more edges by integrating the prior network data. All these results imply the importance of integrating prior protein interaction network data to improve prediction accuracy and biological interpretation. Therefore, we focus our subsequent analysis on AbsNet.LR, which obtains its best AUC and identifies a gene set with 157 genes.
Table 3:

methods     avg.no.gene  avg.no.edge  avg.AUC  max.AUC
LR          2001         6341         0.5787   0.5787
Lasso.LR    152.2619     45.0714      0.5673   0.6182
Elastic.LR  324.1429     296.9762     0.5842   0.6659
Network.LR  325.6905     328.1905     0.5832   0.6660
AdaNet.LR   331.7857     448.9524     0.5823   0.6659
AbsNet.LR   410.6429     1013.6190    0.5860   0.6627
To evaluate the biological relevance of the identified gene set, we apply the DAVID online web tool [38] to perform Gene Ontology (GO) biological process enrichment analysis. GO biological processes with $P$-values < 0.05 are selected as significant. We find that the genes are related to some important processes including positive regulation of kinase activity, glutamatergic synaptic transmission, and the inositol metabolic process. The identified biological processes are consistent with recent literature reports. For example, the 'positive regulation of protein kinase activity' process is well known to be related to GBM, and its activation is a key mechanism for GBM development [39].
Furthermore, we extract the top 30 genes corresponding to the largest absolute values of the coefficients estimated by AbsNet.LR. Based on the top 30 genes, we retrain the typical LR model on the training data. On the independent testing data, LR obtains an AUC of 0.6727. In contrast, an average AUC of only about 0.55 is obtained with 30 randomly selected genes. Next, based on the built LR model, we re-score all the training and testing data (Fig. 1B), and based on the predicted LR probability scores, we divide all the training and testing samples into three groups, called the low-, intermediate- and high-risk groups, respectively (Fig. 2A and B). We can also see the expression heat maps of the top 30 genes in the three groups (Fig. 2E and 2F). We find that some genes are highly expressed in one group while lowly expressed in another. Interestingly, we also find that these top genes form a set of subnetworks (e.g., Fig. 3A and B), which are suggested to be related to GBM [40, 41, 42, 43]. Furthermore, we study whether the defined groups have significant relationships with other clinical variables (e.g., age). We find that the patients in the high-risk group are older than those in the low- and intermediate-risk groups in both the training and testing data (Fig. 2C and D). All these results show that the identified gene set is related to the survival outcome of GBM patients.
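A sketch of the risk-group construction (our own naming; the tertile cutoffs are an assumption for illustration, as this excerpt does not specify the exact grouping rule):

```python
import numpy as np

def risk_groups(X, beta, b, labels=("low", "intermediate", "high")):
    """Score samples with sigma(b + X @ beta) and assign tertile-based
    low/intermediate/high risk groups (a sketch, not the paper's exact rule)."""
    scores = 1.0 / (1.0 + np.exp(-(b + X @ beta)))
    lo, hi = np.quantile(scores, [1 / 3, 2 / 3])   # assumed tertile cutoffs
    groups = np.where(scores <= lo, labels[0],
                      np.where(scores <= hi, labels[1], labels[2]))
    return scores, groups
```

With monotonically increasing scores this splits the cohort into three equal-sized groups, mirroring the low/intermediate/high-risk stratification described above.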
5 Application to Lung cancer data
5.1 Non-Small Cell Lung Cancer Expression Data
We download two subtypes of non-small cell lung cancer gene expression datasets from TCGA [4], including Lung adenocarcinoma (LUAD) with 350 samples and Lung squamous-cell carcinoma (LUSC) with 315 samples. Here we consider the TCGA level 3 expression data, which are quantified at the gene and transcript levels from RNA-Seq data by an expectation-maximization method [44]. The expression values are then transformed. We first process the LUAD and LUSC gene expression data to keep the genes that appear in the KEGG pathway database from the Molecular Signatures Database (MSigDB) [45]. Finally, we obtain 3230 genes related to 151 KEGG pathways. Here we consider each KEGG pathway as a fully connected subnetwork to build a prior network with 143,438 edges.

5.2 Results of the Lung Cancer Data
Here we apply all the methods to the lung cancer data to identify subtype-specific biomarkers. In contrast to the GBM analysis, where we integrated a PPI network, here we use the KEGG pathways as the prior information. We randomly select 2/3 of all the samples as training samples, and the remaining ones as independent testing samples. We show the results of all the regularized LR models on the independent testing set (Table 4). In total, 69 genes are selected by AbsNet.LR. All the genes identified by the other regularized LR models (Lasso.LR, Elastic.LR, Network.LR and AdaNet.LR) are included in these 69 genes. The network-regularized LR models give similar results, which are superior to those of the typical LR and Lasso.LR models. In addition, compared to the other network-regularized LR models, AbsNet.LR identifies a gene set with more edges in the KEGG pathway network.
Table 4:

methods     AUC     no. of genes  no. of edges
LR          0.7540  3230          –
Lasso.LR    0.9784  48            53
Elastic.LR  0.9789  57            62
Network.LR  0.9789  59            67
AdaNet.LR   0.9789  57            62
AbsNet.LR   0.9794  69            104
Furthermore, we evaluate the biological relevance of the identified 69 genes using the DAVID online web tool [38] and find several significantly enriched KEGG pathways related to lung cancer, including metabolism of xenobiotics by cytochrome P450 (GSTA2, CYP3A5, CYP2F1, CYP3A7, CYP2C9, CYP2C8, UGT2A1) [46], linoleic acid metabolism (CYP3A5, CYP3A7, CYP2C9, AKR1B10, CYP2C8) [47], and retinol metabolism (CYP3A5, CYP3A7, CYP2C9, CYP2C8, UGT2A1, RPE65) [48].
6 Computing Platform and Running Time
All scripts were run on a desktop personal computer with an Intel(R) Core(TM) i7-4770 CPU @ 3.4 GHz and 16 GB memory, running Windows and R 3.2.5. The code is available at http://page.amss.ac.cn/shihua.zhang/. For the synthetic data experiments, about 140 MB of memory is used and the running time is about 0.7 hours. For the GBM application, about 150 MB is used and the running time is about 0.8 hours. For the lung cancer application, about 160 MB is used and the running time is about 1 hour.
7 Conclusion
In this paper, we first introduce the typical network-regularized LR model and others in a unified framework. Although the typical network-regularized LR model can incorporate prior information to obtain better biological interpretability, it fails to account for the opposite effects of variables with different signs. To address this limitation, we adopt a novel network-regularized penalty in the LR model (denoted AbsNet.LR). However, the novel penalty is not differentiable at zero, making it hard to solve using a typical gradient descent method. To this end, we first adopt the adaptive network-regularized penalty (note that it is convex) to approximate the novel network-regularized penalty, and develop an adaptive network-regularized LR (AdaNet.LR) model which can be solved easily using the NSLR algorithm (Algorithm 1). We further find that the novel network-regularized penalty satisfies a regularity condition [28, 29], which inspires us to solve it efficiently via a cyclic coordinate-wise Newton algorithm. We note that the binary LR-based classification models can easily be extended to multi-class or multinomial classification problems [34, 49].
Applications to synthetic data show that the presented methods are more effective than the typical ones. We also apply them to GBM expression data and a protein interaction network to predict the mortality probability of patients within one year. In addition, we apply them to a lung cancer expression dataset of two subtypes to identify subtype-specific biomarkers. We find a set of 69 genes which are tightly connected in the prior network. Functional enrichment analysis of these genes reveals several KEGG pathways clearly related to lung cancer.
Acknowledgment
Shihua Zhang and Juan Liu are the corresponding authors of this paper. Wenwen Min would like to thank the support of National Center for Mathematics and Interdisciplinary Sciences, Academy of Mathematics and Systems Science, CAS during his visit. This work was supported by the National Science Foundation of China [61379092, 61422309, 61621003], the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) (XDB13040600), the Outstanding Young Scientist Program of CAS, CAS Frontier Science Research Key Project – Top Young Scientist (No. QYZDBSSWSYS008) and the Key Laboratory of Random Complex Structures and Data Science, CAS.
References
 [1] Z. Liu, X.S. Zhang, and S. Zhang, “Breast tumor subgroups reveal diverse clinical prognostic power,” Scientific reports, vol. 4, 4002; DOI:10.1038/srep04002, 2014.
 [2] X. Yang, L. Gao, and S. Zhang, "Comparative pan-cancer DNA methylation analysis reveals cancer common and specific patterns," Briefings in Bioinformatics, pp. 1–13; bbw063, 2016.
 [3] C. Benz, J. BarnholtzSloan, W. Barrett, Q. Ostrom, Y. Wolinsky, K. Black, B. Bose, P. Boulos, M. Boulos, and J. Brown, “The somatic genomic landscape of glioblastoma,” Cell, vol. 155, no. 2, pp. 462–477, 2013.
 [4] K. A. Hoadley, C. Yau, D. M. Wolf, A. D. Cherniack, D. Tamborero, S. Ng, M. D. M. Leiserson, B. Niu, M. D. Mclellan, and V. Uzunangelov, “Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin,” Cell, vol. 158, no. 4, pp. 929–944, 2014.
 [5] V. N. Kristensen, O. C. Lingjærde, H. G. Russnes, H. K. M. Vollan, A. Frigessi, and A.L. BørresenDale, “Principles and methods of integrative genomic analyses in cancer,” Nature Reviews Cancer, vol. 14, no. 5, pp. 299–313, 2014.
 [6] J. Zhao, S. Zhang, L.Y. Wu, and X.S. Zhang, “Efficient methods for identifying mutated driver pathways in cancer,” Bioinformatics, vol. 28, no. 22, pp. 2940–2947, 2012.
 [7] S. Zhang, C.-C. Liu, W. Li, H. Shen, P. W. Laird, and X. J. Zhou, "Discovery of multi-dimensional modules by integrative analysis of cancer genomic data," Nucleic Acids Research, vol. 40, no. 19, pp. 9379–9391, 2012.
 [8] S. Zhang, Q. Li, J. Liu, and X. J. Zhou, “A novel computational framework for simultaneous integration of multiple types of genomic data to identify microRNAgene regulatory modules,” Bioinformatics, vol. 27, no. 13, pp. i401–i409, 2011.
 [9] J. Pittman, E. Huang, H. Dressman, C.F. Horng, S. H. Cheng, M.H. Tsou, C.M. Chen, A. Bild, E. S. Iversen, A. T. Huang et al., “Integrated modeling of clinical and gene expression information for personalized prediction of disease outcomes,” Proceedings of the National Academy of Sciences of the United States of America, vol. 101, no. 22, pp. 8431–8436, 2004.
 [10] W. Zhang, T. Ota, V. Shridhar, J. Chien, B. Wu, and R. Kuang, "Network-based survival analysis reveals subnetwork signatures for predicting outcomes of ovarian cancer treatment," PLoS Computational Biology, vol. 9, no. 3, pp. 760–766, 2013.
 [11] N. Simon, J. Friedman, T. Hastie, and R. Tibshirani, “Regularization paths for cox’s proportional hazards model via coordinate descent.” Journal of Statistical Software, vol. 39, no. 5, pp. 1–13, 2011.
 [12] D. R. Cox, “Regression models and lifetables,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 34, no. 2, pp. 187–220, 1972.
 [13] J. Liao and K.V. Chin, “Logistic regression for disease classification using microarray data: model selection in a large p and small n case,” Bioinformatics, vol. 23, no. 15, pp. 1945–1951, 2007.
 [14] S. K. Shevade and S. S. Keerthi, “A simple and efficient algorithm for gene selection using sparse logistic regression.” Bioinformatics, vol. 19, no. 17, pp. 2246–2253, 2003.
 [15] G. C. Cawley and N. L. C. Talbot, “Gene selection in cancer classification using sparse logistic regression with bayesian regularization,” Bioinformatics, vol. 22, no. 19, pp. 2348–2355, 2006.
 [16] T. T. Wu, Y. F. Chen, T. Hastie, E. Sobel, and K. Lange, "Genome-wide association analysis by lasso penalized logistic regression," Bioinformatics, vol. 25, no. 6, pp. 714–721, 2009.
 [17] J. Bootkrajang and A. Kabán, “Classification of mislabelled microarrays using robust sparse logistic regression.” Bioinformatics, vol. 29, no. 7, pp. 870–877, 2013.

 [18] M. Tan, I. W. Tsang, and L. Wang, "Minimax sparse logistic regression for very high-dimensional feature selection," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 10, pp. 1609–1622, 2013.
 [19] Y. Lin, M. Yu, S. Wang, R. Chappell, and T. F. Imperiale, "Advanced colorectal neoplasia risk stratification by penalized logistic regression," Statistical Methods in Medical Research, vol. 25, no. 4, pp. 1677–1691, 2016.
 [20] H. Park, Y. Shiraishi, S. Imoto, and S. Miyano, “A novel adaptive penalized logistic regression for uncovering biomarker associated with anticancer drug sensitivity,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, no. 1, pp. 1, PrePrints, 2016.
 [21] Z. Y. Algamal and M. H. Lee, “Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification,” Computers in Biology and Medicine, vol. 67, pp. 136–145, 2015.
 [22] R. Tibshirani, “Regression shrinkage and selection via the lasso: a retrospective,” Journal of the Royal Statistical Society, vol. 73, no. 3, pp. 273–282, 2011.
 [23] C. Li and H. Li, "Network-constrained regularization and variable selection for analysis of genomic data," Bioinformatics, vol. 24, no. 9, pp. 1175–1182, 2008.
 [24] ——, “Variable selection and regression analysis for graph-structured covariates with an application to genomics,” The Annals of Applied Statistics, vol. 4, no. 3, pp. 1498–1516, 2010.
 [25] S. Zhe, S. A. Naqvi, Y. Yang, and Y. Qi, “Joint network and node selection for pathway-based genomic data analysis,” Bioinformatics, vol. 29, no. 16, pp. 1987–1996, 2013.
 [26] D. Cai, X. He, J. Han, and T. S. Huang, “Graph regularized nonnegative matrix factorization for data representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1548–1560, 2011.
 [27] W. Min, J. Liu, and S. Zhang, “L0-norm sparse graph-regularized SVD for biclustering,” arXiv preprint arXiv:1603.06035, 2016.
 [28] T. Hastie, R. Tibshirani, and M. Wainwright, Statistical learning with sparsity: the lasso and generalizations. CRC Press, London, 2015.
 [29] P. Tseng, “Convergence of a block coordinate descent method for nondifferentiable minimization,” Journal of Optimization Theory and Applications, vol. 109, no. 3, pp. 475–494, 2001.
 [30] E. G. Cerami, B. E. Gross, E. Demir, I. Rodchenkov, O. Babur, N. Anwar, N. Schultz, G. D. Bader, and C. Sander, “Pathway Commons, a web resource for biological pathway data,” Nucleic Acids Research, vol. 39, no. Database issue, pp. 685–690, 2010.
 [31] J. Friedman, T. Hastie, H. Höfling, R. Tibshirani et al., “Pathwise coordinate optimization,” The Annals of Applied Statistics, vol. 1, no. 2, pp. 302–332, 2007.
 [32] J. Friedman, T. Hastie, and R. Tibshirani, “Regularization paths for generalized linear models via coordinate descent,” Journal of statistical software, vol. 33, no. 1, pp. 1–22, 2010.
 [33] P. J. Green, “Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 46, no. 2, pp. 149–192, 1984.
 [34] B. Krishnapuram, L. Carin, M. A. Figueiredo, and A. J. Hartemink, “Sparse multinomial logistic regression: Fast algorithms and generalization bounds,” IEEE transactions on pattern analysis and machine intelligence, vol. 27, no. 6, pp. 957–968, 2005.
 [35] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
 [36] W. Zhang, Y. Wan, G. I. Allen, K. Pang, M. L. Anderson, and Z. Liu, “Molecular pathway identification using biological network-regularized logistic models,” BMC Genomics, vol. 14, no. Suppl 8, pp. 1–8, 2013.
 [37] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1,” Mar. 2014.
 [38] D. W. Huang, B. T. Sherman, and R. A. Lempicki, “Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources,” Nature protocols, vol. 4, no. 1, pp. 44–57, 2009.
 [39] N. Azoitei, A. Kleger, N. Schoo, D. R. Thal, C. Brunner, G. V. Pusapati, A. Filatova, F. Genze, P. Möller, T. Acker et al., “Protein kinase D2 is a novel regulator of glioblastoma growth and tumor formation,” Neuro-Oncology, vol. 13, no. 7, pp. 710–724, 2011.
 [40] J. Viotti, E. Duplan, C. Caillava, J. Condat, T. Goiran, C. Giordano, Y. Marie, A. Idbaih, J. Y. Delattre, and J. Honnorat, “Glioma tumor grade correlates with parkin depletion in mutant p53-linked tumors and results from loss of function of p53 transcriptional activity,” Oncogene, vol. 33, no. 14, pp. 1764–1775, 2013.
 [41] D. Biasoli, “Glioblastoma cells inhibit astrocytic p53 expression favoring cancer malignancy,” Oncogenesis, vol. 3, e123, 2014.
 [42] X. Hu, D. Chen, Y. Cui, Z. Li, and J. Huang, “Targeting microRNA-23a to inhibit glioma cell invasion via HOXD10,” Scientific Reports, vol. 3, no. 7478, pp. 182–182, 2013.
 [43] A. Calzolari, L. M. Larocca, S. Deaglio, V. Finisguerra, A. Boe, C. Raggi, L. Ricci-Vitani, F. Pierconti, F. Malavasi, and R. D. Maria, “Transferrin receptor 2 is frequently and highly expressed in glioblastomas,” Translational Oncology, vol. 3, no. 2, pp. 123–134, 2010.
 [44] C. B. Do and S. Batzoglou, “What is the expectation maximization algorithm?” Nature biotechnology, vol. 26, no. 8, pp. 897–899, 2008.
 [45] A. Subramanian, P. Tamayo, V. K. Mootha, S. Mukherjee, B. L. Ebert, M. A. Gillette, A. Paulovich, S. L. Pomeroy, T. R. Golub, E. S. Lander et al., “Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles,” Proceedings of the National Academy of Sciences, vol. 102, no. 43, pp. 15545–15550, 2005.
 [46] J. Hukkanen, O. Pelkonen, J. Hakkola, and H. Raunio, “Expression and regulation of xenobiotic-metabolizing cytochrome P450 (CYP) enzymes in human lung,” Critical Reviews in Toxicology, vol. 32, no. 5, pp. 391–411, 2002.
 [47] J. Liu, P. J. Mazzone, J. P. Cata, A. Kurz, M. Bauer, E. J. Mascha, and D. I. Sessler, “Serum free fatty acid biomarkers of lung cancer,” Chest, vol. 146, no. 3, pp. 670–679, 2014.
 [48] E. S. Kuznetsova, O. L. Zinovieva, N. Y. Oparina, M. M. Prokofjeva, P. V. Spirin, I. A. Favorskaya, I. B. Zborovskaya, N. A. Lisitsyn, V. S. Prassolov, and T. D. Mashkova, “Abnormal expression of genes that regulate retinoid metabolism and signaling in non-small-cell lung cancer,” Molecular Biology, vol. 50, no. 2, pp. 220–229, 2016.
 [49] G. C. Cawley, N. L. Talbot, and M. Girolami, “Sparse multinomial logistic regression via Bayesian L1 regularisation,” Advances in Neural Information Processing Systems, vol. 19, pp. 209–216, 2007.