In many classification and regression problems, the objective is to select a sparse vector of relevant features. For example, in biological applications, DNA microarray and more recent RNA-seq devices provide high-dimensional gene expression data (typically 20,000 genes). The challenge is to select the smallest number of genes (the so-called biomarkers) necessary to achieve accurate biological classification and prediction. A popular approach to recover sparse feature vectors (under a mutual incoherence condition) is to solve a convex optimization problem involving a data fidelity term and the ℓ1 norm [6, 17, 19, 34]. Recent Lasso regularization methods take correlated data into account using either the ℓ1 pairwise penalty [24, 27, 35] or the ℓ2 pairwise penalty (see also subsequent developments of these penalties). The sparsity or grouping constrained classification problem can be cast as the minimization of a smooth convex loss subject to an ℓ1 constraint or a pairwise constraint. Most of the existing work has focused on Lagrangian penalty methods, which aim at solving the corresponding unconstrained penalized problem. Although, under proper qualification conditions, there is a formal equivalence between constrained and unconstrained Lagrangian formulations [3, Chapter 19], the exact Lagrange multiplier can seldom be computed easily, which leaves the properties of the resulting solutions loosely defined. The main contribution of the present paper is to propose an efficient splitting algorithm to solve the constrained formulation
directly. As discussed in the literature, the bound η defining the constraint can often be determined from prior information on the type of problem at hand. Our splitting algorithm proceeds by alternating a gradient step on the smooth classification risk function and a projection onto the lower level set modeling the constraint. The main focus is on the cases in which the constraint is the ℓ1 constraint, the ℓ1 pairwise constraint, or the ℓ2 pairwise constraint. The projection onto the lower level set is implemented via an outer projection procedure which consists of successive projections onto the intersection of two simple half-spaces. The remainder of the paper is organized as follows. Section II introduces the constrained optimization model. Section III presents our new splitting algorithm, which applies to any constrained smooth convex optimization problem; in particular, we also discuss its application to regression problems. Section IV presents experiments on both synthetic data and classical biological and genomic data sets.
II Classification risk and convex constraints
II-A Risk minimization
We assume that n samples (x_i)_{1≤i≤n} in ℝ^N are available. Typically N ≫ n, where N is the dimension of the feature vector. Each sample x_i is annotated with a label y_i taking its value in {−1, +1}.
The classification risk associated with a linear classifier parameterized by a vector w in ℝ^N is given by

R(w) = (1/n) sum_{i=1}^n φ(y_i <x_i, w>),  (2)

where φ is a loss function.
We restrict our investigation to convex losses which satisfy the following assumption.
Let F : ℝ → ]0, 1[ be an increasing Lipschitz-continuous function which is antisymmetric with respect to the point (0, 1/2), such that 1 − F is integrable on [0, +∞[, and differentiable at 0 with F'(0) > 0. The loss is defined by

φ(t) = ∫_t^{+∞} (1 − F(s)) ds.
The main advantage of this class of smooth losses is that it allows us to compute posterior estimates. The function F relates the prediction <x, w> of a sample x to the posterior probability of the class y = 1 via

P(y = 1 | x) = F(<x, w>).
This property will be used in Section IV to compute without any approximation the area under the ROC curve (AUC). The loss φ in Assumption 1 is convex, everywhere differentiable with a Lipschitz-continuous derivative, and it is twice differentiable at 0 with φ''(0) = F'(0) > 0. In turn, the function R of (2) is convex and differentiable, and its gradient

∇R(w) = (1/n) sum_{i=1}^n (F(y_i <x_i, w>) − 1) y_i x_i

has Lipschitz constant

β = (θ/n) ||X||²,  (6)

where θ is the Lipschitz constant of F and ||X|| is the largest singular value of the matrix X whose rows are the samples x_1, …, x_n.
Applications to classification often involve normalized features. In this case, since ||X||² ≤ n, (6) reduces to the bound β ≤ θ. Examples of functions which satisfy Assumption 1 include F(t) = 1/(1 + e^{−t}), which leads to the logistic loss φ(t) = ln(1 + e^{−t}), for which θ = 1/4. Another example is the Matsusita loss

φ(t) = (√(t² + 4) − t)/2,

which is induced by F(t) = (1/2)(1 + t/√(t² + 4)).
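As an illustration, the logistic instance of Assumption 1 and the associated posterior map can be sketched in a few lines of Python. This is our own minimal sketch (function names are illustrative, not from any released code), assuming the risk (2) with labels in {−1, +1}:

```python
import numpy as np

def logistic_loss(t):
    # phi(t) = log(1 + exp(-t)), computed stably via logaddexp
    return np.logaddexp(0.0, -t)

def posterior(score):
    # F(t) = 1/(1 + exp(-t)) maps the prediction <x, w> to P(y = 1 | x)
    return 1.0 / (1.0 + np.exp(-score))

def empirical_risk(w, X, y):
    # classification risk (2): mean loss over the n samples, y in {-1, +1}
    return logistic_loss(y * (X @ w)).mean()
```

A finite-difference check of the curvature at 0 recovers φ''(0) = F'(0) = 1/4, consistent with θ = 1/4 for the logistic loss.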
II-B Sparsity model
In many applications, collecting a sufficient number of features to perform prediction is a costly process. The challenge is therefore to select the smallest number of features (genes or biomarkers) necessary for efficient classification and prediction. The problem can be cast as a constrained optimization problem, namely,

minimize R(w) subject to ||w||_1 ≤ η.  (9)
It has been shown in the context of compressed sensing that, under a so-called restricted isometry property, minimizing with the ℓ1 norm is tantamount to minimizing with the ℓ0 penalty, in a sense made precise in the compressed sensing literature.
II-C Grouping model
Let us consider a graph in which the vertices are the features and the edge set E connects related features. The basic idea is to introduce constraints coupling the coefficients w_i and w_j of features i and j connected by an edge in the graph. In this paper we consider two approaches: a directed acyclic graph and an undirected graph. The fused Lasso encourages the coefficients w_i and w_j of features i and j connected by an edge in the graph to be similar. We define the problem of minimizing R under the directed acyclic graph constraint as

minimize R(w) subject to sum_{(i,j) in E} |w_i − w_j| ≤ η,  (10)

for some suitable parameter η > 0. In the second, undirected graph, approach one constrains the coefficients w_i and w_j of features i and j connected by an edge using an ℓ2 pairwise constraint. The problem is to

minimize R(w) subject to sum_{(i,j) in E} (w_i − w_j)² ≤ η.  (11)
To approach the constrained problems (10) and (11), state-of-the-art methods employ a penalized variant [18, 19, 21, 34]. In these Lagrangian approaches, the objective is to minimize R(w) + λ g(w), where the multiplier λ > 0 aims at controlling sparsity and grouping, and where the constraint functions g are those defining (9), (10), and (11).
The main drawback of current penalty formulations resides in the cost associated with the reliable computation of the Lagrange multiplier using homotopy algorithms [18, 21, 22, 28]. The worst-case complexity is exponential in the number of features, which is usually intractable on real data. Although experiments using homotopy algorithms suggest that the actual complexity is much lower in practice, the underlying path algorithm remains computationally expensive for high-dimensional data sets such as genomic data sets. To circumvent this computational issue, we propose a new general algorithm to solve either the sparse (9) or the grouping (10) constrained convex optimization problems directly.
II-D Optimization model
Our classification minimization problem is formally cast as follows.
III Splitting algorithm
In this section, we propose an algorithm for solving the constrained classification problem (15). This algorithm fits in the general category of forward-backward splitting methods, which have been popular since their introduction in data processing problems; see also [12, 13, 30, 32, 33]. These methods offer flexible implementations with guaranteed convergence of the sequence of iterates they generate, a key property to ensure the reliability of our variational classification scheme.
III-A General framework
As noted in Section II, R is a differentiable convex function and its gradient has Lipschitz constant β, where β is given by (6). Likewise, since g is convex and continuous, the constraint set C = {w in ℝ^N : g(w) ≤ η} is closed and convex, as a lower level set of g. The principle of a splitting method is to use the constituents of the problem, here R and C, separately. In the problem at hand, it is natural to use the projection-gradient method to solve (15). This method, which is an instance of the proximal forward-backward algorithm, alternates a gradient step on the objective R and a projection step onto the constraint set C. It is applicable in the following setting, which captures Problem 1.
Let R : ℝ^N → ℝ be a differentiable convex function such that ∇R is Lipschitz-continuous with constant β, let g : ℝ^N → ℝ be a convex function, let η in ℝ, and set C = {w in ℝ^N : g(w) ≤ η}. The problem is to

minimize R(w) subject to w in C.  (16)
Let P_C denote the projection operator onto the closed convex set C. Given an initial point w_0, a sequence (γ_k) of strictly positive step sizes, and a sequence (a_k) in ℝ^N modeling computational errors in the implementation of the projection operator P_C, the projection-gradient algorithm for solving Problem 2 assumes the form

w_{k+1} = P_C(w_k − γ_k ∇R(w_k)) + a_k.  (17)
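When C is the ℓ1 ball, P_C has a well-known exact implementation (a sort-based scheme in the style of Duchi et al.), which makes for a compact illustration of iteration (17). The sketch below is our own toy version under simplifying assumptions (error-free projections, constant step size), not the general algorithm of this paper:

```python
import numpy as np

def project_l1_ball(v, eta):
    # exact Euclidean projection onto {w : ||w||_1 <= eta} (sort-based scheme)
    if np.abs(v).sum() <= eta:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes in decreasing order
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - eta)[0][-1]
    tau = (css[k] - eta) / (k + 1.0)      # soft-thresholding level
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def projection_gradient(grad, project, w0, step, n_iter=500):
    # iteration (17) with exact projections (a_k = 0) and constant step gamma
    w = w0
    for _ in range(n_iter):
        w = project(w - step * grad(w))
    return w
```

For a quadratic objective with a minimizer outside the ball, the iterates converge to the projection of that minimizer onto the ball, as expected from Theorem 1.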
We derive at once from [14, Theorem 3.4(i)] the following convergence result, which guarantees the convergence of the iterates.
Theorem 1 states that the whole sequence of iterates converges to a solution. Using classical results on the asymptotic behavior of the projection-gradient method, we can complement this result with the following upper bound on the rate of convergence of the objective value.
In Theorem 1, suppose in addition that the errors (a_k) vanish. Then R(w_k) − min_{w in C} R(w) ≤ c/k for some c in ]0, +∞[.
The implementation of (17) is straightforward except for the computation of P_C. Indeed, C is defined in (14) as the lower level set of a convex function, and no explicit formula exists for computing the projection onto such a set in general [3, Section 29.5]. Fortunately, Theorem 1 asserts that P_C need not be computed exactly. Next, we provide an efficient algorithm to compute an approximate projection onto C.
III-B Projection onto a lower level set
Let r in ℝ^N, let g : ℝ^N → ℝ be a convex function, and let η in ℝ be such that

C = {w in ℝ^N : g(w) ≤ η} ≠ Ø.  (19)
The objective is to compute iteratively the projection P_C r of r onto C. The principle of the algorithm is to replace this (usually intractable) projection by a sequence of projections onto simple outer approximations to C, each consisting of the intersection of two affine half-spaces.
We first recall that u in ℝ^N is called a subgradient of g at w if [3, Chapter 16]

(for every v in ℝ^N)  <v − w, u> + g(w) ≤ g(v).  (20)
The set of all subgradients of g at w is denoted by ∂g(w). If g is differentiable at w, this set reduces to a single vector, namely the gradient ∇g(w). The projection p = P_C r of r onto C is characterized by

p in C and (for every w in C)  <w − p, r − p> ≤ 0.  (21)
Given w and v in ℝ^N, define a closed affine half-space by

H(w, v) = {h in ℝ^N : <h − v, w − v> ≤ 0}.  (22)
Note that H(w, w) = ℝ^N and that, if v ≠ w, H(w, v) is the closed affine half-space onto which the projection of w is v. According to (21), C ⊂ H(r, P_C r). The principle of the algorithm is as follows (see Fig. 1). At iteration n, if g(w_n) ≤ η, then w_n in C and the algorithm terminates with w_n = P_C r. Indeed, since C ⊂ H(r, w_n) [10, Section 5.2] and w_n is the projection of r onto H(r, w_n), we have (for every w in C) <w − w_n, r − w_n> ≤ 0. Hence (21) holds, i.e., w_n = P_C r. Otherwise, one first computes the so-called subgradient projection of w_n onto C. Recall that, given u_n in ∂g(w_n), the subgradient projection of w_n onto C is [3, 7, 9]

z_n = w_n − ((g(w_n) − η)/||u_n||²) u_n.  (23)
As noted in the literature, the closed half-space H(w_n, z_n) serves as an outer approximation to C at iteration n, i.e., C ⊂ H(w_n, z_n); moreover w_n ∉ H(w_n, z_n). Thus, since we have also seen that C ⊂ H(r, w_n), we have

C ⊂ H(r, w_n) ∩ H(w_n, z_n).  (24)
Let x, y, and z be points in ℝ^N such that

H(x, y) ∩ H(y, z) ≠ Ø.  (25)

Moreover, set χ = <x − y, y − z>, μ = ||x − y||², ν = ||y − z||², and ρ = μν − χ². Then the projection of x onto H(x, y) ∩ H(y, z) is

Q(x, y, z) = z, if ρ = 0 and χ ≥ 0;
Q(x, y, z) = x + (1 + χ/ν)(z − y), if ρ > 0 and χν ≥ ρ;
Q(x, y, z) = y + (ν/ρ)(χ(x − y) + μ(z − y)), if ρ > 0 and χν < ρ.  (26)
To sum up, the projection of r onto the set C of (19) will be performed by executing the following routine:

w_0 = r; for n = 0, 1, …: if g(w_n) ≤ η, exit with P_C r = w_n; otherwise, compute z_n by (23) and set w_{n+1} = Q(r, w_n, z_n).  (27)
Let r in ℝ^N, let g : ℝ^N → ℝ be a convex function, and let η in ℝ be such that C = {w : g(w) ≤ η} ≠ Ø. Then either (27) terminates in a finite number of iterations at P_C r or it generates an infinite sequence (w_n) such that w_n → P_C r.
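Routine (27) can be sketched compactly, with the operator Q implemented from the three cases of (26). This is our own illustrative sketch (function names and the stopping tolerance are assumptions, not part of the paper):

```python
import numpy as np

def Q(x, y, z):
    # projection of x onto H(x, y) ∩ H(y, z), following the three cases of (26)
    xy, yz = x - y, y - z
    chi = xy @ yz
    mu, nu = xy @ xy, yz @ yz
    rho = mu * nu - chi * chi
    if rho <= 0.0 and chi >= 0.0:          # first case of (26): the answer is z
        return z
    if chi * nu >= rho:
        return x + (1.0 + chi / nu) * (z - y)
    return y + (nu / rho) * (chi * xy + mu * (z - y))

def project_level_set(g, subgrad, eta, r, max_iter=100, tol=1e-10):
    # routine (27): iteratively approximates P_C r for C = {w : g(w) <= eta}
    w = r.copy()
    for _ in range(max_iter):
        gw = g(w)
        if gw <= eta + tol:
            return w                        # w lies in C: exit with P_C r = w
        u = subgrad(w)
        z = w - (gw - eta) / (u @ u) * u    # subgradient projection (23)
        w = Q(r, w, z)                      # projection onto (24)
    return w
```

Taking g as the ℓ1 norm (with sign as a subgradient selection) recovers the exact projection onto the ℓ1 ball after very few iterations, consistent with the behavior reported around Fig. 2.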
To obtain an implementable version of the conceptual algorithm (17), consider its k-th iteration and the computation of the approximate projection of the point w_k − γ_k ∇R(w_k) onto C using (27). We first initialize (27) with this point and then execute only a finite number of its iterations. In doing so, we approximate the exact projection onto C by the projection onto an outer approximation of the form (24), and the resulting error is the term a_k of (17). According to Theorem 1, this error must be controlled so as to yield an overall summable error process. A first observation is that P_C is nonexpansive [3, Proposition 4.16], which makes it possible to bound the error in terms of the distance between the approximate and the exact projections.
Now suppose that the point to be projected lies outside C (otherwise we are done). By convexity, g is Lipschitz-continuous relative to compact sets [3, Corollary 8.41]; therefore the amount by which g exceeds η at the approximate projection is bounded by a constant multiple of the distance to the exact projection. In addition, using standard error bounds on convex inequalities, the distance from a point to C is bounded above by a constant multiple of the violation of the constraint g(·) ≤ η at that point. Thus, it suffices to run (27) long enough at iteration k to make the resulting error bounds summable, for instance geometrically decreasing. This will guarantee that the errors (a_k) are summable and therefore, by Theorem 1, the convergence of the sequence generated by the following algorithm to a solution to Problem 2.
Let us observe that, from a practical standpoint, we have found the above error analysis not to be required in our experiments, since an almost exact projection is actually obtained with a few iterations of (27). For instance, numerical simulations (see Fig. 2) on the synthetic data set described in Section IV-A show that (27) yields in a few iterations a point very close to the exact projection of r onto C. Note that the number of iterations of (27) does not depend on the dimension N.
Remark 1 (multiple constraints)
We have presented above the case of a single constraint, since it is the setting employed in subsequent sections. However, the results of [10, Section 6.5] enable us to extend this approach to problems with multiple constraints; see Appendix A.
III-C Application to Problem 1
A subgradient of the ℓ1 norm w ↦ ||w||_1 at w is (sign(w_i))_{1≤i≤N}, with the convention sign(0) = 0.
The i-th component of a subgradient of the ℓ1 pairwise function w ↦ sum_{(i,j) in E} |w_i − w_j| at w is given by the sum of sign(w_i − w_j) over the edges of E incident to i.
The i-th component of the gradient of the ℓ2 pairwise function w ↦ sum_{(i,j) in E} (w_i − w_j)² at w is given by twice the sum of (w_i − w_j) over the edges of E incident to i.
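These three (sub)gradients are straightforward to implement from an edge list. The sketch below is our own illustration (names and the edge-list representation are assumptions); note that each edge contributes with opposite signs to its two endpoints:

```python
import numpy as np

def subgrad_l1(w):
    # subgradient of ||w||_1: the selection sign(w), with sign(0) = 0
    return np.sign(w)

def subgrad_fused(w, edges):
    # subgradient of sum_{(i,j) in E} |w_i - w_j|  (l1 pairwise penalty)
    u = np.zeros_like(w)
    for i, j in edges:
        s = np.sign(w[i] - w[j])
        u[i] += s
        u[j] -= s
    return u

def grad_pairwise_l2(w, edges):
    # gradient of sum_{(i,j) in E} (w_i - w_j)**2  (l2 pairwise penalty)
    u = np.zeros_like(w)
    for i, j in edges:
        d = 2.0 * (w[i] - w[j])
        u[i] += d
        u[j] -= d
    return u
```

Any of these can be passed as the `subgrad` argument of the level-set projection routine of Section III-B.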
III-D Application to regression
A common approach in regression is to learn w by employing the quadratic loss

R(w) = (1/n) ||Xw − y||²,

where y in ℝ^n is the vector of responses. Since ||X|| is the largest singular value of the matrix X, it then suffices to change the definition of the Lipschitz constant in (32) to β = (2/n) ||X||².
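The quadratic-loss gradient and its Lipschitz constant can be checked numerically; this is a minimal sketch with our own function names, assuming the risk normalization (1/n) used above:

```python
import numpy as np

def quadratic_loss_grad(w, X, y):
    # gradient of R(w) = (1/n) ||X w - y||^2
    n = X.shape[0]
    return (2.0 / n) * X.T @ (X @ w - y)

def lipschitz_constant(X):
    # beta = (2/n) * ||X||^2, where ||X|| is the largest singular value of X
    n = X.shape[0]
    return 2.0 / n * np.linalg.norm(X, 2) ** 2
```

For a diagonal X the bound is attained along the direction of the largest singular value, so the constant cannot be improved in general.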
IV Experimental evaluation
We illustrate the performance of the proposed constrained splitting method on both synthetic and real data sets.
IV-A Synthetic data set
We first simulate a simple regulatory network in genomics described in the literature. A genomic network is composed of regulators (transcription factors, cytokines, kinases, growth factors, etc.) and the genes they regulate. Our notation is as follows:
n: number of samples.
p: number of regulators.
q: number of genes per regulator.
The entries of the data matrix X, composed of n rows and p(q + 1) columns, are generated as follows. The value of each regulator for each sample is drawn from the standard normal distribution; this defines the columns of X corresponding to regulators. Each gene and the regulator it depends on have a joint bivariate normal distribution with a prescribed correlation; this defines the remaining columns of X. The regression response is given by y = Xw* + e, where w* is the true regressor and e is zero-mean Gaussian noise.
We consider three examples. In Example 1, the genes regulated by the same regulators are activated and one gene is inhibited. In Example 2, the genes regulated by the same regulators are activated and several genes are inhibited. Example 3 is similar to Example 1, except for the sets of activated and inhibited genes. In each example, the true regressor w* is defined accordingly, with positive coefficients on activated genes and negative coefficients on inhibited genes.
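The generative scheme above can be sketched as follows. This is a hypothetical generator written for illustration (the correlation value, function name, and parameters are our assumptions, not the paper's exact settings):

```python
import numpy as np

def simulate_network(n, n_reg, genes_per_reg, rho=0.7, seed=0):
    # hypothetical regulatory-network simulator: each regulator is N(0, 1);
    # each of its genes is jointly Gaussian with it, with correlation rho
    rng = np.random.default_rng(seed)
    cols = []
    for _ in range(n_reg):
        reg = rng.standard_normal(n)
        cols.append(reg)                          # regulator column
        for _ in range(genes_per_reg):
            noise = rng.standard_normal(n)
            cols.append(rho * reg + np.sqrt(1.0 - rho**2) * noise)  # gene column
    return np.column_stack(cols)                  # shape (n, n_reg * (genes_per_reg + 1))
```

The response can then be formed as y = X w* + e with a chosen sparse w* and Gaussian noise e.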
IV-B Breast cancer data set
We use a breast cancer data set which consists of gene expression data for 8,141 genes in 295 breast cancer tumors (78 metastatic and 217 non-metastatic). In the time comparison evaluation, we select a subset of the 8,141 genes (ranging from 3,000 to 7,000 genes) using a threshold on the mean expression of the genes. We use a provided gene network with pathways as graph constraints in our classifier. In biological applications, pathways are groups of genes assembled according to their biological functions [8, 27]. Two genes are connected if they belong to the same pathway. Let V_i be the subset of genes that are connected to gene i. In this case, we have a subset of only 40,000 connected genes. Note that we compute the subgradient (34) only on the subset of connected genes.
IV-C Comparison between the penalty method and our constrained method for classification
First, we compare with the ℓ1 penalty approach using the glmnet MATLAB software on the breast cancer data set described in Section IV-B. We tuned the number of path iterations of glmnet for different values of the feature dimension N. The number of nonzero coefficients increases along the regularization path. The glmnet method typically requires 200 path iterations or more (see Fig. 3).
Our classification implementation uses the logistic loss. We use the number of nonzero coefficients as a surrogate for the sparsity constraint. Fig. 4 shows, for different values of the feature dimension N, that the number of nonzero coefficients decreases monotonically with the number of iterations. Consequently, the sweep search over η consists in stopping the iterations of the algorithm when the number of nonzero coefficients reaches a value specified a priori.
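The stopping rule of this sweep search is easy to express in code. The sketch below is our own illustration (the step function and names are placeholders, not the paper's implementation); it simply iterates until the surrogate sparsity count reaches the target:

```python
import numpy as np

def count_nonzeros(w, tol=1e-8):
    # surrogate sparsity measure: number of coefficients above a tolerance
    return int(np.sum(np.abs(w) > tol))

def run_until_sparsity(step_fn, w0, target, max_iter=10000):
    # iterate a generic splitting step until at most `target` nonzeros remain
    w = w0
    for k in range(max_iter):
        w = step_fn(w)
        if count_nonzeros(w) <= target:
            return w, k + 1
    return w, max_iter
```

This exploits the empirically observed monotone decrease of the nonzero count along the iterations (Fig. 4).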
Our direct constrained strategy does not require the often heuristic search for meaningful and interpretable Lagrange multipliers. Moreover, we can improve processing time using sparse computing: at each iteration we compute scalar products using only the sparse subvector of nonzero values. We compare processing times using the breast cancer data set (295 samples) described in Section IV-B. Timings were measured on a 2.5 GHz MacBook Pro with an i7 processor running MATLAB. We report processing times in Table I for the glmnet software and for our method using either MATLAB or a standard mex file. Moreover, since the vector w is sparse, we also provide mex-sparse and matlab-sparse times using sparse computing.
A potentially competitive alternative for the projection part of our constrained classification splitting algorithm is the specific algorithm for projecting onto the ℓ1 ball described in the literature. We plug this specific ℓ1-ball projection into our splitting algorithm. We provide a timing comparison (in seconds) in Table II for classification on the breast cancer data set described in Section IV-B.
[Table II columns: Matlab | Matlab sparse | ℓ1 ball]
Note that the most computationally expensive part of our algorithm is the evaluation of the gradient. Although the specific projection onto the ℓ1 ball is faster than our projection, our method is only moderately slower overall than the specific ℓ1 constrained method in high dimension. Moreover, our sparse implementation of the scalar products is twice as fast. Since the complexity of our method is dominated by the computation of scalar products, it can easily be sped up using multicore CPUs or graphics processing units (GPUs), while speeding up the projection onto the ℓ1 ball on such devices is currently an open issue. In addition, our method is more flexible, since it can take into account more sophisticated constraints, such as the pairwise constraints, or any convex constraint. We evaluate classification performance using the area under the ROC curve (AUC). The results of Table III show that our constrained method outperforms the penalty method. Our pairwise constraint further improves the AUC slightly over the plain ℓ1 constrained method. We also observe a significant improvement of our constrained method over the penalty group Lasso approach. In addition, the main benefit of the pairwise constraint is to provide a set of connected genes which is more relevant for biological analysis than the individual genes selected by the ℓ1 constraint.
IV-D Comparison of various constraints for regression
In biological applications, gene activation and inhibition are well known and summarized in the Ingenuity Pathway Analysis (IPA) database. We introduce this biological a priori knowledge by replacing the pairwise constraint by

sum_{(i,j) in E} |w_i − ε_{ij} w_j| ≤ η,

where ε_{ij} = 1 if genes i and j are both activated or both inhibited, and ε_{ij} = −1 if gene i is activated and gene j inhibited. We compare the estimation of the true regressor w* for Example 3 using this signed pairwise constraint versus the ℓ1 and ℓ2 pairwise constraints. For each fold, we estimate the regression vector on the training samples; we then evaluate it on new testing samples. We evaluate regression using the mean square error (MSE) on the training set and the predictive mean square error (PMSE) on the test set. We randomly use half of the data for training and half for testing, and we average the accuracy over 50 random folds.
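The evaluation protocol (random half/half splits, PMSE on held-out samples) can be sketched as follows; function names are ours and the snippet is an illustration of the protocol, not the paper's code:

```python
import numpy as np

def pmse(w_hat, X_test, y_test):
    # predictive mean square error on held-out samples
    r = X_test @ w_hat - y_test
    return (r @ r) / len(y_test)

def random_half_split(n, rng):
    # half of the samples for training, half for testing, chosen at random
    perm = rng.permutation(n)
    return perm[: n // 2], perm[n // 2 :]
```

Averaging `pmse` over 50 such random splits reproduces the reported PMSE protocol.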
We show in the first panel the true regression vector and, in the second panel, its estimation using the signed pairwise constraint for Example 3. The third panel shows the result of the estimation with the ℓ1 pairwise constraint, and the fourth panel that with the ℓ2 pairwise constraint. We also provide, for the three examples, the mean square error as a function of η for Example 1 (Fig. 7), Example 2 (Fig. 8), and Example 3 (Fig. 9).
The signed pairwise constraint outperforms both the ℓ1 and the ℓ2 pairwise constrained methods. However, the selection of the parameter η for this constraint is more challenging.
We have used constrained optimization approaches to promote sparsity and feature grouping in classification and regression problems. To solve these problems, we have proposed a new efficient algorithm which alternates a gradient step on the data fidelity term and an approximate projection step onto the constraint set. We have also discussed the generalization to multiple constraints. Experiments on both synthetic and biological data show that our constrained approach outperforms penalty methods. Moreover, the formulation using the signed pairwise constraint outperforms those using the plain ℓ1 pairwise and ℓ2 pairwise constraints.
Appendix A – The case of multiple constraints
Let R be as in Problem 2 and, for every i in {1, …, m}, let g_i : ℝ^N → ℝ be convex, let η_i in ℝ, and let C_i = {w in ℝ^N : g_i(w) ≤ η_i} be such that C_1 ∩ ⋯ ∩ C_m ≠ Ø. Consider the problem

minimize R(w) subject to w in C_1 ∩ ⋯ ∩ C_m.
In other words,