# Classification and regression using an outer approximation projection-gradient method

This paper deals with sparse feature selection and grouping for classification and regression. The classification or regression problems under consideration consist in minimizing a convex empirical risk function subject to an ℓ^1 constraint, a pairwise ℓ^∞ constraint, or a pairwise ℓ^1 constraint. Existing work, such as the Lasso formulation, has focused mainly on Lagrangian penalty approximations, which often require ad hoc or computationally expensive procedures to determine the penalization parameter. We depart from this approach and address the constrained problem directly via a splitting method. The structure of the method is that of the classical gradient-projection algorithm, which alternates a gradient step on the objective and a projection step onto the lower level set modeling the constraint. The novelty of our approach is that the projection step is implemented via an outer approximation scheme in which the constraint set is approximated by a sequence of simple convex sets, each consisting of the intersection of two half-spaces. Convergence of the iterates generated by the algorithm is established for a general smooth convex minimization problem with inequality constraints. Experiments on both synthetic and biological data show that our method outperforms penalty methods.


## I Introduction

In many classification and regression problems, the objective is to select a sparse vector of relevant features. For example, in biological applications, DNA microarray and new RNA-seq devices provide high-dimensional gene expression data (typically 20,000 genes). The challenge is to select the smallest number of genes (the so-called biomarkers) which are necessary to achieve accurate biological classification and prediction. A popular approach to recover sparse feature vectors (under a condition of mutual incoherence) is to solve a convex optimization problem involving a data fidelity term and the ℓ^1 norm [6, 17, 19, 34]. Recent Lasso penalty regularization methods take into account correlated data using either the pairwise ℓ^1 penalty [24, 27, 35] or the pairwise ℓ^∞ penalty [5] (see also [20] for further developments). The sparsity or grouping constrained classification problem can be cast as the minimization of a smooth convex loss Φ subject to an ℓ^1 or a pairwise constraint, say φ(w) ⩽ η. Most of the existing work has focused on Lagrangian penalty methods, which aim at solving the unconstrained problem of minimizing Φ + λφ. Although, under proper qualification conditions, there is a formal equivalence between constrained and unconstrained Lagrangian formulations [3, Chapter 19], the exact Lagrange multiplier λ can seldom be computed easily, which leaves the properties of the resulting solutions loosely defined. The main contribution of the present paper is to propose an efficient splitting algorithm to solve the constrained formulation

 minimize_{w∈ℝ^d, φ(w)⩽η} Φ(w) (1)

directly. As discussed in [11], the bound η defining the constraint can often be determined from prior information on the type of problem at hand. Our splitting algorithm proceeds by alternating a gradient step on the smooth classification risk function Φ and a projection onto the lower level set C = {w ∈ ℝ^d ∣ φ(w) ⩽ η}. The main focus is when φ models the ℓ^1 constraint, the pairwise ℓ^1 constraint, or the pairwise ℓ^∞ constraint. The projection onto the lower level set is implemented via an outer projection procedure which consists of successive projections onto the intersection of two simple half-spaces. The remainder of the paper is organized as follows. Section II introduces the constrained optimization model. Section III presents our new splitting algorithm, which applies to any constrained smooth convex optimization problem. In particular, we also discuss the application to regression problems. Section IV presents experiments on both synthetic and real biological and genomic data sets.

## II Classification risk and convex constraints

### II-A Risk minimization

We assume that m samples (x_i)_{1⩽i⩽m} in ℝ^d are available, where d is the dimension of the feature vector; typically m ≪ d. Each sample x_i is annotated with a label y_i taking its value in {−1, +1}. The classification risk associated with a linear classifier parameterized by a vector w ∈ ℝ^d is given by

 Φ : ℝ^d → ℝ : w ↦ (1/m) ∑_{i=1}^{m} ϕ(y_i⟨x_i∣w⟩). (2)

We restrict our investigation to convex losses which satisfy the following assumption.

###### Assumption 1

Let f : ℝ → ]0,1[ be an increasing Lipschitz-continuous function which is antisymmetric with respect to the point (0, 1/2), integrable on ]−∞, 0], and differentiable at 0. The loss ϕ is defined by

 (∀t ∈ ℝ) ϕ(t) = −t + ∫_{−∞}^{t} f(s) ds. (3)

The main advantage of this class of smooth losses is that it allows us to compute the posterior estimation [4]. The function f relates the prediction ⟨x_i∣w⟩ for a sample x_i to the posterior probability for the class +1 via

 P̂[Y_i = +1 ∣ x_i] = f(⟨x_i∣w⟩). (4)

This property will be used in Section IV to compute without any approximation the area under the ROC curve (AUC). The loss ϕ in Assumption 1 is convex, everywhere differentiable with a Lipschitz-continuous derivative, and it is twice differentiable at 0 with ϕ″(0) = f′(0). In turn, the function Φ of (2) is convex and differentiable, and its gradient

 ∇Φ : w ↦ (1/m) ∑_{i=1}^{m} y_i ϕ′(y_i⟨x_i∣w⟩) x_i (5)

has Lipschitz constant

 β = (f′(0)/m) ∑_{i=1}^{m} ∥x_i∥² = (ϕ″(0)/m) ∑_{i=1}^{m} ∥x_i∥². (6)

Applications to classification often involve normalized features (∥x_i∥ = 1). In this case, (6) reduces to β = f′(0). Examples of functions which satisfy Assumption 1 include the sigmoid f : s ↦ 1/(1+e^{−s}), which leads to the logistic loss ϕ : t ↦ ln(1+e^{−t}), for which ϕ″(0) = 1/4. Another example is the Matsusita loss [29]

 ϕ : t ↦ (1/2)(−t + √(1+t²)), (7)

which is induced by f : s ↦ (1/2)(1 + s/√(1+s²)).
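As an illustration, the two losses above can be coded directly; a minimal sketch (function names are ours, and the logistic loss is written in a numerically stable form):

```python
import math

def logistic_loss(t):
    # phi(t) = ln(1 + exp(-t)), written stably for large |t|
    return math.log1p(math.exp(-abs(t))) + max(-t, 0.0)

def matsusita_loss(t):
    # phi(t) = (1/2) * (-t + sqrt(1 + t^2)), eq. (7)
    return 0.5 * (-t + math.sqrt(1.0 + t * t))
```

Both losses are convex, decreasing, and differentiable everywhere, as required by Assumption 1.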

### Ii-B Sparsity model

In many applications, collecting a sufficient amount of features to perform prediction is a costly process. The challenge is therefore to select the smallest number of features (genes or biomarkers) necessary for an efficient classification and prediction. The problem can be cast as a constrained optimization problem, namely,

 minimize_{w∈ℝ^d, ∥w∥₀⩽δ} Φ(w), (8)

where ∥w∥₀ is the number of nonzero entries of w. Since ∥·∥₀ is not convex, (8) is usually intractable and an alternative approach is to use the ℓ^1 norm as a surrogate, which yields the Lasso formulation [34]

 minimize_{w∈ℝ^d, ∥w∥₁⩽η} Φ(w). (9)

It has been shown in the context of compressed sensing that, under a so-called restricted isometry property, minimizing with the ℓ^1 norm is tantamount to minimizing with the ℓ^0 penalty, in a sense made precise in [6].

### Ii-C Grouping model

Let us consider a graph of connected features, with edge set S: (i, j) ∈ S means that features i and j are connected. The basic idea is to introduce constraints on the coefficients ω_i and ω_j of features i and j connected by an edge in the graph. In this paper we consider two approaches: directed acyclic graphs and undirected graphs. Fused Lasso [35] encourages the coefficients ω_i and ω_j of features i and j connected by an edge in the graph to be similar. We define the problem of minimizing Φ under the directed acyclic graph constraint as

 minimize_{w∈ℝ^d, ∑_{(i,j)∈S}|ω_i−ω_j|⩽η} Φ(w), (10)

for some suitable parameter η > 0. In the second, undirected graph, approach [5] one constrains the coefficients of features i and j connected by an edge using a pairwise ℓ^∞ constraint. The problem is to

 minimize_{w∈ℝ^d, ∑_{(i,j)∈S}max(|ω_i|,|ω_j|)⩽η} Φ(w). (11)

To approach the constrained problems (9), (10), and (11), state-of-the-art methods employ a penalized variant [18, 19, 21, 34]. In these Lagrangian approaches the objective is to minimize Φ + λφ, where the multiplier λ > 0 aims at controlling sparsity and grouping, and where the constraint functions φ are defined by one of the following (see (9), (10), and (11)):

 φ₁ = ∥·∥₁,
 φ₂ : w ↦ ∑_{(i,j)∈S} max(|ω_i|,|ω_j|),
 φ₃ : w ↦ ∑_{(i,j)∈S} |ω_i−ω_j|. (12)

The main drawback of current penalty formulations resides in the cost associated with the reliable computation of the Lagrange multiplier λ using homotopy algorithms [18, 21, 22, 28]. The worst-case complexity of such path algorithms [28] is usually intractable on real data. Although experiments using homotopy algorithms suggest that the actual complexity is much lower [28], the underlying path algorithm remains computationally expensive for high-dimensional data sets such as genomic data sets. To circumvent this computational issue, we propose a new general algorithm to solve either the sparse (9) or the grouping (10)–(11) constrained convex optimization problems directly.
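For concreteness, the three constraint functions of (12) can be sketched as follows, with the edge set S given as a list of index pairs (a minimal illustration; function names are ours):

```python
def phi1(w):
    # l1 norm: sum of absolute values
    return sum(abs(x) for x in w)

def phi2(w, S):
    # pairwise l-infinity: sum over edges of max(|w_i|, |w_j|)
    return sum(max(abs(w[i]), abs(w[j])) for (i, j) in S)

def phi3(w, S):
    # pairwise l1 (fused): sum over edges of |w_i - w_j|
    return sum(abs(w[i] - w[j]) for (i, j) in S)
```

All three are convex and continuous, so their lower level sets are closed convex sets.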

### II-D Optimization model

Our classification minimization problem is formally cast as follows.

###### Problem 1

Suppose that ϕ satisfies Assumption 1, let φ : ℝ^d → ℝ be convex, and let η ∈ ℝ. Set

 Φ : ℝ^d → ℝ : w ↦ (1/m) ∑_{i=1}^{m} ϕ(y_i⟨x_i∣w⟩) (13)

and

 C = {w ∈ ℝ^d ∣ φ(w) ⩽ η}, (14)

and let β be the Lipschitz constant of ∇Φ, as defined in (6). The problem is to

 minimize_{w∈C} Φ(w). (15)

In Section IV, we shall focus on the three instances of the function φ defined in (12). We assume throughout that there exists μ ∈ ℝ such that {w ∈ C ∣ Φ(w) ⩽ μ} is nonempty and bounded, which guarantees that (15) has at least one solution. In particular, this is true if C is bounded or Φ is coercive.

## III Splitting algorithm

In this section, we propose an algorithm for solving the constrained classification problem (15). This algorithm fits in the general category of forward-backward splitting methods, which have been popular since their introduction in data processing problems in [14]; see also [12, 13, 30, 32, 33]. These methods offer flexible implementations with guaranteed convergence of the sequence of iterates they generate, a key property to ensure the reliability of our variational classification scheme.

### III-A General framework

As noted in Section II, Φ is a differentiable convex function and its gradient has Lipschitz constant β, where β is given by (6). Likewise, since φ is convex and continuous, C is a closed convex set, being a lower level set of φ. The principle of a splitting method is to use the constituents of the problem, here Φ and C, separately. In the problem at hand, it is natural to use the projection-gradient method to solve (15). This method, which is an instance of the proximal forward-backward algorithm [14], alternates a gradient step on the objective and a projection step onto the constraint set C. It is applicable in the following setting, which captures Problem 1.

###### Problem 2

Let Φ : ℝ^d → ℝ be a differentiable convex function such that ∇Φ is Lipschitz-continuous with constant β ∈ ]0,+∞[, let φ : ℝ^d → ℝ be a convex function, let η ∈ ℝ, and set C = {w ∈ ℝ^d ∣ φ(w) ⩽ η}. The problem is to

 minimize_{w∈C} Φ(w). (16)

Let P_C denote the projection operator onto the closed convex set C. Given an initial point w₀ ∈ ℝ^d, a sequence (γ_n)_{n∈ℕ} of strictly positive step sizes, and a sequence (a_n)_{n∈ℕ} in ℝ^d modeling computational errors in the implementation of the projection operator P_C, the projection-gradient algorithm for solving Problem 2 assumes the form

 for n = 0, 1, …
   ⌊ v_n = w_n − γ_n∇Φ(w_n)
   ⌊ w_{n+1} = P_C(v_n) + a_n. (17)

We derive at once from [14, Theorem 3.4(i)] the following convergence result, which guarantees the convergence of the iterates.

###### Theorem 1

Suppose that Problem 2 has at least one solution, let w₀ ∈ ℝ^d, let (γ_n)_{n∈ℕ} be a sequence in ]0,+∞[, and let (a_n)_{n∈ℕ} be a sequence in ℝ^d such that

 ∑_{n∈ℕ} ∥a_n∥ < +∞, inf_{n∈ℕ} γ_n > 0, and sup_{n∈ℕ} γ_n < 2/β. (18)

Then the sequence (w_n)_{n∈ℕ} generated by (17) converges to a solution to Problem 2.

Theorem 1 states that the whole sequence of iterates converges to a solution. Using classical results on the asymptotic behavior of the projection-gradient method [25], we can complement this result with the following upper bound on the rate of convergence of the objective value.
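The iteration (17) can be sketched generically as follows, assuming for illustration a constraint set whose exact projection is available (the function names and the box-constraint usage below are ours, not from the paper):

```python
def projection_gradient(grad, project, w0, gamma, n_iter=200):
    """Generic projection-gradient iteration (17) with an exact,
    error-free projection operator; an illustrative sketch."""
    w = list(w0)
    for _ in range(n_iter):
        g = grad(w)
        v = [wi - gamma * gi for wi, gi in zip(w, g)]  # gradient step
        w = project(v)                                 # projection step
    return w
```

For instance, minimizing (w − 2)² over the interval [−1, 1] with a step size below 2/β drives the iterates to the boundary point 1.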

###### Proposition 1

In Theorem 1, suppose that a_n ≡ 0. Then Φ(w_n) − min_{w∈C} Φ(w) ⩽ c/n for some c ∈ ]0,+∞[.

The implementation of (17) is straightforward except for the computation of P_C. Indeed, C is defined in (14) as the lower level set of a convex function, and no explicit formula exists for computing the projection onto such a set in general [3, Section 29.5]. Fortunately, Theorem 1 asserts that P_C need not be computed exactly. Next, we provide an efficient algorithm to compute an approximate projection onto C.

### III-B Projection onto a lower level set

Let p₀ ∈ ℝ^d, let φ : ℝ^d → ℝ be a convex function, and let η ∈ ℝ be such that

 C = {p ∈ ℝ^d ∣ φ(p) ⩽ η} ≠ ∅. (19)

The objective is to compute iteratively the projection P_C(p₀) of p₀ onto C. The principle of the algorithm is to replace this (usually intractable) projection by a sequence of projections onto simple outer approximations to C consisting of the intersection of two affine half-spaces [10].

We first recall that s ∈ ℝ^d is called a subgradient of φ at p if [3, Chapter 16]

 (∀y ∈ ℝ^d) ⟨y−p∣s⟩ + φ(p) ⩽ φ(y). (20)

The set of all subgradients of φ at p is denoted by ∂φ(p). If φ is differentiable at p, this set reduces to a single vector, namely the gradient ∇φ(p). The projection P_C(p₀) of p₀ onto C is characterized by

 P_C(p₀) ∈ C and (∀p ∈ C) ⟨p − P_C(p₀) ∣ p₀ − P_C(p₀)⟩ ⩽ 0. (21)

Given x and y in ℝ^d, define a closed affine half-space by

 H(x,y) = {p ∈ ℝ^d ∣ ⟨p−y∣x−y⟩ ⩽ 0}. (22)

Note that H(x,x) = ℝ^d and that, if x ≠ y, H(x,y) is the closed affine half-space onto which the projection of x is y. According to (21), C ⊂ H(p₀, P_C(p₀)). The principle of the algorithm is as follows (see Fig. 1). At iteration k, if φ(p_k) ⩽ η, then p_k ∈ C and the algorithm terminates with p_k = P_C(p₀). Indeed, since C ⊂ H(p₀, p_k) [10, Section 5.2] and p_k is the projection of p₀ onto H(p₀, p_k), we have ∥p₀ − p_k∥ ⩽ ∥p₀ − P_C(p₀)∥. Hence p_k is the point of C closest to p₀, i.e., p_k = P_C(p₀). Otherwise, one first computes the so-called subgradient projection p_{k+1/2} of p_k onto C. Recall that, given s_k ∈ ∂φ(p_k), the subgradient projection of p_k onto C is [3, 7, 9]

 p_{k+1/2} = p_k + ((η − φ(p_k))/∥s_k∥²) s_k if φ(p_k) > η; p_{k+1/2} = p_k if φ(p_k) ⩽ η. (23)

As noted in [9], the closed half-space H(p_k, p_{k+1/2}) serves as an outer approximation to C at iteration k, i.e., C ⊂ H(p_k, p_{k+1/2}); moreover, p_k ∉ H(p_k, p_{k+1/2}) when φ(p_k) > η. Thus, since we have also seen that C ⊂ H(p₀, p_k), we have

 C ⊂ C_k, where C_k = H(p₀, p_k) ∩ H(p_k, p_{k+1/2}). (24)

The update p_{k+1} is computed as the projection of p₀ onto the outer approximation C_k. As the following lemma from [23] (see also [3, Corollary 29.25]) shows, this computation is straightforward.

###### Lemma 1

Let x, y, and z be points in ℝ^d such that

 H(x,y) ∩ H(y,z) ≠ ∅. (25)

Moreover, set a = x − y, b = y − z, χ = ⟨a∣b⟩, μ = ∥a∥², ν = ∥b∥², and ρ = μν − χ². Then the projection Q(x,y,z) of x onto H(x,y) ∩ H(y,z) is

 Q(x,y,z) = z if ρ = 0 and χ ⩾ 0;
 Q(x,y,z) = x − (1 + χ/ν)b if ρ > 0 and χν ⩾ ρ;
 Q(x,y,z) = y + (ν/ρ)(χa − μb) if ρ > 0 and χν < ρ. (26)
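Lemma 1 translates directly into code; a sketch (the function name is ours):

```python
def halfspace_pair_projection(x, y, z):
    """Project x onto H(x, y) ∩ H(y, z), where
    H(u, v) = {p : <p - v | u - v> <= 0}; cf. Lemma 1 / eq. (26)."""
    a = [xi - yi for xi, yi in zip(x, y)]   # a = x - y
    b = [yi - zi for yi, zi in zip(y, z)]   # b = y - z
    chi = sum(ai * bi for ai, bi in zip(a, b))
    mu = sum(ai * ai for ai in a)
    nu = sum(bi * bi for bi in b)
    rho = mu * nu - chi * chi
    if rho == 0.0 and chi >= 0.0:
        return list(z)
    if rho > 0.0 and chi * nu >= rho:
        return [xi - (1.0 + chi / nu) * bi for xi, bi in zip(x, b)]
    if rho > 0.0 and chi * nu < rho:
        return [yi + (nu / rho) * (chi * ai - mu * bi)
                for yi, ai, bi in zip(y, a, b)]
    raise ValueError("H(x, y) ∩ H(y, z) appears to be empty")
```

For example, in ℝ² with nested half-spaces the first case applies, and with orthogonal half-spaces the third case applies.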

To sum up, the projection of p₀ onto the set C of (19) will be performed by executing the following routine.

 for k = 0, 1, …
   ⌊ if φ(p_k) ⩽ η: terminate.
   ⌊ ζ_k = η − φ(p_k)
   ⌊ s_k ∈ ∂φ(p_k)
   ⌊ p_{k+1/2} = p_k + ζ_k s_k/∥s_k∥²
   ⌊ p_{k+1} = Q(p₀, p_k, p_{k+1/2}). (27)

The next result from [10, Section 6.5] guarantees the convergence of the sequence generated by (27) to the desired point.

###### Proposition 2

Let p₀ ∈ ℝ^d, let φ : ℝ^d → ℝ be a convex function, and let η ∈ ℝ be such that C = {p ∈ ℝ^d ∣ φ(p) ⩽ η} ≠ ∅. Then either (27) terminates in a finite number of iterations at P_C(p₀) or it generates an infinite sequence (p_k)_{k∈ℕ} such that p_k → P_C(p₀).

To obtain an implementable version of the conceptual algorithm (17), consider its nth iteration and the computation of the approximate projection of v_n onto C using (27). We first initialize (27) with p₀ = v_n, and then execute only K_n iterations of it. In doing so, we approximate the exact projection P_C(v_n) by p_{K_n}. The resulting error is a_n = p_{K_n} − P_C(v_n). According to Theorem 1, this error must be controlled so as to yield overall a summable process. First, since P_C is nonexpansive [3, Proposition 4.16], we have

 ∥P_C(p₀) − P_C(p_{K_n})∥ ⩽ ∥p₀ − p_{K_n}∥. (28)

Now suppose that φ(p_{K_n}) > η (otherwise p_{K_n} ∈ C and we are done). By convexity, φ is Lipschitz-continuous relative to compact sets [3, Corollary 8.41]. In addition, using standard error bounds on convex inequalities [26], there exists a constant ξ ∈ ]0,+∞[ such that

 ∥P_C(p_{K_n}) − p_{K_n}∥ ⩽ ξ(φ(p_{K_n}) − η). (29)

Thus,

 ∥a_n∥ = ∥P_C(p₀) − p_{K_n}∥
  ⩽ ∥P_C(p₀) − P_C(p_{K_n})∥ + ∥P_C(p_{K_n}) − p_{K_n}∥
  ⩽ ∥p₀ − p_{K_n}∥ + ξ(φ(p_{K_n}) − η). (30)

Thus, it suffices to take K_n large enough so that both ∥p₀ − p_{K_n}∥ and φ(p_{K_n}) − η are dominated by a summable sequence. This will guarantee that ∑_{n∈ℕ} ∥a_n∥ < +∞ and therefore, by Theorem 1, the convergence of the sequence generated by the following algorithm to a solution to Problem 2.

 for n = 0, 1, …
   ⌊ v_n = w_n − γ_n∇Φ(w_n)
   ⌊ p₀ = v_n
   ⌊ for k = 0, 1, …, K_n − 1
     ⌊ ζ_k = η − φ(p_k)
     ⌊ if ζ_k ⩾ 0: terminate.
     ⌊ s_k ∈ ∂φ(p_k)
     ⌊ p_{k+1/2} = p_k + ζ_k s_k/∥s_k∥²
     ⌊ p_{k+1} = Q(p₀, p_k, p_{k+1/2})
   ⌊ w_{n+1} = p_{K_n}. (31)

Let us observe that, from a practical standpoint, we have found the above error analysis not to be required in our experiments, since an almost exact projection is obtained with a few iterations of (27). For instance, numerical simulations (see Fig. 2) on the synthetic data set described in Section IV-A show that (27) yields in a few iterations a point very close to the exact projection of v_n onto C. Note that the number of iterations of (27) does not depend on the dimension d.
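Putting the pieces together, the following sketch instantiates algorithm (31) for the logistic loss under the ℓ^1 constraint. The iteration counts and the pure-Python style are illustrative choices, not the authors' implementation:

```python
import math

def splitting_solve_l1(X, y, eta, n_iter=300, K=20):
    """Sketch of (31): gradient steps on the logistic risk, alternated
    with approximate projections onto {w : ||w||_1 <= eta} via (27)."""
    def Q(x, yv, z):
        # projection of x onto H(x, yv) ∩ H(yv, z), eq. (26);
        # the intersection is nonempty in this algorithm
        a = [xi - yi for xi, yi in zip(x, yv)]
        b = [yi - zi for yi, zi in zip(yv, z)]
        chi = sum(ai * bi for ai, bi in zip(a, b))
        mu = sum(ai * ai for ai in a)
        nu = sum(bi * bi for bi in b)
        rho = mu * nu - chi * chi
        if rho == 0.0 and chi >= 0.0:
            return list(z)
        if chi * nu >= rho:
            return [xi - (1.0 + chi / nu) * bi for xi, bi in zip(x, b)]
        return [yi + (nu / rho) * (chi * ai - mu * bi)
                for yi, ai, bi in zip(yv, a, b)]

    m, d = len(X), len(X[0])
    # Lipschitz constant (6), with phi''(0) = 1/4 for the logistic loss
    beta = 0.25 * sum(sum(v * v for v in xi) for xi in X) / m
    gamma = 1.0 / beta                     # step size in ]0, 2/beta[
    w = [0.0] * d
    for _ in range(n_iter):
        # gradient step: v = w - (gamma/m) sum_i y_i phi'(y_i <x_i|w>) x_i
        v = list(w)
        for xi, yi in zip(X, y):
            t = yi * sum(a * b for a, b in zip(xi, w))
            dphi = -1.0 / (1.0 + math.exp(t))   # phi'(t), logistic loss
            for j in range(d):
                v[j] -= (gamma / m) * yi * dphi * xi[j]
        # approximate projection onto the l1 ball of radius eta, routine (27)
        p0, p = v, list(v)
        for _ in range(K):
            zeta = eta - sum(abs(c) for c in p)
            if zeta >= 0.0:                 # feasible: projection reached
                break
            s = [float((c > 0) - (c < 0)) for c in p]  # subgradient (33)
            ns2 = sum(si * si for si in s)
            p_half = [pi + zeta * si / ns2 for pi, si in zip(p, s)]
            p = Q(p0, p, p_half)
        w = p
    return w
```

On a toy separable data set the iterates are pushed outward by the gradient steps and clipped back to the ℓ^1 ball by the projection, so the solution sits on the constraint boundary.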

###### Remark 1 (multiple constraints)

We have presented above the case of a single constraint, since it is the setting employed in subsequent sections. However, the results of [10, Section 6.5] enable us to extend this approach to problems with multiple constraints; see Appendix A.

### III-C Application to Problem 1

It follows from (5), (26), and (27) that (31) for the classification problem can be written explicitly as follows, where ε is an arbitrarily small number in ]0,1[ and where β is given by (6).

 for n = 0, 1, …
   ⌊ γ_n ∈ [ε, (2−ε)/β]
   ⌊ v_n = w_n − (γ_n/m) ∑_{i=1}^{m} y_i ϕ′(y_i⟨x_i∣w_n⟩) x_i
   ⌊ p₀ = v_n
   ⌊ for k = 0, 1, …, K_n − 1
     ⌊ ζ_k = η − φ(p_k)
     ⌊ if ζ_k ⩾ 0: terminate.
     ⌊ s_k ∈ ∂φ(p_k)
     ⌊ p_{k+1/2} = p_k + ζ_k s_k/∥s_k∥²
     ⌊ χ_k = ⟨p₀−p_k ∣ p_k−p_{k+1/2}⟩
     ⌊ μ_k = ∥p₀−p_k∥²
     ⌊ ν_k = ∥p_k−p_{k+1/2}∥²
     ⌊ ρ_k = μ_kν_k − χ_k²
     ⌊ if ρ_k = 0 and χ_k ⩾ 0: p_{k+1} = p_{k+1/2}
     ⌊ if ρ_k > 0 and χ_kν_k ⩾ ρ_k: p_{k+1} = p₀ + (1 + χ_k/ν_k)(p_{k+1/2} − p_k)
     ⌊ if ρ_k > 0 and χ_kν_k < ρ_k: p_{k+1} = p_k + (ν_k/ρ_k)(χ_k(p₀−p_k) + μ_k(p_{k+1/2} − p_k))
   ⌊ w_{n+1} = p_{K_n}. (32)

A subgradient of φ₁ = ∥·∥₁ at ξ ∈ ℝ^d is obtained by applying componentwise the function

 sign : ξ ↦ 1 if ξ > 0; 0 if ξ = 0; −1 if ξ < 0. (33)

The ith component of a subgradient of φ₂ at ξ is given by

 ∑_{(i,j)∈S} sign(ξ_i) if |ξ_i| ⩾ |ξ_j|, 0 otherwise. (34)

The ith component of a subgradient of φ₃ at ξ is given by

 ∑_{(i,j)∈S} sign(ξ_i − ξ_j) if ξ_i ≠ ξ_j, 0 otherwise. (35)
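These subgradient formulas can be sketched as follows (function names are ours; in case of a tie |ξ_i| = |ξ_j|, the contribution is assigned to the i-side, as in (34)):

```python
def subgrad_phi1(w):
    # componentwise sign, eq. (33)
    return [(x > 0) - (x < 0) for x in w]

def subgrad_phi2(w, S):
    # subgradient of sum_{(i,j) in S} max(|w_i|, |w_j|), eq. (34)
    g = [0.0] * len(w)
    for (i, j) in S:
        if abs(w[i]) >= abs(w[j]):
            g[i] += (w[i] > 0) - (w[i] < 0)
        else:
            g[j] += (w[j] > 0) - (w[j] < 0)
    return g

def subgrad_phi3(w, S):
    # subgradient of sum_{(i,j) in S} |w_i - w_j|, eq. (35)
    g = [0.0] * len(w)
    for (i, j) in S:
        if w[i] != w[j]:
            s = 1.0 if w[i] > w[j] else -1.0
            g[i] += s
            g[j] -= s
    return g
```

Only the edges touching a given component contribute to its entry, so on sparse graphs these subgradients can be evaluated over the edge set S alone.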

### III-D Application to regression

A common approach in regression is to learn w by employing the quadratic loss

 Ψ : ℝ^d → ℝ : w ↦ (1/(2m)) ∑_{i=1}^{m} |⟨x_i∣w⟩ − y_i|² (36)

instead of the function Φ of (13) in Problem 2. Since Ψ is convex and has a Lipschitz-continuous gradient with constant β = σ²/m, where σ is the largest singular value of the matrix X whose rows are x₁, …, x_m, it suffices to change the definition of v_n in (32) to

 v_n = w_n − (γ_n/m) ∑_{i=1}^{m} (⟨x_i∣w_n⟩ − y_i) x_i. (37)
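The modified gradient step (37) can be sketched as follows (an illustrative helper, not the authors' code):

```python
def quadratic_loss_gradient_step(w, X, y, gamma):
    """One gradient step (37) for the quadratic loss (36):
    v = w - (gamma/m) * sum_i (<x_i|w> - y_i) x_i."""
    m = len(X)
    v = list(w)
    for xi, yi in zip(X, y):
        r = sum(a * b for a, b in zip(xi, w)) - yi   # residual
        for j in range(len(w)):
            v[j] -= (gamma / m) * r * xi[j]
    return v
```

The projection part of (32) is unchanged; only the gradient step differs between classification and regression.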

## IV Experimental evaluation

We illustrate the performance of the proposed constrained splitting method on both synthetic and real data sets.

### IV-A Synthetic data set

We first simulate a simple regulatory network in genomics described in [27]. A genomic network is composed of regulators (transcription factors, cytokines, kinases, growth factors, etc.) and the genes they regulate. Our notation is as follows:

• m: number of samples.

• Nr: number of regulators.

• Ng: number of genes per regulator.

• d = Nr(Ng + 1).

The entry ξ_{i,j} of the matrix X, composed of m rows and d columns, is generated as follows.

1. The rth regulator of the ith sample is

 ξ_{i,reg_r} = ξ_{i,Ng(r−1)+r} = ξ_{i,r(Ng+1)−Ng} ∼ N(0,1).

This defines ξ_{i,j} for j of the form j = r(Ng+1) − Ng.

2. The Ng genes associated with regulator r have a joint bivariate normal distribution with a correlation of ϱ with their regulator:

 ξ_{i,r(Ng+1)−Ng+k} ∼ N(ϱξ_{i,reg_r}, 1−ϱ²), k = 1, …, Ng.

This defines the remaining entries of X.

The regression response is given by y = Xw + ε, where ε is a zero-mean Gaussian noise vector.
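The generative model above can be sketched as follows; the parameter names (in particular the correlation `rho` and noise level `sigma`, whose values are not specified here) are illustrative stand-ins:

```python
import math
import random

def simulate_network(m, Nr, Ng, rho, sigma, w):
    """Sketch of the Section IV-A generator: each of the Nr regulators
    is N(0,1), and its Ng genes are drawn correlated with it.
    rho and sigma are hypothetical stand-ins for elided parameters."""
    d = Nr * (Ng + 1)
    X = []
    for _ in range(m):
        row = [0.0] * d
        for r in range(Nr):
            base = r * (Ng + 1)            # column of the r-th regulator
            reg = random.gauss(0.0, 1.0)
            row[base] = reg
            for k in range(1, Ng + 1):     # genes correlated with regulator
                row[base + k] = random.gauss(rho * reg,
                                             math.sqrt(1.0 - rho ** 2))
        X.append(row)
    # response y = Xw + Gaussian noise
    y = [sum(a * b for a, b in zip(row, w)) + random.gauss(0.0, sigma)
         for row in X]
    return X, y
```

Each sample thus contains Nr blocks of Ng + 1 correlated columns, matching the block structure of the true regressors below.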

###### Example 1

In this example, we consider that, for each regulator, 9 of the genes it regulates are activated and 1 gene is inhibited. The true regressor is defined blockwise as (where (×k) indicates that the preceding value is repeated k times)

 w = (5, 5/√10 (×9), −5/√10 (×1), −5, −5/√10 (×9), 5/√10 (×1), 3, 3/√10 (×9), −3/√10 (×1), −3, −3/√10 (×9), 3/√10 (×1), 0, …, 0).

###### Example 2

We consider that, for each regulator, 8 genes are activated and 2 genes are inhibited. The true regressor is defined as

 w = (5, 5/√10 (×8), −5/√10 (×2), −5, −5/√10 (×8), 5/√10 (×2), 3, 3/√10 (×8), −3/√10 (×2), −3, −3/√10 (×8), 3/√10 (×2), 0, …, 0).

###### Example 3

This example is similar to Example 1, but we consider that, for each regulator, 7 genes are activated and 3 genes are inhibited. The true regressor is

 w = (5, 5/√10 (×7), −5/√10 (×3), −5, −5/√10 (×7), 5/√10 (×3), 3, 3/√10 (×7), −3/√10 (×3), −3, −3/√10 (×7), 3/√10 (×3), 0, …, 0).

### IV-B Breast cancer data set

We use the breast cancer data set [36], which consists of gene expression data for 8,141 genes in 295 breast cancer tumors (78 metastatic and 217 non-metastatic). In the time comparison evaluation, we select a subset of the 8,141 genes (ranging from 3,000 to 7,000) using a threshold on the mean expression of the genes. We use the network provided in [8], with pathways as graph constraints in our classifier. In biological applications, pathways are genes grouped according to their biological functions [8, 27]. Two genes are connected if they belong to the same pathway, and S is the resulting set of connected gene pairs. In this case, we have a subset of only 40,000 connected gene pairs in S. Note that we compute the subgradient (34) only on this subset of connected genes.

### IV-C Comparison between the penalty method and our ℓ^1 constrained method for classification

First, we compare with the penalty approach using the glmnet MATLAB software [31] on the breast cancer data set described in Section IV-B. We tuned the number of path iterations for glmnet for different values of the feature dimension d. The number of nonzero coefficients increases with the number of path iterations; the glmnet method typically requires 200 path iterations or more (see Fig. 3).

Our classification implementation uses the logistic loss. Let ∥w∥₁ ⩽ η be the surrogate sparsity constraint. Fig. 4 shows, for different values of the feature dimension d, that the number of nonzero coefficients decreases monotonically with the number of iterations. Consequently, the sweep search over η consists in stopping the iterations of the algorithm when the number of nonzero coefficients reaches a value specified a priori.

Our direct constrained strategy does not require the often heuristic search for meaningful and interpretable Lagrange multipliers. Moreover, we can improve processing time using sparse computing: at each iteration we compute scalar products using only the sparse sub-vector of nonzero values. We compare processing times using the breast cancer data set (m = 295 samples) described in Section IV-B. We provide time comparisons using a 2.5 GHz MacBook Pro with an i7 processor and MATLAB software. We report processing times in Table I using the glmnet software [31] and our method using either MATLAB or a standard mex file. Moreover, since the vector w is sparse, we provide mex-sparse and matlab-sparse times using sparse computing.

Fig. 5 shows that our constrained method is ten times faster than glmnet [31]. A numerical experiment is available in [1].

A potentially competitive alternative for solving the projection part of our constrained classification splitting algorithm is the algorithm described in [15]. We plug this specific projection onto the ℓ^1 ball into our splitting algorithm. We provide time comparisons (in seconds) in Table II for classification on the breast cancer data set described in Section IV-B.

Note that the most expensive part of our algorithm in terms of computation is the evaluation of the gradient. Although the projection onto the ℓ^1 ball [15] is faster than our projection, our method is overall slower than this specific ℓ^1 constrained method for large dimensions d. However, our sparse implementation of scalar products is twice as fast. Moreover, since the complexity of our method relies on the computation of scalar products, it can easily be sped up using multicore CPUs or graphics processing units (GPUs), while the speedup of the projection onto the ℓ^1 ball [15] using CPUs or GPUs is currently an open issue. In addition, our method is more flexible, since it can take into account more sophisticated constraints such as the pairwise ℓ^1 or pairwise ℓ^∞ constraints, or any convex constraint. We evaluate classification performance using the area under the ROC curve (AUC). The results of Table III show that our constrained method outperforms the penalty method. Our pairwise ℓ^∞ constraint slightly improves the AUC over the ℓ^1 constrained method. We also observe a significant improvement of our constrained method over the penalty group Lasso approach discussed in [24]. In addition, the main benefit of the pairwise ℓ^∞ constraint is to provide a set of connected genes which is more relevant for biological analysis than the individual genes selected by the ℓ^1 constraint.

### IV-D Comparison of various constraints for regression

In biological applications, gene activation and inhibition are well known and summarized in the ingenuity pathway analysis (IPA) database [2]. We introduce this biological a priori knowledge by replacing the constraint function φ₃ by

 φ₄ : w ↦ ∑_{(i,j)∈S} |ω_i − a_{ij}ω_j|, (38)

where a_{ij} = 1 if genes i and j are both activated or both inhibited, and a_{ij} = −1 if gene i is activated and gene j inhibited. We compare the estimation of w for Example 3 using φ₄ versus the ℓ^1 and φ₃ constraints. For each fold, we estimate the regression vector w on the training samples. Then we evaluate it on new testing samples. We evaluate regression using the mean square error (MSE) on the training set and the predictive mean square error (PMSE) on the test set. We randomly use half of the data for training and half for testing, and then we average the accuracy over 50 random folds.

We show, for Example 3, the true regression vector and its estimations using the ℓ^1, φ₃, and φ₄ constraints in panels (a)–(d) of the corresponding figure. We provide for the three examples the mean square error as a function of η (Figs. 7, 8, and 9).

We report for Example 2 in Fig. 10 the estimation of the mean square error on the training set as a function of the number of training samples for the ℓ^1, φ₃, and φ₄ constraints.

The φ₄ constraint outperforms both the ℓ^1 and the φ₃ constrained methods. However, the selection of the parameter η for the φ₄ constraint is more challenging.

## V Conclusion

We have used constrained optimization approaches to promote sparsity and feature grouping in classification and regression problems. To solve these problems, we have proposed a new efficient algorithm which alternates a gradient step on the data fidelity term and an approximate projection step onto the constraint set. We have also discussed the generalization to multiple constraints. Experiments on both synthetic and biological data show that our constrained approach outperforms penalty methods. Moreover, the formulation using the φ₄ constraint outperforms those using the pairwise ℓ^1 and the ℓ^1 constraint.

## Appendix A – The case of multiple constraints

Let Φ be as in Problem 2 and, for every i ∈ {1, …, p}, let φ_i : ℝ^d → ℝ be convex and let η_i ∈ ℝ be such that C_i = {w ∈ ℝ^d ∣ φ_i(w) ⩽ η_i} ≠ ∅. Consider the problem

 minimize_{φ₁(w)⩽η₁, …, φ_p(w)⩽η_p} Φ(w). (A1)

In other words,