Efficient First Order Methods for Linear Composite Regularizers

04/07/2011
by   Andreas Argyriou, et al.

A wide class of regularization problems in machine learning and statistics employ a regularization term which is obtained by composing a simple convex function ω with a linear transformation. This setting includes Group Lasso methods, the Fused Lasso and other total variation methods, multi-task learning methods and many more. In this paper, we present a general approach for computing the proximity operator of this class of regularizers, under the assumption that the proximity operator of the function ω is known in advance. Our approach builds on a recent line of research on optimal first order optimization methods and uses fixed point iterations for numerically computing the proximity operator. It is more general than current approaches and, as we show with numerical simulations, computationally more efficient than available first order methods which do not achieve the optimal rate. In particular, our method outperforms state of the art O(1/T) methods for overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused Lasso and tree structured Group Lasso.


1 Introduction

In this paper, we study supervised learning methods which are based on the optimization problem

(1.1)

where the first term measures the fit of a parameter vector to the available training data and the second is a penalty term, or regularizer, which encourages certain types of solutions. More precisely, the error term is obtained by composing an error function with a linear model specified by a vector of measurements and a matrix whose rows are the input vectors. This class of regularization methods arises in machine learning, signal processing and statistics and has a wide range of applications.

Different choices of the error function and the penalty function correspond to specific methods. In this paper, we are interested in solving problem (1.1) when the error term is a strongly smooth convex function (such as the square error) and the penalty function is obtained as the composition of a “simple” convex function with a linear transformation, that is,

(1.2)

where the matrix is prescribed and the outer function is a nondifferentiable convex function. The class of regularizers (1.2) includes a plethora of methods, depending on the choice of this function and of the matrix. Our motivation for studying this class of penalty functions arises from sparsity-inducing regularization methods which take the outer function to be either the ℓ1 norm or a mixed ℓ1-ℓ2 norm. When the matrix is the identity, the latter case corresponds to the well-known Group Lasso method [36], for which well studied optimization techniques are available. Other choices of the matrix give rise to different kinds of Group Lasso with overlapping groups [12, 38], which have proved effective in modeling structured sparse regression problems. Further examples can be obtained by composition with the ℓ1 norm (this includes the Fused Lasso penalty function [32] and other total variation methods [21]) as well as by composition with orthogonally invariant norms, which are relevant, for example, in the context of multi-task learning [2].
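
For concreteness, the composite structure just described can be written, in notation of our own choosing (the extracted text does not preserve the authors' symbols), as

\min_{x \in \mathbb{R}^d} \; f(x) + \omega(Bx),

where f is the strongly smooth error term, \omega is a "simple" nondifferentiable convex function whose proximity operator is assumed to be computable, and B is a prescribed m x d matrix; a regularization parameter, if present, can be absorbed into \omega.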

A common approach to solving many optimization problems of the general form (1.1) is via proximal methods. These are first-order iterative methods whose computational cost per iteration is comparable to gradient descent. In some problems, in which the regularizer has a simple enough form, they can be combined with acceleration techniques [3, 26, 28, 33, 34] to yield significant gains in the number of iterations required to reach a certain approximation accuracy of the minimal value. The essential step of proximal methods requires the computation of the proximity operator of the penalty function (see Definition 2.1 below). In certain cases of practical importance, this operator admits a closed form, which makes proximal methods appealing to use. However, in the general case (1.2) the proximity operator may not be easily computable. We are aware of techniques to compute this operator only for some specific choices of the simple function and of the matrix. Most related to our work are recent papers on the Group Lasso with overlap [17] and on the Fused Lasso [19]. See also [1, 3, 14, 20, 24] for other optimization methods for structured sparsity.

The main contribution of this paper is a general technique to compute the proximity operator of the composite regularizer (1.2) from the solution of a certain fixed point problem, which depends on the proximity operator of the simple function and on the matrix. This fixed point problem can be solved by a simple and efficient iterative scheme when the proximity operator of the simple function has a closed form or can be computed in a finite number of steps. When the error term is a strongly smooth function, the above result can be used together with Nesterov's accelerated method [26, 28] to provide an efficient first-order method for solving the optimization problem (1.1). Thus, our technique allows for the application of proximal methods to a much wider class of optimization problems than is currently possible. Our technique is both more general than current approaches and also, as we argue with numerical simulations, computationally efficient. In particular, we will demonstrate that our method outperforms state of the art O(1/T) methods for the overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused Lasso and the tree structured Group Lasso.

The paper is organized as follows. In Section 2, we review the notion of proximity operator and useful facts from fixed point theory. In Section 3, we discuss some examples of composite functions of the form (1.2) which are valuable in applications. In Section 4, we present our technique to compute the proximity operator for a composite regularizer of the form (1.2) and then an algorithm to solve the associated optimization problem (1.1). In Section 5, we report our numerical experience with this method.

2 Background

We denote by ⟨·,·⟩ the Euclidean inner product on ℝ^d and by ‖·‖ the induced norm. For every integer d, we use [d] as a shorthand for the index set {1, …, d}, and for every subset J of [d] we denote by x_J the vector formed by the components of x indexed by J. For every p ≥ 1, we denote by ‖x‖_p the ℓp norm of x.

The proximity operator on a Hilbert space was introduced by Moreau in [22, 23].

Definition 2.1.

Let ω be a real valued convex function on ℝ^d. The proximity operator of ω is defined, for every x ∈ ℝ^d, by

(2.1)   prox_ω(x) := argmin { (1/2) ‖y − x‖² + ω(y) : y ∈ ℝ^d }.

The proximity operator is well defined, because the above minimum exists and is unique.

Recall that the subdifferential of a convex function ω at x is defined as

∂ω(x) = { u ∈ ℝ^d : ω(y) ≥ ω(x) + ⟨u, y − x⟩ for all y ∈ ℝ^d }.

The subdifferential is a nonempty compact and convex set. Moreover, if ω is differentiable at x then its subdifferential at x consists only of the gradient of ω at x. The next proposition establishes a relationship between the proximity operator and the subdifferential of ω – see, for example, [21, Prop. 2.6] for a proof.

Proposition 2.1.

If ω is a convex function on ℝ^d and x, y ∈ ℝ^d, then

(2.2)   y = prox_ω(x) if and only if x − y ∈ ∂ω(y).

We proceed to discuss some examples of functions and the corresponding proximity operators.

If , where is a positive parameter, we have that

(2.3)

where the function is defined, for every , as . This fact follows immediately from the optimality condition of the optimization problem (2.1). Using the above equation, we may also compute the proximity map of a multiple of a norm. Indeed, for every input vector there exists a value of the parameter, depending only on the regularization parameter and on the norm of the input, such that the solution of problem (2.1) for the former function equals the solution of the same problem for the latter. Hence the proximity map of the norm can be computed by (2.3) together with a simple line search. Some cases are simpler, see e.g. [7]. For a multiple of the ℓ1 norm we obtain the well-known soft-thresholding operator, namely

(2.4)   prox_{λ‖·‖1}(x) = ( sgn(x_i) (|x_i| − λ)_+ : i ∈ [d] ),

where, for every real number t, we define (t)_+ = t if t > 0 and zero otherwise; when the penalty is a multiple λ‖·‖ of the ℓ2 norm we have that

(2.5)   prox_{λ‖·‖}(x) = ( 1 − λ / ‖x‖ )_+ x.

In our last example, we consider the ℓ∞ norm, which is defined, for every vector, as the largest absolute value of its components. We have that

where is the -th largest value of the components of the vector and is the largest integer such that . For a proof of the above formula, see, for example [9, Sec. 5.4].
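
As an illustration of these formulas, the following sketch (ours, not the authors' code) implements the two simplest proximity operators mentioned above: soft-thresholding for a multiple of the ℓ1 norm, as in (2.4), and the block shrinkage operator (2.5) for a multiple of the ℓ2 norm.

import numpy as np

def prox_l1(x, lam):
    # Soft-thresholding: componentwise sgn(x_i) * max(|x_i| - lam, 0).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_l2(x, lam):
    # Block shrinkage: (1 - lam / ||x||)_+ * x, with prox(0) = 0.
    nrm = np.linalg.norm(x)
    if nrm <= lam:
        return np.zeros_like(x)
    return (1.0 - lam / nrm) * x

Applied to each group of variables separately, prox_l2 gives the proximity operator of the non-overlapping Group Lasso penalty discussed in Section 3.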

Finally, we recall some basic facts about fixed point theory which are useful for our study. For more information on the material presented here, we refer the reader to [37].

A mapping T from ℝ^d to itself is called strictly nonexpansive (or contractive) if there exists α ∈ [0, 1) such that, for every x, y ∈ ℝ^d, ‖T(x) − T(y)‖ ≤ α ‖x − y‖. If the above inequality holds with α = 1, the mapping is called nonexpansive. As noted in [7, Lemma 2.4], both the proximity operator and the identity minus the proximity operator are nonexpansive.

We say that x is a fixed point of a mapping T if T(x) = x. The Picard iterates of T starting at x_0 are defined by the recursive equation x_{t+1} = T(x_t). It is a well-known fact that, if T is strictly nonexpansive, then T has a unique fixed point and the Picard iterates converge to it. However, this result fails if T is merely nonexpansive. We end this section by stating the main tool which we use to find a fixed point of a nonexpansive mapping.

Theorem 2.1.

(Opial κ-average theorem [30]) Let T be a nonexpansive mapping on ℝ^d which has at least one fixed point, and let T_κ := κ I + (1 − κ) T for some κ ∈ (0, 1). Then, for every starting point, the Picard iterates of T_κ converge to a fixed point of T.
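
In code, the Picard iteration of the κ-averaged map takes the following generic form; this is our own sketch, with the nonexpansive map T passed in as a function and with κ, the tolerance and the iteration cap chosen arbitrarily.

import numpy as np

def picard_opial(T, v0, kappa=0.5, tol=1e-8, max_iter=1000):
    # Iterate the averaged map v <- kappa * v + (1 - kappa) * T(v).
    # For a nonexpansive T with at least one fixed point and kappa in (0, 1),
    # the iterates converge to a fixed point of T (Theorem 2.1).
    v = v0
    for _ in range(max_iter):
        v_new = kappa * v + (1.0 - kappa) * T(v)
        if np.linalg.norm(v_new - v) <= tol:
            return v_new
        v = v_new
    return v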

3 Examples of Composite Functions

In this section, we show that several examples of penalty functions which have appeared in the literature fall within the class of linear composite functions (1.2).

We define, for every x ∈ ℝ^d and every subset J of [d], the restriction of the vector x to the index set J as x_J = (x_j : j ∈ J). Our first example considers the Group Lasso penalty function, which is defined as

(3.1)   Σ_{ℓ=1}^{k} ‖ x_{J_ℓ} ‖,

where J_1, …, J_k are prescribed subsets of [d] (also called the “groups”) whose union is [d]. The standard Group Lasso penalty (see e.g. [36]) corresponds to the case that the collection of groups forms a partition of the index set [d], that is, the groups do not overlap. In this case, the optimization problem (2.1) for this penalty decomposes as the sum of separate problems, one per group, and the proximity operator is readily obtained by applying the formula (2.5) to each group separately. In many cases of interest, however, the groups overlap and the proximity operator cannot be easily computed.

Note that the function (3.1) is of the form (1.2): the outer function is a sum of ℓ2 norms over consecutive blocks whose sizes are the cardinalities of the groups, and the matrix is the 0-1 matrix obtained by stacking the restriction operators of the groups, so that multiplying a vector by this matrix produces the concatenation of the subvectors x_{J_1}, …, x_{J_k}.
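
The construction just described can be made concrete as follows; the variable names and the toy groups are ours, but the sketch shows how stacking the group restrictions yields a 0-1 matrix B such that applying, to Bx, a sum of ℓ2 norms over consecutive blocks reproduces the penalty (3.1).

import numpy as np

def group_lasso_matrix(groups, d):
    # Stack one row of the d x d identity for every index of every group.
    return np.vstack([np.eye(d)[j] for J in groups for j in J])

def omega_groups(z, group_sizes):
    # Sum of Euclidean norms over consecutive blocks of the given sizes.
    total, start = 0.0, 0
    for s in group_sizes:
        total += np.linalg.norm(z[start:start + s])
        start += s
    return total

# Toy example with overlapping groups on d = 4 variables (0-based indices).
groups = [[0, 1], [1, 2], [2, 3]]
B = group_lasso_matrix(groups, 4)
x = np.array([1.0, -2.0, 0.0, 3.0])
penalty = omega_groups(B @ x, [len(J) for J in groups])  # = sum of ||x_J|| over the groups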

The second example concerns the Fused Lasso [32], which considers the penalty function given by the sum of the absolute values of the differences of consecutive components. It immediately follows that this function falls into the class (1.2) if we choose the outer function to be the ℓ1 norm and the matrix to be the first order divided difference matrix

(3.2)   [[−1, 1, 0, …, 0], [0, −1, 1, …, 0], …, [0, …, 0, −1, 1]],

with one row per pair of consecutive components. The intuition behind the Fused Lasso is that it favors vectors which do not vary much across contiguous components. Further extensions of this case may be obtained by choosing the matrix to be the incidence matrix of a graph, a setting which is relevant, for example, in online learning over graphs [11]. Other related examples include the anisotropic total variation; see, for example, [21].
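
Both matrices mentioned here are easy to write down explicitly; the following sketch (in our own notation) builds the first order divided difference matrix of (3.2) and the oriented incidence matrix of a graph given as a list of edges.

import numpy as np

def difference_matrix(d):
    # (d-1) x d matrix with rows (..., -1, 1, ...), so that (Bx)_i = x_{i+1} - x_i.
    B = np.zeros((d - 1, d))
    for i in range(d - 1):
        B[i, i], B[i, i + 1] = -1.0, 1.0
    return B

def incidence_matrix(edges, n_vertices):
    # One row per edge (i, j), with -1 in column i and +1 in column j.
    B = np.zeros((len(edges), n_vertices))
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = -1.0, 1.0
    return B

With the ℓ1 norm as the outer function, composing with difference_matrix gives the Fused Lasso / total variation penalty, and composing with incidence_matrix gives the graph penalty used in Section 5.2.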

The next example considers composition with orthogonally invariant (OI) norms. Specifically, we choose a symmetric gauge function, that is, a norm which is both absolute and invariant under permutations [35], and define the penalty of a matrix as the value of the gauge function at the vector of singular values of that matrix, arranged in non-increasing order. An example of OI norms are the Schatten p-norms, which correspond to the case that the gauge function is the ℓp norm. The next proposition provides a formula for the proximity operator of an OI norm. The proof is based on an inequality by von Neumann [35], sometimes called von Neumann's trace theorem or Ky Fan's inequality.

Proposition 3.1.

With the above notation, it holds that

where the proximity operator of the gauge function is applied to the vector of singular values, and the two matrices appearing in the formula are formed by the left and right singular vectors of the given matrix, respectively.

Proof.

Von Neumann's inequality states that the trace inner product of two matrices is at most the inner product of their ordered vectors of singular values, with equality if and only if the two matrices share the same ordered system of singular vectors. Note that

and equality holds if and only if . Consequently, we have that

To conclude the proof, we need to show that the vector obtained by applying the proximity operator of the gauge function to the singular values has the same ordering as the singular values, that is, it is non-increasing. Suppose on the contrary that this order is violated for some pair of indices, and consider the vector obtained by swapping the corresponding two components. A direct computation gives

Since the left hand side of the above equation is positive, this leads to a contradiction. ∎
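
Proposition 3.1 reduces the matrix problem to a vector one: compute an SVD, apply the proximity operator of the gauge function to the singular values, and recombine with the singular vectors. The sketch below (ours) instantiates this for the trace (nuclear) norm, the Schatten 1-norm, whose gauge function is the ℓ1 norm, so the vector step is soft-thresholding.

import numpy as np

def prox_nuclear(W, lam):
    # Proximity operator of lam * ||.||_* : soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt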

We can compose an OI norm with a linear transformation, this time between two spaces of matrices, obtaining yet another subclass of penalty functions of the form (1.2). This setting is relevant in the context of multi-task learning. For example, [10] chooses the OI norm to be the trace or nuclear norm and considers a specific linear transformation which models task relatedness, defined in terms of the vector all of whose components are equal to one.

4 Fixed Point Algorithms Based on Proximity Operators

We now propose optimization approaches which use fixed point algorithms for nonsmooth problems. We shall focus on problem (1.1) under the assumption (1.2). We assume that the error term is a strongly smooth convex function, that is, its gradient is Lipschitz continuous with constant L, and that the regularizer is a nondifferentiable convex function. A typical class of such problems occurs in regularization methods where the smooth term corresponds to a data error term with, say, the square loss. Our approach builds on proximal methods and uses fixed point (also known as Picard) iterations for numerically computing the proximity operator.

4.1 Computation of a Generalized Proximity Operator with a Fixed Point Method

As the basic building block of our methods, we consider the optimization problem (1.1) in the special case when the smooth term is a quadratic function, that is,

(4.1)

where the quadratic term is determined by a given vector in ℝ^d and a positive definite matrix.

Recall the proximity operator of Definition 2.1. Under the assumption that we can compute the proximity operator of the simple function explicitly or in a finite number of steps, our aim is to develop an algorithm for evaluating a minimizer of problem (4.1). We describe the algorithm for a generic Hessian, as it can be applied in various contexts. For example, it could lead to a second-order method for solving (1.1), which will be the topic of future work. In this paper, we will apply the technique to the task of evaluating the proximity operator of the composite regularizer.
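
Equation (4.1) itself is not preserved in the extracted text. Based on the surrounding description (a quadratic fit term specified by a given vector and a positive definite matrix, which reduces to the proximity operator of the composite regularizer when that matrix is the identity), a plausible reconstruction, in notation of our own choosing, is

\min_{y \in \mathbb{R}^d} \ \tfrac{1}{2} \langle y - x, \, Q (y - x) \rangle + \lambda\, \omega(B y),

where x \in \mathbb{R}^d is the given vector, Q the positive definite matrix and \lambda > 0; with Q = I this is exactly the evaluation of the proximity operator of \lambda\,\omega \circ B at x, consistent with the remark preceding Corollary 4.1.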

First, we observe that the minimizer of (4.1) exists and is unique. Similar to Proposition 2.1, we have the following proposition.

Proposition 4.1.

If is a convex function on , a positive definite matrix and then is the solution of problem (4.1) if and only if

(4.2)

The subdifferential appearing in the inclusion (4.2) can be expressed with the chain rule (see, e.g., [6]), which gives the formula

(4.3)

Combining equations (4.2) and (4.3) yields the fact that

(4.4)

This inclusion, along with Proposition 2.1, allows us to express the minimizer of (4.1) in terms of the proximity operator of the simple function. To formulate our observation, we introduce an affine transformation, defined in terms of the fixed problem data by

(4.5)

and the operator

(4.6)

Theorem 4.1.

If the simple function is convex, the positive definite matrix and the input vector are given, the parameter is a positive number and we consider the minimizer of (4.1), then the representation

(4.7)

holds if and only if the vector appearing in it is a fixed point of the operator defined in (4.6).

Proof.

From (4.4) we conclude that is characterized by the fact that , where is a vector in the set . Thus it follows that . Using Proposition 2.1 we conclude that

(4.8)

Adding and subtracting on the left hand side and rearranging the terms we see that is a fixed point of .

Conversely, if is a fixed point of , then equation (4.8) holds. Using again Proposition 2.1 and the chain rule (4.3), we conclude that

(4.9)

Proposition 4.1 together with the above inclusion now implies that is the minimizer of (4.1). ∎

Since the operator appearing in (4.6) is nonexpansive [7, Lemma 2.1], we have that

(4.10)

We conclude that the mapping is nonexpansive if the spectral norm of the associated matrix is not greater than one. Expressing this condition in terms of the eigenvalues of that matrix yields an explicit condition on the parameter in terms of the spectral norm. In this case we can appeal to Opial's Theorem 2.1 to find a fixed point of the mapping.

Note that if all of these eigenvalues are strictly positive, that is, the matrix is invertible, then the mapping is strictly nonexpansive under the corresponding strict inequality. In this case, the Picard iterates converge to the unique fixed point, without the need to use Opial's Theorem.

We end this section by noting that, when the positive definite matrix is the identity, the above theorem provides an algorithm for computing the proximity operator of the composite regularizer.

Corollary 4.1.

Let the simple function be convex and the parameter positive, and define the mapping as in (4.5) and (4.6) with the identity in place of the positive definite matrix. Then

(4.11)

holds if and only if the vector appearing in it is a fixed point of this mapping.

Thus, a fixed point iterative scheme like the above one can be used as part of any proximal method when the regularizer has the form (1.2).
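
The exact fixed point map of (4.5)-(4.6) is not recoverable from the extracted text, so the sketch below illustrates the same Picard-Opial idea on a standard special case rather than reproducing the authors' operator: it computes the proximity operator of lam * ||B . ||_1 (covering the Fused Lasso and the graph penalty of Section 5) by averaged fixed point iterations of a nonexpansive dual map, here a projected gradient step on the dual of the proximity problem.

import numpy as np

def prox_composite_l1(x, B, lam, kappa=0.5, tol=1e-10, max_iter=5000):
    # Computes prox_{lam * ||B . ||_1}(x) = x - B.T @ v*, where v* solves
    #   min_v 0.5 * ||x - B.T v||^2   subject to  |v_i| <= lam for all i.
    # The map T below (gradient step followed by clipping onto the box) is
    # nonexpansive for step <= 1 / ||B||_2^2, so the Opial-averaged Picard
    # iterates converge to a fixed point, i.e. a dual solution.
    step = 1.0 / (np.linalg.norm(B, 2) ** 2)

    def T(v):
        return np.clip(v - step * (B @ (B.T @ v - x)), -lam, lam)

    v = np.zeros(B.shape[0])
    for _ in range(max_iter):
        v_new = kappa * v + (1.0 - kappa) * T(v)
        if np.linalg.norm(v_new - v) <= tol:
            break
        v = v_new
    return x - B.T @ v

For the difference matrix of (3.2) this yields the Fused Lasso proximal step; for a graph incidence matrix it yields the proximal step used in the experiments of Section 5.2.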

4.2 Accelerated First-Order Methods

Corollary 4.1 motivates a general proximal numerical approach to solving problem (1.1) (Algorithm 1). Recall that L is the Lipschitz constant of the gradient of the smooth term. The idea behind proximal methods – see [7, 4, 28, 33, 34] and references therein – is to update the current estimate of the solution using the proximity operator. This is equivalent to replacing the smooth term with its linear approximation around a point specific to iteration t. This point may depend on the current and previous estimates of the solution, the simplest and most common update rule being to use the current estimate itself.

  
   for t = 1, 2, … do
      Compute the new estimate of the solution (the proximal step at the current point) by the Picard-Opial process
      Update the point for the next iteration as a function of the current and previous estimates of the solution
   end for
Algorithm 1 Proximal & fixed point algorithm.

In particular, in this paper we focus on combining Picard iterations with the accelerated first-order methods proposed by Nesterov [27, 28]. These methods use an update of a specific type, which requires two levels of memory of the iterates. Such a scheme has the property of a quadratic decay in terms of the iteration count, that is, the distance of the objective from the minimal value is O(1/T^2) after T iterations. This rate of convergence is optimal for a first order method in the sense of the algorithmic model of [25].

It is important to note that other methods may achieve faster rates, at least under certain conditions. For example, interior point methods [29] or iterated reweighted least squares [8, 31, 1] have been applied successfully to nonsmooth convex problems. However, the former require the Hessian and typically have high cost per iteration. The latter require solving linear systems at each iteration. Accelerated methods, on the other hand, have a lower cost per iteration and scale to larger problem sizes. Moreover, in applications where some type of thresholding operator is involved – for example, the Lasso (2.4) – the zeros in the solution are exact, which may be desirable.

Since their introduction, accelerated methods have quickly become popular in various areas of application, including machine learning; see, for example, [24, 15, 17, 13] and references therein. However, their applicability has been restricted by the fact that they require exact computation of the proximity operator. Only then is the O(1/T^2) convergence rate known to hold, and thus methods using numerical computation of the proximity operator are not guaranteed to exhibit this rate. What we show here is how to further extend the scope of accelerated methods and that, empirically at least, these new methods outperform current methods while matching the performance of optimal methods.

In Algorithm 2 we describe a version of accelerated methods influenced by [33, 34]. Nesterov's insight was that an appropriate update of the extrapolation point which uses two levels of memory achieves the O(1/T^2) rate. Specifically, the optimal update involves an auxiliary scalar sequence defined by the recursive equation

(4.12)

We have adapted [33, Algorithm 2] (equivalent to FISTA [4]) by computing the proximity operator of the composite regularizer using the Picard-Opial process described in Section 4.1. We rephrased the algorithm using the auxiliary sequence for numerical stability. At each iteration, the affine map is defined by

and the operator as in (4.6). By Theorem 4.1, the fixed point process combined with this update is equivalent to an exact proximal step.

  
   for t = 1, 2, … do
      Compute a fixed point of the map by the Picard-Opial process
      Update the estimate of the solution from this fixed point
      Update the auxiliary sequence (4.12) and the extrapolation point
   end for
Algorithm 2 Accelerated & fixed point algorithm.
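
To make the overall scheme concrete, here is a FISTA-style sketch of the outer loop for the square loss, with prox_composite_l1 from the sketch after Corollary 4.1 playing the role of the (numerically computed) proximity operator. The momentum recursion used below is the standard FISTA one and may differ in detail from the authors' adaptation of [33, Algorithm 2]; all names are ours.

import numpy as np

def accelerated_composite(X, y, B, lam, n_iter=200):
    # Sketch for min_w 0.5 * ||X w - y||^2 + lam * ||B w||_1 (not the authors' code).
    d = X.shape[1]
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = w_prev = np.zeros(d)
    theta = 1.0
    for _ in range(n_iter):
        theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
        z = w + ((theta - 1.0) / theta_next) * (w - w_prev)   # extrapolation point
        grad = X.T @ (X @ z - y)
        w_prev, w = w, prox_composite_l1(z - grad / L, B, lam / L)
        theta = theta_next
    return w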

5 Numerical Simulations

We have evaluated the efficiency of our method with simulations on different nonsmooth learning problems. One important aim of the experiments is to demonstrate improvement over a state of the art suite of methods (SLEP) [16] in the cases when the proximity operator is not exactly computable.

An example of such a case, which we consider in Section 5.1, is the Group Lasso with overlapping groups. An algorithm for computing the proximity operator in a finite number of steps is known only in the special case of hierarchy-induced groups [13]. In other cases, such as groups induced by directed acyclic graphs [38] or more complicated sets of groups, the best known theoretical rate for a first-order method is O(1/T). We demonstrate that such a method can be improved upon.

Moreover, in Section 5.2 we report efficient convergence in the case of a composite penalty used for graph prediction [11]. In this case, the matrix is the incidence matrix of a graph and the penalty is the sum, over the set of edges, of the absolute differences of the variable across each edge. Most work we are aware of for this composite penalty applies to the special cases of total variation [3] or the Fused Lasso [19], in which the matrix has a simple structure. A recent method for the general case [5], which builds on Nesterov's smoothing technique [27], does not yet have publicly available software.

Another advantage of Algorithm 2 which we highlight is the high efficiency of Picard iterations for computing different proximity operators. This requires only a small number of iterations regardless of the size of the problem. We also report a roughly linear scalability with respect to the dimensionality of the problem, which shows that our methodology can be applied to large scale problems.

In the following simulations, we have chosen a fixed value of the averaging parameter from Opial's theorem. The step parameter was set using the largest and smallest eigenvalues of the relevant matrix. We have focused exclusively on the case of the square loss and we have computed the Lipschitz constant of its gradient using singular value decomposition (if this were not possible, a Frobenius norm estimate could be used). Finally, the implementation ran on a 16GB memory dual core Intel machine. The Matlab code is available at http://ttic.uchicago.edu/argyriou/code/index.html.

Figure 1: Objective function vs. iteration for the overlapping groups data. Note that Picard-Nesterov terminates earlier, within the prescribed tolerance of the optimal value.
Figure 2: Difference of successive Picard iterates vs. Picard iteration for the overlapping groups data.

5.1 Overlapping Groups

In the first simulation we considered a synthetic data set which involves a fairly simple group topology which, however, cannot be embedded as a hierarchy. We generated the entries of the data matrix from a uniform distribution and normalized the matrix. The target vector was also generated randomly so that only a small number of its components are nonzero. The groups used in the regularizer – see eq. (3.1) – form the following pattern: the first groups form a chain, the next groups have a common element with one of the first groups and the rest have no overlaps. An issue with overlapping group norms is the coefficients assigned to each group (see [12] for a discussion). We chose to use the same coefficient for every group and to compensate by normalizing each component of the target vector according to the number of groups in which it appears (this of course can only be done in a synthetic setting like this). The outputs were then generated from the resulting linear model with zero mean Gaussian noise.

Figure 3: Average measures vs. dimensionality for the overlapping groups data. Top: number of iterations. Bottom: CPU time. Note that this time can be reduced to a fraction with a C implementation.
Figure 4: Objective function vs. iteration for the hierarchical overlapping groups.

We used a fixed value of the regularization parameter. We ran the algorithm for a range of dimensionalities, with several random data sets for each value, and compared its efficiency with SLEP. The solutions found recover the correct pattern, without exact zeros, due to the regularization. Figure 1 shows the number of iterations in Algorithm 2 needed for convergence in objective value within the prescribed tolerance. SLEP was run until the same objective value was reached. We conclude that we outperform SLEP's method. Figure 2 demonstrates the efficiency of the inner computation of the proximity map at one iteration of the algorithm. Just a few Picard iterations are required for convergence. The plots for different problem sizes are indistinguishable.

Similar conclusions can be drawn from the plots in Figure 3, where average counts of iterations and CPU time are shown for each value of the dimensionality. We see that the number of iterations depends almost linearly on the dimensionality and that SLEP requires an order of magnitude more iterations, which moreover grow at a higher rate. Note also that the cost per iteration is comparable between the two methods. We also observed that the computation of the proximity map is insensitive to the size of the problem (it requires only a few Picard iterations for all problem sizes). Finally, we report that CPU time grows roughly linearly with dimensionality. To remove various overheads, this estimate was obtained from Matlab's profiling statistics for the low-level functions called. A direct CPU time comparison with SLEP is not meaningful since the latter is a C implementation.

Besides outperforming the O(1/T) method, we also show that the Picard-Nesterov approach matches SLEP's method for the tree structured Group Lasso [18]. To this end, we have imitated an experiment from [13, Sec. 4.1] using the Berkeley segmentation data set (http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/). We have extracted a random dictionary of patches from these images, which we have placed on a balanced tree with prescribed branching factors (top to bottom). Here the groups correspond to all subtrees of this tree. We have then learned the decomposition of new test patches in the dictionary basis by Group Lasso regularization (3.1). As Figure 4 shows, our method and SLEP are practically indistinguishable.

5.2 Graph Prediction

The second simulation is on the graph prediction problem of [11] in the limit of p → 1 (composite ℓ1 penalty). We constructed a synthetic graph with two clusters of equal size. The edges within each cluster were selected from a uniform draw with a fixed probability, and we explicitly connected a small number of pairs of vertices between the clusters. The labeled data were the cluster labels of a few randomly drawn vertices. Note that the effective dimensionality of this problem is the number of edges of the graph. At the time of the paper's writing there is no accelerated method with software available online which handles a generic graph.

First, we observed that the solution found recovered the clustering perfectly. Next, we studied the decay of the objective function for different problem sizes (Figure 5). We noted a striking difference from the case of overlapping groups in that convergence now is not monotone (there is no monotonicity guarantee for Nesterov's accelerated method). The nature of the decay also differs from graph to graph, with some cases making fast progress to a point very close to the optimal value long before eventual convergence. This observation suggests future modifications of the algorithm which could accelerate convergence further. As an indication, the distance from the optimum was already very small after a small fraction of the total number of iterations in several cases. We verified on this data as well that the Picard iterations converge very fast (Figure 6). Finally, in Table 1 we report average iteration numbers and running times. These demonstrate the feasibility of solving problems with large matrices even using a "quick and dirty" Matlab implementation.

Figure 5: Objective function vs. iteration for the graph data. Note the progress in the early stages in some cases.
Figure 6: Difference of successive Picard iterates vs. Picard iteration for the graph data.
size    no. iterations    CPU time (secs.)
100     2599.6            21.461
120     3680.0            54.745
140     4351.8            118.61
160     3124.8            164.21
180     2845.8            241.69
200     3476.2            359.75
220     4490.0            911.67
240     4490.0            911.67
260     3639.2            930.8
Table 1: Graph data; the first column is the problem size. Note that the effective dimensionality is considerably larger. CPU time can be reduced to a fraction with a C implementation.
Figure 7: Objective function vs. iteration for the Fused Lasso. The two trajectories are identical.

In addition to a random incidence matrix, one may consider the special case of the Fused Lasso or Total Variation, in which the matrix has the simple form (3.2). It has been shown how to achieve the optimal O(1/T^2) rate for this problem in [3]. We applied the Fused Lasso (without the additional Lasso regularization) to the same clustering data as before and compared SLEP with the Picard-Nesterov approach. As Figure 7 shows, the two trajectories are identical. This provides further evidence in favor of the optimality of our method.

6 Conclusion

We presented an efficient first order method for solving a class of nonsmooth optimization problems, whose objective function is given by the sum of a smooth term and a nonsmooth term which is obtained by linear function composition. The prototypical example covered by this setting is a linear regression regularization method, in which the smooth term is an error term and the nonsmooth term is a regularizer which favors certain desired parameter vectors. An important feature of our approach is that it can deal with richer classes of regularizers than current approaches and at the same time is at least as computationally efficient as specific existing approaches for structured sparsity. In particular, our numerical simulations demonstrate that the proposed method matches optimal O(1/T^2) methods on specific problems (the Fused Lasso and the tree structured Group Lasso) while improving over available methods for the overlapping Group Lasso. In addition, it can handle generic linear composite regularization problems, for many of which accelerated methods do not yet exist. In the future, we wish to study theoretically whether the rate of convergence is O(1/T^2), as suggested by our numerical simulations. There is also much room for further acceleration of the method in the more challenging cases by using practical heuristics. At the same time, it will be valuable to study further applications of our method. These could include machine learning problems ranging from multi-task learning, to multiple kernel learning and to dictionary learning, all of which can be formulated as linear composite regularization problems.

Acknowledgements

We wish to thank Luca Baldassarre and Silvia Villa for useful discussions. This work was supported by Air Force Grant AFOSR-FA9550, EPSRC Grants EP/D071542/1 and EP/H027203/1, NSF Grant ITR-0312113, Royal Society International Joint Project Grant 2012/R2, as well as by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.

References

  • [1] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
  • [2] A. Argyriou, C.A. Micchelli, and M. Pontil. On spectral learning. The Journal of Machine Learning Research, 11:935–953, 2010.
  • [3] A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. Image Processing, IEEE Transactions on, 18(11):2419–2434, 2009.
  • [4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal of Imaging Sciences, 2(1):183–202, 2009.
  • [5] S. Becker, E. J. Candès, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. Preprint, 2010.
  • [6] J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and Examples. CMS Books in Mathematics. Springer, 2005.
  • [7] P.L. Combettes and V.R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168–1200, 2006.
  • [8] I. Daubechies, R. DeVore, M. Fornasier, and C.S. Güntürk. Iteratively reweighted least squares minimization for sparse recovery. Communications on Pure and Applied Mathematics, 63(1):1–38, 2010.
  • [9] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. The Journal of Machine Learning Research, 10:2899–2934, 2009.
  • [10] T. Evgeniou, M. Pontil, and O. Toubia. A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. Forthcoming at Marketing Science, 2007.
  • [11] M. Herbster and G. Lever. Predicting the labelling of a graph via minimum p-seminorm interpolation. In Proceedings of the 22nd Conference on Learning Theory (COLT), 2009.
  • [12] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. arXiv:0904.3523v2, 2009.
  • [13] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In International Conference on Machine Learning, pages 487–494, 2010.
  • [14] D. Kim, S. Sra, and I. S. Dhillon. A scalable trust-region algorithm with application to mixed-norm regression. In International Conference on Machine Learning, 2010.
  • [15] Q. Lin. A Smoothing Stochastic Gradient Method for Composite Optimization. Arxiv preprint arXiv:1008.5204, 2010.
  • [16] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.
  • [17] J. Liu and J. Ye. Fast Overlapping Group Lasso. Arxiv preprint arXiv:1009.0306, 2010.
  • [18] J. Liu and J. Ye. Moreau-Yosida regularization for grouped tree structure learning. In Advances in Neural Information Processing Systems, 2010.
  • [19] J. Liu, L. Yuan, and J. Ye. An efficient algorithm for a class of fused lasso problems. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323–332, 2010.
  • [20] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. CoRR, abs/1008.5209, 2010.
  • [21] C.A. Micchelli, L. Shen, and Y. Xu. Proximity algorithms for image models: denoising. preprint, September 2010.
  • [22] J.J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. Acad. Sci. Paris Sér. A Math., 255:2897–2899, 1962.
  • [23] J.J. Moreau. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France, 93(2):273–299, 1965.
  • [24] S. Mosci, L. Rosasco, M. Santoro, A. Verri, and S. Villa. Solving Structured Sparsity Regularization with Proximal Methods. In Proc. European Conf. Machine Learning and Knowledge Discovery in Databases, pages 418–433, 2010.
  • [25] A. S. Nemirovsky and D. B. Yudin. Problem complexity and method efficiency in optimization. Wiley, 1983.
  • [26] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). Soviet Mathematics Doklady, 27(2):372–376, 1983.
  • [27] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
  • [28] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE, 2007.
  • [29] Y. Nesterov and A. Nemirovskii. Interior-point polynomial algorithms in convex programming. Number 13. Society for Industrial Mathematics, 1987.
  • [30] Z. Opial. Weak convergence of the subsequence of successive approximations for nonexpansive operators. Bulletin American Mathematical Society, 73:591–597, 1967.
  • [31] M.R. Osborne. Finite algorithms in optimization and data analysis. John Wiley & Sons, Inc. New York, NY, USA, 1985.
  • [32] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2005.
  • [33] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Preprint, 2008.
  • [34] P. Tseng. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Mathematical Programming, 125(2):263–295, 2010.
  • [35] J. Von Neumann. Some matrix-inequalities and metrization of matric-space. Mitt. Forsch.-Inst. Math. Mech. Univ. Tomsk, 1:286–299, 1937.
  • [36] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2006.
  • [37] C. Zălinescu. Convex Analysis in General Vector Spaces. World Scientific, 2002.
  • [38] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Annals of Statistics, 37(6A):3468–3497, 2009.

7 Appendix

In this appendix, we collect some basic facts about fixed point theory which are useful for our study. For more information on the material presented here, we refer the reader to [37].

Let D be a closed subset of ℝ^d. A mapping T : D → D is called strictly nonexpansive (or contractive) if there exists α ∈ [0, 1) such that, for every x, y ∈ D,

‖T(x) − T(y)‖ ≤ α ‖x − y‖.

If the above inequality holds with α = 1, the mapping is called nonexpansive. We say that x is a fixed point of T if T(x) = x. The Picard iterates starting at x_0 are defined by the recursive equation x_{t+1} = T(x_t).

It is a well-known fact that, if T is strictly nonexpansive, then T has a unique fixed point and the Picard iterates converge to it. However, this result fails if T is merely nonexpansive. For example, a nonzero translation does not have a fixed point. On the other hand, the identity map has infinitely many fixed points.

Definition 7.1.

Let D be a closed subset of ℝ^d. A map T : D → D is called asymptotically regular provided that, for every x ∈ D, ‖T^{t+1}(x) − T^t(x)‖ tends to zero as t tends to infinity.

Proposition 7.1.

Let D be a closed subset of ℝ^d and T : D → D a mapping such that

  1. T is nonexpansive;

  2. T has at least one fixed point;

  3. T is asymptotically regular.

Then, for every starting point in D, the sequence of Picard iterates of T converges to a fixed point of T.

Proof.

We divide the proof into three steps.

Step 1: The Picard iterates are bounded. Indeed, let z be a fixed point of T. Since T is nonexpansive, the distances of the iterates from z are non-increasing and hence bounded.

Step 2: Let (x_{t_k}) be a convergent subsequence, whose limit we denote by x̄. We will show that x̄ is a fixed point of T. Since T is continuous, T(x_{t_k}) converges to T(x̄), and since T is asymptotically regular, x_{t_k + 1} − x_{t_k} tends to zero; hence T(x̄) = x̄.

Step 3: The whole sequence converges. Indeed, following the same reasoning as in Step 1, we conclude that the sequence of distances ‖x_t − x̄‖ is non-increasing. Since a subsequence of these distances tends to zero, the whole sequence tends to zero and, so, the Picard iterates converge to x̄. ∎

We note that in general, without the asymptotic regularity assumption, the Picard iterates need not converge. For example, consider the map x ↦ −x. Its only fixed point is zero; if we start from any nonzero point the Picard iterates will oscillate. Moreover, for a nonzero translation, which is nonexpansive, the Picard iterates diverge.
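
A tiny numerical illustration (our own toy example) of this phenomenon and of Theorem 7.1 below: a planar rotation is nonexpansive with the origin as its unique fixed point, yet its Picard iterates circle forever, while the Picard iterates of the κ-averaged map converge to the origin.

import numpy as np

theta = 2.0                                     # any rotation angle other than a multiple of 2*pi
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
kappa = 0.5
x = v = np.array([1.0, 0.0])
for _ in range(100):
    x = R @ x                                   # plain Picard iterates: stay on the unit circle
    v = kappa * v + (1 - kappa) * (R @ v)       # averaged map: contracts toward the fixed point 0
print(np.linalg.norm(x), np.linalg.norm(v))     # approximately 1.0 and 0.0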

We now discuss the main tool which we use to find a fixed point of a nonexpansive mapping .

Theorem 7.1.

(Opial κ-average theorem [30]) Let D be a closed convex subset of ℝ^d, T : D → D a nonexpansive mapping which has at least one fixed point, and let T_κ := κ I + (1 − κ) T for some κ ∈ (0, 1). Then, for every starting point in D, the Picard iterates of T_κ converge to a fixed point of T.

We prepare for the proof with two useful lemmas.

Lemma 7.1.

If a, b ∈ ℝ^d and κ ∈ [0, 1], then

κ (1 − κ) ‖a − b‖² ≤ κ ‖a‖² + (1 − κ) ‖b‖² − ‖κ a + (1 − κ) b‖².

Proof.

The assertion follows from the strong convexity of the squared norm ‖·‖². ∎

Lemma 7.2.

If (a_t) and (b_t) are sequences in ℝ^d and κ ∈ (0, 1), c ≥ 0 are such that lim sup_t ‖a_t‖ ≤ c, lim sup_t ‖b_t‖ ≤ c and lim_t ‖κ a_t + (1 − κ) b_t‖ = c, then lim_t ‖a_t − b_t‖ = 0.

Proof.

Apply Lemma 7.1 to note that

κ (1 − κ) ‖a_t − b_t‖² ≤ κ ‖a_t‖² + (1 − κ) ‖b_t‖² − ‖κ a_t + (1 − κ) b_t‖².

By hypothesis the right hand side tends to zero as t tends to infinity and the result follows. ∎

Proof of Theorem 7.1.

Let (x_t) be the Picard iterates of T_κ. We will show that T_κ is asymptotically regular. The result will then follow by Proposition 7.1 and the fact that T and T_κ have the same set of fixed points.

Note that, if z is a fixed point of T, then the sequence of distances ‖x_t − z‖ is non-increasing and hence converges to some limit c ≥ 0. If c = 0 the result is proved. In general, for every t, we define a_t = x_t − z and b_t = T(x_t) − z, so that κ a_t + (1 − κ) b_t = x_{t+1} − z. The sequences (a_t) and (b_t) satisfy the hypotheses of Lemma 7.2 with this value of c. Thus, we have that ‖a_t − b_t‖ = ‖x_t − T(x_t)‖ tends to zero. Consequently ‖x_{t+1} − x_t‖ = (1 − κ) ‖T(x_t) − x_t‖ tends to zero, showing that T_κ is asymptotically regular. ∎