MIST: L0 Sparse Linear Regression with Momentum

Significant attention has been given to minimizing a penalized least squares criterion for estimating sparse solutions to large linear systems of equations. The penalty is responsible for inducing sparsity, and the natural choice is the so-called ℓ_0 norm. In this paper we develop a Momentumized Iterative Shrinkage Thresholding (MIST) algorithm for minimizing the resulting non-convex criterion and prove its convergence to a local minimizer. Simulations on large data sets show that the proposed method outperforms existing methods.

I Introduction

In the current age of big data acquisition there has been an ever growing interest in sparse representations, which consist of representing, say, a noisy signal as a linear combination of very few components. This implies that the entire information in the signal can be approximately captured by a small number of components, which has huge benefits for the analysis, processing and storage of high dimensional signals. As a result, sparse linear regression has been widely studied, with many applications in signal and image processing, statistical inference and machine learning. Specific applications include compressed sensing, denoising, inpainting, deblurring, source separation, sparse image reconstruction, and signal classification.

The linear regression model is given by:

y = A x + e,    (1)

where y ∈ R^m is a vector of noisy data observations, x ∈ R^n is the sparse representation (vector) of interest, A ∈ R^{m×n} is the regression matrix and e ∈ R^m is the noise. The estimation aim is to choose the simplest model, i.e., the sparsest x, that adequately explains the data y. To estimate a sparse x, major attention has been given to minimizing a sparsity Penalized Least Squares (PLS) criterion [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. The least squares term promotes goodness-of-fit of the estimator while the penalty shrinks its coefficients to zero. Here we consider the non-convex ℓ_0 penalty since it is the natural sparsity promoting penalty and induces maximum sparsity. The resulting non-convex ℓ_0 PLS criterion is given by:

F(x) = (1/2) ||y − A x||_2^2 + λ ||x||_0,    (2)

where λ > 0 is the tuning/regularization parameter and ||x||_0 is the ℓ_0 penalty, representing the number of non-zeros in x.
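For concreteness, a minimal Python sketch of the criterion (2) is given below; the function and variable names (l0_pls_objective, A, y, lam) are illustrative rather than taken from the paper.

import numpy as np

def l0_pls_objective(x, A, y, lam):
    # F(x) = 0.5 * ||y - A x||_2^2 + lam * ||x||_0, as in (2).
    residual = y - A @ x
    return 0.5 * residual @ residual + lam * np.count_nonzero(x)

# Small illustrative instance with arbitrary values.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
y = A @ x_true + 0.1 * rng.standard_normal(20)
print(l0_pls_objective(x_true, A, y, lam=0.5))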

I-A Previous Work

Existing algorithms for directly minimizing (2) fall into the category of Iterative Shrinkage Thresholding (IST) and rely on Majorization-Minimization (MM) type procedures, see [1, 2]. These procedures exploit the separability of the PLS criterion, and thus rely on the minimizers of one dimensional versions of the PLS function: the so-called hard-thresholding operators. Since the convex ℓ_1 PLS criterion has similar separability properties, some MM procedures developed for its minimization could, with modifications, be applied to minimize (2). Applicable MM procedures include first order methods and their accelerated versions [9, 11, 12]. However, when these are applied to the ℓ_0 penalized problem (2) there is no guarantee of convergence, and for [9] there is additionally no guarantee of algorithm stability.

Analysis of convergence of MM algorithms for minimizing the ℓ_0 PLS criterion (2) is rendered difficult by the lack of convexity. As far as we are aware, algorithm convergence for this problem has only been shown for the Iterative Hard Thresholding (IHT) method [1, 2]. Specifically, a bounded sequence generated by IHT was shown to converge to the set of local minimizers of (2) when the singular values of A are strictly less than one. Convergence analysis of algorithms designed for minimizing convex or continuous PLS criteria is not applicable to the ℓ_0 penalized objective (2), because it relies on convexity arguments and/or on continuity and differentiability of the penalty, none of which hold for the ℓ_0 penalty.

I-B Paper Contribution

In this paper we develop an MM algorithm with momentum acceleration, called Momentumized IST (MIST), for minimizing the ℓ_0 PLS criterion (2), and prove its convergence to a single local minimizer without imposing any assumptions on the regression matrix A. Simulations on large data sets are carried out, which show that the proposed algorithm outperforms existing methods for minimizing (2), including modified MM methods originally designed for the convex ℓ_1 PLS criterion.

The paper is organised as follows. Section II reviews background on MM that will be used to develop the proposed convergent algorithm. The proposed algorithm is given in Section III, and Section IV contains the convergence analysis. Lastly, Sections V and VI present the simulations and the concluding remarks, respectively.

I-C Notation

The i-th component of a vector x is denoted by x_i. Given a vector g(x), where g is a function, its i-th component is denoted by g_i(x). ||A|| is the spectral norm of the matrix A. I(·) is the indicator function, equaling 1 if its argument is true and zero otherwise. Given a vector x, ||x||_0 = Σ_i I(x_i ≠ 0). sgn(·) is the sign function. {x_k} denotes an infinite sequence, and {x_{k_j}} an infinite subsequence, where k_{j+1} > k_j for all j.

II Preliminaries

Denoting the least squares term in (2) by:

f(x) = (1/2) ||y − A x||_2^2,

the Lipschitz continuity of its gradient ∇f implies:

f(x) ≤ f(z) + ∇f(z)^T (x − z) + (μ/2) ||x − z||_2^2

for all x, z and any μ ≥ ||A||^2. For the proof see [9, Lemma 2.1]. As a result, the following approximation of the objective function F in (2),

Q_μ(x, z) = f(z) + ∇f(z)^T (x − z) + (μ/2) ||x − z||_2^2 + λ ||x||_0,    (3)

is a majorizing function, i.e.,

F(x) ≤ Q_μ(x, z) for all x, z, with equality when x = z.    (4)

Let x̂ be any point in the set argmin_x Q_μ(x, z); we have:

F(x̂) ≤ Q_μ(x̂, z) ≤ Q_μ(z, z) = F(z),    (5)

where the first inequality follows from Eq. (4). The proposed algorithm will be constructed based on the above MM framework with a momentum acceleration, described below. This momentum acceleration will be designed based on the following.

Theorem 1.

Let , where , and:

(6)

where . Then, .

For the proof see the Appendix.
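The majorization property (4) can be checked numerically. The following Python sketch builds the quadratic surrogate of the least squares term with curvature μ ≥ ||A||^2 and verifies that the resulting surrogate upper bounds the objective; the names f, grad_f, F and Q are illustrative, and the surrogate is written in the standard form implied by (3).

import numpy as np

def f(x, A, y):
    r = y - A @ x
    return 0.5 * r @ r                      # least squares term in (2)

def grad_f(x, A, y):
    return A.T @ (A @ x - y)                # gradient of the least squares term

def F(x, A, y, lam):
    return f(x, A, y) + lam * np.count_nonzero(x)

def Q(x, z, A, y, lam, mu):
    # Majorizing surrogate in the spirit of (3): a quadratic upper bound on f
    # around z plus the l0 penalty. Requires mu >= ||A||^2 (squared spectral norm).
    d = x - z
    return f(z, A, y) + grad_f(z, A, y) @ d + 0.5 * mu * d @ d + lam * np.count_nonzero(x)

# Numerical check of the majorization property (4): Q(x, z) >= F(x).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
y = rng.standard_normal(30)
mu = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad_f
lam = 0.3
for _ in range(5):
    x, z = rng.standard_normal(80), rng.standard_normal(80)
    assert Q(x, z, A, y, lam, mu) >= F(x, A, y, lam) - 1e-9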

II-A Evaluating the Operator

Since (3) is non-convex there may exist multiple minimizers of Q_μ(·, z), so that argmin_x Q_μ(x, z) may not be unique. We select a single element of the set of minimizers as described below. By simple algebraic manipulations of the quadratic quantity in (3), letting:

u = z − (1/μ) ∇f(z) = z − (1/μ) A^T (A z − y),    (7)

it is easy to show that:

Q_μ(x, z) = (μ/2) ||x − u||_2^2 + λ ||x||_0 + constant,

where the constant collects terms that do not depend on x, and so, the set of minimizers of Q_μ(·, z) is given componentwise by:

argmin_{x_i} { (μ/2)(x_i − u_i)^2 + λ I(x_i ≠ 0) }.    (8)

For the proposed algorithm we fix a single minimizer through the point to point map defined in the following Theorem.

Theorem 2.

Let the hard-thresholding (point-to-point) map H: R^n → R^n be such that for each i:

H_i(u) = u_i if |u_i| > sqrt(2λ/μ),    H_i(u) = 0 if |u_i| < sqrt(2λ/μ),    (9)

with a fixed choice between the two values at |u_i| = sqrt(2λ/μ). Then, H(u) ∈ argmin_x Q_μ(x, z), where u is from (7).

The proof is in the Appendix.

Evidently, Theorem 1 holds with the minimizer replaced by H(u). The motivation for selecting this particular minimizer is Lemma 2 in Section IV.
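In code, the selected minimizer amounts to a componentwise hard threshold of the gradient step (7) at the level sqrt(2λ/μ). The Python sketch below illustrates this together with one plain (un-accelerated) IST/IHT step; the tie-breaking convention at the threshold boundary and the function naming are assumptions.

import numpy as np

def hard_threshold(u, lam, mu):
    # Componentwise hard thresholding: keep u_i when (mu/2) * u_i^2 > lam,
    # i.e. |u_i| > sqrt(2*lam/mu), and set it to zero otherwise. Zero is kept
    # at the boundary here; this tie-breaking convention is an assumption.
    keep = np.abs(u) > np.sqrt(2.0 * lam / mu)
    return np.where(keep, u, 0.0)

def ist_step(x, A, y, lam, mu):
    # One un-accelerated IST/IHT step: gradient step as in (7),
    # followed by hard thresholding as in (8)-(9).
    u = x - (A.T @ (A @ x - y)) / mu
    return hard_threshold(u, lam, mu)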

III The Algorithm

The proposed algorithm is constructed by repeated application of Theorem 1, where the acceleration direction is chosen to be the difference between the current and the previous iterate, i.e.,

(10)

with the step size given by (6). The iteration (10) is an instance of a momentum accelerated IST algorithm, similar to the Fast IST Algorithm (FISTA) introduced in [9] for minimizing the convex ℓ_1 PLS criterion. In (10), the difference of successive iterates is called the momentum term, and the scalar multiplying it is a momentum step size parameter. A more explicit implementation of (10) is given below. Our proposed algorithm will be called Momentumized Iterative Shrinkage Thresholding (MIST).

 
Momentumized IST (MIST) Algorithm
 
Compute off-line. Choose and let . Also, calculate off-line, let and . Then:

  1. If , let . Otherwise, compute:
    (a)
    (b)
    (c)
    (d)
    (e) and:

    (11)

    (f) Choose and compute:

    (12)
  2. Using (c), (e) and (f) compute:

    (13)
  3. Let and go to (1).

 

Remark 1.

Thresholding using (9) is simple and can always be done off-line. Secondly, note that MIST requires computing only a small, fixed number of matrix-vector products per iteration, which is of the same order as when the momentum term is not incorporated, i.e., when the momentum step size is zero for all iterations. In that case, MIST is a generalization of IHT from [1, 2]. Other momentum methods, such as FISTA [9] and its monotone version M-FISTA [11], require a comparable or larger number of such products per iteration, respectively.
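To illustrate the overall structure of a momentum accelerated IST iteration of the form (10), a simplified Python sketch is given below. It uses a fixed momentum step size and a simple monotonicity safeguard in place of the step size selection in (6), (11) and (12), so it is only a rough stand-in for MIST, not the algorithm itself.

import numpy as np

def momentum_ist(A, y, lam, n_iter=500, eta=0.5, tol=1e-8):
    # Sketch of a momentum-accelerated IST loop in the spirit of (10).
    # The fixed momentum step size eta and the monotonicity safeguard below
    # are simplifying assumptions; MIST computes its momentum step size via
    # (6), (11) and (12), which is not reproduced here.
    n = A.shape[1]
    mu = np.linalg.norm(A, 2) ** 2            # mu >= ||A||^2
    thr = np.sqrt(2.0 * lam / mu)

    def F(x):
        r = y - A @ x
        return 0.5 * r @ r + lam * np.count_nonzero(x)

    x_prev = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        z = x + eta * (x - x_prev)            # momentum-perturbed point
        u = z - (A.T @ (A @ z - y)) / mu      # gradient step at z, cf. (7)
        x_new = np.where(np.abs(u) > thr, u, 0.0)      # hard threshold, cf. (9)
        if F(x_new) > F(x):                   # safeguard (assumption): fall back
            u = x - (A.T @ (A @ x - y)) / mu  # to a plain IST step from x
            x_new = np.where(np.abs(u) > thr, u, 0.0)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
            x = x_new
            break
        x_prev, x = x, x_new
    return x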

IV Convergence Analysis

Here we prove that the iterates generated by the proposed MIST algorithm converge to a local minimizer of (2).

Theorem 3.

Suppose {x_k} is a bounded sequence generated by the MIST algorithm. Then x_k → x̄ as k → ∞, where x̄ is a local minimizer of (2).

The proof is in the Appendix and requires several lemmas that are also proved in the Appendix.

In Lemmas 1 and 2 it is assumed that MIST reaches a fixed point only in the limit, i.e., x_{k+1} ≠ x_k for all k.

Lemma 1.

x_{k+1} − x_k → 0 as k → ∞.

The following lemma motivates Theorem 2 and is crucial for the subsequent convergence analysis.

Lemma 2.

Assume the result in Lemma 1. If, for any subsequence {x_{k_j}}, we have x_{k_j} → x̄ as j → ∞, then:

(14)

where .

The following lemma characterizes the fixed points of the MIST algorithm.

Lemma 3.

Suppose is a fixed point of MIST. Letting and ,

  • If , then .

  • If , then .

  • If , then .

Lemma 4.

Suppose x̄ is a fixed point of MIST. Then there exists ε > 0 such that F(x̄) < F(x) for any x ≠ x̄ satisfying ||x − x̄|| < ε. In other words, x̄ is a strict local minimizer of (2).

Lemma 5.

The limit points of {x_k} are fixed points of MIST.

All of the above lemmas are proved in the Appendix.

V Simulations

Here we demonstrate the performance advantages of the proposed MIST algorithm in terms of convergence speed. The methods used for comparison are the well known MM algorithms: ISTA and FISTA from [9], as well as M-FISTA from [11], where the soft-thresholding map is replaced by the hard-thresholding map. In this case, ISTA becomes identical to the IHT algorithm from [1, 2], while FISTA and M-FISTA become its accelerated versions, which exploit the ideas in [13].

A popular compressed sensing scenario is considered, with the aim of reconstructing a length-n sparse signal x from m < n observations. The matrix A is obtained by filling it with independent samples from the standard Gaussian distribution. A relatively high dimensional example is considered, in which x contains a small number of randomly placed spikes (non-zeros). The observation y is generated according to (1), with the standard deviation of the noise set to produce each of the three considered Signal to Noise Ratio (SNR) values, where the SNR is defined as the ratio of the signal power to the noise power. Figures 1, 2 and 3 show plots of the (noiseless) observation and the observation noise for the three SNR values.

Fig. 1: Examples of the first entry values in the (noiseless) observation and the observation noise for SNR=12.
Fig. 2: Examples of the first entry values in the (noiseless) observation and the observation noise for SNR=6.
Fig. 3: Examples of the first entry values in the (noiseless) observation and the observation noise for SNR=1.7.
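A Python sketch of the data generation used in this type of experiment is given below; the dimensions, sparsity level, seed and the SNR convention (signal power over noise power) are illustrative placeholders rather than the exact values used here.

import numpy as np

def make_cs_instance(m=400, n=2000, k=20, snr=12.0, seed=0):
    # Generate y = A x + e with i.i.d. standard Gaussian A, a k-sparse x with
    # randomly placed spikes, and noise scaled so that the ratio of signal power
    # to noise power equals snr. All numbers here are illustrative placeholders.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    signal = A @ x
    noise_std = np.sqrt((signal @ signal) / (m * snr))   # assumed SNR convention
    y = signal + noise_std * rng.standard_normal(m)
    return A, x, y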

V-A Selection of the Tuning Parameter

Oracle results are reported, where the chosen tuning parameter λ in (2) is the minimizer of the Mean Squared Error (MSE) between the estimate x̂(λ) and the true signal x, where x̂(λ) is the estimator of x produced by a particular algorithm.

As x is generally unknown to the experimenter, we also report results obtained using a model selection method to select λ. Some of the classical model selection methods include the Bayesian Information Criterion (BIC) [14], the Akaike Information Criterion [15], and (generalized) cross validation [16, 17]. However, these methods tend to select a model with many spurious components when the signal dimension n is large and the number of observations m is comparatively small, see [18, 19, 20, 21]. As a result, we use the Extended BIC (EBIC) model selection method proposed in [18], which incurs a small loss in the positive selection rate while tightly controlling the false discovery rate, a desirable property in many applications. The EBIC augments the classical BIC with an additional penalty on the model size, controlled by a parameter γ, and the chosen λ in (2) is the minimizer of this criterion. Note that the classical BIC criterion is obtained by setting γ = 0. As suggested in [18], we let γ = 1 − 1/(2κ), where κ is the solution of n = m^κ, i.e., κ = log n / log m.
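The following Python sketch illustrates EBIC-based selection of λ over a grid. The concrete EBIC expression below is the Gaussian linear model form commonly associated with [18], m log(RSS/m) + k log m + 2 γ k log n with k = ||x̂||_0, and should be read as an assumption rather than the exact expression used in the paper; the solver argument stands for any of the compared algorithms.

import numpy as np

def ebic(x_hat, A, y, gamma):
    # EBIC in the Gaussian linear model form commonly used with [18]:
    # m*log(RSS/m) + k*log(m) + 2*gamma*k*log(n), where k = ||x_hat||_0.
    # This concrete form is an assumption, not necessarily the paper's expression.
    m, n = A.shape
    r = y - A @ x_hat
    rss = r @ r
    k = np.count_nonzero(x_hat)
    return m * np.log(rss / m) + k * np.log(m) + 2.0 * gamma * k * np.log(n)

def select_lambda(A, y, lambdas, solver):
    # Pick the lambda minimizing EBIC; solver(A, y, lam) returns the estimate
    # produced by a given algorithm (e.g. the momentum_ist sketch above).
    m, n = A.shape
    kappa = np.log(n) / np.log(m)          # n = m**kappa
    gamma = 1.0 - 1.0 / (2.0 * kappa)      # assumed choice suggested in [18]
    scores = [ebic(solver(A, y, lam), A, y, gamma) for lam in lambdas]
    return lambdas[int(np.argmin(scores))]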

V-B Results

All algorithms are initialized at the same starting point, and are terminated when the following criterion is satisfied:

(15)

In the MIST algorithm the remaining parameters are set to fixed values in all experiments. All experiments were run in MATLAB 8.1 on an Intel Core i processor with a 3.0GHz CPU and 8GB of RAM.

Figures 4, 5 and 6 show the percentage reduction of the objective F as a function of time and of iteration number for each algorithm. To make the comparisons fair, i.e., to make sure all the algorithms minimize the same objective function, a common λ is used, chosen to be the smallest of the averaged values obtained by the individual algorithms (averaged over the simulation instances).

Fig. 4: Algorithm comparisons based on the relative error with respect to the final value of F obtained by each algorithm at its termination, i.e., at the first iterate satisfying the termination criterion in (15). Here SNR=12, and the regularization parameter has been selected using the EBIC criterion. As can be seen, in the low noise environment the MIST algorithm outperforms the rest, both in terms of time and iteration number.
Fig. 5: Similar comparisons as in Fig. 4, except that SNR=6. As can be seen, in the intermediate noise environment the MIST algorithm outperforms the others, both in terms of time and iteration number.
Fig. 6: Similar comparisons as in Fig. 4, except that SNR=1.7. As can be seen, in the high noise environment the MIST algorithm outperforms the rest, both in terms of time and iteration number.

Figures 7, 8 and 9 also show the percentage reduction of the objective F as a function of time and of iteration number for each algorithm. This time the MSE is used for the tuning of λ, and again, to make sure all the algorithms minimize the same objective function, a common λ is used, chosen to be the smallest of the averaged values obtained by the individual algorithms (averaged over the simulation instances).

Fig. 7: Algorithm comparisons based on the relative error with respect to the final value of F obtained by each algorithm at its termination, i.e., at the first iterate satisfying the termination criterion in (15). Here SNR=12 and an oracle selects the regularization parameter using the minimum MSE criterion. As can be seen, in the low noise environment the MIST algorithm outperforms the rest, both in terms of time and iteration number.
Fig. 8: Similar comparisons as in Fig. 7, except that SNR=6. As can be seen, in the intermediate noise environment the MIST algorithm outperforms the rest, both in terms of time and iteration number.
Fig. 9: Similar comparisons as in Fig. 7, except that SNR=1.7. As can be seen, in the high noise environment the MIST algorithm outperforms the rest, both in terms of time and iteration number.

Based on a large number of experiments we noticed that MIST, FISTA and ISTA outperformed M-FISTA in terms of run time. This could be due to the fact that M-FISTA requires computing a larger number of matrix-vector products, see Remark 1, and to the fact that it is a monotone version of a severely non-monotone FISTA. The high non-monotonicity could possibly be due to the non-convexity of the objective function.

Lastly, Figures 10, 11 and 12 show the average speed of each algorithm as a function of the regularization parameter λ.

Fig. 10: Average time vs. λ over the simulation instances when SNR=12. For the comparison, equally spaced values of λ are considered. As can be seen, except in one scenario, the MIST algorithm outperforms the others.
Fig. 11: Similar comparisons as in Fig. 10, except that SNR=6. As can be seen, except in one scenario, the MIST algorithm outperforms the others.
Fig. 12: Similar comparisons as in Fig. 10, except that SNR=1.7. As can be seen, except in one scenario, the MIST algorithm outperforms the others.

VI Conclusion

We have developed a momentum accelerated MM algorithm, MIST, for minimizing the ℓ_0 penalized least squares criterion (2) for linear regression problems. We have proved that MIST converges to a local minimizer without imposing any assumptions on the regression matrix A. Simulations on large data sets were carried out for different SNR values, and have shown that the MIST algorithm outperforms other popular MM algorithms in terms of time and iteration number.

VII Appendix

Proof of Theorem 1: Let . The quantities , and are quadratic functions, and by simple linear algebra they can easily be expanded in terms of , and . Namely,

(16)

Also,

(17)

and finally:

(18)

Using the above expansions and the definition of , we have that:

(19)

where:

Observing that , let:

(20)

Then, one has:

(21)
(22)

which holds for any . So, letting implies:

which completes the proof. ∎

Proof of Theorem 2: Looking at (8) it is obvious that:

If , by [22, Theorem 1] is unique and given by . If , again by [22, Theorem 1] we now have:

Hence, , completing the proof. ∎

Proof of Lemma 1: From Theorem 1, , so the sequence is bounded, which means it has a finite limit, say, . As a result:

(23)

Next, recall that and . So, using (21) in the proof of Theorem 1 (where ) with , , , , and respectively replaced by , , , and , we have:

(24)

where . The first term in (24) follows from the fact that:

Now, noting that:

(25)

which easily follows from basic linear algebra, (24) and (25) together imply that:

(26)

where the constant appearing in the bound is the smallest eigenvalue of the relevant matrix. Both terms on the right hand side in (26) are non-negative for all k. As a result, due to (23) we can use the pinching/squeeze argument on (26) to establish that both terms tend to zero as k → ∞. Consequently, x_{k+1} − x_k → 0 as k → ∞, which completes the proof. ∎

Proof of Lemma 2: Firstly, from Lemma 1 we have that as . In the last paragraph in the proof of that lemma (above) we also have that , and so, as . As a result, by the definition of , . Then, by the continuity of we have that:

Now, we need to show that, if as , then:

(27)

Consider an arbitrary component of , say, , in which case we must have . Without loss of generality, assume . Then, by the definition of , there are two scenarios to consider:

Regarding (a): For a large enough , we must either have or for all , which implies:

(28)

for all . In (28), is a continuous function of in both cases, which immediately implies (27).

Regarding (b): In general, could be reached in an oscillating fashion, i.e., for some we can have and for others . If this was the case for all then would approach a limit set of two points . However, having and implies . So, using the fact that:

(29)

must approach either or . In other words, there has to exist a large enough such that is approached either only from the left or the right for all , i.e.,

  • if for all , from (28) we have . So, noting that:

    (30)

    implies . As a result, , and using the definition of , we have:

    Hence, (27) is satisfied.

  • if for all , from (28) we have . So, (30) implies , and using the definition of , we have:

    Hence, (27) is again satisfied.

Since is arbitrary, the proof is complete. ∎

Proof of Lemma 3: The fixed points are obviously obtained by setting . So, any fixed point satisfies the equation:

(31)

The result is established by using the definition of in Theorem 2. Namely, if we easily obtain:

which reduces to C. If , we easily obtain:

(32)

which reduces to C. However, since:

(33)

(32) and (33) together imply , giving C. ∎

Proof of Lemma 4: Letting and , it can easily be shown that , where:

Now, , so suppose , . Then:

The second inequality in the above is due to C in Lemma 3.

Lastly, note that , and suppose . From C in Lemma 3 we have . Thus, supposing , from C we have for all . Thus:

and so, . Since , the proof is complete after letting . ∎

Proof of Lemma 5: Since it is assumed that is bounded, the sequence is also bounded, and thus, has at least one limit point. Denoting one of these by