 # Sparsity Based Methods for Overparameterized Variational Problems

Two complementary approaches have been extensively used in signal and image processing, leading to novel results: the sparse representation methodology and the variational strategy. Recently, a new sparsity-based model has been proposed, the cosparse analysis framework, which may potentially help in bridging sparse approximation based methods to the traditional total-variation minimization. Based on this, we introduce a sparsity-based framework for solving overparameterized variational problems. The latter have been used to improve the estimation of optical flow and also for general denoising of signals and images. However, the recovery of the space-varying parameters involved was not adequately addressed by traditional variational methods. We first demonstrate the efficiency of the new framework for one-dimensional signals in recovering piecewise linear and polynomial functions. Then, we illustrate how the new technique can be used for denoising and segmentation of images.


## I Introduction

Many successful signal and image processing techniques rely on the fact that the given signals or images of interest belong to a class described by a certain a priori known model. Given the model, the signal is processed by estimating the “correct” parameters of the model. For example, in the sparsity framework the assumption is that the signals belong to a union of low dimensional subspaces [1, 2, 3, 4]. In the variational strategy, a model is imposed on the variations of the signal, e.g., its derivatives are required to be smooth [5, 6, 7, 8].

Though both sparsity-based and variational-based approaches are widely used for signal processing and computer vision, they are often viewed as two different methods with little in common between them. One of the well known variational tools is the total-variation regularization, used mainly for denoising and inverse problems. It can be formulated as

$$\min_{\tilde{f}}\ \big\|g - M\tilde{f}\big\|_2^2 + \lambda\big\|\tilde{f}\big\|_{TV}, \quad (1)$$

where $g$ are the given noisy measurements, $M$ is a measurement matrix, $e$ is an additive (typically white Gaussian) noise, $\lambda$ is a regularization parameter, $\tilde{f}$ stands for the original unknown signal $f$ to be recovered, and $\|\cdot\|_{TV}$ is the (isotropic) total-variation seminorm, the $\ell_1$ norm of the gradient magnitudes.

The anisotropic version of (1), which we will use in this work, is

$$\min_{\tilde{f}}\ \big\|g - M\tilde{f}\big\|_2^2 + \lambda\big\|\Omega_{\text{DIF}}\tilde{f}\big\|_1, \quad (2)$$

where $\Omega_{\text{DIF}}$ is the finite-difference operator that returns the derivatives of the signal. In the 1D case it applies the filter $[1, -1]$, i.e.,

$$\Omega_{\text{DIF}} = \Omega_{\text{1D-DIF}} = \begin{bmatrix} 1 & -1 & 0 & \dots & \dots & 0 \\ 0 & 1 & -1 & \dots & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ddots & 1 & -1 & 0 \\ 0 & 0 & \dots & \dots & 1 & -1 \end{bmatrix}. \quad (3)$$

For images it returns the horizontal and vertical derivatives using the filters $[1, -1]$ and $[1, -1]^T$ respectively. Note that for one-dimensional signals there is no difference between (1) and (2), as the gradient equals the derivative. However, in the 2D case the first (Eq. (1)) considers the sum of gradients (square root of the squared sum of the directional derivatives), while the second (Eq. (2)) considers the absolute sum of the directional derivatives, approximated by finite differences.
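To make the operator concrete, here is a minimal numpy sketch (our own illustration, not the authors' code) of the 1D finite-difference operator of Eq. (3); it shows that a piecewise-constant signal is sparse after applying it:

```python
import numpy as np

def make_omega_1d(d):
    """(d-1) x d finite-difference operator of Eq. (3): row i computes f[i] - f[i+1]."""
    omega = np.zeros((d - 1, d))
    for i in range(d - 1):
        omega[i, i], omega[i, i + 1] = 1.0, -1.0
    return omega

# A piecewise-constant signal is sparse under this operator:
# the non-zeros of omega @ f sit exactly at the jump locations.
f = np.array([2., 2., 2., 5., 5., 5., 5., 1.])   # jumps after indices 2 and 6
omega = make_omega_1d(len(f))
print(np.nonzero(omega @ f)[0])                  # -> [2 6]
```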

Recently, a very interesting connection has been drawn between the total-variation minimization problem and the sparsity model. It has been shown that (2) can be viewed as an $\ell_1$-relaxation technique for approximating signals that are sparse in their derivatives domain, i.e., after applying the operator $\Omega_{\text{DIF}}$ on them [9, 10, 11]. Such signals are said to be cosparse under $\Omega_{\text{DIF}}$ in the analysis (co)sparsity model.

Notice that TV regularization is only one example of the variational framework. Another recent technique, which is the focus of this paper, is the overparameterization idea, which represents the signal as a combination of known functions weighted by space-variant parameters of the model [12, 13].

Let us introduce this overparameterized model via an example. If a 1D signal $f$ is known to be piecewise linear, its $i$-th element can be written as $f_i = a_i + i \cdot b_i$, where $a_i$ and $b_i$ are the local coefficients describing the local line-curve. As such, the vectors $a$ and $b$ should be piecewise constant, with discontinuities in the same locations. Each constant interval in $a$ and $b$ corresponds to one linear segment in $f$. When put in matrix-vector notation, $f$ can be written alternatively as

$$f = a + Zb, \quad (4)$$

where $Z$ is a diagonal matrix with the values $[1, 2, \dots, d]$ on its main diagonal. For images this parameterization would similarly use diagonal matrices holding the horizontal and vertical pixel coordinates (see Section V).

This strategy is referred to as "overparameterization" because the number of representation parameters is larger than the signal size. In the above 1D example, while the original signal contains $d$ unknown values, the recovery problem that seeks $a$ and $b$ has $2d$ variables. Clearly, there are many other parameterization options for signals, beyond the linear one. Such parameterizations have been shown to improve the denoising performance of the solution of the problem posed in (1) in some cases, and to provide very high quality results for optical flow estimation [13, 14].
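As a sanity check, the parameterization of Eq. (4) can be simulated in a few lines of numpy (an illustrative sketch; the jump location and coefficient values are arbitrary choices of ours):

```python
import numpy as np

d = 10
# piecewise-constant coefficient vectors sharing one jump (after index 4)
a = np.concatenate([np.full(5, 1.0), np.full(5, 7.0)])
b = np.concatenate([np.full(5, 2.0), np.full(5, 0.5)])

Z = np.diag(np.arange(1, d + 1, dtype=float))    # diagonal of 1..d
f = a + Z @ b                                    # Eq. (4): a piecewise-linear signal

# a and b are jointly sparse under the difference operator:
# their change points sit at the same location.
jumps_a = np.nonzero(np.diff(a))[0]
jumps_b = np.nonzero(np.diff(b))[0]
print(jumps_a.tolist(), jumps_b.tolist())        # -> [4] [4]
```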

### I-A Our Contribution

The true force behind overparameterization is that while it uses more variables than needed for representing the signals, these are often more naturally suited to describe their structure. For example, if a signal is piecewise linear then we may impose a constraint on the overparameterization coefficients $a$ and $b$ to be piecewise constant.

Note that piecewise-constant signals are sparse under the operator $\Omega_{\text{DIF}}$. Therefore, for each of the coefficient vectors we can use the tools developed in the analysis sparsity model [11, 15, 16, 17, 18]. However, in our case $a$ and $b$ are jointly sparse, i.e., their change points are collocated, and therefore an extension is necessary.

Constraints on the structure of the sparsity pattern of a representation have already been analyzed in the literature. They are commonly referred to as joint sparsity models, and are found both in the context of handling groups of signals [19, 20, 21, 22, 23, 24] and when considering blocks of non-zeros in a single representation vector [25, 26, 27, 28, 29]. We use these tools to extend the existing analysis techniques to handle the block sparsity in our overparameterized scheme.

In this paper we introduce a general sparsity-based framework for solving overparameterized variational problems. As the structure of these problems enables segmentation while recovering the signal, we provide an elegant way for recovering a signal from its deteriorated measurements using an approach that is accompanied by theoretical guarantees. We demonstrate the efficiency of the new framework for one-dimensional functions in recovering piecewise polynomial signals. Then we shift our view to images and demonstrate how the new approach can be used for denoising and segmentation.

### I-B Organization

This paper is organized as follows: In Section II we present the overparameterized variational model in more detail. In Section III we briefly describe the synthesis and analysis sparsity models. In Sections IV and V we introduce a new framework for solving overparameterized variational problems using sparsity. Section IV proposes a recovery strategy for the 1D polynomial case based on the SSCoSaMP technique with optimal projections [30, 31, 32]. We provide stable recovery guarantees for this algorithm for the case of an additive adversarial noise, and denoising guarantees for the case of a zero-mean white Gaussian noise. In Section V we extend our scheme beyond the 1D case to higher dimensional polynomial functions such as images. For this task we employ an extension of the GAPN algorithm to block sparsity. In Section VI we present experiments for linear overparameterization of images and one-dimensional signals. We demonstrate how the proposed method can be used for image denoising and segmentation. Section VII concludes our work and proposes future directions of research.

## II The Overparameterized Variational Framework

Considering again the linear relationship between the measurements $g$ and the unknown signal $f$,

$$g = Mf + e, \quad (5)$$

note that without prior knowledge on $f$ we cannot recover it from $g$ if $M$ has fewer rows than columns or if $e \neq 0$.

In the variational framework, a regularization is imposed on the variations of the signal $f$. One popular strategy for recovering the signal in this framework is by solving the following minimization problem:

$$\min_{\tilde{f}}\ \big\|g - M\tilde{f}\big\|_2^2 + \lambda\big\|A(\tilde{f})\big\|_p, \quad (6)$$

where $\lambda$ is the regularization weight and $p$ is the type of norm used with the regularization operator $A$, which is typically a local operator. For example, for $p = 1$ and $A = \Omega_{\text{DIF}}$ we get the TV minimization (Eq. (2)). Another example for a regularization operator is the Laplace operator $\Delta$. Other types of regularization operators and variational formulations can be found in [34, 35, 36, 37].

Recently, the overparameterized variational framework has been introduced as an extension to the traditional variational methodology [12, 13, 14, 38]. Instead of applying a regularization on the signal itself, it is applied on the coefficients of the signal under a global parameterization of the space. Each element of the signal can be modeled as $f_j = \sum_{i=1}^{n} x_{i,j}\tilde{b}_{i,j}$, where $\tilde{b}_i$ are the coefficient vectors and the $x_i$ contain the parameterization basis functions for the space.

Denoting by $X_i$ the diagonal matrix that has the vector $x_i$ on its diagonal, we can rewrite the above as $f = \sum_{i=1}^{n} X_i\tilde{b}_i$. With these notations, the overparameterized minimization problem becomes

$$\min_{\tilde{b}_i,\, 1 \le i \le n}\ \Big\|g - M\sum_{i=1}^{n} X_i\tilde{b}_i\Big\|_2^2 + \sum_{i=1}^{n}\lambda_i\big\|A_i(\tilde{b}_i)\big\|_{p_i}, \quad (7)$$

where each coefficient vector $\tilde{b}_i$ is regularized separately by the operator $A_i$ (which can be the same one for all the coefficients). We note that it is also possible to have more than one regularization per coefficient, as has been done in earlier work.

Returning to the example of a linear overparameterization, we have $f = X_1\tilde{b}_1 + X_2\tilde{b}_2$, where in this case $X_1 = I$ (the identity matrix) and $X_2 = Z$, a diagonal matrix with $[1, 2, \dots, d]$ on its diagonal. If $f$ is a piecewise linear function then the coefficient vectors $\tilde{b}_1$ and $\tilde{b}_2$ should be piecewise constant, and therefore it would be natural to regularize these coefficients with the gradient operator. This leads to the following minimization problem:

$$\min_{\tilde{b}_1,\tilde{b}_2}\ \big\|g - M(X_1\tilde{b}_1 + X_2\tilde{b}_2)\big\|_2^2 + \lambda_1\big\|\Omega_{\text{DIF}}\tilde{b}_1\big\|_1 + \lambda_2\big\|\Omega_{\text{DIF}}\tilde{b}_2\big\|_1, \quad (8)$$

which is a special case of (7). The two main advantages of the overparameterized formulation are: (i) the new unknowns have a simpler form (e.g., a piecewise linear signal is treated via piecewise-constant unknowns), and thus are easier to recover; and (ii) this formulation leads to recovering the parameters of the signal along with the signal itself.

The overparameterization idea, as introduced in [12, 13, 14, 38], builds upon the vast body of work on variational methods in signal processing. As such, there are no known guarantees for the quality of the recovery of the signal when using the formulation posed in (8) or its variants. Moreover, it has been shown that even in the pure denoising case ($M = I$), a poor recovery is achieved for $f$ and its parameterization coefficients. Note that the same happens even if more sophisticated regularizations are combined and applied on the coefficients, and eventually on $f$.

This leads us to look for another strategy to approach the problem of recovering a piecewise linear function from its deteriorated measurement $g$. Before describing our new scheme, we introduce in the next section the sparsity model that will aid us in developing this alternative strategy.

## III The Synthesis and Analysis Sparsity Models

A popular prior for recovering a signal $f$ from its distorted measurements $g$ (as posed in (5)) is the sparsity model [1, 2]. The idea behind it is that if we know a priori that $f$ resides in a union of low dimensional subspaces, which do not intersect trivially with the null space of $M$, then we can estimate $f$ stably by selecting the signal that belongs to this union of subspaces and best matches the measurements $g$ [3, 4].

In the classical sparsity model, the signal is assumed to have a sparse representation $\alpha$ under a given dictionary $D$, i.e., $f = D\alpha$ with $\|\alpha\|_0 \le k$, where $\|\cdot\|_0$ is the $\ell_0$ pseudo-norm that counts the number of non-zero entries in a vector, and $k$ is the sparsity of the signal. Note that each low dimensional subspace in the standard sparsity model, known also as the synthesis model, is spanned by a collection of columns from $D$. With this model we can recover $f$ by solving

$$\min_{\alpha}\ \|g - MD\alpha\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_0 \le k, \quad (9)$$

if $k$ is known, or

$$\min_{\alpha}\ \|\alpha\|_0 \quad \text{s.t.} \quad \|g - MD\alpha\|_2^2 \le \|e\|_2^2, \quad (10)$$

if we have information about the energy of the noise $e$. Obviously, once we get $\hat{\alpha}$, the desired recovered signal is simply $\hat{f} = D\hat{\alpha}$. As both of these minimization problems are NP-hard, many techniques have been proposed to approximate their solution, accompanied by recovery guarantees that depend on the properties of the matrices $M$ and $D$. These include the $\ell_1$-relaxation [40, 41, 42], known also as LASSO, matching pursuit (MP), orthogonal matching pursuit (OMP) [45, 46], compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP).
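To illustrate the greedy family mentioned above, here is a textbook OMP sketch for approximating (9). It is not the authors' implementation; we deliberately use $M = I$ and an orthonormal dictionary, a setting in which OMP provably recovers the exact support:

```python
import numpy as np

def omp(A, g, k):
    """Orthogonal matching pursuit: greedily build a k-sparse alpha
    approximating min ||g - A alpha||_2, as in problem (9) with A = MD."""
    residual, support = g.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        # least-squares fit of g on the atoms chosen so far
        coef, *_ = np.linalg.lstsq(A[:, support], g, rcond=None)
        residual = g - A[:, support] @ coef
    alpha = np.zeros(A.shape[1])
    alpha[support] = coef
    return alpha

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((40, 40)))   # orthonormal dictionary
alpha_true = np.zeros(40)
alpha_true[[3, 17]] = [2.0, -1.5]
g = Q @ alpha_true                                   # noiseless measurements, M = I
alpha_hat = omp(Q, g, k=2)
print(sorted(np.nonzero(alpha_hat)[0].tolist()))     # -> [3, 17]
```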

Another framework for modeling a union of low dimensional subspaces is the analysis one [10, 40]. This model considers the behavior of $\Omega f$, the signal after applying a given operator $\Omega$ on it, and assumes that this vector is sparse. Note that here the zeros are those that characterize the subspace in which $f$ resides, as each zero in $\Omega f$ corresponds to a row in $\Omega$ to which $f$ is orthogonal. Therefore, $f$ resides in a subspace orthogonal to the one spanned by these rows. We say that $f$ is cosparse under $\Omega$ with a cosupport $\Lambda$ if $\Omega_\Lambda f = 0$, where $\Omega_\Lambda$ is a sub-matrix of $\Omega$ with the rows corresponding to the set $\Lambda$.

The analysis variants of (9) and (10) for estimating $f$ are

$$\min_{\tilde{f}}\ \big\|g - M\tilde{f}\big\|_2^2 \quad \text{s.t.} \quad \big\|\Omega\tilde{f}\big\|_0 \le k, \quad (11)$$

where $k$ is the number of non-zeros in $\Omega\tilde{f}$, and

$$\min_{\tilde{f}}\ \big\|\Omega\tilde{f}\big\|_0 \quad \text{s.t.} \quad \big\|g - M\tilde{f}\big\|_2^2 \le \|e\|_2^2. \quad (12)$$

As in the synthesis case, these minimization problems are NP-hard, and approximation techniques have been proposed, including Greedy Analysis Pursuit (GAP), GAP noise (GAPN), analysis CoSaMP (ACoSaMP), analysis SP (ASP), analysis IHT (AIHT) and analysis HTP (AHTP).

## IV Overparameterization via the Analysis Sparsity Model

With the sparsity models now defined, we revisit the overparameterized variational problem. If we know that our signal $f$ is piecewise linear, then it is clear that the coefficient parameters $a$ and $b$ should be piecewise constant with the same discontinuity locations, when linear overparameterization is used. We denote by $k$ the number of these discontinuity locations.

As a reminder, we rewrite $f = a + Zb$. Note that $a$ and $b$ are jointly sparse under $\Omega_{\text{DIF}}$, i.e., $\Omega_{\text{DIF}}a$ and $\Omega_{\text{DIF}}b$ have the same non-zero locations. With this observation we can extend the analysis minimization problem (11) to support the structured sparsity in the vector $[a^T, b^T]^T$, leading to the following minimization problem:

$$\min_{a,b}\ \Big\|g - M[I, Z]\begin{bmatrix} a \\ b \end{bmatrix}\Big\|_2^2 \quad \text{s.t.} \quad \big\|\,|\Omega_{\text{DIF}}a| + |\Omega_{\text{DIF}}b|\,\big\|_0 \le k, \quad (13)$$

where $|\cdot|$ denotes applying an element-wise absolute value on the entries of a vector.

Note that we can have a similar formulation for this problem also in the synthesis framework, using the Heaviside dictionary

$$D_{\text{HS}} = \begin{bmatrix} 1 & 1 & \dots & 1 & 1 \\ 0 & 1 & \dots & \dots & 1 \\ \vdots & 0 & \ddots & \ddots & \vdots \\ \vdots & \vdots & \ddots & 1 & 1 \\ 0 & 0 & \dots & 0 & 1 \end{bmatrix}, \quad (14)$$

whose atoms are step functions of different lengths. We use the known observation that every one-dimensional signal with $k$ change points can be sparsely represented using $k + 1$ atoms from $D_{\text{HS}}$ ($k$ columns for representing the change points plus one for the DC). One way to observe this is via the fact that $\Omega_{\text{DIF}}\tilde{D}_{\text{HS}} = I$, where $\tilde{D}_{\text{HS}}$ is a submatrix of $D_{\text{HS}}$ obtained by removing its last column (the DC component). Therefore, one may recover the coefficient parameters $a$ and $b$, via their sparse representations $\alpha$ and $\beta$, by solving

$$\min_{\alpha,\beta}\ \Big\|g - M[I, Z]\begin{bmatrix} D_{\text{HS}} & 0 \\ 0 & D_{\text{HS}} \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix}\Big\|_2^2 \quad \text{s.t.} \quad \big\|\,|\alpha| + |\beta|\,\big\|_0 \le k, \quad (15)$$

where $a = D_{\text{HS}}\alpha$ and $b = D_{\text{HS}}\beta$. This minimization problem can be approximated using block-sparsity techniques such as the group-LASSO estimator, the mixed $\ell_2/\ell_1$ relaxation (an extension of the $\ell_1$ relaxation) [26, 27], the Block OMP (BOMP) algorithm, or the extensions of CoSaMP and IHT for structured sparsity. The joint sparsity framework can also be used with (15) [19, 20, 21, 22, 23, 24].
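The observation $\Omega_{\text{DIF}}\tilde{D}_{\text{HS}} = I$ is easy to verify numerically. The following small sketch (ours; the dimension is arbitrary) also shows that a signal with one change point is represented by just two atoms:

```python
import numpy as np

d = 6
D_hs = np.triu(np.ones((d, d)))     # Eq. (14): column j is a step of j+1 ones; last column = DC
omega = np.eye(d - 1, d) - np.eye(d - 1, d, k=1)   # Eq. (3): rows apply the filter [1, -1]

D_tilde = D_hs[:, :-1]              # drop the DC column
print(np.allclose(omega @ D_tilde, np.eye(d - 1)))  # -> True

# a signal with one change point needs only 2 atoms: one step plus the DC
f = np.array([3., 3., 1., 1., 1., 1.])
alpha = np.linalg.solve(D_hs, f)    # D_hs is invertible (unit-diagonal triangular)
print(np.count_nonzero(alpha))      # -> 2
```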

The problem with the above synthesis approach is twofold: (i) no recovery guarantees exist for this formulation with the dictionary $D_{\text{HS}}$; (ii) it is hard to generalize this model to higher-dimensional signals, e.g., images.

The reason that no theoretical guarantees are provided for the dictionary $D_{\text{HS}}$ is the high correlation between its columns, which creates high ambiguity and causes the classical synthesis techniques to fail in recovering the representations $\alpha$ and $\beta$. This problem has been addressed in several contributions that treat the signal directly rather than its representation [30, 31, 32, 51, 52, 53].

We introduce an algorithm that approximates the solutions of both (9) and (11) and has theoretical reconstruction performance guarantees for one-dimensional functions with measurement matrices that are nearly isometric for piecewise polynomial functions. In the next section we shall present another algorithm that does not have such guarantees but generalizes to higher dimensional functions.

Though until now we have restricted our discussion to piecewise linear functions, we turn now to the more general case of 1D piecewise polynomial functions of degree $n$. Note that this method approximates the following minimization problem, a generalization of (13) to any polynomial of degree $n$:

$$\min_{\tilde{b}_0,\dots,\tilde{b}_n}\ \Big\|g - M\sum_{i=0}^{n} Z^i\tilde{b}_i\Big\|_2^2 \quad \text{s.t.} \quad \Big\|\sum_{i=0}^{n}\big|\Omega_{\text{DIF}}\tilde{b}_i\big|\Big\|_0 \le k, \quad (16)$$

where $|\cdot|$ is an element-wise operation that calculates the absolute value of each entry of its argument.

We employ the signal space CoSaMP (SSCoSaMP) strategy [30, 31] (in a very similar way we could have used the analysis CoSaMP (ACoSaMP) [11, 16]) to approximate the solution of (16). This algorithm assumes the existence of a projection that, for a given signal, finds its closest signal (in the $\ell_2$-norm sense) that belongs to the model; note that in [30, 31] the projection is allowed to be near-optimal, in the sense that the projection error is close to the optimal error up to a multiplicative constant. In our case the model is the set of piecewise polynomial functions with $k$ jump points. This algorithm, along with the required projection, is presented in Appendix A.
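The projection onto the set of piecewise polynomial functions with $k$ jumps can be realized by a dynamic program over all segment boundaries. The following is a rough sketch of that idea (our own illustrative implementation with $O(kd^2)$ subproblems, not the algorithm of Appendix A):

```python
import numpy as np

def fit_error(f, s, e, degree):
    """SSE of the best least-squares polynomial of the given degree on f[s:e]."""
    if e - s <= degree:          # too few samples: a polynomial interpolates exactly
        return 0.0
    x = np.arange(s, e)
    c = np.polyfit(x, f[s:e], degree)
    r = np.polyval(c, x) - f[s:e]
    return float(r @ r)

def project_piecewise_poly(f, k, degree):
    """Closest piecewise polynomial with at most k jumps (k+1 segments),
    found by dynamic programming over the segment boundaries."""
    d = len(f)
    INF = float("inf")
    # best[m][j]: minimal SSE covering f[:j] with m segments
    best = [[INF] * (d + 1) for _ in range(k + 2)]
    back = [[0] * (d + 1) for _ in range(k + 2)]
    best[0][0] = 0.0
    for m in range(1, k + 2):
        for j in range(1, d + 1):
            for s in range(m - 1, j):
                cand = best[m - 1][s] + fit_error(f, s, j, degree)
                if cand < best[m][j]:
                    best[m][j], back[m][j] = cand, s
    cuts, j = [], d              # backtrack the jump locations
    for m in range(k + 1, 0, -1):
        j = back[m][j]
        cuts.append(j)
    return best[k + 1][d], sorted(cuts)[1:]   # drop the leading boundary 0

# an exactly piecewise-linear signal with one jump -> zero projection error
f = np.concatenate([1.0 + 2.0 * np.arange(6), 40.0 - np.arange(6)])
sse, cuts = project_piecewise_poly(f, k=1, degree=1)
print(round(sse, 10), cuts)      # -> 0.0 [6]
```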

### IV-A Recovery Guarantees for Piecewise Polynomial Functions

To provide theoretical guarantees for the recovery by SSCoSaMP, we employ two theorems from the literature. These lead to reconstruction error bounds for SSCoSaMP that guarantee stable recovery if the noise is adversarial, and an effective denoising effect if it is zero-mean white Gaussian.

Both theorems rely on the following property of the measurement matrix $M$, which is a special case of the D-RIP and the $\Omega$-RIP.

###### Definition IV.1

A matrix $M$ has a polynomial restricted isometry property of order $k$ (polynomial RIP) with a constant $\delta_k$ if for any piecewise polynomial function $f$ of order $n$ with $k$ jumps we have

$$(1 - \delta_k)\|f\|_2^2 \le \|Mf\|_2^2 \le (1 + \delta_k)\|f\|_2^2. \quad (17)$$

Having the polynomial RIP definition, we turn to present the first theorem, which treats the adversarial noise case.

###### Theorem IV.2 (based on Corollary 3.2 in the SSCoSaMP analysis)

(The cited corollary provides stable recovery guarantees for general sparse vectors under a given dictionary $D$, under the assumption that there exists a near-optimal projection algorithm that projects any vector to its closest sparse vector under the same dictionary. We can apply it here thanks to the optimal projection algorithm proposed in Section A-A.)

Let $f$ be a piecewise polynomial function of order $n$, let $e$ be an adversarial bounded noise, and let $M$ satisfy the polynomial RIP (17) with a sufficiently small constant $\delta_{3k}$. Then after a finite number of iterations, SSCoSaMP yields

$$\big\|\hat{f} - f\big\|_2 \le C\|e\|_2, \quad (18)$$

where $C$ is a constant depending on $\delta_{3k}$.

Note that the above theorem implies that we may compressively sense piecewise polynomial functions and achieve a perfect recovery in the noiseless case ($e = 0$). Note also that if $M$ is a subgaussian random matrix, then a number of measurements proportional (up to logarithmic factors) to the number of degrees of freedom of the model is sufficient [3, 11].

Though the above theorem is important for compressed sensing, it does not guarantee noise reduction, even for the case $M = I$, as the constant $C$ is not smaller than one. The reason for this is that the noise here is adversarial, leading to a worst-case bound. By introducing a random distribution for the noise, one may get better reconstruction guarantees. The following theorem assumes that the noise is zero-mean Gaussian, enabling effective denoising guarantees.

###### Theorem IV.3 (based on Theorem 1.7 in the block-sparse analysis)

(The cited theorem provides near-oracle performance guarantees for block-sparse vectors under a given dictionary $D$, under the assumption that there exists a near-optimal projection algorithm that projects any vector to its closest sparse vector under the same dictionary. We can apply it here thanks to the optimal projection algorithm proposed in Section A-A.)

Assume the conditions of Theorem IV.2, where now $e$ is a random zero-mean white Gaussian noise with variance $\sigma^2$. Then after a finite number of iterations, SSCoSaMP yields

$$\big\|\hat{f} - f\big\|_2 \le C\sqrt{(1 + \delta_{3k})3k}\left(1 + \sqrt{2(1 + \beta)\log(nd)}\right)\sigma, \quad (19)$$

with high probability (controlled by the parameter $\beta$).

The bound in the theorem can be stated on the expected error, instead of only holding with high probability, using the same proof technique. We remark that if we were given an oracle that foreknows the locations of the jumps in the parameterization, the error we would get would be proportional to $\sqrt{k}\sigma$. As the logarithmic factor in our bound is inevitable, we may conclude that our guarantee is optimal up to a constant factor.

Fig. 3: MSE of the recovered piecewise linear functions (left) and piecewise second-order polynomial functions (right) as a function of the noise variance σ for the methods BGAPN with and without the continuity constraint, and the optimal approximation with and without continuity post-processing. As a reference we compare to the non-local overparameterized TV (TVOPNL) approach.

Fig. 4: Recovery rate of piecewise second-order polynomial functions as a function of the sampling rate m/d for the methods BGAPN and SSCoSaMP.

## V Sparsity based Overparameterized Variational Algorithm for High Dimensional Functions

We now turn to generalize the model in (13) to support other overparameterization forms, including higher dimensional functions such as images. We consider the case where an upper bound for the noise energy $\|e\|_2$ is given rather than the sparsity $k$, as is common in many applications. Notice that for the synthesis model such a generalization is not trivial: while it is easy to extend the operator $\Omega_{\text{DIF}}$ to high dimensions, it is not clear how to do this for the Heaviside dictionary.

Therefore we consider an overparameterized version of (12), where the noise energy is known and the analysis model is used. Let $X_1, \dots, X_n$ be the matrices of the space variables and $\tilde{b}_1, \dots, \tilde{b}_n$ their coefficient parameters. For example, in a 2D (image) case of piecewise linear parameterization, $X_1$ will be the identity matrix, $X_2$ will be a diagonal matrix with the horizontal pixel coordinates on its main diagonal, and $X_3$ will similarly be a diagonal matrix with the vertical pixel coordinates on its main diagonal. Assuming that all the coefficient parameters are jointly sparse under a general operator $\Omega$, we may recover these coefficients by solving

$$[\hat{b}_1^T, \dots, \hat{b}_n^T]^T = \min_{\tilde{b}_1,\dots,\tilde{b}_n}\ \Big\|\sum_{i=1}^{n}\big|\Omega\tilde{b}_i\big|\Big\|_0 \quad \text{s.t.} \quad \Big\|g - M[X_1, \dots, X_n]\begin{bmatrix}\tilde{b}_1 \\ \vdots \\ \tilde{b}_n\end{bmatrix}\Big\|_2 \le \|e\|_2. \quad (20)$$

Having an estimate for all these coefficients, our approximation of the original signal is $\hat{f} = \sum_{i=1}^{n} X_i\hat{b}_i$.
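As a concrete illustration, the 2D space-variable matrices for the linear (planar) parameterization described above can be constructed as follows (a minimal numpy sketch; the coordinate convention and dimensions are our own choices):

```python
import numpy as np

h, w = 4, 5                          # image height and width
d = h * w
rows, cols = np.indices((h, w))      # vertical and horizontal pixel coordinates

X0 = np.eye(d)                                  # DC parameterization
Xh = np.diag(cols.ravel().astype(float))        # horizontal coordinate on the diagonal
Xv = np.diag(rows.ravel().astype(float))        # vertical coordinate on the diagonal

# a planar image (piecewise linear with no jumps): f = b0 + x*bh + y*bv
b0, bh, bv = 3.0, 0.5, -1.0
f = X0 @ (b0 * np.ones(d)) + Xh @ (bh * np.ones(d)) + Xv @ (bv * np.ones(d))
print(f.reshape(h, w)[1, 2])         # 3.0 + 0.5*2 - 1.0*1 -> 3.0
```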

As the minimization problem in (20) is NP-hard, we suggest solving it by a generalization of the GAPN algorithm: the block GAPN (BGAPN). We introduce this extension in Appendix B.

This algorithm aims at finding, in a greedy way, the rows of $\Omega$ to which the coefficient vectors are orthogonal. Notice that once we find the indices of these rows, i.e., the set $\Lambda$ that satisfies $\Omega_\Lambda\tilde{b}_i = 0$ for $1 \le i \le n$ ($\Omega_\Lambda$ is the submatrix of $\Omega$ with the rows corresponding to the set $\Lambda$), we may approximate the coefficients by solving

$$[\hat{b}_1^T, \dots, \hat{b}_n^T]^T = \min_{\tilde{b}_1,\dots,\tilde{b}_n}\ \sum_{i=1}^{n}\big\|\Omega_\Lambda\tilde{b}_i\big\|_2^2 \quad \text{s.t.} \quad \Big\|g - M[X_1, \dots, X_n]\begin{bmatrix}\tilde{b}_1 \\ \vdots \\ \tilde{b}_n\end{bmatrix}\Big\|_2 \le \|e\|_2. \quad (21)$$

Therefore, BGAPN approximates (20) by first finding $\Lambda$. It starts with a $\Lambda$ that includes all the rows of $\Omega$, and then gradually removes elements from it by solving the problem posed in (21) at each iteration and removing the row of $\Omega_\Lambda$ that has the largest correlation with the current intermediate solution.
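The cosupport-pruning loop can be sketched for the simplest setting: denoising ($M = I$) a piecewise-constant 1D signal with a single coefficient vector. This is a loose illustration of the greedy idea, not the BGAPN of Appendix B; in particular, the quadratic proxy used in place of (21) and the value of `lam` are our own simplifications:

```python
import numpy as np

def cosupport_pruning_sketch(g, omega, eps, lam=10.0):
    """Greedy cosupport pruning for denoising (M = I): keep a cosupport Lam,
    solve a quadratic proxy of (21) in closed form, and drop the row of omega
    that is most violated by the current estimate. Illustrative only."""
    d = len(g)
    Lam = list(range(omega.shape[0]))            # start with all rows of omega
    while True:
        O = omega[Lam]
        # x = argmin ||g - x||^2 + lam * ||O x||^2   (closed-form solve)
        x = np.linalg.solve(np.eye(d) + lam * O.T @ O, g)
        if np.linalg.norm(g - x) <= eps or not Lam:
            return x, Lam
        Lam.pop(int(np.argmax(np.abs(O @ x))))   # remove the most active row

# noisy piecewise-constant signal with one jump between indices 3 and 4
omega = np.eye(7, 8) - np.eye(7, 8, k=1)         # 1D difference operator
rng = np.random.default_rng(1)
f = np.concatenate([np.full(4, 2.0), np.full(4, -1.0)])
g = f + 0.01 * rng.standard_normal(8)
x, Lam = cosupport_pruning_sketch(g, omega, eps=0.1)
print(3 not in Lam)                              # the jump row leaves the cosupport
```

Once the jump row is removed, the remaining quadratic penalty flattens each segment separately, which is exactly the role of (21) in the full algorithm.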

Note that there are no known recovery guarantees for BGAPN of the form we had for SSCoSaMP. Therefore, we demonstrate its efficiency in several experiments in the next section. As explained in Appendix B, the advantages of BGAPN over SSCoSaMP, despite the lack of theoretical guarantees, are that (i) the sparsity $k$ does not need to be known in advance, and (ii) it is easier to use with higher dimensional functions.

Before we move to the next section, we note that one of the advantages of the above formulation and the BGAPN algorithm is the relative ease of adding new constraints to it. For example, we may encounter piecewise polynomial functions that are also continuous, yet the current formulation contains no continuity constraint. As we shall see in the next section, its absence allows jumps at the discontinuity points between the polynomial segments, and therefore it is important to add it to the algorithm to get a better reconstruction.

One possibility to solve this problem is to add a continuity constraint at the jump points of the signal. In Appendix B we also present a modified version of the BGAPN algorithm that imposes such a continuity constraint, and in the next section we shall see how this handles the problem. Note that this is only one example of a constraint one may add to the BGAPN technique; in images, for instance, one may add a smoothness constraint along the edges' directions.

## VI Experiments

To demonstrate the efficiency of the proposed method we perform several tests. We start with the one-dimensional case, testing our polynomial fitting approach, with and without the continuity constraint, on continuous piecewise polynomials of first and second degrees. We compare these results with the optimal polynomial approximation scheme presented in Section IV and with a variational approach from the literature. We continue with a compressed sensing experiment for discontinuous piecewise polynomials and compare BGAPN with SSCoSaMP. Then we perform some tests on images using BGAPN. We start by denoising cartoon images using the piecewise linear model. We compare our outcome with that of TV denoising and show that our result does not suffer from a staircasing effect. We also compare to a TV denoising version with overparameterization. Then we show how our framework may be used for image segmentation, drawing a connection to the Mumford-Shah functional [57, 58]. We compare our results with the ones obtained by the popular graph-cuts based segmentation.

### VI-A Continuous Piecewise Polynomial Functions Denoising

In order to check the performance of the polynomial fitting, we generate random continuous piecewise-linear and second-order polynomial functions with $d$ samples, $k$ jumps and a fixed dynamic range. We then contaminate the signal with a white Gaussian noise with a standard deviation $\sigma$ taken from a set of increasing values.

We compare the recovery result of BGAPN, with and without the continuity constraint, with that of the optimal approximation. (We have run the same experiment with the BOMP algorithm, adopting the synthesis framework, with and without the continuity constraint, and observed that it performs very similarly to BGAPN.) Figs. 1 and 2 present BGAPN reconstruction results for the linear and second-order polynomial cases respectively, for two different noise levels. It can be observed that the addition of the continuity constraint is essential for the correctness of the recovery; indeed, without it we get jumps between the segments. Note also that the number of jumps in our recovery may differ from that of the original signal, as BGAPN has no preliminary information about it. Nevertheless, it still manages to recover the parameterization well, especially in the lower noise case.

The possibility of providing a parametric representation is one of the advantages of our method. Indeed, one may achieve good denoising results in terms of mean squared error (MSE) without using the linear model, e.g., with methods such as free-knot splines. However, the approximated function is not guaranteed to be piecewise linear, and therefore learning the change points from it is sub-optimal; see the references therein for more details.

To evaluate our method with respect to its MSE, we compare it with the optimal approximation for piecewise polynomial functions presented in Appendix A-A. Note that the target signals are continuous while this algorithm does not use this assumption. Therefore, we add the continuity constraint to this method as a post-processing step (unlike BGAPN, which merges it into its iterations): we take the change points it has recovered and project the noisy measurement onto its closest continuous piecewise polynomial function with the same discontinuities.

Figure 3 presents the recovery performance of BGAPN and the projection algorithm, with and without the continuity constraint. Without the constraint, BGAPN achieves better recovery performance. This is due to the fact that it is not restricted to the number of change points of the initial signal; it can therefore use more points, adapt itself better to the signal, and achieve a lower MSE. However, after adding the constraint, in the piecewise linear case the optimal projection achieves a better recovery error. The reason is that, as the optimal projection uses the exact number of points, it finds the change locations more accurately. Note, though, that in the case of second-order polynomial functions BGAPN attains a better recovery. This happens because this program uses the continuity constraint also within its iterations, and not only at the final step as the projection algorithm does. As the second-order polynomial case is more complex than the piecewise linear one, the impact of the continuity prior is higher and more significant than the information on the number of change points.

We also compare to the non-local overparameterized TV algorithm (TVOPNL), using code provided by its authors, which was shown to be better for the task of line segmentation when compared with several alternatives. Clearly, our proposed scheme achieves better recovery performance than TVOPNL, demonstrating the superiority of our line segmentation strategy.

### VI-B Compressed Sensing of Piecewise Polynomial Functions

We also perform a compressed sensing experiment, in which we compare the performance of SSCoSaMP, with the optimal projection, and BGAPN for recovering a second-order polynomial function with $k$ jumps from a small set of linear measurements. Each entry in the measurement matrix $M$ is drawn from an i.i.d. normal distribution, and then all columns are normalized to have a unit norm. The polynomial functions are selected as in the previous experiment but with two differences: (i) we omit the continuity constraint; and (ii) we normalize the signals to have a unit norm.

Fig. 4 presents the recovery rate (in the noiseless case, $e = 0$) of each program as a function of the number of measurements $m$. Note that for a very small or large number of samples BGAPN behaves better, while in the middle range SSCoSaMP achieves a better reconstruction rate. Overall, their performance is roughly the same.

### VI-C Cartoon Image Denoising

We turn to evaluate the performance of our approach on images. A piecewise smooth model is considered a good model for images, especially those with no texture, i.e., cartoon images [61, 62]. Therefore, we use a linear overparameterization of the two-dimensional plane and employ the two-dimensional difference operator that calculates the horizontal and vertical discrete derivatives of an image by applying the filters $[1, -1]$ and $[1, -1]^T$ on it. In this case, the problem in (20) becomes

$$[\hat{b}_0^T, \hat{b}_h^T, \hat{b}_v^T]^T = \min_{\tilde{b}_0,\tilde{b}_h,\tilde{b}_v}\ \Big\|\big|\Omega\tilde{b}_0\big| + \big|\Omega\tilde{b}_h\big| + \big|\Omega\tilde{b}_v\big|\Big\|_0 \quad \text{s.t.} \quad \Big\|g - [X_0, X_h, X_v]\begin{bmatrix}\tilde{b}_0 \\ \tilde{b}_h \\ \tilde{b}_v\end{bmatrix}\Big\|_2 \le \|e\|_2, \quad (22)$$

where $\tilde{b}_0$, $\tilde{b}_h$ and $\tilde{b}_v$ are the coefficient vectors of the DC, horizontal and vertical parameterizations respectively, and $X_0$, $X_h$ and $X_v$ are their corresponding space-variable matrices.

We apply this scheme for denoising two cartoon images, swoosh and sign, and compare our results with those of TV denoising. Figs. 5 and 6 present the recovery of swoosh and sign from their noisy versions, contaminated with additive white Gaussian noise. Note that we achieve better recovery results than TV and do not suffer from its staircasing effect. We have tuned the parameters of TV separately for each image to optimize its output quality, while we have used the same setup for our method in all the denoising experiments. To get good quality with BGAPN, we run the algorithm several times with different sets of parameters (which are the same for all images) and then provide as an output the average image of all the runs. Notice that using this technique with TV degrades its results.

To test whether our better denoising is merely a result of using overparameterization or an outcome of our new framework, we also compare to TV with linear overparameterization (code provided by the authors). Notice that while plugging overparameterization directly into TV improves the results in some cases, this is not the case for the images here. Therefore, we see that our new framework, which links sparsity with overparameterization, has an advantage over the old approach that still acts within the variational scheme.

We could use other forms of overparameterization, such as cubic instead of planar, or add other directions of derivatives in addition to the horizontal and vertical ones. For example, one may apply our scheme using an operator that also calculates the diagonal derivatives. Such choices may lead to an improvement in different scenarios. Future work should focus on learning the overparameterizations and the types of derivatives that should be used for denoising and for other tasks. We believe that such learning has the potential to lead to state-of-the-art results.

### VI-D Image Segmentation

As a motivation for the task of segmentation, we present the denoising of an image with texture. We continue using the model in (22) and consider the house image as an example. Fig. 7 demonstrates the denoising result we get for this image. Note that here as well we do not suffer from the staircasing effect that appears in the TV recovery. However, due to the nature of our model we lose the texture and therefore achieve an inferior PSNR compared to TV denoising. (The lower PSNR we get with our method stems from our model being linear and therefore less able to adapt itself to the texture. By using a cubic overparameterization we obtain a PSNR equal to that of TV. Note also that for larger noise magnitudes the recovery performance of our algorithm in terms of PSNR becomes better than TV even with the linear model, as in these conditions we tend to lose the texture anyway.)

Though the removal of texture is not favorable for the task of denoising, it makes the recovery of salient edges in the original image easier. In Fig. 8 we present the gradient map of our recovered image and that of the original image. It can be seen that while the gradients of the original image also capture the texture changes, our method finds only the main edges. (We have tested our approach also in the presence of a blur operator; the edges are preserved in this case as well.) This motivates us to use our scheme for segmentation.

Since our scheme divides the image into piecewise linear regions, we can view our strategy as an approach that minimizes the Mumford-Shah functional [57, 58]. On the other hand, if the image has only two regions, our segmentation result can be viewed as a solution of the Chan-Vese functional, with the difference that we model each region by a polynomial function instead of approximating it by a constant.

We present our segmentation results for three images; for each we display its piecewise constant version together with its boundary map. Our segmentation results appear in Figs. 9, 10 and 11. We compare our results to the popular graph-cuts based segmentation. Notice that we achieve comparable performance: in some places our method behaves better, while in others the graph-cuts strategy provides a better result.

Though we get a good segmentation, it is clear that there is still much room for improvement compared to the current state-of-the-art. One direction for improvement is to use more filters within the analysis operator. Another is to calculate the gradients of the coefficient parameters instead of the recovered image, as they are supposed to be truly piecewise constant. We leave these ideas for future work.

## VII Conclusion and Future Work

This work has presented a novel framework for solving the overparameterized variational problem using sparse representations. We have demonstrated how this framework can be used both for one-dimensional and two-dimensional functions, while a generalization to higher dimensions (such as 3D) is straightforward. We have solved the problem of line fitting for piecewise polynomial 1D signals and then shown how the new technique can be used for compressed sensing, denoising and segmentation.

Though this work has focused mainly on linear overparameterizations, the extension to other forms is straightforward. However, to keep the discussion as simple as possible, we have chosen to use simple forms of overparameterization in the experiments section. As future research, we believe that a learning process should be added to our scheme. It should adapt the functions of the space variables and the filters to the signal at hand. We believe that this has the potential to lead to state-of-the-art results in segmentation, denoising and other signal processing tasks. Combining our scheme with the standard sparse representation approach may make it possible to also support images with texture. This would lead to a scheme that works globally on the image for the cartoon part and locally for the texture part. Another route for future work is to integrate our scheme into the state-of-the-art overparameterization-based algorithm for optical flow.

## Acknowledgment

The authors would like to thank Tal Nir and Guy Rosman for providing their code for the experiments. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program, ERC grant agreement no. 320649. This research is partially supported by AFOSR, ARO, NSF, ONR, and NGA. The authors would like to thank the anonymous reviewers for their helpful and constructive comments that greatly contributed to improving this paper.

## Appendix A The SSCoSaMP Algorithm

For approximating (16), we use a block sparsity variant of SSCoSaMP and adapt it to our model; it is presented in Algorithm 1. Due to the equivalence between the two problem forms, we use the latter in the algorithm.

This method uses a projection that, given a signal, finds its closest piecewise polynomial function with a given number of jump points. We calculate this projection using dynamic programming. Our strategy is a generalization of the one that appears in [11, 64] and is presented in the next subsection.

The halting criterion we use in Algorithm 1 relies on a given small constant. Other options for stopping criteria are discussed elsewhere.

### A-A Optimal Approximation using Piecewise Polynomial Functions

Our projection technique uses the fact that once the jump points are set, the parameters of the polynomial in each segment can be calculated optimally by solving the least squares minimization problem

$$\left\| \left[ I_{[t,l]}, Z_{[t,l]}, \dots, Z^n_{[t,l]} \right] \begin{bmatrix} b_0[t,l] \\ b_1[t,l] \\ \vdots \\ b_n[t,l] \end{bmatrix} - g_{[t,l]} \right\|_2^2. \tag{23}$$

where $g_{[t,l]}$ is the sub-vector of $g$ supported by the indices $t$ to $l$ ($t \le l$) and $Z_{[t,l]}$ is the (square) sub-matrix of $Z$ corresponding to the indices $t$ to $l$. We denote by $P_n(g_{[t,l]})$ the polynomial function we get by solving (23). In the case that the size of the segment is smaller than the number of parameters, e.g., a segment of size one for a linear function, the above minimization problem has infinitely many options for setting the parameters. However, all of them lead to the same result, which is keeping the values of the points in the segment, i.e., having $P_n(g_{[t,l]}) = g_{[t,l]}$.
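The per-segment least squares fit of (23) can be sketched with a Vandermonde matrix; the helper below is our own illustration (names and the zero-based local indexing are assumptions):

```python
import numpy as np

def fit_segment(g, t, l, n):
    """Least squares fit of a degree-n polynomial to the samples
    g[t], ..., g[l]; returns the coefficients and the fitted values,
    i.e., the projection of the segment onto degree-n polynomials."""
    z = np.arange(l - t + 1, dtype=float)        # local sample positions
    V = np.vander(z, n + 1, increasing=True)     # columns: 1, z, ..., z^n
    b, *_ = np.linalg.lstsq(V, g[t:l + 1], rcond=None)
    return b, V @ b
```

Note that `lstsq` also covers the underdetermined case mentioned above (segment shorter than the number of parameters), where the fitted values simply reproduce the segment.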

Denote by $S_n(g_{[1,\tilde{d}]}, k)$ the optimal approximation of the signal $g$ by a piecewise polynomial function of degree $n$ with $k$ jumps. It can be calculated by solving the following recursive minimization problem

$$\hat{t} = \operatorname*{argmin}_{1 \le t < \tilde{d}} \; \left\| S_n\left(g_{[1,t]}, k-1\right) - g_{[1,t]} \right\|_2^2 + \left\| P_n\left(g_{[t+1,\tilde{d}]}\right) - g_{[t+1,\tilde{d}]} \right\|_2^2, \tag{24}$$

and setting

$$S_n\left(g_{[1,\tilde{d}]}, k\right) = \begin{bmatrix} S_n\left(g_{[1,\hat{t}]}, k-1\right) \\ P_n\left(g_{[\hat{t}+1,\tilde{d}]}\right) \end{bmatrix}. \tag{25}$$

The vectors $S_n(g_{[1,t]}, k-1)$ can be calculated recursively using (24). The recursion ends with the base case $S_n(g_{[1,t]}, 0) = P_n(g_{[1,t]})$.

This leads us to the following algorithm for calculating an optimal approximation for a signal $g$. Notice that this algorithm also provides us with the parameterization of the piecewise polynomial.

1. Calculate $S_n(g_{[1,t]}, 0) = P_n(g_{[1,t]})$ for $1 \le t \le \tilde{d}$.

2. For $j = 1, \dots, k-1$ do

• Calculate $S_n(g_{[1,t]}, j)$ for $1 \le t \le \tilde{d}$ using (24) and (25).

3. Calculate $S_n(g_{[1,\tilde{d}]}, k)$ using (24) and (25).

Denoting by $C_P$ the worst case complexity of calculating the projection $P_n$ for any pair of indices, we have that the complexity of step 1) is $O(\tilde{d} C_P)$; of step 2) is $O(k \tilde{d}^2 C_P)$, as the computation of each projection error is of complexity $C_P$; and of step 3) is $O(\tilde{d} C_P)$. Summing all together, we get a total complexity of $O(k \tilde{d}^2 C_P)$ for the algorithm, which is a polynomial complexity since $C_P$ is polynomial.
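As an illustration of the dynamic program described above, the following sketch computes an optimal piecewise polynomial approximation with at most a given number of jumps. All names are ours, and this is a straightforward, unoptimized rendering of the recursion, not the authors' code:

```python
import numpy as np

def seg_error(g, t, l, n):
    """Squared error of the best degree-n polynomial fit on g[t..l]."""
    z = np.arange(l - t + 1, dtype=float)
    V = np.vander(z, n + 1, increasing=True)
    b, *_ = np.linalg.lstsq(V, g[t:l + 1], rcond=None)
    return float(np.sum((V @ b - g[t:l + 1]) ** 2))

def piecewise_poly_dp(g, k, n):
    """Optimal approximation of g by a piecewise degree-n polynomial
    with at most k jumps, via the dynamic-programming recursion.
    Returns the minimal squared error and the chosen jump positions."""
    d = len(g)
    # err[j][t]: error of the best approximation of g[0..t] with j jumps
    err = [[seg_error(g, 0, t, n) for t in range(d)]]
    split = []
    for j in range(1, k + 1):
        row, srow = [], []
        for t in range(d):
            best, arg = err[j - 1][t], None   # option: use no extra jump
            for s in range(t):                # option: last jump after s
                cand = err[j - 1][s] + seg_error(g, s + 1, t, n)
                if cand < best:
                    best, arg = cand, s
            row.append(best)
            srow.append(arg)
        err.append(row)
        split.append(srow)
    # backtrack the jump positions
    jumps, t, j = [], d - 1, k
    while j > 0:
        s = split[j - 1][t]
        if s is not None:
            jumps.append(s + 1)
            t = s
        j -= 1
    return err[k][d - 1], sorted(jumps)
```

The nested loops over `t` and `s`, each invoking one segment fit, reproduce the quadratic-in-length, linear-in-jumps cost discussed above.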

## Appendix B The Block GAPN Algorithm

For approximating (20), we extend the GAPN technique to block sparsity and adapt it to our model; it is presented in Algorithm 2. Notice that this program, unlike SSCoSaMP, does not assume knowledge of the cosparsity level or the existence of an optimal projection onto the signals' low dimensional union of subspaces. Note also that it suits a general form of overparameterization and not only 1D piecewise polynomial functions. It is possible to accelerate BGAPN for large-scale problems by removing several elements from the cosupport at a time, instead of one, in the cosupport update stage.

Ideally, we would expect that after several iterations of updating the cosupport in BGAPN the corresponding analysis coefficients would vanish. However, many signals are only nearly cosparse, i.e., they have a few significantly large analysis coefficients while the rest are smaller than a small constant. Therefore, a natural stopping criterion in this case is to stop when the maximal value among the cosupport coefficients is smaller than this constant. This is the stopping criterion we use throughout this paper for BGAPN. Of course, this is not the only option for a stopping criterion; e.g., one may look at the relative solution change in each iteration, or use a constant number of iterations if an appropriate number is known in advance.

We also present a modified version of BGAPN, in Algorithm 3, that imposes a continuity constraint on the change points. This is done by creating a binary diagonal matrix $W_\Omega$ such that, in each iteration of the program, a diagonal element is one if it corresponds to a change point and zero otherwise. This matrix serves as a weight matrix that penalizes discontinuity at the change points. This is done by adding the regularizing term

$$\gamma \left\| W_\Omega \left[ X_1, \dots, X_n \right] \begin{bmatrix} \tilde{b}_1 \\ \vdots \\ \tilde{b}_n \end{bmatrix} \right\|_2^2$$

to the minimization problem in (26) (Eq. (27) in Algorithm 3), which leads to the additional step (Eq. (28)) in the modified program.
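A generic way to incorporate such a quadratic penalty into a least squares step is by stacking, as the sketch below illustrates. This is a standard technique and our own illustration under assumed names, not the exact BGAPN update of Eq. (28):

```python
import numpy as np

def regularized_ls(X, g, M, W, gamma):
    """Solve min_b ||g - X b||^2 + gamma * ||W M b||^2 by stacking
    the data-fit and penalty blocks into one least squares problem.
    X, M: matrices; W: (e.g., binary diagonal) weight matrix."""
    A = np.vstack([X, np.sqrt(gamma) * (W @ M)])
    y = np.concatenate([g, np.zeros(M.shape[0])])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b
```

With `gamma = 0` this reduces to the plain data-fit solution, while a large `gamma` drives the weighted entries of `M b` toward zero, which is the role the penalty plays at the change points.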

## References

•  A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
•  R. Gribonval and M. Nielsen, “Sparse representations in unions of bases,” IEEE Trans. Inf. Theory., vol. 49, no. 12, pp. 3320–3325, Dec. 2003.
•  T. Blumensath and M. Davies, “Sampling theorems for signals from the union of finite-dimensional linear subspaces,” IEEE Trans. Inf. Theory., vol. 55, no. 4, pp. 1872–1882, Apr. 2009.
•  Y. M. Lu and M. N. Do, “A theory for sampling signals from a union of subspaces,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2334 –2345, Jun. 2008.
•  P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 12, no. 7, pp. 629–639, Jul 1990.
•  L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: Nonlinear Phenomena, vol. 60, no. 1-4, pp. 259–268, 1992.
•  V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision, vol. 22, no. 1, pp. 61–79, 1997.
•  J. Weickert, Anisotropic diffusion in image processing.    Teubner, Stuttgart, Germany, 1998.
•  D. Needell and R. Ward, “Stable image reconstruction using total variation minimization,” SIAM Journal on Imaging Sciences, vol. 6, no. 2, pp. 1035–1058, 2013.
•  S. Nam, M. E. Davies, M. Elad, and R. Gribonval, “The cosparse analysis model and algorithms,” Appl. Comput. Harmon. Anal., vol. 34, no. 1, pp. 30 – 56, 2013.
•  R. Giryes, S. Nam, M. Elad, R. Gribonval, and M. E. Davies, “Greedy-like algorithms for the cosparse analysis model,” Linear Algebra and its Applications, vol. 441, no. 0, pp. 22 – 60, Jan. 2014, special Issue on Sparse Approximate Solution of Linear Systems.
•  T. Nir and A. Bruckstein, “On over-parameterized model based TV-denoising,” in Signals, Circuits and Systems, 2007. ISSCS 2007. International Symposium on, vol. 1, July 2007, pp. 1–4.
•  T. Nir, A. M. Bruckstein, and R. Kimmel, “Over-parameterized variational optical flow,” International Journal of Computer Vision, vol. 76, no. 2, pp. 205–216, 2008.
•  G. Rosman, S. Shem-Tov, D. Bitton, T. Nir, G. Adiv, R. Kimmel, A. Feuer, and A. M. Bruckstein, “Over-parameterized optical flow using a stereoscopic constraint,” in Scale Space and Variational Methods in Computer Vision, ser. Lecture Notes in Computer Science, A. M. Bruckstein, B. M. Haar Romeny, A. M. Bronstein, and M. M. Bronstein, Eds.    Springer Berlin Heidelberg, 2012, vol. 6667, pp. 761–772.
•  R. Giryes, S. Nam, R. Gribonval, and M. E. Davies, “Iterative cosparse projection algorithms for the recovery of cosparse vectors,” in The 19th European Signal Processing Conference (EUSIPCO-2011), Barcelona, Spain, 2011.
•  R. Giryes and M. Elad, “CoSaMP and SP for the cosparse analysis model,” in The 20th European Signal Processing Conference (EUSIPCO-2012), Bucharest, Romania, 2012.
•  T. Peleg and M. Elad, “Performance guarantees of the thresholding algorithm for the Co-Sparse analysis model,” submitted to IEEE Trans. on Information Theory.
•  E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall, “Compressed sensing with coherent and redundant dictionaries,” Appl. Comput. Harmon. Anal., vol. 31, no. 1, pp. 59 – 73, 2011.
•  S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Trans. Signal Process., vol. 53, no. 7, pp. 2477–2488, July 2005.
•  J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,” Signal Processing, vol. 86, no. 3, pp. 572 – 588, 2006, sparse Approximations in Signal and Image Processing.
•  J. A. Tropp, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Processing, vol. 86, no. 3, pp. 589 – 602, 2006, sparse Approximations in Signal and Image Processing.
•  D. Wipf and B. Rao, “An empirical bayesian strategy for solving the simultaneous sparse approximation problem,” IEEE Trans. Signal Process., vol. 55, no. 7, pp. 3704–3716, July 2007.
•  M. Mishali and Y. C. Eldar, “Reduce and boost: Recovering arbitrary sets of jointly sparse vectors,” IEEE Trans. Signal Process., vol. 56, no. 10, pp. 4692–4702, Oct. 2008.
•  M. Fornasier and H. Rauhut, “Recovery algorithms for vector-valued data with joint sparsity constraints,” SIAM Journal on Numerical Analysis, vol. 46, no. 2, pp. 577–613, 2008.
•  M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 1, pp. 49–67, 2006.
•  Y. C. Eldar and M. Mishali, “Robust recovery of signals from a structured union of subspaces,” Information Theory, IEEE Transactions on, vol. 55, no. 11, pp. 5302–5316, Nov 2009.
•  M. Stojnic, F. Parvaresh, and B. Hassibi, “On the reconstruction of block-sparse signals with an optimal number of measurements,” Signal Processing, IEEE Transactions on, vol. 57, no. 8, pp. 3075–3085, Aug 2009.
•  Y. C. Eldar, P. Kuppinger, and H. Bolcskei, “Block-sparse signals: Uncertainty relations and efficient recovery,” Signal Processing, IEEE Transactions on, vol. 58, no. 6, pp. 3042–3054, June 2010.
•  R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde, “Model-based compressive sensing,” Information Theory, IEEE Transactions on, vol. 56, no. 4, pp. 1982–2001, April 2010.
•  M. A. Davenport, D. Needell, and M. B. Wakin, “Signal space cosamp for sparse recovery with redundant dictionaries,” IEEE Trans. Inf. Theory, vol. 59, no. 10, pp. 6820–6829, Oct 2013.
•  R. Giryes and D. Needell, “Greedy signal space methods for incoherence and beyond,” Appl. Comput. Harmon. Anal., vol. 39, no. 1, pp. 1 – 20, Jul. 2015.
•  ——, “Near oracle performance and block analysis of signal space greedy methods,” Journal of Approximation Theory, vol. 194, pp. 157 – 174, Jun. 2015.
•  S. Nam, M. E. Davies, M. Elad, and R. Gribonval, “Recovery of cosparse signals with greedy analysis pursuit in the presence of noise,” in 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Dec 2011, pp. 361–364.
•  J. M. Morel and S. Solimini, Variational Methods in Image Segmentation.    Cambridge, MA, USA: Birkhauser Boston Inc., 1995.
•  T. Chan and J. Shen, Image Processing and Analysis.    Society for Industrial and Applied Mathematics, 2005.
•  J. Weickert, A. Bruhn, T. Brox, and N. Papenberg, “A survey on variational optic flow methods for small displacements,” in Mathematical Models for Registration and Applications to Medical Imaging, ser. Mathematics in industry.    Springer Berlin Heidelberg, 2006, vol. 10, pp. 103–136.
•  G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations.    Springer, 2006, vol. 147.
•  S. Shem-Tov, G. Rosman, G. Adiv, R. Kimmel, and A. M. Bruckstein, “On globally optimal local modeling: From moving least squares to over-parametrization,” in Innovations for Shape Analysis.    Springer, 2013, pp. 379–405.
•  G. Davis, S. Mallat, and M. Avellaneda, “Adaptive greedy approximations,” Constructive Approximation, vol. 50, pp. 57–98, 1997.
•  M. Elad, P. Milanfar, and R. Rubinstein, “Analysis versus synthesis in signal priors,” Inverse Problems, vol. 23, no. 3, pp. 947–968, June 2007.
•  S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
•  D. L. Donoho and M. Elad, “On the stability of the basis pursuit in the presence of noise,” Signal Process., vol. 86, no. 3, pp. 511–532, 2006.
•  R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. Roy. Statist. Soc. B, vol. 58, no. 1, pp. 267–288, 1996.
•  S. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process., vol. 41, pp. 3397–3415, 1993.
•  S. Chen, S. A. Billings, and W. Luo, “Orthogonal least squares methods and their application to non-linear system identification,” International Journal of Control, vol. 50, no. 5, pp. 1873–1896, 1989.
•  G. Davis, S. Mallat, and M. Avellaneda, “Adaptive time-frequency decompositions,” Optical Engineering, vol. 33, no. 7, pp. 2183–2191, July 1994.
•  D. Needell and J. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301 – 321, May 2009.
•  W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Trans. Inf. Theory., vol. 55, no. 5, pp. 2230 –2249, May 2009.
•  T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Appl. Comput. Harmon. Anal., vol. 27, no. 3, pp. 265 – 274, 2009.
•  S. Foucart, “Hard thresholding pursuit: an algorithm for compressive sensing,” SIAM J. Numer. Anal., vol. 49, no. 6, pp. 2543–2563, 2011.
•  R. Giryes and M. Elad, “Can we allow linear dependencies in the dictionary in the synthesis framework?” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.
•  ——, “Iterative hard thresholding for signal recovery using near optimal projections,” in 10th Int. Conf. on Sampling Theory Appl. (SAMPTA), 2013.
•  ——, “OMP with highly coherent dictionaries,” in 10th Int. Conf. on Sampling Theory Appl. (SAMPTA), 2013.
•  ——, “RIP-based near-oracle performance guarantees for SP, CoSaMP, and IHT,” IEEE Trans. Signal Process., vol. 60, no. 3, pp. 1465–1468, March 2012.
•  E. J. Candès, “Modern statistical estimation via oracle inequalities,” Acta Numerica, vol. 15, pp. 257–325, 2006.
•  J. Savage and K. Chen, “On multigrids for solving a class of improved total variation based staircasing reduction models,” in Image Processing Based on Partial Differential Equations, ser. Mathematics and Visualization, X.-C. Tai, K.-A. Lie, T. Chan, and S. Osher, Eds.    Springer Berlin Heidelberg, 2007, pp. 69–94.
•  D. Mumford and J. Shah, “Optimal approximations by piecewise smooth functions and associated variational problems,” Communications on Pure and Applied Mathematics, vol. 42, no. 5, pp. 577–685, 1989.
•  L. Ambrosio and V. M. Tortorelli, “Approximation of functional depending on jumps by elliptic functional via t-convergence,” Communications on Pure and Applied Mathematics, vol. 43, no. 8, pp. 999–1036, 1990.
•  P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.
•  D. Jupp, “Approximation to data by splines with free knots,” SIAM Journal on Numerical Analysis, vol. 15, no. 2, pp. 328–343, 1978.
•  E. J. Candès and D. L. Donoho, “Curvelets – a surprisingly effective nonadaptive representation for objects with edges,” in Curves and Surface Fitting: Saint-Malo 99, C. R. A. Cohen and L. Schumaker, Eds.    Vanderbilt University Press, Nashville, 2000, pp. 105–120.
•  D. L. Donoho, “Sparse components of images and optimal atomic decompositions,” Constructive Approximation, vol. 17, no. 3, pp. 353–382, 2001.
•  T. F. Chan and L. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, Feb 2001.
•  T. Han, S. Kay, and T. Huang, “Optimal segmentation of signals and its application to image denoising and boundary feature extraction,” in IEEE International Conference on Image Processing (ICIP), vol. 4, Oct. 2004, pp. 2693–2696.