1 Introduction
Cadzow’s algorithm [4] was proposed by J. A. Cadzow in 1988. The algorithm was initially developed for signal denoising, but has since been extended to many other applications, including time series forecasting [13], the filtering of digital terrain models [11], and seismic data denoising and reconstruction [22, 21, 23, 24, 20].

For simplicity, we consider the 1D signal denoising problem. Here the task is to estimate a target signal $x \in \mathbb{C}^N$ from the corrupted measurements
$$y = x + e,$$
where $e \in \mathbb{C}^N$ denotes additive noise. It is not hard to see that, without any additional assumptions, one cannot expect a universally better estimator than the one simply given by the noisy vector $y$. Thus, effective denoising methods typically rely on certain inherent simple structures hidden in the target signal. For example, the method of wavelet denoising assumes that the representation of $x$ under a certain wavelet transform has many entries that are close to zero [7, 8]. By contrast, Cadzow’s denoising is based on the low rank property of the Hankel matrix associated with the target signal.

Let $x = (x_0, x_1, \ldots, x_{N-1})$ be a complex-valued vector of length $N$. Let $\mathcal{H} : \mathbb{C}^N \to \mathbb{C}^{L \times K}$, with $L + K = N + 1$, be a linear operator which maps $x$ into an $L \times K$ matrix whose $(i, j)$-th entry is equal to $x_{i+j}$, i.e.,
$$\mathcal{H}x = \begin{bmatrix} x_0 & x_1 & \cdots & x_{K-1} \\ x_1 & x_2 & \cdots & x_K \\ \vdots & \vdots & & \vdots \\ x_{L-1} & x_L & \cdots & x_{N-1} \end{bmatrix}, \quad (1.1)$$
where the numbering of vector and matrix entries starts from $0$ rather than the more standard $1$. Matrices of the form (1.1) are known as Hankel matrices, in which each skew-diagonal is constant. Here we will refer to the Hankel matrix $\mathcal{H}x$ as the Hankelization of $x$. Furthermore, the Moore-Penrose pseudoinverse of $\mathcal{H}$, denoted $\mathcal{H}^\dagger$, is a linear map from $L \times K$ matrices to vectors of length $N$. Given a matrix $Z \in \mathbb{C}^{L \times K}$, the vector $\mathcal{H}^\dagger Z$ is obtained by averaging each skew-diagonal of $Z$. More precisely, letting $w_a$ be the number of entries on the $a$-th skew-diagonal of an $L \times K$ matrix,
$$w_a = \#\{(i, j) : i + j = a,\ 0 \le i \le L - 1,\ 0 \le j \le K - 1\}, \quad (1.2)$$
the $a$-th entry of $\mathcal{H}^\dagger Z$, denoted $[\mathcal{H}^\dagger Z]_a$, is given by
$$[\mathcal{H}^\dagger Z]_a = \frac{1}{w_a} \sum_{i + j = a} Z_{ij}, \quad a = 0, \ldots, N - 1.$$
It follows immediately that $\mathcal{H}^\dagger \mathcal{H} = \mathcal{I}$, where $\mathcal{I}$ is the identity operator from $\mathbb{C}^N$ to $\mathbb{C}^N$.
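The Hankelization and its pseudoinverse are simple enough to sketch in a few lines of code. The following NumPy illustration (ours, not the paper's Matlab implementation; the function names are our own) also provides a quick sanity check of the identity $\mathcal{H}^\dagger \mathcal{H} = \mathcal{I}$:

```python
import numpy as np

def hankelize(x, L):
    """Form the L x K Hankel matrix H(x) with [H(x)]_{ij} = x[i+j]
    (0-indexed), where K = len(x) - L + 1."""
    K = len(x) - L + 1
    return np.array([[x[i + j] for j in range(K)] for i in range(L)])

def dehankelize(Z):
    """Moore-Penrose pseudoinverse of the Hankel operator: average each
    skew-diagonal of Z to recover a vector of length L + K - 1."""
    L, K = Z.shape
    x = np.zeros(L + K - 1, dtype=Z.dtype)
    w = np.zeros(L + K - 1)   # w[a] = number of entries on the a-th skew-diagonal
    for i in range(L):
        for j in range(K):
            x[i + j] += Z[i, j]
            w[i + j] += 1
    return x / w

# H^dagger H = I: de-Hankelizing a Hankelization recovers the vector exactly.
x = np.arange(7.0)
assert np.allclose(dehankelize(hankelize(x, 4)), x)
```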
We are now in position to introduce Cadzow’s algorithm, which is based on the fact that signals of interest arising from a wide range of applications have low rank Hankelizations. Assume $\mathrm{rank}(\mathcal{H}x) = r$, where $r \ll \min(L, K)$. Additive noise often increases the rank of the Hankelization and hence it is very typical that $\mathrm{rank}(\mathcal{H}y) > r$. Consequently, an intuitive way of estimating $x$ from $y$ is to do rank reduction. Starting from $x_0 = y$, Cadzow’s algorithm iteratively updates the estimate via the following rule:
$$x_{k+1} = \mathcal{H}^\dagger \mathcal{T}_r \mathcal{H} x_k, \quad k = 0, 1, \ldots \quad (1.3)$$
Here $\mathcal{T}_r$ computes the truncated SVD of an $L \times K$ matrix, that is,
$$\mathcal{T}_r Z = \sum_{i=1}^{r} \sigma_i u_i v_i^*,$$
where $\sigma_1 \ge \sigma_2 \ge \cdots$ are the singular values of $Z$ and $u_i$, $v_i$ are the associated left and right singular vectors.
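The iteration (1.3) is short enough to state directly in code. The sketch below (ours, with NumPy; a minimal illustration rather than the authors' implementation) alternates Hankelization, rank truncation, and skew-diagonal averaging. On a noiseless signal whose Hankelization already has rank $r$, the iteration is a fixed point:

```python
import numpy as np

def hankelize(x, L):
    """L x K Hankel matrix with [H(x)]_{ij} = x[i+j], K = len(x) - L + 1."""
    K = len(x) - L + 1
    return np.array([[x[i + j] for j in range(K)] for i in range(L)])

def dehankelize(Z):
    """Average each skew-diagonal of Z (pseudoinverse of the Hankel operator)."""
    L, K = Z.shape
    x = np.zeros(L + K - 1, dtype=Z.dtype)
    w = np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            x[i + j] += Z[i, j]
            w[i + j] += 1
    return x / w

def truncated_svd(Z, r):
    """T_r: best rank-r approximation of Z via the truncated SVD."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]

def cadzow(y, r, L, n_iter=20):
    """Cadzow iteration x_{k+1} = H^dagger T_r H(x_k), starting from x_0 = y."""
    x = np.asarray(y, dtype=complex).copy()
    for _ in range(n_iter):
        x = dehankelize(truncated_svd(hankelize(x, L), r))
    return x
```

For example, a single sampled cosine has a rank-2 Hankelization, so `cadzow(x, 2, L)` returns `x` (up to floating-point error) for any number of iterations.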
It is worth noting that Cadzow’s algorithm is also known as multichannel singular spectrum analysis (MSSA), which was proposed by Broomhead and King in 1986 for the analysis of time series from dynamical systems [3]. In particular, the first iteration of Cadzow’s algorithm is usually referred to as singular spectrum analysis (SSA), which is also very important in many applications; see for example [15, 16, 12, 13].
We can also interpret Cadzow’s algorithm as the method of alternating projections in the matrix domain. To this end, let $\mathcal{M}_r$ and $\mathcal{H}_N$ be the set of $L \times K$ matrices of rank at most $r$ and the set of $L \times K$ Hankel matrices, respectively. For any matrix $Z$, it follows from the Eckart-Young theorem that the projection of $Z$ onto $\mathcal{M}_r$ is given by $\mathcal{T}_r Z$. Moreover, it can be easily verified that the projection of $Z$ onto $\mathcal{H}_N$ is given by $\mathcal{H}\mathcal{H}^\dagger Z$. Therefore, if we let $Z_k = \mathcal{H}x_k$, after applying $\mathcal{H}$ to both sides of (1.3), it can be seen that Cadzow’s algorithm is equivalent to
$$Z_{k+1} = \mathcal{P}_{\mathcal{H}_N}\left(\mathcal{P}_{\mathcal{M}_r}(Z_k)\right), \quad (1.4)$$
where $\mathcal{P}_{\mathcal{M}_r}$ and $\mathcal{P}_{\mathcal{H}_N}$ denote the projections onto $\mathcal{M}_r$ and $\mathcal{H}_N$, respectively.
As suggested in [20], Cadzow’s algorithm can be adapted for signal recovery problems when there are missing entries. Recall that $x$ is our target signal. Suppose in some situations we are not able to observe all the entries of $x$, but can only observe those entries with indices in $\Omega$, where $\Omega$ is a subset of $\{0, \ldots, N-1\}$. Moreover, let $\mathcal{P}_\Omega x$ denote the samples of $x$ on $\Omega$, namely,
$$[\mathcal{P}_\Omega x]_a = \begin{cases} x_a & a \in \Omega, \\ 0 & \text{otherwise.} \end{cases}$$
A natural question is whether it is possible to reconstruct $x$ from $y = \mathcal{P}_\Omega x$. This is an ill-posed problem without any restrictions on $x$. However, when $\mathcal{H}x$ is low rank, it has been shown that the missing entries can be completed through convex methods [6] as well as nonconvex methods [5]. In [20], a variant of Cadzow’s algorithm was proposed for signal recovery. The algorithm iteratively fills the missing entries in the following way: starting from $x_0 = y$,
$$x_{k+1} = y + \mathcal{P}_{\Omega^c}\left(\mathcal{H}^\dagger \mathcal{T}_r \mathcal{H} x_k\right), \quad k = 0, 1, \ldots \quad (1.5)$$
where $\Omega^c$ denotes the complement of $\Omega$.
Roughly speaking, the algorithm refines the estimate of the unknown entries while keeping the observed entries of $y$ on $\Omega$ fixed. In addition, when noise exists on the observed entries, i.e., under the measurement model $y = \mathcal{P}_\Omega(x + e)$, one can use an appropriate linear combination of $y$ and $\mathcal{H}^\dagger \mathcal{T}_r \mathcal{H} x_k$ to estimate $x$ rather than simply filling back the noisy entries. Interested readers are referred to [20] for details.
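The missing-entry variant (1.5) changes only one line relative to the denoising iteration: after each rank reduction, the observed samples are written back. A minimal NumPy sketch (ours; the helpers are the obvious Hankelization, skew-diagonal averaging, and truncated SVD):

```python
import numpy as np

def hankelize(x, L):
    K = len(x) - L + 1
    return np.array([[x[i + j] for j in range(K)] for i in range(L)])

def dehankelize(Z):
    L, K = Z.shape
    x = np.zeros(L + K - 1, dtype=Z.dtype)
    w = np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            x[i + j] += Z[i, j]
            w[i + j] += 1
    return x / w

def truncated_svd(Z, r):
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]

def cadzow_complete(y, mask, r, L, n_iter=100):
    """Variant (1.5): x_{k+1} = y + P_{Omega^c}(H^dagger T_r H(x_k)).
    `mask` is True on the observed indices; y is zero off the mask."""
    x = np.where(mask, y, 0.0)
    for _ in range(n_iter):
        z = dehankelize(truncated_svd(hankelize(x, L), r))
        x = np.where(mask, y, z)   # keep observed entries, refine the rest
    return x
```

For instance, a geometric sequence (rank-1 Hankelization) with a couple of deleted entries is gradually filled in while the observed samples stay untouched.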
1.1 Examples where low rank Hankelization appears
As shown previously, the low rank structure of Hankel matrices plays a key role in the development of Cadzow’s algorithm and its variant. In this subsection we present several real examples where low rank Hankelization appears.
Time series satisfying an LRF
Let $x = (x_0, \ldots, x_{N-1})$ be a time series of length $N$. We say $x$ satisfies a linear recurrent formula (LRF) if
$$x_n = \sum_{k=1}^{r} a_k x_{n-k}, \quad r \le n \le N - 1, \quad (1.6)$$
for some coefficients $a_1, \ldots, a_r$. Let $L$ be an integer which is usually referred to as the window length, and let $K = N - L + 1$. Define the lagged vectors $s_j = (x_j, x_{j+1}, \ldots, x_{j+L-1})^T$ for $0 \le j \le K - 1$. If we construct an $L \times K$ matrix as follows
$$S = \begin{bmatrix} s_0 & s_1 & \cdots & s_{K-1} \end{bmatrix},$$
it is easy to see that $S$ is indeed a Hankel matrix and in fact we have $S = \mathcal{H}x$. Moreover, since $x$ is a time series which obeys (1.6), $S$ is a matrix of rank at most $r$. Time series satisfying an LRF form a very important model for many real applications; see [12] for more details.
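As a concrete illustration (ours), a Fibonacci-type sequence satisfies the order-2 LRF $x_n = x_{n-1} + x_{n-2}$, so its Hankelization has rank at most 2, which a numerical rank computation confirms:

```python
import numpy as np

# Order-2 LRF: x_n = x_{n-1} + x_{n-2} (a Fibonacci-type sequence).
N, L = 12, 6
x = np.zeros(N)
x[0], x[1] = 1.0, 1.0
for n in range(2, N):
    x[n] = x[n - 1] + x[n - 2]

# Hankelization: the L x K matrix whose columns are the lagged vectors.
K = N - L + 1
S = np.array([[x[i + j] for j in range(K)] for i in range(L)])
print(np.linalg.matrix_rank(S))  # 2
```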
Spectrally sparse signals
Consider the 1D spectrally sparse signal consisting of $r$ complex sinusoids without damping factors
$$x(t) = \sum_{k=1}^{r} d_k e^{2\pi i f_k t}, \quad (1.7)$$
where $f_k \in [0, 1)$ is the $k$-th normalized frequency, and $d_k$ is the corresponding complex amplitude. Let $x = (x(0), x(1), \ldots, x(N-1))$ be the discrete samples of $x(t)$ at $t = 0, 1, \ldots, N-1$. Letting $z_k = e^{2\pi i f_k}$, because $x$ is obtained from a spectrally sparse signal, it can be shown [17, 27] that $\mathcal{H}x$ admits the Vandermonde decomposition of the form
$$\mathcal{H}x = E_L D E_K^T,$$
where
$$E_L = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & & \vdots \\ z_1^{L-1} & z_2^{L-1} & \cdots & z_r^{L-1} \end{bmatrix}, \quad E_K = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & & \vdots \\ z_1^{K-1} & z_2^{K-1} & \cdots & z_r^{K-1} \end{bmatrix},$$
and $D$ is a diagonal matrix whose diagonal entries are $d_1, \ldots, d_r$. The Vandermonde decomposition of $\mathcal{H}x$ implies that $\mathrm{rank}(\mathcal{H}x) = r$ provided $\min(L, K) \ge r$ and the frequencies $f_k$ are distinct.
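The rank claim is easy to verify numerically. The following snippet (ours, with arbitrarily chosen distinct frequencies and amplitudes) confirms that the Hankelization of three undamped complex sinusoids has rank 3:

```python
import numpy as np

r, N, L = 3, 32, 16
f = np.array([0.10, 0.27, 0.43])                     # distinct normalized frequencies
d = np.array([1.0 + 0.5j, -0.8 + 0.2j, 0.3 - 1.1j])  # complex amplitudes

n = np.arange(N)
x = sum(d[k] * np.exp(2j * np.pi * f[k] * n) for k in range(r))

K = N - L + 1
H = np.array([[x[i + j] for j in range(K)] for i in range(L)])
print(np.linalg.matrix_rank(H))  # 3
```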
Periodic stream of Diracs
In the previous two examples, low rank Hankelization occurs in the time domain. There are also examples where low rank Hankelization occurs in the frequency domain. Consider the following periodic stream of Diracs studied in [2]:
$$x(t) = \sum_{n \in \mathbb{Z}} \sum_{k=1}^{r} c_k\, \delta(t - t_k - n\tau), \quad (1.8)$$
where $t_k \in [0, \tau)$ and $c_k$ are the locations and amplitudes of the Diracs within one period of length $\tau$. Since $x(t)$ is periodic, it can be represented using the Fourier series; that is,
$$x(t) = \sum_{m \in \mathbb{Z}} \hat{x}[m]\, e^{2\pi i m t/\tau}, \quad \hat{x}[m] = \frac{1}{\tau} \sum_{k=1}^{r} c_k e^{-2\pi i m t_k/\tau}.$$
Let $\hat{x}$ be a vector consisting of $N$ consecutive values of $\hat{x}[m]$ and define
$$z_k = e^{-2\pi i t_k/\tau}, \quad d_k = \frac{c_k}{\tau}, \quad \text{so that} \quad \hat{x}[m] = \sum_{k=1}^{r} d_k z_k^m. \quad (1.9)$$
We can view $\hat{x}$ as a discrete sample vector from a spectrally sparse signal. Noticing the similarity between (1.7) and (1.9), it is not hard to see that $\mathrm{rank}(\mathcal{H}\hat{x}) = r$ when the locations $t_k$ are distinct.
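A quick numerical check (ours, with two hypothetical Diracs) shows that the Hankelization of a run of consecutive Fourier coefficients indeed has rank equal to the number of Diracs per period:

```python
import numpy as np

tau = 1.0
t = np.array([0.21, 0.58])    # Dirac locations in [0, tau)
c = np.array([1.5, -0.7])     # amplitudes

# N consecutive Fourier coefficients: xhat[m] = (1/tau) sum_k c_k e^{-2 pi i m t_k / tau}
m = np.arange(-10, 11)
xhat = (c * np.exp(-2j * np.pi * np.outer(m, t) / tau)).sum(axis=1) / tau

L = 8
K = len(xhat) - L + 1
H = np.array([[xhat[i + j] for j in range(K)] for i in range(L)])
print(np.linalg.matrix_rank(H))  # 2
```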
1.2 Main contributions and outline
The main contributions of this paper are twofold. Firstly, a Fast Cadzow’s algorithm is developed through the introduction of an additional subspace projection; see Section 2. Numerical experiments in Section 3 establish the computational advantages of the Fast Cadzow’s algorithm over Cadzow’s algorithm. Secondly, to address the issue that the MSE of the Cadzow’s and Fast Cadzow’s algorithms may fail to decrease for signal denoising, a Gradient method and its accelerated variant are introduced in Section 4. The algorithms are then tested on different denoising problems to justify the effectiveness of the gradient algorithms. Section 5 concludes the paper with a short discussion.
2 Fast Cadzow’s algorithm
In general, the complexity of computing the SVD of an $L \times K$ matrix is $O(N^3)$ when $L$ and $K$ are proportional to $N$, causing the computation of the SVD (i.e., the $\mathcal{T}_r$ operation) to be the dominant cost in the Cadzow iteration (1.3). This has limited the computational efficiency of Cadzow’s algorithm, especially for high dimensional problems. In this section, we present an accelerated variant of Cadzow’s algorithm, dubbed the Fast Cadzow’s algorithm, for the denoising problem as well as the recovery problem from missing entries; see Algorithm 1 for a formal description. We only focus on the discussion of the algorithm for the denoising problem since the same idea is used for the recovery problem.
Recall that the denoising problem is about estimating a signal $x$ from the noisy measurements $y = x + e$. We assume the Hankel matrix associated with $x$ is of rank $r$. The Fast Cadzow iteration is overall similar to the Cadzow iteration, except that there is an additional projection $\mathcal{P}_{S_k}$ in the Fast Cadzow’s algorithm, where $\mathcal{P}_{S_k}$ denotes the projection onto a matrix subspace $S_k$. In other words, after the Hankel matrix is constructed, we first project it onto $S_k$, followed by the truncation to the best rank $r$ approximation. The motivation behind Algorithm 1 is that after $\mathcal{P}_{S_k}$ is introduced the resulting matrix becomes more structured. Thus it can be expected that the SVD of the matrix can be computed in a fast way. Notice that if we choose $S_k$ to be $\mathbb{C}^{L \times K}$ in each iteration, Algorithm 1 is indeed Cadzow’s algorithm.
2.1 Choice of $S_k$
Our choice of $S_k$ is inspired by the Riemannian optimization methods for low rank matrix recovery [26, 25], which are closely related to the manifold structure of low rank matrices. To motivate this choice, assume the rank-$r$ matrix $W_k = \mathcal{T}_r(\mathcal{P}_{S_k}(\mathcal{H}x_k))$ has already been computed in the $k$-th iteration. Noticing that $W_k$ is a matrix of rank $r$, it has the reduced SVD $W_k = U_k \Sigma_k V_k^*$, where $U_k$ and $V_k$ are $L \times r$ and $K \times r$ orthogonal matrices, respectively. In the $(k+1)$-th iteration, $S_{k+1}$ is selected to be the direct sum of the column and row subspaces of $W_k$, i.e.,
$$S_{k+1} = \left\{ U_k A^* + B V_k^* \;:\; A \in \mathbb{C}^{K \times r},\ B \in \mathbb{C}^{L \times r} \right\}. \quad (2.1)$$
From the perspective of differential geometry, the set of fixed rank matrices forms a smooth manifold and $S_{k+1}$ is the tangent space of the manifold at $W_k$.
Given a matrix $Z$, the projection of $Z$ onto $S_{k+1}$ is given by
$$\mathcal{P}_{S_{k+1}}(Z) = U_k U_k^* Z + Z V_k V_k^* - U_k U_k^* Z V_k V_k^*. \quad (2.2)$$
In the first iteration (i.e., $k = 0$) of Algorithm 1, we simply choose $S_0 = \mathbb{C}^{L \times K}$. That is, the first iteration of the Fast Cadzow’s algorithm coincides with that of Cadzow’s algorithm, which is an SSA step. Noting that $x_{k+1} = \mathcal{H}^\dagger W_k$ and that $\mathcal{H}x_{k+1} \approx W_k$ when $W_k$ is close to a Hankel matrix, it follows that $\mathcal{P}_{S_{k+1}}(\mathcal{H}x_{k+1}) \approx \mathcal{H}x_{k+1}$ since $W_k \in S_{k+1}$. Therefore, the projection of $\mathcal{H}x_{k+1}$ onto $S_{k+1}$ can capture most of its energy and we can expect that the Fast Cadzow’s algorithm should exhibit similar behavior to Cadzow’s algorithm. The numerical results in Section 3 will confirm this intuition.
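The projection (2.2) is cheap to implement once the factors $U_k$ and $V_k$ are available. A small NumPy sketch (ours) that also checks the two properties one expects of a tangent-space projection, idempotence and the fact that $W_k$ itself is left unchanged:

```python
import numpy as np

def tangent_projection(Z, U, V):
    """P_S(Z) = U U* Z + Z V V* - U U* Z V V*, the projection onto the
    tangent space S = {U A* + B V*} at W = U Sigma V*."""
    UZ = U.conj().T @ Z            # r x K
    ZV = Z @ V                     # L x r
    return U @ UZ + ZV @ V.conj().T - U @ (UZ @ V) @ V.conj().T

# A rank-r matrix W and its SVD factors (random example).
rng = np.random.default_rng(0)
L, K, r = 9, 7, 2
W = rng.standard_normal((L, r)) @ rng.standard_normal((r, K))
U, s, Vh = np.linalg.svd(W, full_matrices=False)
U, V = U[:, :r], Vh[:r].conj().T

Z = rng.standard_normal((L, K))
P = tangent_projection(Z, U, V)
assert np.allclose(tangent_projection(P, U, V), P)   # idempotent
assert np.allclose(tangent_projection(W, U, V), W)   # W lies in S
```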
2.2 Computational complexity
The true novelty of introducing the additional projection is that after this projection the SVD of the matrix can be computed more efficiently. In a nutshell, the SVD of an $L \times K$ matrix can be reduced to the SVD of a $2r \times 2r$ matrix. In this subsection we will investigate the computational complexity of the Fast Cadzow’s algorithm, together with the details for computing the SVD.

For ease of exposition we split the single Fast Cadzow iteration into three steps:
$$G_k = \mathcal{H}x_k, \quad W_k = \mathcal{T}_r\left(\mathcal{P}_{S_k}(G_k)\right), \quad x_{k+1} = \mathcal{H}^\dagger W_k. \quad (2.3)$$
Letting $G = G_k$, which will not be formed explicitly, by (2.2) we have
$$\mathcal{P}_{S_k}(G) = U U^* G + G V V^* - U U^* G V V^* = U C^* + D V^*, \quad (2.4)$$
where $U$ and $V$ come from the SVD factors defining $S_k$, $C = G^* U$, and $D = (I - U U^*) G V$. In a practical implementation $\mathcal{P}_{S_k}(G)$ will be stored in the form of (2.4). Since $G$ is a Hankel matrix, both $G^* U$ and $G V$ can be computed using fast matrix-vector multiplications without forming $G$. This costs $O(r N \log N)$ flops assuming $L \asymp K$. Once $G^* U$ and $G V$ are known, $C$ and $D$ can be computed using a few matrix-matrix products with $O(N r^2)$ flops. Thus, it requires $O(r N \log N + N r^2)$ flops to compute $C$ and $D$ in (2.4).
Next we show how to reduce the SVD of $\mathcal{P}_{S_k}(G)$ to the SVD of a $2r \times 2r$ matrix starting from the decomposition (2.4). Let
$$D = Q_1 R_1 \quad \text{and} \quad (I - V V^*)\,C = Q_2 R_2$$
be the QR decompositions of $D$ and $(I - V V^*)C$, respectively. Then, it is not hard to see that $U^* Q_1 = 0$ and $V^* Q_2 = 0$. Moreover, we have
$$\mathcal{P}_{S_k}(G) = U C^* + D V^* = \begin{bmatrix} U & Q_1 \end{bmatrix} \begin{bmatrix} C^* V & R_2^* \\ R_1 & 0 \end{bmatrix} \begin{bmatrix} V & Q_2 \end{bmatrix}^*.$$
Let $M$ be the middle $2r \times 2r$ matrix and suppose its SVD is given by
$$M = \widetilde{U}\, \Sigma\, \widetilde{V}^*.$$
Since both $[U\ Q_1]$ and $[V\ Q_2]$ have orthonormal columns, the SVD of $\mathcal{P}_{S_k}(G)$ is given by
$$\mathcal{P}_{S_k}(G) = \left([U\ Q_1]\widetilde{U}\right) \Sigma \left([V\ Q_2]\widetilde{V}\right)^*.$$
From the above discussion, we can see that computing $W_k$ from $\mathcal{P}_{S_k}(G)$ (or equivalently, computing the SVD of $\mathcal{P}_{S_k}(G)$) requires $O(N r^2 + r^3)$ flops, which account for the two QR decompositions and the SVD of $M$. Moreover, when $L = K$ (i.e., the matrices are square), nearly half of the computational cost and storage can be saved by using the Takagi factorization. Interested readers can find the details in [5].
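The reduction above is mechanical enough to implement directly. The following sketch (ours) computes the SVD of $U C^* + D V^*$ via two thin QR decompositions and one $2r \times 2r$ SVD. It handles the general case where $D$ is not yet orthogonal to the range of $U$ (the extra term $U^* D$ in the middle matrix vanishes in the setting of the text):

```python
import numpy as np

def svd_via_reduction(U, V, C, D):
    """SVD of P = U C* + D V* (U: L x r, V: K x r with orthonormal columns),
    reduced to two thin QRs and the SVD of a 2r x 2r matrix."""
    r = U.shape[1]
    # Components of D and C orthogonal to the ranges of U and V, respectively.
    Q1, R1 = np.linalg.qr(D - U @ (U.conj().T @ D))
    Q2, R2 = np.linalg.qr(C - V @ (V.conj().T @ C))
    # Middle 2r x 2r matrix M such that P = [U Q1] M [V Q2]*.
    M = np.block([
        [C.conj().T @ V + U.conj().T @ D, R2.conj().T],
        [R1, np.zeros((r, r))],
    ])
    Um, s, Vmh = np.linalg.svd(M)
    return np.hstack([U, Q1]) @ Um, s, np.hstack([V, Q2]) @ Vmh.conj().T

# Verify against the dense matrix.
rng = np.random.default_rng(2)
L, K, r = 10, 8, 2
U, _ = np.linalg.qr(rng.standard_normal((L, r)))
V, _ = np.linalg.qr(rng.standard_normal((K, r)))
C = rng.standard_normal((K, r))
D = rng.standard_normal((L, r))
P = U @ C.conj().T + D @ V.conj().T
Uf, s, Vf = svd_via_reduction(U, V, C, D)
assert np.allclose(Uf @ np.diag(s) @ Vf.conj().T, P)
```

The truncation $\mathcal{T}_r$ then simply keeps the $r$ leading columns of the returned factors.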
It remains to investigate the cost for computing $x_{k+1} = \mathcal{H}^\dagger W_k$. Let $W_k = U_k \Sigma_k V_k^*$ be the SVD of $W_k$, which can be obtained by truncating the SVD of $\mathcal{P}_{S_k}(G)$. We have
$$x_{k+1} = \mathcal{H}^\dagger\left(U_k \Sigma_k V_k^*\right) = \sum_{i=1}^{r} \sigma_i\, \mathcal{H}^\dagger\left(u_i v_i^*\right).$$
Noting that
$$\left[\mathcal{H}^\dagger\left(u v^*\right)\right]_a = \frac{1}{w_a} \sum_{i + j = a} u_i \bar{v}_j,$$
where $w_a$ is defined in (1.2), $\mathcal{H}^\dagger W_k$ can be computed by $r$ fast convolutions using $O(r N \log N)$ flops. Thus, computing $x_{k+1}$ from $W_k$ requires $O(r N \log N)$ flops.

Putting it all together, the dominant per iteration computational cost of the Fast Cadzow’s algorithm is $O(r N \log N + N r^2 + r^3)$. In addition, $O(N r)$ space is required to store the matrices appearing in the SVD and the QR decompositions.
2.3 High dimensional problems
We have presented the Cadzow’s and Fast Cadzow’s algorithms for the 1D problem. However, the algorithms can also work for high dimensional problems based on multifold Hankel structures. For ease of exposition we discuss the two-dimensional case briefly but emphasize that the situation in general higher dimensional cases is similar.

Let $X \in \mathbb{C}^{N_1 \times N_2}$ be a 2D signal, and let $(L_1, K_1)$ and $(L_2, K_2)$ be two pairs of positive integers which satisfy $L_1 + K_1 = N_1 + 1$ and $L_2 + K_2 = N_2 + 1$. The Hankel matrix corresponding to $X$, denoted $\mathcal{H}X$, is defined as follows:
$$\mathcal{H}X = \begin{bmatrix} \mathcal{H}x_0 & \mathcal{H}x_1 & \cdots & \mathcal{H}x_{K_1-1} \\ \mathcal{H}x_1 & \mathcal{H}x_2 & \cdots & \mathcal{H}x_{K_1} \\ \vdots & \vdots & & \vdots \\ \mathcal{H}x_{L_1-1} & \mathcal{H}x_{L_1} & \cdots & \mathcal{H}x_{N_1-1} \end{bmatrix},$$
where $\mathcal{H}x_i$ is the $L_2 \times K_2$ Hankel matrix for the $i$-th row $x_i$ of $X$. In other words, we first form the Hankel matrix for each row of $X$ and then form the block Hankel matrix using the row Hankel matrices.

Assuming $\mathcal{H}X$ is a low rank matrix, the Cadzow’s and Fast Cadzow’s algorithms can be similarly developed for the 2D denoising and reconstruction problems; the details are omitted. Moreover, there are real signals whose Hankel matrices are low rank, for example if $X$ is a 2D spectrally sparse signal or a frequency slice of seismic data after the Fourier transform [17, 27, 19].

3 Numerical Experiments
In this section we evaluate the empirical performance of the Fast Cadzow’s algorithm against Cadzow’s algorithm. In our implementations of the algorithms, $L$ and $K$ (or $L_i$ and $K_i$ for higher dimensions) are chosen in such a way that the resulting Hankel matrix is approximately square. Numerical results demonstrate that the two algorithms exhibit similar denoising and reconstruction performance while the Fast Cadzow’s algorithm can be substantially faster. We first compare the algorithms on randomly generated spectrally sparse signals (Subsection 3.1) and then test them on problems arising from two applications (Subsections 3.2 and 3.3). The experiments are executed in Matlab 2018a on a desktop with two Intel i5 CPUs (1.60GHz and 1.80GHz respectively) and 8GB memory.
To utilize the fast Hankel matrix-vector multiplication, we use the Krylov subspace-based SVD solver from the PROPACK package [18] to compute the partial SVD of $\mathcal{H}x_k$ in Cadzow’s algorithm. It would be difficult to summarize the computational cost of such a method for computing the partial SVD, and optimistically we can say that the cost is $O(r N \log N)$ flops. Though this has the same order as the SVD method for the Fast Cadzow’s algorithm, it is worth noting that the constant hidden in the $O(\cdot)$ notation is not clear since the performance of the Krylov method depends heavily on properties of the input matrix and on the amount of effort spent to stabilize the algorithm [14]. In contrast, the constants in the costs for computing the SVD in the Fast Cadzow’s algorithm are exactly known since, for example, the $O(r N \log N)$ term comes only from the computations of $G^* U$, $G V$, and $\mathcal{H}^\dagger W_k$. Moreover, our numerical experiments show that the Fast Cadzow’s algorithm is still two to four times faster even after the fast partial SVD package is used for Cadzow’s algorithm.
3.1 Spectrally sparse signal denoising and reconstruction
The definition of spectrally sparse signals can be found in Section 1.1. We first test the performance of the algorithms on signal denoising. The test signals are generated in the following way: the frequency $f_k$ of each harmonic is uniformly sampled from $[0, 1)$, and the argument of the weight $d_k$ is uniformly sampled from $[0, 2\pi)$ while the amplitude is chosen to be $1 + 10^{c}$ with $c$ being uniformly sampled from $[0, 1]$.
Table 1: Average MSE, number of iterations, and computation time for spectrally sparse signal denoising.

                     r = 5                     r = 10                    r = 20
                     MSE       Iter   Time     MSE       Iter   Time     MSE       Iter   Time
1D   Cadzow          3.16e-02  10.3   0.48     4.17e-02  11.1   0.80     6.15e-02  11.4   1.75
     Fast Cadzow     3.16e-02  10.3   0.19     4.17e-02  11.1   0.38     6.15e-02  11.4   0.81
2D   Cadzow          2.02e-02  14.6   0.61     2.95e-02  15.1   1.09     4.19e-02  15.8   2.98
     Fast Cadzow     2.02e-02  14.6   0.28     2.95e-02  15.1   0.47     4.19e-02  15.8   1.04
3D   Cadzow          0.70e-02  13.1   10.45    0.96e-02  13.8   18.18    1.35e-02  14.0   40.98
     Fast Cadzow     0.70e-02  13.1   4.61     0.96e-02  13.8   9.66     1.35e-02  14.0   20.25
We consider the additive noise model in which a true signal $x$ is corrupted by a noisy vector in the form of
$$y = x + \sigma \cdot \frac{\|x\|_2}{\|w\|_2}\, w,$$
where $\sigma$ is referred to as the noise level and $w$ follows the standard multivariate normal distribution. Letting $\hat{x}$ be the output of the algorithms, the mean squared error (MSE) between $\hat{x}$ and $x$ will be used to evaluate their performance. Tests are conducted for 1D, 2D and 3D spectrally sparse signals. The Cadzow’s and Fast Cadzow’s algorithms are terminated whenever the relative change between consecutive iterates falls below a fixed tolerance. For each parameter setting a collection of randomly generated problem instances is tested. Then we compute the average computation time, the average number of iterations and the average MSE for each algorithm; see Table 1. From the table we can see that the Cadzow’s and Fast Cadzow’s algorithms achieve similar MSE and it also takes them roughly the same number of iterations to converge. However, the Fast Cadzow’s algorithm is at least two times faster than Cadzow’s algorithm since the additional subspace projection reduces the computational cost of the SVD.
Table 2: Average results for reconstruction from missing entries.

                     r = 5                     r = 10                    r = 20
                     MSE       Iter   Time     MSE       Iter   Time     MSE       Iter   Time
1D   Cadzow          8.47e-11  35.9   1.85     9.09e-11  38.0   3.08     1.10e-10  41.1   8.43
     Fast Cadzow     8.47e-11  35.9   0.61     9.09e-11  38.0   1.35     1.10e-10  41.1   3.45
2D   Cadzow          7.45e-11  35.3   1.50     8.63e-11  36.4   2.73     9.63e-11  38.2   8.00
     Fast Cadzow     7.45e-11  35.3   0.63     8.63e-11  36.4   1.09     9.63e-11  38.2   2.57
3D   Cadzow          7.12e-11  34.0   30.08    7.55e-11  34.1   47.98    9.02e-11  34.2   108.86
     Fast Cadzow     7.12e-11  34.0   11.61    7.55e-11  34.1   22.76    9.02e-11  34.2   47.28
Next we compare the two algorithms on reconstruction problems when there are missing entries. The overall setup is similar to the denoising case except that instead of $x$ being contaminated by additive noise we only observe a subset of its entries uniformly at random. Note that the variant of Cadzow’s algorithm for handling missing entries is presented in (1.5). In the tests the algorithms are terminated when the relative change between consecutive iterates falls below a fixed tolerance. The average computational results over the random simulations are presented in Table 2. Clearly, both of the algorithms are able to exactly recover the missing entries and the Fast Cadzow’s algorithm is substantially faster.
The algorithms can also handle missing entries and additive noise simultaneously. The basic idea is to make an appropriate linear combination between the observed measurements and the update. More precisely, we consider the following variants of the algorithms:
$$x_{k+1} = \alpha y + (1 - \alpha)\,\mathcal{P}_\Omega\left(\mathcal{H}^\dagger \mathcal{T}_r \mathcal{H} x_k\right) + \mathcal{P}_{\Omega^c}\left(\mathcal{H}^\dagger \mathcal{T}_r \mathcal{H} x_k\right), \quad \text{(Cadzow)} \quad (3.1)$$
$$x_{k+1} = \alpha y + (1 - \alpha)\,\mathcal{P}_\Omega\left(\mathcal{H}^\dagger \mathcal{T}_r \mathcal{P}_{S_k} \mathcal{H} x_k\right) + \mathcal{P}_{\Omega^c}\left(\mathcal{H}^\dagger \mathcal{T}_r \mathcal{P}_{S_k} \mathcal{H} x_k\right), \quad \text{(Fast Cadzow)} \quad (3.2)$$
where $\alpha \in [0, 1]$ is a tuning parameter. In our tests we choose $\alpha$ as suggested by Gao et al. [9], and the algorithms are terminated when the relative change between consecutive iterates falls below a fixed tolerance. The numerical results for this setting are summarized in Table 3. Again, the performance of the Fast Cadzow’s algorithm is comparable to that of Cadzow’s algorithm but the former is computationally more efficient.
Table 3: Average results for simultaneous denoising and reconstruction.

                     r = 5                     r = 10                    r = 20
                     MSE       Iter   Time     MSE       Iter   Time     MSE       Iter   Time
1D   Cadzow          4.32e-02  27.5   1.10     6.13e-02  28.5   1.83     8.40e-02  30.4   5.08
     Fast Cadzow     4.32e-02  27.5   0.43     6.13e-02  28.5   0.95     8.40e-02  30.5   2.54
2D   Cadzow          2.86e-02  27.0   1.13     4.02e-02  27.6   1.77     5.91e-02  28.8   4.35
     Fast Cadzow     2.86e-02  27.0   0.50     4.02e-02  27.7   0.84     5.91e-02  29.2   1.84
3D   Cadzow          1.01e-02  20.3   15.33    1.38e-02  20.8   24.55    1.97e-02  21.0   53.39
     Fast Cadzow     1.01e-02  20.3   6.95     1.38e-02  20.8   13.50    1.97e-02  21.0   28.23
3.2 Denoising in the reconstruction of periodic stream of Diracs
Methods for efficient signal acquisition and reconstruction are fundamental in signal processing. In [2], a novel paradigm was proposed for the sampling and reconstruction of a periodic stream of Diracs. The new paradigm can achieve the minimum number of samples dictated by the signal’s rate of innovation. The overall sampling and reconstruction procedure is illustrated in Figure 1.
For simplicity consider the periodic stream of Diracs described in (1.8). Noticing that the locations $t_k$ and amplitudes $c_k$ ($1 \le k \le r$) are the only unknowns in $x(t)$, the goal is to devise a sampling and reconstruction scheme such that $t_k$ and $c_k$ can be retrieved from about $2r$ samples. In [2], the signal is first convolved with a sinc kernel of bandwidth $B$, where $B\tau$ is an odd integer, and then the samples are obtained on an equally spaced grid. This can be formally expressed as
$$y_n = \sum_{k=1}^{r} c_k\, \phi(t_n - t_k), \quad (3.3)$$
where $t_n$ denotes the $n$-th sampling location, and $\phi(t)$ is the Dirichlet kernel:
$$\phi(t) = \frac{\sin(\pi B t)}{B\tau \sin(\pi t/\tau)}. \quad (3.4)$$
An annihilating filter method has been proposed to reconstruct $x(t)$ from the samples $y_n$. A key ingredient in the method is the construction of a filter which can annihilate the Fourier coefficients $\hat{x}[m]$. Recall that $x(t)$ is a periodic signal with Fourier coefficients $\hat{x}[m]$. It has been shown in [2] that the annihilating filter can be identified as a null space vector of a Toeplitz matrix involving a set of consecutive values of $\hat{x}[m]$. Then the reconstruction of $x(t)$ from the samples finally boils down to the problem of estimating $\hat{x}[m]$ from $y_n$. To this end, it is shown in [2] that the discrete Fourier coefficients of the samples coincide with a subset of $\hat{x}[m]$.
If there is no noise, the annihilating filter method is able to reconstruct $x(t)$ exactly from the minimum number of measurements. However, noise is prevalent during the sampling process. We will consider the additive noise model; namely, noisy samples of $y_n$ are observed. When noise exists a common strategy is to oversample the measurements and then do denoising. This is where the Cadzow’s or Fast Cadzow’s algorithm plays a role. In Section 1.1 we have shown that the Hankel matrix corresponding to a subset of the Fourier coefficients $\hat{x}[m]$ is low rank. Noticing the agreement between the discrete Fourier coefficients of the samples and $\hat{x}[m]$, it follows immediately that the Hankel matrix associated with the former is also low rank. Therefore, the Cadzow’s or Fast Cadzow’s algorithm can be used for the task of denoising.
Table 4: Average results for denoising in the reconstruction of periodic streams of Diracs under different noise levels.

                 Noise level 0.1            Noise level 0.3            Noise level 0.5
                 MSE       Iter    Time     MSE       Iter    Time     MSE       Iter    Time
Cadzow           5.62e-02  16.35   0.0234   1.79e-01  17.57   0.0251   3.29e-01  18.43   0.0266
Fast Cadzow      5.62e-02  16.42   0.0055   1.79e-01  17.69   0.0060   3.29e-01  18.64   0.0065
The denoising problem studied here differs from the one in the last subsection as the Hankelization of the signal itself is not low rank but the Hankelization of its Fourier coefficients is. Next we compare the performance of the Cadzow’s and Fast Cadzow’s algorithms in this situation following the setup from [2]. Tests are conducted for randomly generated streams of Diracs, where the locations $t_k$ are uniformly sampled from $[0, \tau)$ and the amplitudes $c_k$ are uniformly sampled as well. The bandwidth of the sinc kernel and the number of observed samples follow the setup in [2]. We test three different noise levels and 1500 randomly generated problem instances are simulated for each noise level. The algorithms are terminated when the relative change between consecutive iterates falls below a fixed tolerance. The average computational results are shown in Table 4. Even though both of the algorithms are very fast in the experiments due to the small problem size, the Fast Cadzow’s algorithm is about four times faster.
3.3 Seismic data denoising and reconstruction
Seismic denoising and seismic recovery from missing traces are two major tasks in seismic data processing, and different techniques have been developed for them. Cadzow’s algorithm, which has also been referred to as eigenimage filtering, was proposed by Trickett to attenuate random noise from seismic records [22, 21, 23, 24, 20]. It is based on the idea that if seismic data consists of a few linear events then the associated Hankel matrices of the constant-frequency slices are low rank. The algorithm was extended in [20] to the problem of reconstructing missing traces (see (1.5)), which can be viewed as a nonconvex version of the method of projection onto convex sets (POCS, [1]). Moreover, a generalized version of the algorithm was also presented in [20] for simultaneous denoising and reconstruction of seismic data, which is actually the one listed in (3.1).
As an accelerated variant of Cadzow’s algorithm, the Fast Cadzow’s algorithm is equally applicable to seismic data processing. We compare the performance of the two algorithms on a 4D volume of traces, where each trace contains a number of time samples. Thus, this is a 5D data array with one time dimension and four spatial dimensions. The data consists of three linear events, so we set $r = 3$ in the algorithms. Tests have been conducted for the denoising problem with additive noise as well as for the recovery problem with 50% missing traces. As is typical in the literature on seismic data processing, the algorithms are run for a fixed total number of iterations. The MSE and computational time are presented in Table 5. In addition, the seismic profile of one data slice with three fixed spatial dimensions has been plotted in Figures 2 and 3 for seismic denoising and recovery, respectively. From the table and the figures we can see that the two algorithms have comparable performance. However, the table does show that the MSE of the Fast Cadzow’s algorithm is slightly better than that of Cadzow’s algorithm for seismic denoising. Moreover, the Fast Cadzow’s algorithm is two times faster for the denoising problem and four times faster for the recovery problem.
Table 5: Results for seismic data denoising and recovery.

                 Seismic denoising          Seismic recovery
                 MSE        Time            MSE        Time
Cadzow           9.58e-02   78.34           9.88e-04   139.13
Fast Cadzow      8.88e-02   37.08           9.88e-04   34.76
4 A gradient variant for denoising
4.1 Motivation and new algorithms
When applying Cadzow’s algorithm or the Fast Cadzow’s algorithm for signal denoising, it has been observed that the MSE does not always decrease as the algorithm iterates. For example, in [23] Trickett notes that
Cadzow recommended iterating between the rank reduction and averaging steps, but I have not found it to give better results for this application.
Gillard also notes in his paper [10] that
In the simulation study within this paper, it has been demonstrated that repeated iterations of Cadzow’s basic algorithm (in an attempt to separate the noise from the signal) may result in an increased RMSE from the true signal.
As an illustration, the MSE plots corresponding to two random tests on 1D spectrally sparse signal denoising are presented in Figure 4. As can be seen, the MSE in the left plot decreases but the MSE in the right plot increases.
To explain this phenomenon, we will study the equivalent form of Cadzow’s algorithm in (1.4). It is trivial that the update can be rewritten as
$$Z_{k+1} = \mathcal{P}_{\mathcal{H}_N}\left(Z_k - \left(Z_k - \mathcal{P}_{\mathcal{M}_r}(Z_k)\right)\right),$$
where $Z_k - \mathcal{P}_{\mathcal{M}_r}(Z_k)$ is the gradient of $Z \mapsto \frac{1}{2}\|Z - \mathcal{P}_{\mathcal{M}_r}(Z)\|_F^2$ wherever the projection is unique. Thus, Cadzow’s algorithm can be interpreted as a projected gradient method for the following optimization problem
$$\min_{Z \in \mathcal{H}_N} \frac{1}{2}\left\|Z - \mathcal{P}_{\mathcal{M}_r}(Z)\right\|_F^2, \quad (4.1)$$
though with a step length equal to $1$. Note that for a Hankel matrix $Z = \mathcal{H}x$, each entry $x_a$ of the signal appears $w_a$ times in $Z$, where $w_a$ is the number of entries on the $a$-th skew-diagonal of an $L \times K$ matrix; see (1.2). Thus, Cadzow’s algorithm for signal denoising indeed solves an optimization problem which puts different weights on different entries of the signal. Moreover, the Fast Cadzow’s algorithm shares the same property as Cadzow’s algorithm. Since the weights for the middle entries are larger than those at the two ends, one may expect that after the Cadzow’s or Fast Cadzow’s denoising the componentwise MSE of the middle entries should be smaller. The plot in Figure 5 confirms this speculation, which shows the average componentwise MSE out of 1500 random tests after applying the Cadzow’s and Fast Cadzow’s algorithms for 1D spectrally sparse signal denoising.
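The weighting can be seen from the identity $\|\mathcal{H}x - \mathcal{H}y\|_F^2 = \sum_a w_a\,|x_a - y_a|^2$: each signal entry is counted once per appearance in the Hankel matrix, and $w_a$ peaks in the middle of the signal. A short numerical check (ours):

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 10, 4
K = N - L + 1
x, y = rng.standard_normal(N), rng.standard_normal(N)

hankelize = lambda v: np.array([[v[i + j] for j in range(K)] for i in range(L)])
# w[a] = number of entries on the a-th skew-diagonal of an L x K matrix
w = np.array([sum(1 for i in range(L) for j in range(K) if i + j == a)
              for a in range(N)])

lhs = np.linalg.norm(hankelize(x) - hankelize(y), 'fro') ** 2
rhs = np.sum(w * (x - y) ** 2)
assert np.isclose(lhs, rhs)
print(w)  # [1 2 3 4 4 4 4 3 2 1] -- middle entries carry larger weights
```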
In order to address the unbalanced weight issue, we consider the following optimization problem with a reweighted objective function:
$$\min_{Z \in \mathcal{H}_N} \frac{1}{2}\left\| D^{1/2} \circ \left(Z - \mathcal{P}_{\mathcal{M}_r}(Z)\right)\right\|_F^2, \quad (4.2)$$
where $D^{1/2}$ means taking the square root of each entry of $D$, $\circ$ denotes the componentwise product, and
$$D_{ij} = \frac{1}{w_{i+j}}, \quad 0 \le i \le L - 1,\ 0 \le j \le K - 1.$$
A projected gradient method with step length $\eta$ can then be developed for (4.2) as follows:
$$Z_{k+1} = \mathcal{P}_{\mathcal{H}_N}\left(Z_k - \eta\, D \circ \left(Z_k - \mathcal{P}_{\mathcal{M}_r}(Z_k)\right)\right).$$
The equivalent form of the algorithm in the vector domain, dubbed the Gradient method, is presented in Algorithm 2, where $x_k = \mathcal{H}^\dagger Z_k$.
Of course we can use the same technique as in the Fast Cadzow’s algorithm to accelerate Algorithm 2, leading to the Fast Gradient method; see Algorithm 3. In the remainder of this section we compare the algorithms on different signal denoising problems, focusing on their denoising performance. Regarding computational efficiency, it is clear that the dominant per iteration computational cost of the Gradient method is the same as that of Cadzow’s algorithm, and the dominant per iteration computational cost of the Fast Gradient method is the same as that of the Fast Cadzow’s algorithm.
4.2 Empirical evaluations
We first compare the four algorithms, Cadzow, Fast Cadzow, Gradient and Fast Gradient, on spectrally sparse signal denoising (see Section 3.1) under a fixed noise level. All the algorithms are run for a fixed maximum number of iterations. For each problem size, 1500 random problem instances are tested. The average per iteration MSE of each algorithm against the number of iterations is presented in Figure 6. Overall the MSE of the Cadzow’s and Fast Cadzow’s algorithms for the 1D denoising problem increases as the algorithms iterate. In contrast, the MSE of the Gradient and Fast Gradient methods decreases as the algorithms iterate. For the 2D and 3D denoising problems, though on average the MSE of all the algorithms shows a decreasing trend, the Gradient and Fast Gradient methods can achieve smaller MSE.
In addition, we also examine each individual test and consider one to be a positive test if the MSE of an algorithm at the last iteration is smaller than that at the first iteration. The portion of positive tests for each algorithm is presented in Table 6. It can be observed that the number of positive tests of the Gradient and Fast Gradient methods is larger than that of the Cadzow’s and Fast Cadzow’s algorithms, especially for the 1D denoising problem.
Table 6: Portion of positive tests for each algorithm.

                 N = 256    N = 16 x 16    N = 16 x 16 x 16
Cadzow           0.4200     0.8300         0.9907
Fast Cadzow      0.4193     0.8293         0.9907
Gradient         0.9947     1.0000         1.0000
Fast Gradient    0.9947     1.0000         1.0000
It is also interesting to compare the average componentwise MSE of the outputs of the four algorithms; see Figure 7 for the 1D case. The figure shows that, to some extent, the Gradient and Fast Gradient methods are able to alleviate the issue of unbalanced weights over different entries.
Next we test the algorithms on the denoising problem arising from the reconstruction of streams of Diracs; see Section 3.2. Roughly speaking, the Fourier transform of the test signal has a low rank Hankelization. Thus we first do denoising in the frequency domain and then get the denoised signal via the inverse FFT. Here we repeat 1500 random tests for a fixed noise level. The average MSE against the number of iterations is plotted in Figure 8. Meanwhile, the portion of positive tests of each algorithm is listed in Table 7. Obviously, the average MSE of the Gradient and Fast Gradient methods decreases while the MSE of the Cadzow’s and Fast Cadzow’s algorithms increases, and there are more positive tests after the Gradient and Fast Gradient denoising. Moreover, Figure 8 also shows the denoising results of the four different algorithms in the time domain from a single random test. It can be seen that the Gradient and Fast Gradient methods exhibit better denoising performance in the time domain.
Table 7: Portion of positive tests for the Diracs denoising problem.

          Cadzow    Fast Cadzow    Gradient    Fast Gradient
N = 71    0.3013    0.2887         0.8167      0.8240
Lastly, we compare the algorithms on the seismic denoising problem from Section 3.3. The MSE plot against the number of iterations is presented in Figure 9. The plot shows that overall all the algorithms have decreasing MSE, but the MSE of the Gradient and Fast Gradient methods is smaller. It is also worth noting that for seismic data denoising the accelerated algorithms have noticeably smaller MSE than their non-accelerated counterparts. Moreover, the seismic profiles of a data slice after the Gradient and Fast Gradient denoising are presented in Figure 10 to help visualize their denoising performance.
5 Conclusion
In this paper we have considered the signal denoising and recovery problems for signals corresponding to low rank Hankel matrices. New algorithms have been proposed which can complete these tasks more efficiently and effectively. For future work, we would like to study the theoretical convergence of the proposed algorithms under certain random models. It may also be interesting to see whether it is possible to design better adaptive reweighting schemes for the gradient methods.
Acknowledgments
HW and KW would like to thank Jianjun Gao for sending us the 5D data for the seismic denoising and recovery tests.
References
[1] R. Abma and N. Kabir, 3D interpolation of irregular data with a POCS algorithm, Geophysics, 71 (2006), pp. E91–E97.
[2] T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, Sparse sampling of signal innovations, IEEE Signal Processing Magazine, 25 (2008), pp. 31–40.
 [3] D. S. Broomhead and G. P. King, Extracting qualitative dynamics from experimental data, Physica D: Nonlinear Phenomena, 20 (1986), pp. 217–236.
 [4] J. A. Cadzow, Signal enhancement – a composite property mapping algorithm, IEEE Transactions on Acoustics, Speech, and Signal Processing, 36 (1988), pp. 49–62.
 [5] J.-F. Cai, T. Wang, and K. Wei, Fast and provable algorithms for spectrally sparse signal reconstruction via low-rank Hankel matrix completion, Applied and Computational Harmonic Analysis, 46 (2019), pp. 94–121.
 [6] Y. Chen and Y. Chi, Robust spectral compressed sensing via structured matrix completion, IEEE Transactions on Information Theory, 60 (2014), pp. 6576–6601.
 [7] D. L. Donoho, De-noising by soft-thresholding, IEEE Transactions on Information Theory, 41 (1995), pp. 613–627.
 [8] D. L. Donoho and I. M. Johnstone, Minimax estimation via wavelet shrinkage, The Annals of Statistics, 26 (1998), pp. 879–921.
 [9] J. Gao, M. D. Sacchi, and X. Chen, A fast reduced-rank interpolation method for prestack seismic volumes that depend on four spatial dimensions, Geophysics, 78 (2013), pp. V21–V30.
 [10] J. Gillard, Cadzow’s basic algorithm, alternating projections and singular spectrum analysis, Statistics and its Interface, 3 (2010), pp. 335–343.
 [11] N. Golyandina, I. Florinsky, and K. Usevich, Filtering of digital terrain models by 2D singular spectrum analysis, International Journal of Ecology & Development, 8 (2007), pp. 81–94.
 [12] N. Golyandina, V. Nekrutkin, and A. A. Zhigljavsky, Analysis of time series structure: SSA and related techniques, Chapman and Hall/CRC, 2001.
 [13] N. Golyandina and D. Stepanov, SSA-based approaches to analysis and forecast of multidimensional time series, in Proceedings of the 5th St. Petersburg Workshop on Simulation, vol. 2, 2005, pp. 293–298.
 [14] N. Halko, P. G. Martinsson, and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Review, 53 (2011), pp. 217–288.
 [15] H. Hassani and D. Thomakos, A review on singular spectrum analysis for economic and financial time series, Statistics and its Interface, 3 (2010), pp. 377–397.
 [16] H. Hassani and A. Zhigljavsky, Singular spectrum analysis: methodology and application to economics data, Journal of Systems Science and Complexity, 22 (2009), pp. 372–394.
 [17] Y. Hua, Estimating two-dimensional frequencies by matrix enhancement and matrix pencil, IEEE Transactions on Signal Processing, 40 (1992), pp. 2267–2280.
 [18] R. Larsen, PROPACK: software for large and sparse SVD calculations, version 2.1. http://sun.stanford.edu/~rmunk/PROPACK/, Apr. 2005.
 [19] V. Oropeza, The singular spectrum analysis method and its application to seismic data denoising and reconstruction. M.Sc. thesis, University of Alberta, 2010.
 [20] V. Oropeza and M. Sacchi, Simultaneous seismic data denoising and reconstruction via multichannel singular spectrum analysis, Geophysics, 76 (2011), pp. V25–V32.
 [21] V. E. Oropeza and M. D. Sacchi, Multifrequency singular spectrum analysis, in SEG Technical Program Expanded Abstracts 2009, Society of Exploration Geophysicists, 2009, pp. 3193–3197.
 [22] M. D. Sacchi, FX singular spectrum analysis, in CSPG CSEG CWLS Convention, 2009, pp. 392–395.
 [23] S. Trickett, F-xy Cadzow noise suppression, in SEG Technical Program Expanded Abstracts 2008, Society of Exploration Geophysicists, 2008, pp. 2586–2590.
 [24] S. Trickett and L. Burroughs, Prestack rank-reduction-based noise suppression, CSEG Recorder, 34 (2009), pp. 24–31.
 [25] B. Vandereycken, Lowrank matrix completion by Riemannian optimization, SIAM Journal on Optimization, 23 (2013), pp. 1214–1236.
 [26] K. Wei, J.-F. Cai, T. F. Chan, and S. Leung, Guarantees of Riemannian optimization for low rank matrix recovery, SIAM Journal on Matrix Analysis and Applications, 37 (2016), pp. 1198–1222.
 [27] H. H. Yang and Y. Hua, On rank of block Hankel matrix for 2D frequency detection and estimation, IEEE Transactions on Signal Processing, 44 (1996), pp. 1046–1048.