I Introduction
Signal reconstruction from sampled measurements is a fundamental theme of signal processing. Under some circumstances, the acquired measurements are incomplete due to costly experiments, hardware limitations, or other inevitable reasons. For example, for the purpose of fast data acquisition, nonuniform sampling is used to obtain partial entries of the time-domain signal in nuclear magnetic resonance (NMR) spectroscopy [1, 2, 3], which has been widely used in chemistry and biology. Recovering the full signal is essential for the subsequent data analyses in these applications.
In this paper, the signal of interest can be modeled or approximated by a superposition of a limited number R of N-dimensional (N-D) exponentials, i.e.,

x(t_1, t_2, \ldots, t_N) = \sum_{r=1}^{R} c_r \prod_{n=1}^{N} e^{z_{n,r} t_n},   (1)

where x denotes a scalar function of N variables; e^{z_{n,r} t_n} is an exponential function for any n (1 \le n \le N) and r (1 \le r \le R), and c_r is the complex amplitude of the associated r-th component. This formulation implies that each component N-D exponential in (1) is an exponential function with respect to every variable. In particular, when z_{n,r} = i 2\pi f_{n,r}, x denotes a spectrally sparse signal [4, 5], i.e., a superposition of R N-D undamped complex sinusoids. When z_{n,r} = -\tau_{n,r} + i 2\pi f_{n,r} with the damping factor \tau_{n,r} > 0, x denotes a sum of R N-D damped complex sinusoids. These two types of signals arise in various applications such as multiple-input multiple-output (MIMO) radars [6], harmonic analysis [7, 8, 9], analog-to-digital conversion [10], magnetic resonance imaging [11, 12], and NMR spectroscopy [1, 13]. In this paper, we study the problem of reconstructing N-D exponential signals from a small number of measurements.
Recently, recovering a spectrally sparse signal has become of great interest in the signal processing community [4, 5, 9, 14, 15, 16, 17]. Among emerging approaches, compressed sensing [18] suggests reconstructing a signal from its partial observations if it enjoys a sparse representation in some transform domain and the observation operator satisfies certain incoherence conditions. A spectrally sparse signal is sparse in the discrete Fourier transform domain if the frequencies align well with the discrete frequencies and the number of exponentials is small. In this case, signals can be recovered from very few measurements by enforcing sparsity in the discrete Fourier domain [18]. However, true frequencies in practical applications generally take values on a continuous domain, and the resultant basis mismatch between the true frequencies and the discretized grid [19] leads to a loss of sparsity and hence worsens the performance of compressed sensing. To address this problem, total variation or atomic norm [20] minimization methods were proposed to deal with signal recovery with continuous-valued frequencies [4, 5, 9, 14]. However, these methods [4, 5, 9, 14] work only for undamped complex sinusoids but not for generic exponential signals, such as damped complex sinusoids. Furthermore, being computationally expensive, they search for the solution in a space whose dimension is the square of the dimension of the underlying signal, which is intractable for large-scale problems. More recently, inspired by the matrix pencil method [21] and matrix completion [22, 23], low-rank Hankel matrix (LRHM) reconstruction [1, 24, 25, 26, 27, 28, 29] has been proposed to recover generic exponential signals. LRHM was proposed for missing data recovery from nonuniform sampling in realistic protein NMR spectroscopy [1], showing that broad peaks can be recovered much more reliably than by minimizing a sparsity-promoting norm on the spectrum [2, 3, 30, 31]. The minimal number of samples for the LRHM method has been theoretically predicted when the measurements are taken with random Gaussian sampling [24]. However, LRHM is limited to recovering only 1-D signals. The N-D exponential signal can possibly be recovered by enhanced matrix completion (EMaC) [32] with a theoretical guarantee of stable recovery. Nonetheless, EMaC invokes minimization problems with a huge unknown matrix, prohibiting its applicability to N-D (N ≥ 3) exponential signals.
Note that the structured matrix based methods [14, 32] become prohibitive even for 3rd-order tensor completion, because they solve the tensor completion problem by lifting to a search space whose dimension is on the order of the square of the dimension of the tensor. For example, for a 3-D signal, EMaC [32] needs to solve an optimization problem that involves a matrix whose size is on the order of the square of the tensor size. Therefore, how to recover N-D (particularly N ≥ 3) exponential signals remains challenging.
It can be easily checked from (1) that the signal of interest, viewed as an N-D tensor in the discrete domain, enjoys a low CANDECOMP/PARAFAC (CP) rank and a low n-rank if R is sufficiently small. Therefore, the exponential signal recovery can be formulated as a low-rank tensor completion. Low-rank tensor completion has been successfully applied to a large class of real-world problems, such as data analyses in computer vision [33, 34], remote sensing [35], and electroencephalography [36]. Since the rank of a tensor can be defined in different ways, there exist several types of low-rank tensor completion approaches, e.g., low-CP-rank tensor completion [34, 36, 37, 38] and low-n-rank tensor completion [33, 35, 39, 40]. Although both low-CP-rank and low-n-rank tensor completions can be applied to recover N-D (N ≥ 3) signals, they ignore the specific exponential structure, e.g., of the time-domain signal in NMR spectroscopy, probably requiring an unnecessarily large number of observed entries for a stable recovery.
In this paper, we propose an approach to reconstruct the N-D exponential signal satisfying (1) from a small number of samples. The proposed approach simultaneously exploits the low-CP-rank structure and the exponential structure of the associated factor vectors (see the definition in Section III). To enforce the former structure, we represent the signal in the CP decomposition form and impose a least squares fitting to the sampled data. To promote the latter structure, we penalize the nuclear norm of the Hankel matrices formed by the factor vectors. We verify, with comprehensive numerical experiments on simulated and real-world data, the effectiveness of the new method by comparison with state-of-the-art tensor completion methods.
The rest of this paper is organized as follows. In Section II, we introduce the notation and related background. Section III converts the N-D exponential signal reconstruction into a low-rank tensor completion problem and reviews existing low-rank tensor completion methods. In Section IV, we propose our Hankel matrix nuclear norm regularized tensor completion approach, including the algorithm, convergence and complexity analyses. Section V presents the experimental results on simulated and real-world data. Section VI discusses higher-dimensional experiments, comparisons with other state-of-the-art methods and some parameters. Section VII concludes this work and discusses future work.
II Notations and Backgrounds
Notations and backgrounds of tensors are given below.
II-A Notations
Notations and nomenclature of tensors are introduced following the review paper [41], and a summary of these notations is shown in Table I.
A tensor is the generalization of a matrix to higher dimensions. For a tensor, the number of dimensions is called the order, also known as way or mode. Throughout the paper, the i-th entry of a vector a is denoted by a_i, the (i, j) element of a matrix A is denoted by a_{ij}, and the (i_1, i_2, \ldots, i_N) element of an N-th order tensor X is denoted by x_{i_1 i_2 \cdots i_N}.
Symbols | Notations
a, a, A, X | scalar, vector, matrix, tensor
A = [a_1, a_2, \ldots, a_R] | matrix with column vectors a_r
X_{(n)} | mode-n matricization of tensor X
(\cdot)^T, (\cdot)^H | transpose, Hermitian transpose
x_{i_1 \cdots i_{n-1} \, : \, i_{n+1} \cdots i_N} | mode-n fiber of tensor X
X_{::k} | matrix slice of tensor X

Fibers and Slices
Fibers and Slices
Fibers are the higher-order analogue of matrix rows and columns. A fiber is a collection of entries of the tensor obtained by fixing all but one index. The mode-n fibers are the vectors obtained by fixing all indices except the n-th one. Slices are two-dimensional sections of a tensor, defined by fixing all but two indices.
Kronecker product
The Kronecker product of matrices A \in \mathbb{C}^{I \times J} and B \in \mathbb{C}^{K \times L}, denoted by A \otimes B, is defined by

A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1J}B \\ a_{21}B & a_{22}B & \cdots & a_{2J}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{I1}B & a_{I2}B & \cdots & a_{IJ}B \end{bmatrix} \in \mathbb{C}^{IK \times JL}.   (2)
Khatri-Rao product
The Khatri-Rao product is the column-wise Kronecker product. Given two matrices A = [a_1, \ldots, a_R] \in \mathbb{C}^{I \times R} and B = [b_1, \ldots, b_R] \in \mathbb{C}^{J \times R}, their Khatri-Rao product satisfies

A \odot B = [a_1 \otimes b_1, \; a_2 \otimes b_2, \; \ldots, \; a_R \otimes b_R] \in \mathbb{C}^{IJ \times R},   (3)

where \otimes denotes the Kronecker product.
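As a quick illustration (not part of the original paper), the two products can be checked numerically with NumPy. NumPy provides the Kronecker product directly; `khatri_rao` below is a small hypothetical helper built on top of it:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    assert A.shape[1] == B.shape[1], "A and B must have the same number of columns"
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])

K = np.kron(A, B)       # Kronecker product, shape (4, 4)
KR = khatri_rao(A, B)   # Khatri-Rao product, shape (4, 2)

# Each column of the Khatri-Rao product is the Kronecker product of the
# corresponding columns of A and B, matching definition (3).
assert np.array_equal(KR[:, 0], np.kron(A[:, 0], B[:, 0]))
assert np.array_equal(KR[:, 1], np.kron(A[:, 1], B[:, 1]))
```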
Frobenius norm
The Frobenius norm of a tensor X is the square root of the sum of the squares of the absolute values of its elements, i.e.,

\|X\|_F = \sqrt{\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} |x_{i_1 i_2 \cdots i_N}|^2}.   (4)
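Numerically, (4) coincides with the ordinary vector 2-norm of the flattened tensor, as this short sketch (not from the paper) confirms:

```python
import numpy as np

# Frobenius norm of a tensor: square root of the sum of squared magnitudes
# of all entries, equivalently the 2-norm of the flattened (vectorized) tensor.
X = (np.arange(24) + 1j * np.arange(24)).reshape(2, 3, 4)
fro = np.sqrt(np.sum(np.abs(X) ** 2))

assert np.isclose(fro, np.linalg.norm(X.ravel()))
```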
II-B CP decomposition and tensor CP-rank

Factor and rank-1 tensor
An N-th order tensor X is rank-1 if it can be written as the outer product of N vectors as follows:

X = a^{(1)} \circ a^{(2)} \circ \cdots \circ a^{(N)},

where the symbol \circ denotes the vector outer product and the vector a^{(n)}, for all 1 \le n \le N, is called a factor. This means each element of the tensor is the product of the corresponding vector elements:

x_{i_1 i_2 \cdots i_N} = a^{(1)}_{i_1} a^{(2)}_{i_2} \cdots a^{(N)}_{i_N}

for all 1 \le i_n \le I_n and 1 \le n \le N.
CP decomposition and tensor CP-rank

The CP decomposition [42] factorizes a tensor into a linear combination of rank-1 tensors. A tensor X \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N} is represented in CP decomposition as

X = \sum_{r=1}^{R} a_r^{(1)} \circ a_r^{(2)} \circ \cdots \circ a_r^{(N)},   (5)

where R is a positive integer and a_r^{(n)} \in \mathbb{C}^{I_n}. The tensor CP-rank is defined as the smallest number of rank-1 tensors that compose X in (5).
Following [43], we combine the factors in the CP decomposition (5) to form factor matrices

A^{(n)} = [a_1^{(n)}, a_2^{(n)}, \ldots, a_R^{(n)}] \in \mathbb{C}^{I_n \times R}

for n = 1, \ldots, N, and the CP decomposition (5) is concisely expressed as

X = [\![\lambda; A^{(1)}, A^{(2)}, \ldots, A^{(N)}]\!],

where \lambda \in \mathbb{C}^R holds the weights of the rank-1 terms and [\![\cdot]\!] is called the Tucker operator [43]. Note that we can always rescale the factor matrices so that all the entries in \lambda are 1's, and for simplicity we will drop \lambda, i.e., X = [\![A^{(1)}, A^{(2)}, \ldots, A^{(N)}]\!].
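The assembly of a tensor from its CP factor matrices (with all weights rescaled to 1) can be sketched as follows; `cp_to_tensor` is an illustrative helper, not code from the paper:

```python
import numpy as np

def cp_to_tensor(factors):
    """Assemble a tensor from CP factor matrices A^(1), ..., A^(N) (each I_n x R),
    assuming all CP weights have been rescaled to 1."""
    R = factors[0].shape[1]
    shape = tuple(A.shape[0] for A in factors)
    X = np.zeros(shape, dtype=np.result_type(*factors))
    for r in range(R):
        # Outer product a_r^(1) o a_r^(2) o ... o a_r^(N) of the r-th columns.
        comp = factors[0][:, r]
        for A in factors[1:]:
            comp = np.multiply.outer(comp, A[:, r])
        X += comp
    return X

rng = np.random.default_rng(0)
factors = [rng.standard_normal((I, 3)) for I in (4, 5, 6)]
X = cp_to_tensor(factors)

# A sum of 3 rank-1 tensors has mode-1 unfolding of rank at most 3.
assert X.shape == (4, 5, 6)
assert np.linalg.matrix_rank(X.reshape(4, -1)) <= 3
```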
II-C Tensor matricization and tensor n-rank
There are other definitions of tensor rank. The n-rank is defined via tensor matricization (also known as unfolding), which reorders the elements of a tensor into a matrix. The mode-n matricization of a tensor X \in \mathbb{C}^{I_1 \times \cdots \times I_N} is denoted by X_{(n)}, and the tensor element (i_1, \ldots, i_N) is mapped to the matrix element (i_n, j), where

j = 1 + \sum_{k=1, k \neq n}^{N} (i_k - 1) J_k, \quad J_k = \prod_{m=1, m \neq n}^{k-1} I_m.   (6)

Therefore, X_{(n)} \in \mathbb{C}^{I_n \times M_n} with M_n = \prod_{k \neq n} I_k. With the help of the Khatri-Rao product, the mode-n matricization of the CP decomposition (5) can be expressed as

X_{(n)} = A^{(n)} \left( A^{(N)} \odot \cdots \odot A^{(n+1)} \odot A^{(n-1)} \odot \cdots \odot A^{(1)} \right)^T.

The n-rank of an N-D tensor X is the tuple of the ranks of the mode-n unfoldings:

n\text{-rank}(X) = \left( \mathrm{rank}(X_{(1)}), \mathrm{rank}(X_{(2)}), \ldots, \mathrm{rank}(X_{(N)}) \right).
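The Khatri-Rao identity for the mode-n unfolding can be verified numerically. The sketch below (illustrative only; `unfold` and `khatri_rao` are hypothetical helpers following the index convention of (6), in which earlier indices vary fastest) checks it for a random third-order CP tensor:

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization X_(n): element (i_1,...,i_N) maps to (i_n, j) with the
    remaining indices i_k (k != n) ordered so that earlier ones vary fastest."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

rng = np.random.default_rng(1)
A1, A2, A3 = (rng.standard_normal((I, 2)) for I in (3, 4, 5))
X = np.einsum('ir,jr,kr->ijk', A1, A2, A3)   # CP tensor with R = 2

# Verify X_(n) = A^(n) (A^(N) (.) ... (.) A^(n+1) (.) A^(n-1) (.) ... (.) A^(1))^T
for n, (An, others) in enumerate([(A1, (A3, A2)), (A2, (A3, A1)), (A3, (A2, A1))]):
    KR = khatri_rao(*others)   # Khatri-Rao product of the remaining factors
    assert np.allclose(unfold(X, n), An @ KR.T)
```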
III N-D exponential signal reconstruction as a tensor completion
Without loss of generality, the frequencies in (1) can be normalized with respect to the Nyquist frequency, and hence the measurements are sampled at integer values. Therefore, by sampling the signal (1) on a uniform grid, we obtain an N-th order tensor X \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}, and each entry can be expressed as

x_{i_1 i_2 \cdots i_N} = \sum_{r=1}^{R} c_r \, y_{1,r}^{\,i_1 - 1} \, y_{2,r}^{\,i_2 - 1} \cdots y_{N,r}^{\,i_N - 1},   (7)

for 1 \le i_n \le I_n and n = 1, \ldots, N, with y_{n,r} = e^{z_{n,r}}. Fig. 2 shows a graphical illustration of (7) when N = 3, implying that each component, i.e., each rank-1 tensor in the N-D exponential signal (7), can be written as the outer product of N vectors that are of exponential structure. Here we assume R \ll \min_n I_n.
In this paper, we aim to recover X from a small subset of its entries,

\{ x_{i_1 i_2 \cdots i_N} : (i_1, i_2, \ldots, i_N) \in \Omega \},   (8)

where \Omega denotes the set of indices of the observed entries. One can check that X is low-rank in the sense of both CP-rank and n-rank if R is sufficiently small. Therefore, the problem of reconstructing the signal can be recast as a low-rank tensor completion. Depending on how the tensor rank is defined, we have a couple of possible N-D exponential signal reconstruction approaches using available low-rank tensor completion methods.
III-A Low-CP-rank tensor completion
The signal X enjoys a low CP-rank. Actually, (7) implies the following CP decomposition

X = \sum_{r=1}^{R} c_r \, y_r^{(1)} \circ y_r^{(2)} \circ \cdots \circ y_r^{(N)},   (9)

where the factor matrices are

Y^{(n)} = [y_1^{(n)}, y_2^{(n)}, \ldots, y_R^{(n)}],   (10)

with

y_r^{(n)} = \left[ 1, \; y_{n,r}, \; y_{n,r}^2, \; \ldots, \; y_{n,r}^{I_n - 1} \right]^T.   (11)

These equations imply that X has a CP-rank of at most R. Once R is relatively small, X is low-CP-rank.
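The exponential factor vectors in (11) are columns of a Vandermonde matrix, so the whole tensor can be built and its low-rank structure checked in a few lines. The sketch below uses illustrative pole and amplitude values, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
R, dims = 2, (8, 9, 10)

# Poles y_{n,r} = exp(-tau + i*2*pi*f): damped complex exponentials
# (the damping value 0.05 is purely illustrative).
poles = [np.exp(-0.05 + 2j * np.pi * rng.uniform(0, 1, R)) for _ in dims]
c = rng.standard_normal(R) + 1j * rng.standard_normal(R)

# Factor matrices Y^(n): column r is [1, y, y^2, ..., y^{I_n - 1}],
# i.e., each Y^(n) is a Vandermonde matrix as in (10)-(11).
Y = [np.vander(p, N=I, increasing=True).T for p, I in zip(poles, dims)]

# Assemble the tensor (7): x_{i1 i2 i3} = sum_r c_r y1^i1 y2^i2 y3^i3.
X = np.einsum('r,ir,jr,kr->ijk', c, *Y)

# Low-rank structure: every mode-n unfolding has rank at most R.
for n in range(3):
    Xn = np.moveaxis(X, n, 0).reshape(dims[n], -1)
    assert np.linalg.matrix_rank(Xn) <= R
```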
Therefore, reconstructing X from its partial entries can be formulated as a low-CP-rank tensor completion problem. There exist several methods in the literature to solve this problem, and they can generally be categorized into nonconvex and convex methods. The weighted CP (WCP) decomposition method [36, 37] is a typical nonconvex method, and it recovers tensors by solving the following nonconvex optimization:

\min_{A^{(1)}, \ldots, A^{(N)}} \left\| \mathcal{W} \ast \left( X - [\![A^{(1)}, A^{(2)}, \ldots, A^{(N)}]\!] \right) \right\|_F^2,   (12)

where A^{(n)} \in \mathbb{C}^{I_n \times \hat{R}} are the factor matrices, in which \hat{R} is an estimated rank, and \mathcal{W} is the binary weight tensor whose entries are 1 on the sampling set \Omega and 0 elsewhere (\ast denotes the element-wise product). Regarding convex methods, the following optimization was proposed [38]:

\min_{\tilde{X}} \|\tilde{X}\|_{*} \quad \text{s.t.} \quad \tilde{x}_{i_1 \cdots i_N} = x_{i_1 \cdots i_N}, \; (i_1, \ldots, i_N) \in \Omega,   (13)

where \|\tilde{X}\|_{*} is the tensor nuclear norm defined by

\|\tilde{X}\|_{*} = \min \left\{ \sum_{r} |\lambda_r| : \tilde{X} = \sum_{r} \lambda_r \, u_r^{(1)} \circ \cdots \circ u_r^{(N)}, \; \|u_r^{(n)}\|_2 = 1 \right\}.

It was shown that the solution of (13) will be X if the number of known entries exceeds a certain amount [38]. Unfortunately, (13) is NP-hard and computationally intractable. Recently, some computational methods [44, 45] have been proposed to approximate or exactly attain the tensor nuclear norm.
III-B Low-n-rank tensor completion
The signal X enjoys a low n-rank as long as R is small compared with the tensor dimensions, since the rank of the mode-n matricization X_{(n)} is at most R if X obeys (7). Thus, recovering X from its partial entries can also be viewed as a low-n-rank tensor completion problem. One finds a tensor that has a small n-rank [33, 35]:

\min_{\tilde{X}} \sum_{n=1}^{N} \alpha_n \, \mathrm{rank}(\tilde{X}_{(n)}) \quad \text{s.t.} \quad \tilde{x}_{i_1 \cdots i_N} = x_{i_1 \cdots i_N}, \; (i_1, \ldots, i_N) \in \Omega,   (14)

where \alpha_n \ge 0 is the weight and \tilde{X}_{(n)} denotes the mode-n matricization.

One may approximate the nonconvex objective in (14) by a convex one. The only nonconvexity in (14) is the rank function. It is well known [22, 46] that the best convex approximation of the rank function is the nuclear norm, i.e., the sum of singular values. Also, to handle data with noise, the linear constraint in (14) may be replaced by a least squares fitting term. Altogether, one gets a convex optimization for low-n-rank tensor completion as follows:

\min_{\tilde{X}} \sum_{n=1}^{N} \alpha_n \|\tilde{X}_{(n)}\|_{*} + \frac{\lambda}{2} \sum_{(i_1, \ldots, i_N) \in \Omega} \left| \tilde{x}_{i_1 \cdots i_N} - x_{i_1 \cdots i_N} \right|^2,   (15)

where \|\cdot\|_{*} denotes the nuclear norm of a matrix and \lambda is the regularization parameter. With uniform weights \alpha_n, (15) is the convex model proposed in [35]. Low-n-rank tensor completion has been successfully applied in computer vision [33] and remote sensing data analyses [35].
IV The proposed method
Though the generic low-rank tensor completion methods discussed in Section III are applicable to our N-D exponential signal reconstruction, they ignore the specific exponential structure of the factor vectors. From (9), it is observed that each factor matrix defined in (10) is a Vandermonde matrix and each factor vector in (11) is an exponential function. As a consequence, these methods need an unnecessarily large number of measurements for a stable reconstruction of X. Take a 3-D signal as an example: both the sample complexity of (13) for low-CP-rank tensor completion discussed in [38] and the sample complexity suggested by matrix completion theory for low-n-rank tensor completion (15) grow polynomially with the tensor dimensions, whereas there are only O(R) degrees of freedom in the signal model (7), namely the complex amplitudes, frequencies and damping factors of the R exponentials. Therefore, it is expected that, if we exploit the exponential structure of the factor vectors, we can design an N-D exponential signal reconstruction method that requires far fewer measurements than generic low-rank tensor completion methods.
In the following, we propose an approach that utilizes the exponential structure of the factor vectors, in addition to the lowCPrank structure of the signal.
IV-A The proposed model
We will use the low-CP-rank structure of the tensor X, although the low-n-rank property is applicable as well. To promote the exponential structure of the factor vectors, we enforce the Hankel matrix of each factor vector to be low-rank via the nuclear norm. We propose the following reconstruction model:

\min_{\{a_r^{(n)}\}} \sum_{n=1}^{N} \sum_{r=1}^{R} \left\| \mathcal{H} a_r^{(n)} \right\|_{*} + \frac{\lambda}{2} \left\| \mathcal{P}_\Omega \left( \sum_{r=1}^{R} a_r^{(1)} \circ \cdots \circ a_r^{(N)} \right) - \mathcal{P}_\Omega (X) \right\|_F^2,   (16)

where \lambda is a regularization parameter that trades off the nuclear norm against the data consistency, \mathcal{P}_\Omega sets the entries outside the sampling set \Omega to zero, and \mathcal{H} is a linear operator mapping a vector a \in \mathbb{C}^{I_n} to a Hankel matrix \mathcal{H} a \in \mathbb{C}^{K_n \times L_n}, for some integers K_n and L_n satisfying K_n + L_n - 1 = I_n, as follows:

(\mathcal{H} a)_{k, l} = a_{k + l - 1}.

The K_n is chosen to make the Hankel matrix square or approximately square to minimize the reconstruction error [32]. We call the proposed model (16) Hankel Matrix nuclear norm Regularized low-CP-rank Tensor Completion (HMRTC).
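A minimal sketch of the Hankel operator and the nuclear norm penalty (illustrative code, not from the paper; `hankel_op` is a hypothetical helper that defaults to a near-square shape):

```python
import numpy as np

def hankel_op(x, K=None):
    """Map a length-I vector to the K x (I - K + 1) Hankel matrix H with
    H[k, l] = x[k + l]. By default K makes the matrix (approximately) square."""
    I = len(x)
    if K is None:
        K = (I + 1) // 2
    idx = np.arange(K)[:, None] + np.arange(I - K + 1)[None, :]
    return x[idx]

# A single damped exponential vector yields a rank-1 Hankel matrix;
# a sum of R exponentials would yield rank at most R.
y = np.exp((-0.1 + 2j * np.pi * 0.3) * np.arange(16))
H = hankel_op(y)
assert H.shape == (8, 9)
assert np.linalg.matrix_rank(H) == 1

# The nuclear norm (sum of singular values) is the penalty used in (16).
nuc = np.linalg.norm(H, ord='nuc')
assert np.isclose(nuc, np.linalg.svd(H, compute_uv=False).sum())
```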
The proposed HMRTC simultaneously exploits the low-CP-rank structure and the exponential structure of the associated factor vectors, which makes it superior to existing methods. 1) HMRTC utilizes the exponential structure of the factor vectors; as demonstrated in Section V, this significantly reduces the number of necessary samples compared with generic low-rank tensor completion methods that do not consider the structure of the factor matrices. 2) HMRTC represents the unknown tensor in a CP decomposition form, significantly reducing the size of the variables in numerical algorithms compared with atomic norm minimization (e.g., [14]) and other low-rank structured matrix methods (e.g., EMaC [32]). In particular, as we will see in Section IV-B, the numerical algorithm of HMRTC invokes only matrices whose sizes scale with the individual tensor dimensions I_n, whereas the standard alternating direction method of multipliers (ADMM) algorithm for solving atomic norm minimization and EMaC needs huge matrices whose sizes scale with the product of the tensor dimensions. Consequently, our proposed HMRTC can easily reconstruct N-D exponential signals of sizes that are prohibitive for EMaC or atomic norm minimization.
We conclude this subsection with a toy example to demonstrate the great potential of the proposed HMRTC. It is well known that neither low-n-rank tensor completion nor WCP can recover missing slices [22, 36], since no information is available in the missing part. On the contrary, HMRTC is able to estimate these missing data because the exponential structure of each single factor is exploited. Fig. 3 shows that HMRTC can perform well even when half of the slices are missing. Thus, HMRTC will be useful for applications requiring recovery of truncated data at the end of signals to improve the frequency resolution or increase the signal-to-noise ratio [1, 47].
IV-B Numerical algorithm
By combining the unknown factors into factor matrices A^{(n)} \in \mathbb{C}^{I_n \times R} for n = 1, \ldots, N, Eq. (16) is rewritten concisely as

\min_{\{A^{(n)}\}} \sum_{n=1}^{N} \sum_{r=1}^{R} \left\| \mathcal{H} A^{(n)} e_r \right\|_{*} + \frac{\lambda}{2} \left\| \mathcal{P}_\Omega \left( [\![A^{(1)}, \ldots, A^{(N)}]\!] \right) - \mathcal{P}_\Omega (X) \right\|_F^2,   (17)

where e_r extracts the r-th column from A^{(n)} for r = 1, \ldots, R.
Recently, it has been shown in [34] and [48] that the ADMM is very efficient for some convex and nonconvex problems in various applications. To solve (17), we also propose an algorithm based on the ADMM. Auxiliary variables Z_r^{(n)}, for n = 1, \ldots, N and r = 1, \ldots, R, are introduced, and then (17) is reformulated into the following equivalent form:

\min_{\{A^{(n)}\}, \{Z_r^{(n)}\}} \sum_{n=1}^{N} \sum_{r=1}^{R} \left\| Z_r^{(n)} \right\|_{*} + \frac{\lambda}{2} \left\| \mathcal{P}_\Omega \left( [\![A^{(1)}, \ldots, A^{(N)}]\!] \right) - \mathcal{P}_\Omega (X) \right\|_F^2 \quad \text{s.t.} \quad Z_r^{(n)} = \mathcal{H} A^{(n)} e_r.   (18)

For ease of presentation, we define \mathcal{A} = \{A^{(1)}, \ldots, A^{(N)}\} and \mathcal{Z} = \{Z_r^{(n)}\}.

The augmented Lagrangian function of (18) is

L(\mathcal{A}, \mathcal{Z}, \mathcal{D}) = \sum_{n=1}^{N} \sum_{r=1}^{R} \left( \left\| Z_r^{(n)} \right\|_{*} + \left\langle D_r^{(n)}, \mathcal{H} A^{(n)} e_r - Z_r^{(n)} \right\rangle + \frac{\beta}{2} \left\| \mathcal{H} A^{(n)} e_r - Z_r^{(n)} \right\|_F^2 \right) + \frac{\lambda}{2} \left\| \mathcal{P}_\Omega \left( [\![A^{(1)}, \ldots, A^{(N)}]\!] \right) - \mathcal{P}_\Omega (X) \right\|_F^2,   (19)

where D_r^{(n)} is the matrix of Lagrange multipliers for n = 1, \ldots, N and r = 1, \ldots, R, \mathcal{D} = \{D_r^{(n)}\}, \langle \cdot, \cdot \rangle denotes the (real part of the) matrix inner product, and \beta > 0 is the penalty parameter.
The ADMM is an iterative algorithm. Given \mathcal{A}^k, \mathcal{Z}^k and \mathcal{D}^k at step k, it updates \mathcal{A}^{k+1}, \mathcal{Z}^{k+1} and \mathcal{D}^{k+1} as follows.

The variable \mathcal{A} is updated by solving the following optimization:

\mathcal{A}^{k+1} = \arg\min_{\mathcal{A}} L(\mathcal{A}, \mathcal{Z}^k, \mathcal{D}^k).   (20)

Due to the multilinearity of the CP decomposition, it is not easy to solve (20) exactly. Here, we employ an alternating minimization procedure to solve (20) approximately. Fixing all factor matrices but A^{(n)}, we solve (20) with respect to A^{(n)}, which is the following convex optimization:

A^{(n), k+1} = \arg\min_{A^{(n)}} \sum_{r=1}^{R} \left( \left\langle D_r^{(n), k}, \mathcal{H} A^{(n)} e_r - Z_r^{(n), k} \right\rangle + \frac{\beta}{2} \left\| \mathcal{H} A^{(n)} e_r - Z_r^{(n), k} \right\|_F^2 \right) + \frac{\lambda}{2} \left\| \mathcal{P}_{\Omega_{(n)}} \left( A^{(n)} (B^{(n)})^T - X_{(n)} \right) \right\|_F^2,   (21)

where B^{(n)} is the Khatri-Rao product of the remaining (most recently updated) factor matrices; Z_r^{(n), k} and D_r^{(n), k} are the k-th updates of Z_r^{(n)} and D_r^{(n)}; and X_{(n)} and \Omega_{(n)} are the mode-n matricizations of the tensor X and the sampling set \Omega, respectively. It is obvious that (21) is a least squares problem, and therefore its solution is the solution of a linear system. In particular, if we define a = \mathrm{vec}(A^{(n)}), then (21) is rewritten as

\min_{a} \left\| F a - g \right\|_2^2,   (22)

for an appropriate matrix F and vector g determined by (21), whose solution satisfies the normal equations

F^H F a = F^H g.   (23)
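Since the exact closed forms are deferred to Appendix A, the sketch below only illustrates the generic pattern of such an update: a least squares data term plus a quadratic coupling term, solved through its normal equations. All matrices here are illustrative stand-ins, not the paper's actual operators:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((20, 5))    # stand-in for the data-fitting operator
B = rng.standard_normal((20, 3))    # stand-in for the observed data
beta = 0.5                          # quadratic-penalty weight
C = rng.standard_normal((5, 3))     # stand-in for the ADMM coupling target

# Normal equations for min_A ||M A - B||_F^2 + beta * ||A - C||_F^2:
#   (M^T M + beta I) A = M^T B + beta C
lhs = M.T @ M + beta * np.eye(5)
rhs = M.T @ B + beta * C
A = np.linalg.solve(lhs, rhs)

# Verify stationarity: the gradient of the objective vanishes at the solution.
grad = 2 * (M.T @ (M @ A - B)) + 2 * beta * (A - C)
assert np.allclose(grad, 0)
```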
The closed-form solution of (23) is derived in Appendix A.
Each auxiliary variable Z_r^{(n)} is updated by the singular value thresholding operator in (26), and the Lagrange multipliers are updated by

D_r^{(n), k+1} = D_r^{(n), k} + \beta \left( \mathcal{H} A^{(n), k+1} e_r - Z_r^{(n), k+1} \right).   (27)
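The nuclear-norm subproblem for each auxiliary variable admits a closed form via singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm).
    Shrinks every singular value of Z toward zero by tau, clipping at zero."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vh

rng = np.random.default_rng(4)
Z = rng.standard_normal((6, 6))
W = svt(Z, 0.5)

# The singular values of the output are exactly the shrunk singular values.
s_in = np.linalg.svd(Z, compute_uv=False)
s_out = np.linalg.svd(W, compute_uv=False)
assert np.allclose(np.sort(s_out), np.sort(np.maximum(s_in - 0.5, 0)))
```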
IV-C Convergence analysis

We provide the convergence of Algorithm 1, stated in Theorem 1 and Theorem 2. From the theorems, we know that the sequence generated by Algorithm 1 converges. Furthermore, if we impose some conditions on the Lagrange multipliers, then the limit is a critical point of (17).
Theorem 1. The sequence generated by Algorithm 1 is a Cauchy sequence.
Theorem 2. If the Lagrange multipliers converge as k \to \infty, for all n and r, then the limit of the sequence generated by Algorithm 1 satisfies the KKT conditions for (17).
Proofs of Theorem 1 and Theorem 2 are given in Appendix C.
IV-D Complexity analysis

The computational complexity of HMRTC is analyzed here. Besides the typical tensor operations such as the Tucker operator, the running time of Algorithm 1 is dominated by the singular value decomposition (SVD) for the singular value thresholding operator in (26). Consider recovering a tensor with I_1 = I_2 = \cdots = I_N = I. The SVD of each \mathcal{H} A^{(n)} e_r, which is of a small size of about I/2 \times I/2, can be done in O(I^3) operations. Since we have NR SVDs to compute at each iteration, the total computational complexity for SVDs in each iteration is O(NRI^3), which is only sublinear in the tensor size I^N when N > 3. Furthermore, the SVDs in a single iteration can be computed in parallel, since each Z_r^{(n)}, n = 1, \ldots, N, r = 1, \ldots, R, is updated independently. Therefore, HMRTC has the potential to be applied to higher-dimensional exponential signal recovery, as shown in Section VI-A. In addition, from the spatial complexity point of view, the storage of HMRTC, which scales with NRI, can be significantly smaller than that of the original tensor, which scales with I^N, when the estimated tensor rank is small.

V Experimental Results
In this section, we evaluate the proposed HMRTC on simulated exponential signals, including undamped and damped complex sinusoids, and on real NMR spectroscopy data. Two state-of-the-art algorithms for low-rank tensor completion, the alternating direction method-based tensor recovery (ADMTR) [35] and WCP [36], are compared with HMRTC.

The parameters of HMRTC are listed in Algorithm 1. For ADMTR and WCP, the same maximum number of iterations is used, and the parameters of ADMTR follow [35]. The alternating least squares method is applied to solve WCP, and the algorithm is terminated when the relative change of the objective between consecutive iterations falls below a tolerance, where f^{(k)} denotes the objective function value in (12) after iteration k.
V-A Experiment setup for simulated data
The proposed algorithm is tested on 3-D simulated signals as follows. The frequency f_{n,r} is drawn uniformly at random from [0, 1) for all n = 1, 2, 3 and r = 1, \ldots, R, where R is the number of exponentials. The complex coefficients c_r and the damping factors \tau_{n,r} are also generated at random, with the underlying random variables following uniform distributions. The undamped complex sinusoid is synthesized as

x_{i_1 i_2 i_3} = \sum_{r=1}^{R} c_r \, e^{i 2\pi (f_{1,r} i_1 + f_{2,r} i_2 + f_{3,r} i_3)},   (28)

and the damped complex sinusoid is simulated as

x_{i_1 i_2 i_3} = \sum_{r=1}^{R} c_r \, e^{i 2\pi (f_{1,r} i_1 + f_{2,r} i_2 + f_{3,r} i_3)} \, e^{-\left( i_1 / \tau_{1,r} + i_2 / \tau_{2,r} + i_3 / \tau_{3,r} \right)},   (29)

for 0 \le i_n \le I_n - 1 and n = 1, 2, 3.
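A simulation of this kind can be sketched as follows; the frequency, damping, and amplitude ranges are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(5)
R, dims = 3, (16, 16, 16)
f = rng.uniform(0, 1, (3, R))      # normalized frequencies per mode and exponential
tau = rng.uniform(8, 20, (3, R))   # damping constants (illustrative range)
c = np.exp(1j * 2 * np.pi * rng.uniform(0, 1, R))  # unit-magnitude amplitudes

grids = [np.arange(I) for I in dims]
# Damped complex sinusoid, cf. (29): each mode contributes exp(i*2*pi*f*t - t/tau).
factors = [np.exp((2j * np.pi * f[n] - 1.0 / tau[n]) * g[:, None])
           for n, g in enumerate(grids)]
X = np.einsum('r,ir,jr,kr->ijk', c, *factors)

assert X.shape == dims
# With unit amplitudes and damping <= 1, every entry magnitude is bounded by R.
assert np.all(np.abs(X) <= R + 1e-9)
```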
In each experiment, the simulated ground-truth tensor X is normalized by dividing by the maximum magnitude of its entries. Gaussian white noise with standard deviation \sigma is added to both the real and imaginary parts of X. We denote by \hat{X} the reconstruction of HMRTC, i.e., the output of Algorithm 1, and the relative least normalized error (RLNE) is defined as

\text{RLNE} = \frac{\| \hat{X} - X \|_F}{\| X \|_F}.   (30)

The average RLNE is calculated by averaging the RLNEs over Monte Carlo trials, as conducted in [33, 51]. To facilitate comparison in a color map, the average RLNE is truncated at an upper threshold when it exceeds that threshold. In each trial, the observed entries are sampled uniformly at random, and the sampling ratio (SR) is defined as the proportion of available data to the full data.
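The error metric and the uniformly random sampling scheme can be sketched in a few lines (illustrative code with hypothetical names, not from the paper):

```python
import numpy as np

def rlne(x_hat, x):
    """Relative least normalized error between a reconstruction and the truth."""
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

rng = np.random.default_rng(6)
X = rng.standard_normal((8, 8, 8))

# Uniformly random sampling at a given sampling ratio (SR):
# keep each entry independently with probability sr, zero-fill the rest.
sr = 0.3
mask = rng.random(X.shape) < sr
X_obs = np.where(mask, X, 0)

assert rlne(X, X) == 0.0                  # a perfect reconstruction has zero RLNE
assert 0 < rlne(X_obs, X) <= 1.0 + 1e-12  # zero-filling loses the unobserved energy
```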
The numerical experiments are conducted on a Dell PC running the Windows 7 operating system with an Intel Core i7 2600 CPU. The average computation time and average memory requirements of ADMTR, WCP and HMRTC for recovering the simulated signals are also recorded for reference.
V-B Robustness to the estimated tensor rank
We first evaluate the robustness of WCP and HMRTC to the estimated tensor rank. ADMTR is not compared since it does not require an estimation of the tensor rank.
The comparison is presented in Fig. 4. It is observed that HMRTC always achieves low reconstruction errors (RLNE < 0.1) as the estimated rank increases, while WCP fails to recover the signal if the rank is overestimated too much. In particular, as shown in Fig. 4(b), WCP works only when the estimated rank is close to the ground-truth rank, whereas HMRTC is robust to any estimated rank in the tested range. Hence, HMRTC will be more useful in applications where it is difficult to estimate the true number of exponentials. In the following simulated experiments, a fixed estimated rank is used for WCP and for HMRTC.
V-C Recovery of simulated complex sinusoids
In this subsection, undamped and damped complex sinusoids are simulated to evaluate the reconstruction performance. The regularization parameters in ADMTR and HMRTC are set according to the noise level.
Fig. 6 shows that HMRTC yields an average RLNE that is much smaller than those of ADMTR and WCP for all tested numbers of exponentials and sampling ratios, whether recovering undamped or damped sinusoids. Furthermore, for a fixed number of exponentials, HMRTC needs a much smaller sampling ratio than ADMTR and WCP to achieve an average RLNE within the noise level, implying that HMRTC requires a much smaller number of sampled entries for stable recovery.
Fig. 5 further examines the stability of the proposed algorithm at different noise levels. It is shown that HMRTC produces reconstructions with the lowest RLNE, as well as the smallest variance, among the three methods, implying that HMRTC is the most robust to noise. The spectra of selected fibers of the reconstructions in Fig. 5(c) and Fig. 5(d) indicate that HMRTC leads to the spectra most consistent with the ground truth.

V-D Recovery of real NMR spectroscopy data
NMR spectroscopy has been an indispensable tool in the study of the structure, dynamics, and interactions of biopolymers in chemistry and biology. The duration of an N-D NMR spectroscopy experiment is proportional to the number of measured data points, and nonuniform sampling of the time-domain signal can dramatically reduce the measurement time [1, 52, 53, 54]. Here we apply the proposed HMRTC to recover the full spectrum in fast NMR, since the time-domain signal of NMR is generally modeled as damped complex sinusoids [1].
A 3-D HNCO spectrum is tested; its sample is the uniformly isotope-labeled RNA recognition motifs domain of the protein RNA binding motif protein [55], which is a component of the spliceosome A-complex. ADMTR, WCP and HMRTC are compared in recovering this 3-D spectrum of size 64×128×512 from Poisson-gap [52] nonuniformly sampled time-domain data. All the spectra are processed in NMRPipe [56] using a routine processing manner.
Fig. 7 shows that HMRTC leads to a more faithful recovery (Fig. 7(d)) of the ground truth (Fig. 7(a)) than ADMTR (Fig. 7(b)) and WCP (Fig. 7(c)). Moreover, Fig. 7(d) indicates that HMRTC can achieve a high-quality reconstruction even with a sampling rate of 10% and hence allows a significant reduction in measurement time. Thus, the proposed HMRTC algorithm may serve as a versatile method for studying biological or chemical molecules using N-D NMR spectroscopy.
VI Discussions

In the following, an RLNE below a small threshold is considered a low reconstruction error, since the corresponding relative energy loss is small.
VI-A Higher-dimensional experiments

A higher-dimensional experiment is conducted to explore the capability of HMRTC in higher-dimensional exponential signal recovery. Fig. 8(a) indicates that HMRTC achieves a low reconstruction error even when the number of exponentials R is several times larger than the ambient dimensions. The reconstruction error increases dramatically when R increases further. Therefore, HMRTC has the potential to reconstruct signals with an R considerably larger than the ambient dimensions; however, reconstructing a signal with an even larger R needs further development.
VI-B Comparison with other state-of-the-art methods

Another three state-of-the-art methods, ADMMR [45], TNN [44] and FaLRTC [33], are compared with the proposed HMRTC on undamped complex sinusoid recovery. Since the tensor nuclear norm is computationally intractable, an approximate solution with suboptimality guarantees was proposed recently in [44]. More recently, another computational method that exactly attains the tensor nuclear norm was proposed in a preprint paper [45]. FaLRTC [33] is another typical tensor completion method, which minimizes the n-rank. Fig. 9 shows that HMRTC holds an advantage over ADMMR [45], TNN [44] and FaLRTC [33] in achieving much lower reconstruction errors. Note that the estimated tensor rank is set to exactly the ground-truth number of exponentials in ADMMR [45], which may be unknown in practice.
VI-C Success rate of factor vector recovery

The low-CP-rank completion method may yield factor vectors consisting of a single exponential or of multiple exponentials in real applications [59]. Considering that the proposed model also exploits the low-CP-rank structure, one may wonder about the possibility of the reconstructed factor vectors being single exponentials, as introduced in (1). Here we discuss this possibility based on numerical experiments.
Following [60], the factor vector recovery is declared successful if the normalized error between each recovered factor vector \hat{a}_r^{(n)} and the ground truth a_r^{(n)} is below a small threshold for all n and r. The empirical success rate is calculated by averaging over Monte Carlo trials.
Fig. 10 indicates that HMRTC has a high probability of achieving accurate factor vectors, as long as the sampling ratio is sufficiently large, though this is not theoretically guaranteed.
VI-D Frequency estimation

Although the focus of this work lies in reconstructing a full N-D exponential signal from partial measurements, it is still important to estimate frequencies in some applications. Here, we adopt the method in [58] to estimate the frequencies and the root mean square error (RMSE) to quantify the accuracy of the estimation. The RMSE is defined as

\text{RMSE} = \sqrt{ \frac{1}{R} \sum_{r=1}^{R} \left( \hat{f}_r - f_r \right)^2 },

where f_r denotes the estimated frequency of the fully sampled signal, \hat{f}_r the estimated frequency of the reconstructed signal, and R the number of exponentials. Table II indicates that the proposed HMRTC obtains the best frequency estimation, compared with ADMTR and WCP.
# Peaks | Frequencies (f_1, f_2, f_3) | Frequency errors: ADMTR | WCP | Proposed
1 | (0.12201, 0.45297, 0.31506) | (Failed, 108, 338) | (9, 1, 5) | (3, 2, 2)
2 | (0.23503, 0.54308, 0.38204) | (Failed, 2913, 579) | (1, 5, 20) | (1, 1, 1)
3 | (0.26608, 0.57491, 0.44104) | (Failed, 7207, 3499) | (2, 2, 1) | (2, 0, 1)
4 | (0.31721, 0.64294, 0.47386) | (Failed, 8213, 7507) | (6, 14, 2) | (1, 0, 0)
5 | (0.38207, 0.72565, 0.55160) | (252, 1471, 3559) | (5, 0, 1) | (2, 2, 3)
6 | (0.41720, 0.74228, 0.58575) | (Failed, 7016, 418) | (2, 4, 0) | (2, 1, 0)
7 | (0.44421, 0.81311, 0.62796) | (Failed, 3299, 6558) | (0, 4, 3) | (1, 1, 0)
8 | (0.56599, 0.84617, 0.69391) | (Failed, 5922, 4492) | (3, 0, 3) | (0, 1, 2)
9 | (0.62214, 0.90486, 0.73689) | (235, 3533, 6835) | (6, 1, 3) | (0, 0, 1)
10 | (0.69392, 0.94303, 0.81405) | (51, 105, 388) | (2, 0, 3) | (1, 0, 1)

Note: The signal reconstruction errors (RLNE) of ADMTR, WCP and the proposed method are 0.9321, 0.0373 and 0.0105, respectively. "Failed" means that too many spurious peaks are present and thus the frequency cannot be estimated. The noisy 3-D signal with known frequencies in Fig. 3 is used for the simulation. The number of peaks is 10 and the sampling rate is 6%.
VI-E Regularization parameter

The optimal \lambda in HMRTC, producing the lowest reconstruction error, generally decreases as the noise level increases, as shown in Fig. 11(a). This trend means that a smaller