# Asymptotics of Approximate Least Squares Estimators of the Parameters of a Two-Dimensional Chirp Signal

In this paper, we address the problem of parameter estimation of a 2-D chirp model under the assumption that the errors are stationary. We extend to the 2-D chirp model the 2-D periodogram method for the sinusoidal model, which is used to find initial values for any iterative procedure that computes the least squares estimators (LSEs) of the unknown parameters. Next we propose an estimator, known as the approximate least squares estimator (ALSE), that is obtained by maximising a periodogram-type function and is shown to be asymptotically equivalent to the LSE. Moreover, the asymptotic properties of these estimators are obtained under slightly milder conditions than those required for the LSEs. For the multiple component 2-D chirp model, we propose a sequential method of estimation of the ALSEs that significantly reduces the computational difficulty involved in computing the LSEs and the ALSEs. We perform some simulation studies to assess how the proposed method works, and a simulated data set is analysed for illustrative purposes.


## 1 Introduction

A two dimensional chirp signal model is expressed mathematically as follows:

$$ y(m,n) = A^0\cos(\alpha^0 m+\beta^0 m^2+\gamma^0 n+\delta^0 n^2) + B^0\sin(\alpha^0 m+\beta^0 m^2+\gamma^0 n+\delta^0 n^2) + X(m,n);\qquad m=1,\dots,M;\ n=1,\dots,N. \tag{1} $$

Here the $y(m,n)$s are the signal observations, $A^0$, $B^0$ are real-valued, non-zero amplitudes, and $\alpha^0$, $\gamma^0$ and $\beta^0$, $\delta^0$ are the frequencies and the frequency rates, respectively. The random variables $\{X(m,n)\}$ form a sequence of stationary errors. The explicit assumptions on the error structure are provided in section 2.
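For concreteness, model (1) can be simulated directly. In the sketch below the parameter values are arbitrary illustrative choices, and i.i.d. Gaussian noise stands in for the general stationary error structure.

```python
import numpy as np

def chirp_2d(M, N, A, B, alpha, beta, gamma, delta, sigma=0.0, seed=None):
    """Simulate y(m, n) from model (1): a 2-D chirp plus noise."""
    rng = np.random.default_rng(seed)
    m = np.arange(1, M + 1)[:, None]      # m = 1, ..., M down the rows
    n = np.arange(1, N + 1)[None, :]      # n = 1, ..., N across the columns
    phase = alpha * m + beta * m**2 + gamma * n + delta * n**2
    # i.i.d. Gaussian errors: a special case of the stationary errors allowed
    X = sigma * rng.standard_normal((M, N))
    return A * np.cos(phase) + B * np.sin(phase) + X

y = chirp_2d(50, 50, A=2.0, B=3.0, alpha=1.5, beta=0.5,
             gamma=2.5, delta=0.75, sigma=0.5, seed=0)
```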

The above model has been considered in many areas of image processing, particularly in modeling gray images. Several estimation techniques for the unknown parameters of this model have been considered by different authors, for instance, Friedlander and Francos [7], Francos and Friedlander [5], [6], Lahiri [10], [11] and the references cited therein.

Our goal is to estimate the unknown parameters of the above model, primarily the non-linear parameters: the frequencies $\alpha^0$, $\gamma^0$ and the frequency rates $\beta^0$, $\delta^0$, under certain suitable assumptions. One of the most straightforward and efficient ways to do so is the least squares estimation method. However, since the least squares surface is highly non-linear, iterative methods must be employed to compute the LSEs, and for these methods to work we need good starting points for the unknown parameters.

One of the fundamental models in the statistical signal processing literature, among the 2-D models, is the 2-D sinusoidal model. This model has applications in many fields such as biomedical spectral analysis, geophysical exploration, etc. For references, see Barbieri and Barone [2], Cabrera and Bose [3], Hua [8], Zhang and Mandrekar [17], Prasad et al. [13], Nandi et al. [12] and Kundu and Nandi [9].

A 2-D sinusoidal model has the following mathematical expression:

$$ y(m,n) = A^0\cos(m\lambda^0+n\mu^0) + B^0\sin(m\lambda^0+n\mu^0) + X(m,n);\qquad m=1,\dots,M;\ n=1,\dots,N. $$

For this model as well, the least squares surface is highly non-linear and thus we need good initial values, for any iterative procedure to work. One of the most prevalent methods to find the initial guesses for the 2-D sinusoidal model are the periodogram estimators. These are obtained by maximizing a 2-D periodogram function, which is defined as follows:

$$ I(\lambda,\mu) = \frac{2}{MN}\left|\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\,e^{-i(m\lambda+n\mu)}\right|^2. $$

This periodogram function is maximized over the 2-D Fourier frequencies, that is, at $\lambda = \frac{2\pi j}{M}$, $\mu = \frac{2\pi k}{N}$, for $j = 0, \dots, M-1$ and $k = 0, \dots, N-1$. The estimators that are obtained by maximising the above periodogram function with respect to $\lambda$ and $\mu$ simultaneously over the continuous space are known as the approximate least squares estimators (ALSEs). Kundu and Nandi [9] proved that the ALSEs are consistent and asymptotically equivalent to the least squares estimators (LSEs).
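As a quick sketch of this step, the periodogram over the whole Fourier grid can be computed in one pass with a 2-D FFT. The sinusoid below sits at exact Fourier frequencies (an illustrative choice), so its peak is recovered exactly.

```python
import numpy as np

M, N = 64, 64
j0, k0 = 10, 7
lam0, mu0 = 2*np.pi*j0/M, 2*np.pi*k0/N      # true frequencies on the Fourier grid
m = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]
y = 2.0*np.cos(m*lam0 + n*mu0) + 3.0*np.sin(m*lam0 + n*mu0)

# I(lambda, mu) on the full Fourier grid via the 2-D FFT
I = (2.0/(M*N)) * np.abs(np.fft.fft2(y))**2

# keep frequencies in (0, pi) x (0, pi); the other quadrants are mirror images
jhat, khat = np.unravel_index(np.argmax(I[:M//2, :N//2]), (M//2, N//2))
lam_hat, mu_hat = 2*np.pi*jhat/M, 2*np.pi*khat/N
```

At the peak, $I \approx MN(A^2+B^2)/2$, which is the value the ALSE theory associates with the true frequency pair.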

Analogously, we define a periodogram-type function for the 2-D chirp model defined in equation (1), as follows:

$$ I(\alpha,\beta,\gamma,\delta) = \frac{2}{MN}\left|\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\,e^{-i(\alpha m+\beta m^2+\gamma n+\delta n^2)}\right|^2. \tag{2} $$

To find the initial values, we propose to maximise the above function at grid points corresponding to the Fourier frequencies of the 2-D sinusoidal model. These starting values can then be used in any iterative procedure to compute the LSEs and ALSEs.
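A brute-force version of this grid search might look as follows. For brevity the sketch sweeps only $(\alpha, \beta)$ on a coarse grid with $(\gamma, \delta)$ held at their true values, and the grid spacing is an arbitrary illustrative choice rather than the Fourier-type mesh described above.

```python
import numpy as np

M, N = 30, 30
m = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]
a0, b0, g0, d0 = 1.5, 0.5, 2.5, 0.75          # illustrative true values
y = 2.0*np.cos(a0*m + b0*m**2 + g0*n + d0*n**2) \
  + 3.0*np.sin(a0*m + b0*m**2 + g0*n + d0*n**2)

def I(alpha, beta, gamma, delta):
    """The periodogram-type function (2) at a single grid point."""
    z = np.sum(y * np.exp(-1j*(alpha*m + beta*m**2 + gamma*n + delta*n**2)))
    return (2.0/(M*N)) * np.abs(z)**2

# coarse grid; the argmax serves as the initial value for an iterative routine
grid = np.arange(0.1, np.pi, 0.1)
scores = [(I(a, b, g0, d0), a, b) for a in grid for b in grid]
_, a_init, b_init = max(scores)
```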

Next we propose to estimate the unknown parameters of model (1) by the approximate least squares estimation method. In this method, we maximize the periodogram-like function defined above with respect to $\alpha$, $\beta$, $\gamma$ and $\delta$ simultaneously over the parameter space. The details of the methodology to obtain the ALSEs are explained in section 3. We prove that these estimators are strongly consistent and asymptotically normally distributed under assumptions that are slightly milder than those required for the LSEs. Also, the convergence rates of the ALSEs are the same as those of the LSEs.

The rest of the paper is organized as follows. In the next section we state the model assumptions, some notations and some preliminary results required. In section 3, we give a brief description of the methodology. In section 4, we study the asymptotic properties of one component 2-D chirp model and in section 5, we propose a sequential method to obtain the LSEs and ALSEs for the multicomponent 2-D chirp model and study their asymptotic properties. Numerical experiments and a simulated data analysis are illustrated in sections 6 and 7. In section 8, we conclude the paper. All the proofs are provided in the appendices.

## 2 Model Assumptions, Notations and Preliminary Results

Assumption 1. The error $X(m,n)$ is stationary with the following form:

$$ X(m,n) = \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} a(j,k)\,\epsilon(m-j,\,n-k), $$

where $\{\epsilon(j,k)\}$ is a double array sequence of independently and identically distributed (i.i.d.) random variables with mean zero, variance $\sigma^2$ and finite fourth moment, and the $a(j,k)$s are real constants such that

$$ \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} |a(j,k)| < \infty. $$

We will use the following notation: $\theta = (A, B, \alpha, \beta, \gamma, \delta)$, the parameter vector; $\theta^0 = (A^0, B^0, \alpha^0, \beta^0, \gamma^0, \delta^0)$, the true parameter vector; $\Theta$, the parameter space. Also, $\vartheta = (\alpha, \beta, \gamma, \delta)$, the vector of the non-linear parameters.

Assumption 2. The true parameter vector $\theta^0$ is an interior point of $\Theta$.

Note that the assumptions required to prove strong consistency of the LSEs of the unknown parameters in this case are slightly different from those required to prove the consistency of the ALSEs. For the LSEs, the parameter space for the linear parameters has to be bounded, whereas here we do not require that bound. For details on the assumptions for the consistency of the LSEs, see Lahiri [10].

We need the following results to proceed further:

###### Lemma 1.

If $\omega \in (0,\pi)$, then, except for a countable number of points, and for $s, t = 0, 1, \dots$, the following are true:

###### Proof.

Refer to Lahiri [10].

###### Lemma 2.

If $(\omega, \psi) \in (0,\pi)\times(0,\pi)$, then, except for a countable number of points, the following holds true:

$$ \lim_{n\to\infty}\frac{1}{n^k\sqrt{n}}\sum_{t=1}^{n} t^k\cos(\omega t+\psi t^2) = \lim_{n\to\infty}\frac{1}{n^k\sqrt{n}}\sum_{t=1}^{n} t^k\sin(\omega t+\psi t^2) = 0;\qquad k = 0, 1, 2, \dots $$
###### Proof.

Refer to Lahiri [10].

###### Lemma 3.

If $(\alpha, \beta)$ and $(\gamma, \delta)$ lie in $(0,\pi)\times(0,\pi)$, then, except for a countable number of points, and for $s, t = 0, 1, \dots$, the following are true:

###### Proof.

See Appendix D.

## 3 Method to obtain ALSEs

Consider the periodogram-like function defined in (2). In matrix notation, it can be written as:

$$ I(\vartheta) = \frac{2}{MN}\,Y^T W(\vartheta)W(\vartheta)^T Y. $$

Here, $Y = \big(y(1,1), \dots, y(M,1), y(1,2), \dots, y(M,N)\big)^T$ is the observed data vector, and

$$ W(\vartheta)_{MN\times 2} = \begin{bmatrix}
\cos(\alpha+\beta+\gamma+\delta) & \sin(\alpha+\beta+\gamma+\delta)\\
\cos(2\alpha+4\beta+\gamma+\delta) & \sin(2\alpha+4\beta+\gamma+\delta)\\
\vdots & \vdots\\
\cos(M\alpha+M^2\beta+\gamma+\delta) & \sin(M\alpha+M^2\beta+\gamma+\delta)\\
\vdots & \vdots\\
\cos(\alpha+\beta+N\gamma+N^2\delta) & \sin(\alpha+\beta+N\gamma+N^2\delta)\\
\cos(2\alpha+4\beta+N\gamma+N^2\delta) & \sin(2\alpha+4\beta+N\gamma+N^2\delta)\\
\vdots & \vdots\\
\cos(M\alpha+M^2\beta+N\gamma+N^2\delta) & \sin(M\alpha+M^2\beta+N\gamma+N^2\delta)
\end{bmatrix}. $$

In matrix notation, equation (1), can be written as:

$$ Y = W(\vartheta)\phi + X, $$

where $X$ is the error vector, arranged in the same way as $Y$, and $\phi = (A, B)^T$. The estimators obtained by maximising the function $I(\vartheta)$ are known as the approximate least squares estimators (ALSEs). We will show that the estimators obtained by maximising $I(\vartheta)$ are asymptotically equivalent to the estimators obtained by minimising the error sum of squares function, that is, the LSEs; hence the former are termed the ALSEs. To do so, we require the following lemma:

###### Lemma 4.

For $\vartheta \in (0,\pi)^4$, except for a countable number of points, we have the following result:

$$ \frac{1}{MN}\,W(\vartheta)^T W(\vartheta) \to \begin{bmatrix} 1/2 & 0\\ 0 & 1/2 \end{bmatrix}. $$
###### Proof.

Consider the following:

$$ \frac{1}{MN}\,W(\vartheta)^T W(\vartheta) = \frac{1}{MN}\begin{bmatrix} \Omega_{11} & \Omega_{12}\\ \Omega_{21} & \Omega_{22} \end{bmatrix}, $$

where

$$ \Omega_{11} = \sum_{n=1}^{N}\sum_{m=1}^{M}\cos^2(\alpha m+\beta m^2+\gamma n+\delta n^2),\qquad \Omega_{22} = \sum_{n=1}^{N}\sum_{m=1}^{M}\sin^2(\alpha m+\beta m^2+\gamma n+\delta n^2), $$
$$ \Omega_{12} = \Omega_{21} = \sum_{n=1}^{N}\sum_{m=1}^{M}\cos(\alpha m+\beta m^2+\gamma n+\delta n^2)\sin(\alpha m+\beta m^2+\gamma n+\delta n^2). $$

Now, using Lemma 3, it can easily be seen that the matrix on the right-hand side of the above equation tends to $\begin{bmatrix} 1/2 & 0\\ 0 & 1/2 \end{bmatrix}$, except for a countable number of points, and hence the result.
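Lemma 4 is easy to check numerically: at a generic point $\vartheta$ (chosen arbitrarily below), the normalised Gram matrix is already close to $\mathrm{diag}(1/2,\,1/2)$ for moderate $M$ and $N$.

```python
import numpy as np

M, N = 200, 200
mm, nn = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1))
# an arbitrary generic point theta = (alpha, beta, gamma, delta)
phase = (1.3*mm + 0.4*mm**2 + 2.1*nn + 0.6*nn**2).ravel()
W = np.column_stack([np.cos(phase), np.sin(phase)])

G = (W.T @ W) / (M*N)    # approaches diag(1/2, 1/2) as M, N grow
```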

We know that to find the LSEs, we minimise the following error sum of squares:

$$ Q(\theta) = (Y - W(\vartheta)\phi)^T (Y - W(\vartheta)\phi) \tag{3} $$

with respect to $\theta$. If we fix $\vartheta$, then the estimates of the linear parameters can be obtained by the separable regression technique of Richards [15], by minimizing $Q(\theta)$ with respect to $A$ and $B$. Thus the estimate of $\phi$ is given by:

$$ \hat{\phi}(\vartheta) = \begin{bmatrix} \hat{A}(\vartheta)\\ \hat{B}(\vartheta) \end{bmatrix} = \big(W(\vartheta)^T W(\vartheta)\big)^{-1} W(\vartheta)^T Y. \tag{4} $$

Substituting $\hat{A}(\vartheta)$ and $\hat{B}(\vartheta)$ in (3), we have:

$$ Q(\hat{A}(\vartheta), \hat{B}(\vartheta), \vartheta) = Y^T\Big(I - W(\vartheta)\big(W(\vartheta)^T W(\vartheta)\big)^{-1} W(\vartheta)^T\Big)Y. $$

Using Lemma 4, we have the following relationship between the function $Q(\hat{A}(\vartheta), \hat{B}(\vartheta), \vartheta)$ and the periodogram-like function $I(\vartheta)$:

$$ \frac{1}{MN}\,Q(\hat{A}(\vartheta), \hat{B}(\vartheta), \vartheta) = \frac{1}{MN}\,Y^T Y - I(\vartheta) + o(1). $$

Here, a function $f(M,N)$ is $o(1)$ if $f(M,N) \to 0$ as $\min\{M, N\} \to \infty$. Thus, the $\vartheta$ that minimises $Q(\hat{A}(\vartheta), \hat{B}(\vartheta), \vartheta)$ is asymptotically equivalent to the $\vartheta$ that maximises $I(\vartheta)$.
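The mechanics behind this equivalence can be checked numerically: since $\frac{1}{MN}W(\vartheta)^TW(\vartheta) \approx \mathrm{diag}(1/2, 1/2)$, the profiled linear estimates in (4) essentially reduce to periodogram-type averages. A minimal sketch with arbitrary parameter values, evaluated at the true $\vartheta$:

```python
import numpy as np

M, N = 100, 100
mm, nn = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1))
ph = (1.5*mm + 0.5*mm**2 + 2.5*nn + 0.75*nn**2).ravel()
W = np.column_stack([np.cos(ph), np.sin(ph)])          # W(theta) at the truth
rng = np.random.default_rng(0)
Y = W @ np.array([2.0, 3.0]) + 0.5*rng.standard_normal(M*N)

# (4): profile out the linear parameters by separable regression
phi_hat = np.linalg.solve(W.T @ W, W.T @ Y)
Q = Y @ Y - Y @ W @ phi_hat                             # Y'(I - P_W)Y

# with W'W/MN ~ diag(1/2, 1/2), phi_hat is approximately the
# periodogram-type averages (2/MN) (Y'cos, Y'sin)
phi_alse = (2.0/(M*N)) * np.array([np.cos(ph) @ Y, np.sin(ph) @ Y])
```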

## 4 Asymptotic Properties of ALSEs

In this section, we study the asymptotic properties of the proposed estimators, the ALSEs of model (1). The following theorem states the result on the consistency property of the ALSEs.

###### Theorem 1.

If the assumptions 1 and 2 are satisfied, then $\tilde{\theta}$, the ALSE of $\theta^0$, is a strongly consistent estimator of $\theta^0$; that is, $\tilde{\theta} \to \theta^0$ almost surely as $\min\{M, N\} \to \infty$.

###### Proof.

See Appendix A.

In the following theorem, we state the result obtained on the asymptotic distribution of the proposed estimators.

###### Theorem 2.

If the assumptions 1 and 2 are true, then the asymptotic distribution of is same as that of as , where = is the ALSE of and = is the LSE of and is a 6 6 diagonal matrix defined as:
=

See Appendix B.

## 5 Multiple Component 2-D Chirp Model

In this section, we consider a 2-D chirp model with multiple components, mathematically expressed in the following form:

$$ y(m,n) = \sum_{k=1}^{p}\Big(A_k^0\cos(\alpha_k^0 m+\beta_k^0 m^2+\gamma_k^0 n+\delta_k^0 n^2) + B_k^0\sin(\alpha_k^0 m+\beta_k^0 m^2+\gamma_k^0 n+\delta_k^0 n^2)\Big) + X(m,n);\qquad m=1,\dots,M;\ n=1,\dots,N. \tag{5} $$

Here the $y(m,n)$s are the observed data, the $A_k^0$s, $B_k^0$s are the amplitudes, the $\alpha_k^0$s, $\gamma_k^0$s are the frequencies and the $\beta_k^0$s, $\delta_k^0$s are the frequency rates. The sequence $\{X(m,n)\}$ is a stationary error sequence. In practice, the number of components $p$ is unknown, and its estimation is an important and still open problem. For recent references on this model, see Zhang et al. [18] and Lahiri [10].

Here it is assumed that $p$ is known, and our main purpose is to estimate the unknown parameters of this model, primarily the non-linear ones. Finding the ALSEs for the above model is computationally challenging, especially when the number of components $p$ is large: even when $p = 2$, we need to solve a 12-D optimisation problem to obtain the ALSEs. Thus, we propose a sequential procedure to find these estimates. This method reduces the complexity of computation without compromising the efficiency of the estimators. We prove that the ALSEs obtained by the proposed sequential procedure are strongly consistent and have the same rates of convergence as the LSEs.

In the following subsection, we provide the algorithm to obtain the sequential ALSEs of the unknown parameters of the $p$-component 2-D chirp signal model. Let us denote by $\vartheta_k = (\alpha_k, \beta_k, \gamma_k, \delta_k)$ the vector of non-linear parameters of the $k$-th component.

### 5.1 Algorithm to find the ALSEs:

Step 1: Maximize the periodogram-like function

$$ I_1(\vartheta) = \frac{1}{MN}\left(\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\cos(\alpha m+\beta m^2+\gamma n+\delta n^2)\right)^2 + \frac{1}{MN}\left(\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\sin(\alpha m+\beta m^2+\gamma n+\delta n^2)\right)^2. \tag{6} $$

We first obtain the non-linear parameter estimates $\tilde{\vartheta}_1 = (\tilde{\alpha}_1, \tilde{\beta}_1, \tilde{\gamma}_1, \tilde{\delta}_1)$. Then the linear parameter estimates can be obtained by substituting $\tilde{\vartheta}_1$ in (4). Thus

$$ \tilde{A}_1 = \frac{2}{MN}\sum_{n=1}^{N}\sum_{m=1}^{M} y(m,n)\cos(\tilde{\alpha}_1 m+\tilde{\beta}_1 m^2+\tilde{\gamma}_1 n+\tilde{\delta}_1 n^2),\qquad \tilde{B}_1 = \frac{2}{MN}\sum_{n=1}^{N}\sum_{m=1}^{M} y(m,n)\sin(\tilde{\alpha}_1 m+\tilde{\beta}_1 m^2+\tilde{\gamma}_1 n+\tilde{\delta}_1 n^2). \tag{7} $$

Step 2: Now we have the estimates of the parameters of the first component of the observed signal. We subtract the contribution of the first component from the original signal vector, to eliminate its effect, and obtain a new data vector, say $Y_1$:

$$ Y_1 = Y - W(\tilde{\vartheta}_1)\begin{bmatrix} \tilde{A}_1\\ \tilde{B}_1 \end{bmatrix}. $$

Step 3: Now we compute $\tilde{\vartheta}_2$ by maximizing $I_2(\vartheta)$, which is obtained by replacing the original data vector $Y$ by the new data vector $Y_1$ in (6); the linear parameters $\tilde{A}_2$ and $\tilde{B}_2$ can then be obtained by substituting $\tilde{\vartheta}_2$ in (4).

Step 4: Continue the process up to $p$ steps.
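The sequential procedure can be sketched as follows. In this illustration the true non-linear values stand in for the maximisers of $I_1$ and $I_2$ (which would in practice come from a grid search plus iterative refinement), so only the linear estimation and the subtraction of Steps 1-3 are exercised; all numbers are illustrative.

```python
import numpy as np

def design(theta, M, N):
    """MN x 2 design matrix W(theta) for a single chirp component."""
    mm, nn = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1))
    ph = (theta[0]*mm + theta[1]*mm**2 + theta[2]*nn + theta[3]*nn**2).ravel()
    return np.column_stack([np.cos(ph), np.sin(ph)])

def sequential_step(Y, theta_k, M, N):
    """Given the non-linear estimate for the current component, estimate
    (A_k, B_k) by (4) and subtract the fitted component (Step 2)."""
    Wk = design(theta_k, M, N)
    phi_k = np.linalg.solve(Wk.T @ Wk, Wk.T @ Y)
    return phi_k, Y - Wk @ phi_k

M, N = 100, 100
th1, th2 = (2.1, 0.1, 1.25, 0.25), (1.5, 0.5, 1.75, 0.75)
Y = design(th1, M, N) @ [5.0, 4.0] + design(th2, M, N) @ [3.0, 2.0]

phi1, Y1 = sequential_step(Y, th1, M, N)     # Steps 1-2
phi2, Y2 = sequential_step(Y1, th2, M, N)    # Step 3 on the residual
```

The leakage between components enters the amplitude estimates only through cross terms of order $(MN)^{-1/2}$, which is why the sequential estimates keep the same asymptotic behaviour as the joint ones.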

### 5.2 Asymptotic Properties

Further assumptions, required to study the consistency and derive the asymptotic distribution of the proposed estimators, are stated as follows:

Assumption 3. $\theta_k^0$ is an interior point of $\Theta$ for all $k = 1, \dots, p$, and the frequencies $\alpha_k^0$, $\gamma_k^0$ and the frequency rates $\beta_k^0$, $\delta_k^0$ are such that $(\alpha_j^0, \beta_j^0, \gamma_j^0, \delta_j^0) \neq (\alpha_k^0, \beta_k^0, \gamma_k^0, \delta_k^0)$ for $j \neq k$.

Assumption 4. The $A_k^0$s and $B_k^0$s satisfy the following relationship:

$$ \infty > {A_1^0}^2 + {B_1^0}^2 > {A_2^0}^2 + {B_2^0}^2 > \cdots > {A_p^0}^2 + {B_p^0}^2 > 0. $$

In the following theorems, we state the results we obtained on the consistency of the proposed estimators.

###### Theorem 3.

Under the assumptions 1, 3 and 4, $\tilde{A}_1$, $\tilde{B}_1$ and $\tilde{\vartheta}_1$ are strongly consistent estimators of $A_1^0$, $B_1^0$ and $\vartheta_1^0$, respectively; that is, $\tilde{\theta}_1 \to \theta_1^0$ almost surely as $\min\{M, N\} \to \infty$.

###### Proof.

See Appendix C.

###### Theorem 4.

If the assumptions 1, 3 and 4 are satisfied and $p \geq 2$, then $\tilde{\theta}_2 \to \theta_2^0$ almost surely as $\min\{M, N\} \to \infty$.

###### Proof.

See Appendix C.

The result obtained in the above theorem can be extended up to the $p$-th step. Thus, for any $k \leq p$, the ALSEs obtained at the $k$-th step are strongly consistent.

###### Theorem 5.

If the assumptions 1, 3 and 4 are satisfied, and if $\tilde{A}_k$, $\tilde{B}_k$, $\tilde{\alpha}_k$, $\tilde{\beta}_k$, $\tilde{\gamma}_k$ and $\tilde{\delta}_k$ are the estimators obtained at the $k$-th step with $k > p$, then $\tilde{A}_k \to 0$ and $\tilde{B}_k \to 0$ almost surely as $\min\{M, N\} \to \infty$.

###### Proof.

See Appendix C.

Next we derive the asymptotic distribution of the proposed estimators. In the following theorem, we state the results on the distribution of the sequential ALSEs.

###### Theorem 6.

If the assumptions 1, 3 and 4 are satisfied, then

$$ (\tilde{\theta}_1 - \theta_1^0)D^{-1} \xrightarrow{d} \mathcal{N}_6\big(0,\ \sigma^2 c\,\Sigma_1^{-1}\big), $$

where $D$ is the diagonal matrix defined in Theorem 2 and $c = \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} a(j,k)^2$.

###### Proof.

See Appendix D. ∎

The analogous result holds for all $k \leq p$ and is stated in the following theorem.

###### Theorem 7.

If the assumptions 1, 3 and 4 are satisfied, then

$$ (\tilde{\theta}_k - \theta_k^0)D^{-1} \xrightarrow{d} \mathcal{N}_6\big(0,\ \sigma^2 c\,\Sigma_k^{-1}\big), $$

where $\Sigma_k$ can be obtained by replacing $A_1^0$ by $A_k^0$ and $B_1^0$ by $B_k^0$ in $\Sigma_1$ defined above.

###### Proof.

This proof can be obtained by proceeding exactly in the same manner as in the proof of Theorem 6.

## 6 Simulation Studies

### 6.1 Simulation results for the one component model

We perform numerical simulations on model (1) with the following parameters:

$$ A^0 = 2,\quad B^0 = 3,\quad \alpha^0 = 1.5,\quad \beta^0 = 0.5,\quad \gamma^0 = 2.5\quad \text{and}\quad \delta^0 = 0.75. $$

The following error structures are used to generate the data:

$$ 1.\quad X(m,n) = \epsilon(m,n). \tag{8} $$
$$ 2.\quad X(m,n) = \epsilon(m,n) + 0.5\,\epsilon(m,n-1) + 0.4\,\epsilon(m-1,n) + 0.3\,\epsilon(m-1,n-1). \tag{9} $$

Here the $\epsilon(m,n)$s are i.i.d. random variables with mean zero and variance $\sigma^2$. For the simulations we consider different values of $M$ and $N$ and different values of $\sigma$, as can be seen in the tables. We estimate the parameters by both the least squares and the approximate least squares estimation methods. These estimates are obtained 1000 times each, and the averages, biases and MSEs are reported. We also compute the asymptotic variances, to compare with the corresponding MSEs. From the tables above, it is observed that as the error variance increases, the MSEs also increase for both the LSEs and the ALSEs. As the sample size increases, the estimates become closer to the corresponding true values; that is, the biases become small. Also, the MSEs decrease as the sample sizes $M$ and $N$ increase, and the order of the MSEs of both estimators is almost equal to the order of the asymptotic variances; hence one may conclude that they are well matched. The MSEs of the ALSEs approach those of the LSEs as $M$ and $N$ increase, and hence the theoretical asymptotic variances of the LSEs, showing that the two estimators are asymptotically equivalent.
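A miniature version of this Monte Carlo exercise, with the error structure (9), 200 replications instead of 1000, and the amplitude ALSE in (7) evaluated at the true non-linear values so that no 4-D optimisation is needed:

```python
import numpy as np

def one_replicate(M, N, sigma, rng):
    """Model (1) with the true values above and the MA error structure (9)."""
    mm, nn = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1))
    ph = 1.5*mm + 0.5*mm**2 + 2.5*nn + 0.75*nn**2
    e = sigma * rng.standard_normal((N + 1, M + 1))   # one extra lag per axis
    # X(m,n) = e(m,n) + 0.5 e(m,n-1) + 0.4 e(m-1,n) + 0.3 e(m-1,n-1)
    X = e[1:, 1:] + 0.5*e[:-1, 1:] + 0.4*e[1:, :-1] + 0.3*e[:-1, :-1]
    return 2.0*np.cos(ph) + 3.0*np.sin(ph) + X, ph

reps, M, N, sigma = 200, 50, 50, 0.5
rng = np.random.default_rng(1)
A_tilde = np.empty(reps)
for r in range(reps):
    y, ph = one_replicate(M, N, sigma, rng)
    # amplitude ALSE (7), with the non-linear ALSEs replaced by the
    # true values to keep the sketch fast
    A_tilde[r] = (2.0/(M*N)) * np.sum(y * np.cos(ph))

bias, mse = A_tilde.mean() - 2.0, ((A_tilde - 2.0)**2).mean()
```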

### 6.2 Simulation results for the multiple component model with p=2

Next we conduct numerical simulations on model (5) with and the following parameters:

$$ A_1^0 = 5,\quad B_1^0 = 4,\quad \alpha_1^0 = 2.1,\quad \beta_1^0 = 0.1,\quad \gamma_1^0 = 1.25\quad \text{and}\quad \delta_1^0 = 0.25; $$
$$ A_2^0 = 3,\quad B_2^0 = 2,\quad \alpha_2^0 = 1.5,\quad \beta_2^0 = 0.5,\quad \gamma_2^0 = 1.75\quad \text{and}\quad \delta_2^0 = 0.75. $$

The error structures used to generate the data are the same as those used for the one-component model; see equations (8) and (9). For the simulations we consider different values of $M$ and $N$ and different values of $\sigma$, again the same as for the one-component model. We estimate the parameters by both the least squares and the approximate least squares estimation methods. These estimates are obtained 1000 times each, and the averages, biases, MSEs and asymptotic variances are computed. The results are reported in the following tables. From the tables, it can be seen that the estimates, both the ALSEs and the LSEs, are quite close to their true values. It is observed that the estimates of the second component are better than those of the first, in the sense that their biases and MSEs are smaller and their MSEs match the corresponding asymptotic variances more closely. For both estimators, as the sample size increases, the MSEs and the biases of the estimates of both components decrease, demonstrating consistency.