Asymptotics of Approximate Least Squares Estimators of Parameters of a Two-Dimensional Chirp Signal

07/24/2018, by Rhythm Grover et al.

In this paper, we address the problem of parameter estimation of a 2-D chirp model under the assumption that the errors are stationary. We extend the 2-D periodogram method for the sinusoidal model, used to find initial values for any iterative procedure that computes the least squares estimators (LSEs) of the unknown parameters, to the 2-D chirp model. Next, we propose an estimator, known as the approximate least squares estimator (ALSE), that is obtained by maximising a periodogram-type function and is observed to be asymptotically equivalent to the LSE. Moreover, the asymptotic properties of these estimators are obtained under conditions slightly milder than those required for the LSEs. For the multiple-component 2-D chirp model, we propose a sequential method of estimation of the ALSEs that significantly reduces the computational difficulty involved in computing the LSEs and the ALSEs. We perform simulation studies to assess how the proposed method works, and a data set is analysed for illustrative purposes.


1 Introduction

A two-dimensional chirp signal model is expressed mathematically as follows:

y(m, n) = A⁰ cos(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + B⁰ sin(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + X(m, n),
m = 1, …, M; n = 1, …, N. (1)

Here the y(m, n)s are the signal observations, A⁰, B⁰ are real-valued, non-zero amplitudes, and α⁰, γ⁰ and β⁰, δ⁰ are the frequencies and the frequency rates, respectively. The random variables {X(m, n)} form a sequence of stationary errors. The explicit assumptions on the error structure are provided in section 2.
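To make the model concrete, here is a minimal sketch (not from the paper) that simulates data from model (1). The amplitudes A = 2 and B = 3 are hypothetical choices; the non-linear parameter values match the true values used in the simulation studies of section 6.

```python
import numpy as np

def simulate_chirp(M, N, A=2.0, B=3.0, alpha=1.5, beta=0.5,
                   gamma=2.5, delta=0.75, sigma=0.5, rng=None):
    """Simulate y(m, n), m = 1..M, n = 1..N, from model (1)
    with i.i.d. N(0, sigma^2) errors."""
    rng = np.random.default_rng(rng)
    m = np.arange(1, M + 1)[:, None]   # m = 1, ..., M (rows)
    n = np.arange(1, N + 1)[None, :]   # n = 1, ..., N (columns)
    phase = alpha * m + beta * m**2 + gamma * n + delta * n**2
    return (A * np.cos(phase) + B * np.sin(phase)
            + sigma * rng.standard_normal((M, N)))

y = simulate_chirp(50, 50, rng=1)
```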

The above model has been considered in many areas of image processing, particularly in modeling gray-scale images. Several estimation techniques for the unknown parameters of this model have been considered by different authors; for instance, see Friedlander and Francos [7], Francos and Friedlander [5], [6], Lahiri [10], [11] and the references cited therein.

Our goal is to estimate the unknown parameters of the above model, primarily the non-linear parameters, the frequencies α⁰, γ⁰ and the frequency rates β⁰, δ⁰, under certain suitable assumptions. One of the most straightforward and efficient ways to do so is the least squares estimation method. But since the least squares surface is highly non-linear, iterative methods must be employed to compute the LSEs, and for these methods to work, we need good starting points for the unknown parameters.

One of the fundamental models in the statistical signal processing literature, among the 2-D models, is the 2-D sinusoidal model. This model has applications in many fields, such as biomedical spectral analysis and geophysical exploration. For references, see Barbieri and Barone [2], Cabrera and Bose [3], Hua [8], Zhang and Mandrekar [17], Prasad et al. [13], Nandi et al. [12] and Kundu and Nandi [9].

A 2-D sinusoidal model has the following mathematical expression:

y(m, n) = A⁰ cos(λ⁰m + μ⁰n) + B⁰ sin(λ⁰m + μ⁰n) + X(m, n),  m = 1, …, M; n = 1, …, N.

For this model as well, the least squares surface is highly non-linear, and thus we need good initial values for any iterative procedure to work. One of the most prevalent methods to find the initial guesses for the 2-D sinusoidal model is the periodogram method. The periodogram estimates are obtained by maximizing the 2-D periodogram function, which is defined as follows:

I(λ, μ) = (1/MN) |Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) e^{−i(λm + μn)}|².

This periodogram function is maximized over the 2-D Fourier frequencies, that is, at (2πj/M, 2πk/N), for j = 1, …, M and k = 1, …, N. The estimators that are obtained by maximising the above periodogram function with respect to λ and μ simultaneously over the continuous space (0, π) × (0, π) are known as the approximate least squares estimators (ALSEs). Kundu and Nandi [9] proved that the ALSEs are consistent and asymptotically equivalent to the least squares estimators (LSEs).
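As an illustration of this step, the following sketch evaluates the 2-D periodogram on the Fourier grid via the 2-D FFT; the shift between the model's summation index (starting at 1) and the FFT's (starting at 0) only changes a phase factor, not the modulus, so the two agree.

```python
import numpy as np

def periodogram_2d(y):
    """I(lambda, mu) at the Fourier frequencies (2*pi*j/M, 2*pi*k/N)."""
    M, N = y.shape
    return np.abs(np.fft.fft2(y))**2 / (M * N)

def periodogram_peak(y):
    """Return the maximising Fourier frequency pair."""
    I = periodogram_2d(y)
    j, k = np.unravel_index(np.argmax(I), I.shape)
    M, N = y.shape
    return 2 * np.pi * j / M, 2 * np.pi * k / N
```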

Analogously, we define a periodogram-type function for the 2-D chirp model defined in equation (1), as follows:

I(α, β, γ, δ) = (2/MN) |Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) e^{−i(αm + βm² + γn + δn²)}|².  (2)

To find the initial values, we propose to maximise the above function at the grid points (πj₁/M, πj₂/M², πk₁/N, πk₂/N²), j₁ = 1, …, M, j₂ = 1, …, M², k₁ = 1, …, N, k₂ = 1, …, N², corresponding to the Fourier frequencies of the 2-D sinusoidal model. These starting values can be used in any iterative procedure to compute the LSEs and ALSEs.
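A sketch of this grid search follows, under our reading of the grid as (πj₁/M, πj₂/M², πk₁/N, πk₂/N²); the `stride` parameter is our addition, thinning the grids so the exhaustive search stays tractable in this illustration. The double sum in (2) separates into a row-direction and a column-direction chirp sum, which the matrix product exploits.

```python
import numpy as np
from itertools import product

def chirp_periodogram(y, alpha, beta, gamma, delta):
    """The periodogram-type function I(alpha, beta, gamma, delta) of (2)."""
    M, N = y.shape
    m = np.arange(1, M + 1)
    n = np.arange(1, N + 1)
    em = np.exp(-1j * (alpha * m + beta * m**2))   # row-direction chirp
    en = np.exp(-1j * (gamma * n + delta * n**2))  # column-direction chirp
    return 2.0 * np.abs(em @ y @ en)**2 / (M * N)  # separable double sum

def initial_values(y, stride=4):
    """Coarse search over the grid (pi*j1/M, pi*j2/M^2, pi*k1/N, pi*k2/N^2).

    The full rate grids have M^2 and N^2 points; `stride` thins every
    grid, since the exhaustive search is expensive.
    """
    M, N = y.shape
    grids = (np.pi * np.arange(1, M, stride) / M,
             np.pi * np.arange(1, M * M, stride * M) / (M * M),
             np.pi * np.arange(1, N, stride) / N,
             np.pi * np.arange(1, N * N, stride * N) / (N * N))
    best, argbest = -np.inf, None
    for theta in product(*grids):
        val = chirp_periodogram(y, *theta)
        if val > best:
            best, argbest = val, theta
    return argbest
```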

Next, we propose to estimate the unknown parameters of model (1) by the approximate least squares estimation method. In this method, we maximize the periodogram-like function defined above with respect to α, β, γ and δ simultaneously, over the continuous parameter space. The details of the methodology to obtain the ALSEs are further explained in section 3.

We prove that these estimators are strongly consistent and asymptotically normally distributed under assumptions that are slightly milder than those required for the LSEs. Also, the convergence rates of the ALSEs are the same as those of the LSEs.

The rest of the paper is organized as follows. In the next section, we state the model assumptions, some notation and the preliminary results required. In section 3, we give a brief description of the methodology. In section 4, we study the asymptotic properties of the one-component 2-D chirp model, and in section 5, we propose a sequential method to obtain the LSEs and ALSEs for the multiple-component 2-D chirp model and study their asymptotic properties. Numerical experiments and a simulated data analysis are presented in sections 6 and 7. In section 8, we conclude the paper. All the proofs are provided in the appendices.

2 Model Assumptions, Notations and Preliminary Results

Assumption 1. The error X(m, n) is stationary with the following form:

X(m, n) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} a(j, k) ε(m − j, n − k),

where {ε(m, n)} is a double array sequence of independently and identically distributed (i.i.d.) random variables with mean zero, variance σ² and finite fourth moment, and the a(j, k)s are real constants such that

Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} |a(j, k)| < ∞.

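For illustration, a minimal sketch of errors satisfying Assumption 1 is the finitely supported (hence absolutely summable) linear process below; the lag structure and the coefficients 0.5 and 0.25 are arbitrary choices for the example, not values taken from the paper.

```python
import numpy as np

def stationary_errors(M, N, sigma=0.5, a1=0.5, a2=0.25, rng=None):
    """X(m, n) = e(m, n) + a1*e(m-1, n) + a2*e(m, n-1): a linear process
    with finitely many non-zero coefficients, satisfying Assumption 1."""
    rng = np.random.default_rng(rng)
    e = sigma * rng.standard_normal((M + 1, N + 1))  # one extra lag row/column
    return e[1:, 1:] + a1 * e[:-1, 1:] + a2 * e[1:, :-1]
```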
We will use the following notation: θ = (A, B, α, β, γ, δ), the parameter vector; θ⁰ = (A⁰, B⁰, α⁰, β⁰, γ⁰, δ⁰), the true parameter vector; Θ, the parameter space. Also, ϑ = (α, β, γ, δ), the vector of the non-linear parameters.

Assumption 2. The true parameter vector θ⁰ is an interior point of Θ.

Note that the assumptions required to prove the strong consistency of the LSEs of the unknown parameters are slightly different from those required to prove the consistency of the ALSEs. For the LSEs, the parameter space for the linear parameters has to be bounded, whereas here we do not require that bound. For details on the assumptions for the consistency of the LSEs, see Lahiri [10].

We need the following results to proceed further:

Lemma 1.

If (θ₁, θ₂) ∈ (0, π) × (0, π), then, except for a countable number of points, and for s, t = 0, 1, 2, the following are true:

(a) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t cos(θ₁m + θ₂m²) = 0;

(b) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin(θ₁m + θ₂m²) = 0;

(c) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t cos²(θ₁m + θ₂m²) = 1/(2(s+1)(t+1));

(d) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin²(θ₁m + θ₂m²) = 1/(2(s+1)(t+1));

(e) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin(θ₁m + θ₂m²) cos(θ₁m + θ₂m²) = 0.

Proof.

Refer to Lahiri [10].

Lemma 2.

If (θ₁, θ₂) ∈ (0, π) × (0, π), then, except for a countable number of points, the following holds true:

lim_{min{M,N}→∞} (1/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} X(m, n) cos(θ₁m + θ₂m²) = 0 almost surely, and the same limit holds with cos replaced by sin.

Proof.

Refer to Lahiri [10].

Lemma 3.

If (θ₁, θ₂) ∈ (0, π) × (0, π) and (θ₃, θ₄) ∈ (0, π) × (0, π), then, except for a countable number of points, and for s, t = 0, 1, 2, the following are true, where φ(m, n) = θ₁m + θ₂m² + θ₃n + θ₄n²:

(a) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t cos(φ(m, n)) = 0;

(b) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin(φ(m, n)) = 0;

(c) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t cos²(φ(m, n)) = 1/(2(s+1)(t+1));

(d) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin²(φ(m, n)) = 1/(2(s+1)(t+1));

(e) lim_{min{M,N}→∞} (1/(M^{s+1} N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin(φ(m, n)) cos(φ(m, n)) = 0.

Proof.

See Appendix D.

3 Method to obtain ALSEs

Consider the periodogram-like function defined in (2). In matrix notation, it can be written as:

I(ϑ) = (2/MN) Y⊤ Z(ϑ) Z(ϑ)⊤ Y.

Here, Y = (y(1, 1), …, y(M, N))⊤ is the MN × 1 observed data vector, and Z(ϑ) is the MN × 2 matrix whose row corresponding to the index (m, n) is (cos(αm + βm² + γn + δn²), sin(αm + βm² + γn + δn²)).

In matrix notation, equation (1) can be written as:

Y = Z(ϑ⁰) μ⁰ + X,

where X is the MN × 1 error vector and μ⁰ = (A⁰, B⁰)⊤ is the vector of linear parameters. The estimators obtained by maximising the function I(ϑ) are known as the approximate least squares estimators (ALSEs). We will show that the estimators obtained by maximising I(ϑ) are asymptotically equivalent to the estimators obtained by minimising the error sum of squares, that is, the LSEs, and hence the former are termed ALSEs. To do so, we require the following lemma:

Lemma 4.

For ϑ = (α, β, γ, δ) ∈ (0, π)⁴, except for a countable number of points, we have the following result:

lim_{min{M,N}→∞} (1/MN) Z(ϑ)⊤ Z(ϑ) = (1/2) I₂,

where I₂ is the 2 × 2 identity matrix.

Proof.

Consider the following:

(1/MN) Z(ϑ)⊤ Z(ϑ) = (1/MN) [ Σ_{m,n} cos²(φ(m, n))  Σ_{m,n} cos(φ(m, n)) sin(φ(m, n)) ; Σ_{m,n} sin(φ(m, n)) cos(φ(m, n))  Σ_{m,n} sin²(φ(m, n)) ],

where φ(m, n) = αm + βm² + γn + δn². Now, using parts (c), (d) and (e) of Lemma 3, it can easily be seen that the matrix on the right-hand side of the above equation tends to (1/2) I₂, except for a countable number of points, and hence the result.

We know that to find the LSEs, we minimise the following error sum of squares:

Q(θ) = Σ_{m=1}^{M} Σ_{n=1}^{N} ( y(m, n) − A cos(αm + βm² + γn + δn²) − B sin(αm + βm² + γn + δn²) )²  (3)

with respect to θ = (A, B, α, β, γ, δ). If we fix ϑ = (α, β, γ, δ), then the estimates of the linear parameters can be obtained by the separable regression technique of Richards [15], by minimizing Q(θ) with respect to A and B. Thus the estimate of μ = (A, B)⊤ is given by:

μ̂(ϑ) = (Â(ϑ), B̂(ϑ))⊤ = (Z(ϑ)⊤ Z(ϑ))^{−1} Z(ϑ)⊤ Y.  (4)

Substituting Â(ϑ) and B̂(ϑ) in (3), we have:

Q(Â(ϑ), B̂(ϑ), ϑ) = Y⊤ ( I − Z(ϑ) (Z(ϑ)⊤ Z(ϑ))^{−1} Z(ϑ)⊤ ) Y.

Using Lemma 4, we have the following relationship between the function Q(Â(ϑ), B̂(ϑ), ϑ) and the periodogram-like function I(ϑ):

(1/MN) Q(Â(ϑ), B̂(ϑ), ϑ) = (1/MN) Y⊤Y − (1/MN) I(ϑ) + o(1).

Here, a function f(M, N) is o(1) if f(M, N) → 0 as min{M, N} → ∞. Thus, the ϑ̂ that minimises Q(Â(ϑ), B̂(ϑ), ϑ) is asymptotically equivalent to the ϑ̃ that maximises I(ϑ).
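The following sketch mirrors the separable regression step: for a fixed ϑ it computes the closed-form linear estimates of (4) and the profiled criterion Q(Â(ϑ), B̂(ϑ), ϑ). By the relationship above, minimising this quantity over ϑ is asymptotically the same as maximising I(ϑ).

```python
import numpy as np

def design(M, N, alpha, beta, gamma, delta):
    """The MN x 2 matrix Z(theta) of cosine and sine regressors."""
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    phase = (alpha * m + beta * m**2 + gamma * n + delta * n**2).ravel()
    return np.column_stack([np.cos(phase), np.sin(phase)])

def profiled_Q(y, alpha, beta, gamma, delta):
    """Closed-form (A, B) of equation (4) and the profiled criterion Q."""
    Y = y.ravel()
    Z = design(*y.shape, alpha, beta, gamma, delta)
    AB, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # (A_hat, B_hat)
    resid = Y - Z @ AB
    return AB, resid @ resid
```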

4 Asymptotic Properties of ALSEs

In this section, we study the asymptotic properties of the proposed estimators, the ALSEs of model (1). The following theorem states the result on the consistency property of the ALSEs.

Theorem 1.

If Assumptions 1 and 2 are satisfied, then θ̃, the ALSE of θ⁰, is a strongly consistent estimator of θ⁰; that is, θ̃ → θ⁰ almost surely as min{M, N} → ∞.

Proof.

See Appendix A.

In the following theorem, we state the result obtained on the asymptotic distribution of the proposed estimators.

Theorem 2.

If Assumptions 1 and 2 are true, then the asymptotic distribution of D(θ̃ − θ⁰) is the same as that of D(θ̂ − θ⁰) as min{M, N} → ∞, where θ̃ is the ALSE of θ⁰, θ̂ is the LSE of θ⁰, and D is a 6 × 6 diagonal matrix defined as:

D = diag(M^{1/2}N^{1/2}, M^{1/2}N^{1/2}, M^{3/2}N^{1/2}, M^{5/2}N^{1/2}, M^{1/2}N^{3/2}, M^{1/2}N^{5/2}).

Proof.

See Appendix B.

5 Multiple Component 2-D Chirp Model

In this section, we consider a 2-D chirp model with multiple components, mathematically expressed in the following form:

y(m, n) = Σ_{i=1}^{p} [ A_i⁰ cos(α_i⁰m + β_i⁰m² + γ_i⁰n + δ_i⁰n²) + B_i⁰ sin(α_i⁰m + β_i⁰m² + γ_i⁰n + δ_i⁰n²) ] + X(m, n),  m = 1, …, M; n = 1, …, N.  (5)

Here the y(m, n)s are the observed data, the A_i⁰s, B_i⁰s are the amplitudes, the α_i⁰s, γ_i⁰s are the frequencies and the β_i⁰s, δ_i⁰s are the frequency rates. The random variable sequence {X(m, n)} is a stationary error sequence. In practice, the number of components p is unknown, and its estimation is an important and still open problem. For recent references on this model, see Zhang et al. [18] and Lahiri [10].

Here it is assumed that p is known, and our main purpose is to estimate the unknown parameters of this model, primarily the non-linear parameters. Finding the ALSEs for the above model is computationally challenging, especially when the number of components p is large. Even when p = 2, we need to solve a 12-D optimisation problem to obtain the ALSEs. Thus, we propose a sequential procedure to find these estimates. This method reduces the computational complexity without compromising the efficiency of the estimators. We prove that the ALSEs obtained by the proposed sequential procedure are strongly consistent and have the same rates of convergence as the LSEs.

In the following subsection, we provide the algorithm to obtain the sequential ALSEs of the unknown parameters of the p-component 2-D chirp signal. Let us denote ϑ_i = (α_i, β_i, γ_i, δ_i).

5.1 Algorithm to find the ALSEs:

Step 1: Maximizing the periodogram-like function

I(ϑ) = (2/MN) |Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) e^{−i(αm + βm² + γn + δn²)}|²,  (6)

we first obtain the non-linear parameter estimates of the first component, ϑ̃₁ = (α̃₁, β̃₁, γ̃₁, δ̃₁). Then the linear parameter estimates can be obtained by substituting ϑ̃₁ in (4). Thus:

(Ã₁, B̃₁)⊤ = (Z(ϑ̃₁)⊤ Z(ϑ̃₁))^{−1} Z(ϑ̃₁)⊤ Y.  (7)

Step 2: Now we have the estimates of the parameters of the first component of the observed signal. We subtract the contribution of the first component from the original signal vector to eliminate its effect and obtain a new data vector, say:

Y₁ = Y − Z(ϑ̃₁)(Ã₁, B̃₁)⊤.

Step 3: Now we compute ϑ̃₂ by maximizing the function obtained by replacing the original data vector Y with the new data vector Y₁ in (6); the linear parameters Ã₂ and B̃₂ are then obtained by substituting ϑ̃₂ in (4).

Step 4: Continue the process up to p steps.
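A compact sketch of Steps 1 to 4, reusing the `initial_values`, `chirp_periodogram` and `design` helpers sketched earlier; the Nelder-Mead refinement is one reasonable choice of iterative procedure, not the paper's prescription.

```python
import numpy as np
from scipy.optimize import minimize

def sequential_alse(y, p):
    """Sequential ALSEs: estimate one component, subtract its fitted
    contribution from the data, and repeat (Steps 1-4)."""
    M, N = y.shape
    resid, estimates = y.astype(float).copy(), []
    for _ in range(p):
        theta0 = initial_values(resid)                 # Step 1: grid search
        res = minimize(lambda t: -chirp_periodogram(resid, *t),
                       theta0, method="Nelder-Mead")   # refine the maximiser
        Z = design(M, N, *res.x)
        AB, *_ = np.linalg.lstsq(Z, resid.ravel(), rcond=None)  # eq. (7)
        estimates.append((AB, res.x))
        resid = resid - (Z @ AB).reshape(M, N)         # Steps 2-3: deflate
    return estimates
```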

5.2 Asymptotic Properties

Further assumptions, required to study the consistency and derive the asymptotic distribution of the proposed estimators, are stated as follows:

Assumption 3. ϑ_i⁰ = (α_i⁰, β_i⁰, γ_i⁰, δ_i⁰) is an interior point of the non-linear parameter space, for all i = 1, …, p, and the frequencies α_i⁰, γ_i⁰ and the frequency rates β_i⁰, δ_i⁰ are such that ϑ_i⁰ ≠ ϑ_j⁰ for all i ≠ j.

Assumption 4. The A_i⁰s and B_i⁰s satisfy the following relationship:

∞ > (A₁⁰)² + (B₁⁰)² > (A₂⁰)² + (B₂⁰)² > ⋯ > (A_p⁰)² + (B_p⁰)² > 0.

In the following theorems, we state the results we obtained on the consistency of the proposed estimators.

Theorem 3.

Under Assumptions 1, 3 and 4, Ã₁, B̃₁, α̃₁, β̃₁, γ̃₁ and δ̃₁ are strongly consistent estimators of A₁⁰, B₁⁰, α₁⁰, β₁⁰, γ₁⁰ and δ₁⁰, respectively; that is, the estimates converge to the true values almost surely as min{M, N} → ∞.

Proof.

See Appendix C.

Theorem 4.

If Assumptions 1, 3 and 4 are satisfied and p ≥ 2, then the ALSEs of the second-component parameters, Ã₂, B̃₂, α̃₂, β̃₂, γ̃₂ and δ̃₂, converge almost surely to A₂⁰, B₂⁰, α₂⁰, β₂⁰, γ₂⁰ and δ₂⁰ as min{M, N} → ∞.

Proof.

See Appendix C.

The result obtained in the above theorem can be extended up to the p-th step. Thus for any k ≤ p, the ALSEs obtained at the k-th step are strongly consistent.

Theorem 5.

If Assumptions 1, 3 and 4 are satisfied, and if Ã_k, B̃_k, α̃_k, β̃_k, γ̃_k and δ̃_k are the estimators obtained at the k-th step with k > p, then Ã_k → 0 and B̃_k → 0 almost surely as min{M, N} → ∞.

Proof.

See Appendix C.

Next we derive the asymptotic distribution of the proposed estimators. In the following theorem, we state the results on the distribution of the sequential ALSEs.

Theorem 6.

If Assumptions 1, 3 and 4 are satisfied, then

D(θ̃₁ − θ₁⁰) converges in distribution to a 6-variate normal distribution with mean vector 0 and covariance matrix σ²Σ₁, as min{M, N} → ∞,

where θ̃₁ is the sequential ALSE of θ₁⁰ = (A₁⁰, B₁⁰, α₁⁰, β₁⁰, γ₁⁰, δ₁⁰), D is the diagonal matrix as defined in Theorem 2, and Σ₁ is the corresponding asymptotic covariance matrix, which depends on θ₁⁰ through A₁⁰ and B₁⁰.

Proof.

See Appendix D. ∎

The above result holds true for all k ≤ p, as stated in the following theorem.

Theorem 7.

If Assumptions 1, 3 and 4 are satisfied, then, for any k ≤ p,

D(θ̃_k − θ_k⁰) converges in distribution to a 6-variate normal distribution with mean vector 0 and covariance matrix σ²Σ_k, as min{M, N} → ∞,

where Σ_k can be obtained by replacing A₁⁰ by A_k⁰ and B₁⁰ by B_k⁰ in Σ₁ defined above.

Proof.

This proof can be obtained by proceeding exactly in the same manner as in the proof of Theorem 6.

6 Simulation Studies

6.1 Simulation results for the one component model

We perform numerical simulations on model (1) with the following non-linear parameter values (the true values reported in the tables below):

α⁰ = 1.5, β⁰ = 0.5, γ⁰ = 2.5, δ⁰ = 0.75.

The following error structures are used to generate the data: in (8), the errors are i.i.d. Gaussian, X(m, n) = ε(m, n); in (9), the errors form a stationary linear process in the ε(m, n)s. Here ε(m, n) ∼ N(0, σ²). For the simulations, we consider different values of σ (0.1, 0.5 and 1) and different values of M and N (25, 50, 75 and 100), as can be seen in the tables. We estimate the parameters both by the least squares estimation method and by the approximate least squares estimation method. These estimates are obtained over 1000 replications each, and the averages, biases and MSEs are reported. We also compute the asymptotic variances to compare with the corresponding MSEs. From the tables, it is observed that as the error variance increases, the MSEs of both the LSEs and the ALSEs increase. As the sample size increases, the estimates move closer to the corresponding true values; that is, the biases become small. Also, the MSEs decrease as the sample sizes M and N increase, and the order of the MSEs of both estimators is almost equal to the order of the asymptotic variances; hence, one may conclude that they are well matched. The MSEs of the ALSEs approach those of the LSEs, and hence the theoretical asymptotic variances of the LSEs, as M and N increase, showing that the two estimators are asymptotically equivalent.
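A skeleton of one such replication study, reusing the `simulate_chirp` sketch from the introduction; the reported quantities correspond to the Avg, Bias and MSE rows of the tables. The `estimator` argument is any function mapping a data matrix to the four non-linear parameter estimates, for example a refined version of `initial_values`.

```python
import numpy as np

def monte_carlo(estimator, true_theta, n_rep=1000, M=50, N=50, sigma=0.5):
    """Replicate the experiment and report Avg, Bias and MSE per parameter."""
    true_theta = np.asarray(true_theta)
    est = np.array([estimator(simulate_chirp(M, N, sigma=sigma, rng=r))
                    for r in range(n_rep)])
    avg = est.mean(axis=0)
    return avg, avg - true_theta, ((est - true_theta)**2).mean(axis=0)
```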

Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.4910 0.5005 2.5194 0.7492 1.5000 0.4999 2.5000 0.7499
Bias -0.0090 0.0005 0.0194 -0.0008 3.85E-05 -1.66E-06 1.22E-05 -3.97E-07
MSE 8.21E-05 2.83E-07 3.79E-04 5.62E-07 8.29E-07 1.13E-09 7.18E-07 9.90E-10
AVar 7.56E-07 1.13E-09 7.56E-07 1.13E-09 7.56E-07 1.13E-09 7.56E-07 1.13E-09
ALSEs LSEs
0.5 Avg 1.4912 0.5005 2.5196 0.7492 1.5000 0.5000 2.5003 0.7499
Bias -0.0088 0.0005 0.0196 -0.0008 3.01E-05 1.29E-06 0.0003 -9.60E-06
MSE 9.78E-05 3.08E-07 4.10E-04 6.03E-07 2.03E-05 2.76E-08 2.10E-05 2.96E-08
AVar 1.89E-05 2.48E-08 1.89E-05 2.48E-08 1.89E-05 2.48E-08 1.89E-05 2.48E-08
ALSEs LSEs
1 Avg 1.4911 0.5005 2.5184 0.7492 1.5001 0.4999 2.4992 0.7500
Bias -0.0089 0.0005 0.0184 -0.0008 0.0001 -1.15E-06 -0.0007 2.44E-05
MSE 1.52E-04 3.87E-07 4.21E-04 6.23E-07 8.64E-05 1.18E-07 7.82E-05 1.09E-07
AVar 7.56E-05 1.13E-07 7.56E-05 1.13E-07 7.56E-05 1.13E-07 7.56E-05 1.13E-07
Table 1: Estimates of the parameters of model (1) when errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 25
Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.5039 0.4999 2.4997 0.7500 1.5000 0.4999 2.5000 0.7499
Bias 0.0039 -9.22E-05 -0.0002 1.01E-05 5.12E-06 -6.96E-08 2.97E-06 -2.76E-08
MSE 1.95E-05 9.75E-09 3.22E-07 2.03E-10 2.73E-08 1.04E-11 3.07E-08 1.14E-11
AVar 4.73E-08 1.77E-11 4.73E-08 1.77E-11 4.73E-08 1.77E-11 4.73E-08 1.77E-11
ALSEs LSEs
0.5 Avg 1.5041 0.4999 2.4997 0.7500 1.5000 0.4999 2.5000 0.7499
Bias 0.0041 -9.53E-05 -0.0002 9.99E-06 2.34E-05 -3.01E-07 9.23E-06 -2.12E-07
MSE 2.11E-05 1.04E-08 1.67E-06 6.70E-10 9.53E-07 3.44E-10 8.90E-07 3.33E-10
AVar 1.18E-06 4.43E-10 1.18E-06 4.43E-10 1.18E-06 4.43E-10 1.18E-06 4.43E-10
ALSEs LSEs
1 Avg 1.504 0.4999 2.4997 0.7500 1.5000 0.4999 2.5000 0.7499
Bias 0.0040 -9.40E-05 -0.0002 1.01E-05 8.76E-05 -1.28E-06 1.66E-05 -9.05E-08
MSE 2.42E-05 1.14E-08 5.00E-06 1.87E-09 4.24E-06 1.53E-09 4.01E-06 1.45E-09
AVar 4.73E-06 1.77E-09 4.73E-06 1.77E-09 4.73E-06 1.77E-09 4.73E-06 1.77E-09
Table 2: Estimates of the parameters of model (1) when errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 50
Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.5005 0.4999 2.4997 0.7500 1.4999 0.5 2.5000 0.7499
Bias 0.0005 -7.67E-06 -0.0003 2.02E-06 -4.78E-07 3.90E-09 3.55E-07 -5.54E-09
MSE 3.15E-07 6.20E-11 1.69E-07 1.68E-11 8.29E-09 1.33E-12 7.29E-10 2.04E-13
AVar 9.34E-09 1.56E-12 9.34E-09 1.56E-12 9.34E-09 1.56E-12 9.34E-09 1.56E-12
ALSEs LSEs
0.5 Avg 1.5003 0.4999 2.4996 0.7500 1.5000 0.4999 2.5000 0.7499
Bias 0.0003 -5.85E-06 -0.0004 2.40E-06 4.90E-06 -1.40E-07 4.41E-06 -1.60E-08
MSE 3.80E-07 7.20E-11 3.14E-07 4.12E-11 1.55E-07 2.62E-11 1.07E-07 1.88E-11
AVar 2.33E-07 3.89E-11 2.33E-07 3.89E-11 2.33E-07 3.89E-11 2.33E-07 3.89E-11
ALSEs LSEs
1 Avg 1.5004 0.4999 2.4995 0.7500 1.5000 0.4999 2.4999 0.7500
Bias 0.0004 -6.70E-06 -0.0005 3.89E-06 4.90E-05 -6.11E-07 -1.45E-05 5.86E-08
MSE 1.01E-06 1.73E-10 9.37E-07 1.38E-10 7.11E-07 1.17E-10 5.98E-07 9.97E-11
AVar 9.34E-07 1.56E-10 9.34E-07 1.56E-10 9.34E-07 1.56E-10 9.34E-07 1.56E-10
Table 3: Estimates of the parameters of model (1) when errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 75
Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.4999 0.5000 2.4999 0.7500 1.5000 0.4999 2.5000 0.7500
Bias -4.19E-05 1.99E-06 -4.30E-05 3.65E-08 2.55E-06 -3.13E-08 3.12E-07 -5.88E-10
MSE 1.60E-08 5.19E-12 2.18E-08 1.78E-12 5.38E-10 5.93E-14 7.86E-10 8.54E-14
AVar 2.95E-09 2.77E-13 2.95E-09 2.77E-13 2.95E-09 2.77E-13 2.95E-09 2.77E-13
ALSEs LSEs
0.5 Avg 1.4998 0.5000 2.4998 0.7500 1.5000 0.4999 2.5000 0.7500
Bias -0.0002 2.77E-06 -0.0002 7.46E-07 4.96E-06 -4.93E-08 1.34E-06 2.74E-08
MSE 8.14E-08 1.38E-11 9.44E-08 8.14E-12 3.83E-08 3.75E-12 3.64E-08 3.66E-12
AVar 7.38E-08 6.92E-12 7.38E-08 6.92E-12 7.38E-08 6.92E-12 7.38E-08 6.92E-12
ALSEs LSEs
1 Avg 1.4997 0.5000 2.4997 0.7500 1.5000 0.4999 2.5000 0.7499
Bias -0.0003 3.60E-06 -0.0002 1.43E-06 9.37E-07 -2.97E-08 2.10E-06 -8.23E-08
MSE 2.35E-07 3.09E-11 2.71E-07 2.35E-11 1.60E-07 1.57E-11 1.91E-07 1.79E-11
AVar 2.95E-07 2.77E-11 2.95E-07 2.77E-11 2.95E-07 2.77E-11 2.95E-07 2.77E-11
Table 4: Estimates of the parameters of model (1) when errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 100
Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.4911 0.5005 2.5193 0.7492 1.4999 0.5000 2.5000 0.7499
Bias -0.0089 0.0005 0.0193 -0.0008 -5.15E-05 1.81E-06 3.05E-05 -7.46E-07
MSE 8.28E-05 2.84E-07 3.78E-04 5.60E-07 1.13E-06 1.70E-09 1.12E-06 1.57E-09
AVar 1.13E-06 1.70E-09 1.13E-06 1.70E-09 1.13E-06 1.70E-09 1.13E-06 1.70E-09
ALSEs LSEs
0.5 Avg 1.4910 0.5005 2.5192 0.7492 1.4998 0.5000 2.5000 0.7500
Bias -0.0090 0.0005 0.0192 -0.0007 -0.0002 6.45E-06 2.31E-06 1.59E-06
MSE 1.09E-04 3.29E-07 4.03E-04 5.93E-07 3.13E-05 4.60E-08 2.87E-05 4.00E-08
AVar 2.84E-05 4.25E-08 2.84E-05 4.25E-08 2.84E-05 4.25E-08 2.84E-05 4.25E-08
ALSEs LSEs
1 Avg 1.4910 0.5005 2.5195 0.7492 1.4997 0.5000 2.5002 0.7499
Bias -0.0090 0.0005 0.0195 -0.0008 -0.0003 8.25E-06 0.0002 -6.10E-06
MSE 1.91E-04 4.57E-07 5.04E-04 7.30E-07 1.31E-04 1.94E-07 1.24E-04 1.77E-07
AVar 1.13E-04 1.70E-07 1.13E-04 1.70E-07 1.13E-04 1.70E-07 1.13E-04 1.70E-07
Table 5: Estimates of the parameters of model (1) when errors are stationary random variables as defined in (9) and M = N = 25
Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.5039 0.4999 2.4997 0.7500 1.5000 0.4999 2.5000 0.7499
Bias 0.0039 -9.19E-05 -0.0003 1.05E-05 1.26E-05 -2.32E-07 3.69E-06 -7.66E-08
MSE 1.94E-05 9.71E-09 3.99E-07 2.32E-10 3.92E-08 1.54E-11 4.35E-08 1.60E-11
AVar 7.09E-08 2.66E-11 7.09E-08 2.66E-11 7.09E-08 2.66E-11 7.09E-08 2.66E-11
ALSEs LSEs
0.5 Avg 1.5042 0.4999 2.4998 0.7500 1.5000 0.4999 2.5000 0.7499
Bias 0.0042 -9.70E-05 -0.0002 8.66E-06 6.93E-05 -1.12E-06 4.43E-05 -1.17E-06
MSE 2.24E-05 1.10E-08 2.31E-06 9.16E-10 1.47E-06 5.55E-10 1.45E-06 5.63E-10
AVar 1.77E-06 6.65E-10 1.77E-06 6.65E-10 1.77E-06 6.65E-10 1.77E-06 6.65E-10
ALSEs LSEs
1 Avg 1.5041 0.4999 2.4998 0.7500 1.4999 0.5000 2.4999 0.7499
Bias 0.0041 -9.59E-05 -0.0002 8.50E-06 -3.56E-05 1.71E-07 -2.04E-05 -1.20E-07
MSE 2.60E-05 1.24E-08 7.63E-06 2.77E-09 6.11E-06 2.30E-09 6.68E-06 2.37E-09
AVar 7.09E-06 2.66E-09 7.09E-06 2.66E-09 7.09E-06 2.66E-09 7.09E-06 2.66E-09
Table 6: Estimates of the parameters of model (1) when errors are stationary random variables as defined in (9) and M = N = 50
Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.5005 0.4999 2.4997 0.7500 1.4999 0.5000 2.5000 0.7499
Bias 0.0005 -7.49E-06 -0.0002 1.92E-06 -1.18E-06 1.58E-08 1.46E-06 -1.12E-08
MSE 3.11E-07 6.12E-11 1.68E-07 1.71E-11 1.21E-08 2.03E-12 9.24E-10 2.82E-13
AVar 1.40E-08 2.33E-12 1.40E-08 2.33E-12 1.40E-08 2.33E-12 1.40E-08 2.33E-12
ALSEs LSEs
0.5 Avg 1.5004 0.4999 2.4996 0.7500 1.5000 0.4999 2.5000 0.7500
Bias 0.0004 -6.10E-06 -0.0004 3.16E-06 5.48E-06 -8.28E-08 1.95E-06 4.45E-08
MSE 5.07E-07 9.31E-11 4.75E-07 6.37E-11 2.80E-07 2.80E-07 2.10E-07 3.49E-11
AVar 3.50E-07 5.83E-11 3.50E-07 5.83E-11 3.50E-07 5.83E-11 3.50E-07 5.83E-11
ALSEs LSEs
1 Avg 1.5004 0.4999 2.4995 0.7500 1.5000 0.4999 2.4999 0.7500
Bias 0.0004 -6.65E-06 -0.0005 4.30E-06 3.76E-05 -6.39E-07 -1.91E-05 1.19E-07
MSE 1.37E-06 2.39E-10 1.26E-06 1.90E-10 1.07E-06 1.80E-10 9.31E-07 1.58E-10
AVar 1.40E-06 2.33E-10 1.40E-06 2.33E-10 1.40E-06 2.33E-10 1.40E-06 2.33E-10
Table 7: Estimates of the parameters of model (1) when errors are stationary random variables as defined in (9) and M = N = 75
Parameters α β γ δ (ALSEs) | α β γ δ (LSEs); row blocks indexed by σ
True values 1.5 0.5 2.5 0.75 | 1.5 0.5 2.5 0.75
ALSEs LSEs
0.1 Avg 1.4999 0.5000 2.4999 0.7500 1.5000 0.4999 2.4999 0.7500
Bias -4.14E-05 1.98E-06 -4.85E-05 9.25E-08 3.60E-06 -3.84E-08 -9.42E-07 1.36E-08
MSE 1.68E-08 5.28E-12 2.51E-08 2.02E-12 9.26E-10 1.07E-13 1.81E-09 1.82E-13
AVar 4.43E-09 4.15E-13 4.43E-09 4.15E-13 4.43E-09 4.15E-13 4.43E-09 4.15E-13
ALSEs LSEs
0.5 Avg 1.4998 0.5000 2.4998 0.7500 1.4999 0.5000 2.4999 0.7500
Bias -0.0002 3.21E-06 -0.0001 1.02E-06 -6.40E-06 3.26E-08 -4.78E-06 3.26E-08
MSE 1.36E-07 2.00E-11 1.36E-07 1.16E-11 6.31E-08 6.15E-12 6.12E-08 5.81E-12
AVar 1.11E-07 1.04E-11 1.11E-07 1.04E-11 1.11E-07 1.04E-11 1.11E-07 1.04E-11
ALSEs LSEs
1 Avg 1.4997 0.5000 2.4997 0.7500 1.4999 0.5000 2.5000 0.7499
Bias -0.0003 3.94E-06 -0.0003 1.60E-06 -2.75E-05 2.64E-07 6.40E-06 -4.03E-08
MSE 3.66E-07 4.48E-11 3.67E-07 3.29E-11 2.73E-07 2.67E-11 2.78E-07 2.67E-11
AVar 4.43E-07 4.15E-11 4.43E-07 4.15E-11 4.43E-07 4.15E-11 4.43E-07 4.15E-11
Table 8: Estimates of the parameters of model (1) when errors are stationary random variables as defined in (9) and M = N = 100

6.2 Simulation results for the multiple component model with p = 2

Next we conduct numerical simulations on model (5) with p = 2 and the following non-linear parameter values (the true values reported in the tables below):

α₁⁰ = 2.1, β₁⁰ = 0.1, γ₁⁰ = 1.25, δ₁⁰ = 0.25, α₂⁰ = 1.5, β₂⁰ = 0.5, γ₂⁰ = 1.75, δ₂⁰ = 0.75.

The error structures used to generate the data are the same as those used for the one-component model; see equations (8) and (9). For the simulations we consider different values of σ and different values of M and N, again the same as for the one-component model. We estimate the parameters both by the least squares estimation method and by the approximate least squares estimation method. These estimates are obtained over 1000 replications each, and the averages, biases, MSEs and asymptotic variances are computed. The results are reported in the following tables. From the tables, it can be seen that both the ALSEs and the LSEs are quite close to their true values. It is observed that the estimates of the second component are better than those of the first component, in the sense that their biases and MSEs are smaller and the MSEs are better matched with the corresponding asymptotic variances. For both estimators, as the sample size increases, the MSEs and the biases of the estimates of both components decrease, demonstrating consistency.

Parameters α₁ β₁ γ₁ δ₁ (first component) | α₂ β₂ γ₂ δ₂ (second component)
True values 2.1 0.1 1.25 0.25 | 1.5 0.5 1.75 0.75
ALSEs
Average 2.1154 0.0994 1.2587 0.2500 1.5411 0.4988 1.7664 0.7493
Bias 0.0154 -0.0006 0.0087 1.01E-05 0.0411 -0.0012 0.0164 -0.0007
MSE 2.36E-04 3.48E-07 7.67E-05 4.85E-10 2.36E-04 1.45E-06 2.68E-04 4.85E-10
σ = 0.1 LSEs
Average 2.1031 0.0998 1.2565 0.2500 1.5017 0.5000 1.7510 0.7500
Bias 0.0031 -0.0002 0.0065 3.83E-05 0.0017 -2.16E-05 0.0010 -2.92E-05
MSE 9.70E-06 3.14E-08 4.23E-05 1.85E-09 3.71E-06 1.75E-09 1.93E-06 2.11E-09
AVar 2.40E-07 3.60E-10 2.40E-07 3.60E-10 7.56E-07 1.13E-09 7.56E-07 1.13E-09
ALSEs
Average 2.1154 0.0994 1.2586 0.2500 1.5412 0.4988 1.7664 0.7493
Bias 0.0154 -0.0006 0.0086 1.49E-05 0.0412 -0.0012 0.0164 -0.0007
MSE 2.44E-04 3.59E-07 8.02E-05 8.99E-09 2.44E-04 1.48E-06 2.87E-04 8.99E-09
σ = 0.5 LSEs
Average 2.1031 0.0998 1.2563 0.2500 1.5017 0.5000 1.7510 0.7500
Bias 0.0031 -0.0002 0.0063 4.40E-05 0.0017 -2.25E-05 0.0010 -3.13E-05
MSE 1.66E-05 4.03E-08 4.63E-05 1.04E-08 2.48E-05 3.16E-08 2.55E-05 3.50E-08
AVar 5.99E-06 8.99E-09 5.99E-06 8.99E-09 1.89E-05 2.84E-08 1.89E-05 2.84E-08
ALSEs
Average 2.1154 0.0994 1.2585 0.2500 1.5408 0.4988 1.7665 0.7493
Bias 0.0154 -0.0006 0.0085 1.88E-05 0.0408 -0.0012 0.0165 -0.0007
MSE 2.65E-04 3.84E-07 9.75E-05 3.93E-08 2.65E-04 1.53E-06 3.38E-04 3.93E-08
σ = 1 LSEs
Average 2.1031 0.0998 1.2563 0.2500 1.5015 0.5000 1.7513 0.7500
Bias 0.0031 -0.0002 0.0063 4.78E-05 0.0015 -1.40E-05 0.0013 -4.21E-05
MSE 3.63E-05 6.50E-08 6.50E-05 3.98E-08 8.57E-05 1.22E-07 8.44E-05 1.18E-07
AVar 2.40E-05 3.60E-08 2.40E-05 3.60E-08 7.56E-05 1.13E-07 7.56E-05 1.13E-07
Table 9: Estimates of the parameters of model (5) when errors are i.i.d Gaussian random variables as defined in (8) and M = N = 25
Parameters α₁ β₁ γ₁ δ₁ (first component) | α₂ β₂ γ₂ δ₂ (second component)
True values 2.1 0.1 1.25 0.25 | 1.5 0.5 1.75 0.75
ALSEs
Average 2.1011 0.1000 1.2597 0.2499 1.5127 0.4997 1.7529 0.7499
Bias 0.0011 -1.36E-05 0.0097 -0.0001 0.0127 -0.0003 0.0029 -5.92E-05
MSE 1.16E-06 1.92E-10 9.37E-05 2.07E-08 1.16E-06 7.47E-08 8.34E-06 2.07E-08
σ = 0.1 LSEs
Average 2.1010 0.1000 1.2572 0.2499 1.5007 0.5000 1.7507 0.7500
Bias 0.0010 -1.07E-05 0.0072 -0.0001 0.0007 -1.35E-05 0.0007 -1.19E-05
MSE 1.12E-06 1.24E-10 5.18E-05 1.49E-08 6.03E-07 1.99E-10 5.16E-07 1.60E-10
AVar 1.50E-08 5.62E-12 1.50E-08 5.62E-12 4.73E-08 1.77E-11 4.73E-08 1.77E-11
ALSEs
Average 2.1011 0.1000 1.2597 0.2499 1.5127 0.4997 1.7529 0.7499
Bias 0.0011 -1.36E-05 0.0097 -0.0001 0.0127 -0.0003 0.0029 -5.94E-05
MSE 1.57E-06 3.33E-10 9.39E-05 2.08E-08 1.57E-06 7.46E-08 9.66E-06 2.08E-08
σ = 0.5 LSEs
Average 2.1011 0.1000 1.2572 0.2499 1.5007 0.5000 1.7507 0.7500
Bias 0.0011 -1.09E-05 0.0072 -0.0001 0.0007 -1.27E-05 0.0007 -1.20E-05
MSE 1.53E-06 2.59E-10 5.22E-05 1.50E-08 1.75E-06 5.97E-10 1.67E-06 5.74E-10
AVar 3.75E-07 1.40E-10 3.75E-07 1.40E-10 1.18E-06 4.43E-10 1.18E-06 4.43E-10
ALSEs
Average 2.1010 0.1000 1.2597 0.2499 1.5127 0.4997 1.7528 0.7499
Bias 0.0010 -1.32E-05 0.0097 -0.0001 0.0127 -0.0003 0.0028 -5.66E-05
MSE 2.69E-06 7.54E-10 9.51E-05 2.13E-08 2.69E-06 7.60E-08 1.28E-05 2.13E-08
σ = 1 LSEs
Average 2.1010 0.1000 1.2572 0.2499 1.5007 0.5000 1.7506 0.7500
Bias 0.0010 -1.03E-05 0.0072 -0.0001 0.0007 -1.30E-05 0.0006 -9.31E-06
MSE 2.62E-06 6.72E-10 5.32E-05 1.54E-08 5.14E-06 1.84E-09 5.11E-06 1.80E-09
AVar 1.50E-06 5.62E-10 1.50E-06 5.62E-10 4.73E-06 1.77E-09 4.73E-06 1.77E-09
Table 10: Estimates of the parameters of model (5) when errors are i.i.d Gaussian random variables as defined in (8) and M = N = 50
Parameters
True values 2.1 0.1