On approximate least squares estimators of the parameters of a one-dimensional chirp signal

04/04/2018 · Rhythm Grover et al.

Chirp signals are quite common in many natural and man-made systems such as audio signals, sonar and radar. Estimation of the unknown parameters of a signal is a fundamental problem in statistical signal processing. Recently, Kundu and Nandi [2008] studied the asymptotic properties of the least squares estimators of the unknown parameters of a simple chirp signal model under the assumption of stationary noise. In this paper, we propose periodogram-type estimators, called the approximate least squares estimators, to estimate the unknown parameters, and we study the asymptotic properties of these estimators under the same error assumptions. It is observed that the approximate least squares estimators are strongly consistent and asymptotically equivalent to the least squares estimators. Similar to the periodogram estimators, these estimators can also be used as initial guesses to find the least squares estimators of the unknown parameters. We perform some numerical simulations to assess the performance of the proposed estimators and compare them with the least squares estimators and the estimators proposed by Lahiri et al., [2013]. We also analyse two real data sets for illustrative purposes.


1 Introduction

In this paper we consider the following multiple component chirp signal model:

$y(t) = \sum_{k=1}^{p}\left\{A_k^0\cos(\alpha_k^0 t + \beta_k^0 t^2) + B_k^0\sin(\alpha_k^0 t + \beta_k^0 t^2)\right\} + X(t), \qquad (1)$

for $t = 1, \ldots, n$. Here $y(t)$ is the real valued signal observed at $t = 1, \ldots, n$; the $A_k^0$s and $B_k^0$s are real valued amplitudes, the $\alpha_k^0$s and $\beta_k^0$s are the frequencies and the frequency rates, respectively, and $p$ is the number of components of the model. Here, $\{X(t)\}$ is a sequence of error random variables with mean zero and finite fourth moment. The explicit assumption on the error structure is provided in Section 2.
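As a concrete illustration, the following is a minimal R sketch of data generation from model (1); the function name simulate_chirp and the example parameter values are our own illustrative choices, not the paper's.

```r
# Minimal sketch: simulate data from the chirp model (1) with p components
# and, for simplicity, i.i.d. Gaussian errors.
simulate_chirp <- function(n, A, B, alpha, beta, sigma = 1) {
  t <- 1:n
  signal <- rep(0, n)
  for (k in seq_along(A)) {
    phase <- alpha[k] * t + beta[k] * t^2
    signal <- signal + A[k] * cos(phase) + B[k] * sin(phase)
  }
  signal + rnorm(n, mean = 0, sd = sigma)  # add the error component X(t)
}

# Example: one component with frequency 2.5 and frequency rate 0.1
y <- simulate_chirp(n = 250, A = 2, B = 3, alpha = 2.5, beta = 0.1, sigma = 0.5)
```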

Unlike the sinusoidal signal, a chirp signal has a frequency that changes with time. These signals occur in many physical phenomena of interest in science and engineering. The chirp model has its roots in radar signal modelling and is used in various forms for modelling trajectories of moving objects. Many estimation procedures have also been proposed in the literature for the estimation of the unknown parameters of chirp signals, which is of primary interest. See Bello [1960], Kelly [1961], Abatzoglou [1986], Djuric and Kay [1990], Peleg and Porat [1991], Shamsunder et al., [1995], Ikram et al., [1997], Besson et al., [1999], Saha and Kay [2002], Nandi and Kundu [2004], Kundu and Nandi [2008] and the references cited therein. For recent references, see Lahiri et al., [2014], [2015] and Mazumder [2016].

Least squares estimators (LSEs) are a reasonable choice for estimating the unknown parameters of a linear or a non-linear model. The theoretical properties of the LSEs for a chirp signal model were first obtained by Nandi and Kundu [2004] under the assumption that the additive errors are independently and identically distributed (i.i.d.) random variables with mean zero and finite variance. They proved that if the errors are i.i.d. normal, the asymptotic variances attain the Cramer-Rao lower bound. Since in practice the errors may not be independent, to make the model more realistic, Kundu and Nandi [2008] assumed stationarity of the error component to incorporate the dependence structure and studied the properties of the LSEs of the same model. It is observed that the dispersion matrix of the asymptotic distribution of the LSEs turns out to be quite complicated. Using a number theoretic result of Vinogradov [1954], Lahiri et al., [2015] provided a simplified structure of this dispersion matrix.

Although the LSEs have nice theoretical properties, finding the least squares estimates is computationally quite demanding. For instance, for the sinusoidal model, it has been observed by Rice and Rosenblatt [1988] that the least squares surface has several local minima near the true parameter value (see Fig. 1, page 481), and for this reason most iterative procedures, even when they converge, often converge to a local minimum rather than the global minimum. The same problem is observed for the chirp model. Thus a very good set of initial values is required for any iterative method to work.

One of the most popular estimators for finding the initial values for the frequencies of the sinusoidal model are the periodogram estimators (PEs). These are obtained by maximizing the following periodogram function:

$I(\omega) = \frac{1}{n}\left|\sum_{t=1}^{n} y(t)\, e^{-i\omega t}\right|^2 \qquad (2)$

at the Fourier frequencies, namely at $\frac{2\pi j}{n}$; $j = 1, \ldots, n-1$. It has been proved that if the periodogram function is maximised over the entire range $(0, \pi)$, the estimators obtained, called the approximate least squares estimators (ALSEs), are consistent and asymptotically equivalent to the least squares estimators (see Whittle [1952], Walker [1971]). In this paper, we study the behaviour of the periodogram-type estimators of the unknown parameters of the chirp model and see how they compare with the corresponding least squares estimators theoretically. Analogous to the periodogram function for the sinusoidal model, a periodogram-type function for the chirp model can be defined as follows:

$I(\alpha, \beta) = \frac{1}{n}\left|\sum_{t=1}^{n} y(t)\, e^{-i(\alpha t + \beta t^2)}\right|^2. \qquad (3)$

Corresponding to the Fourier frequencies at which $I(\omega)$ is maximised for the sinusoidal model, it seems reasonable that for the chirp model, we maximise $I(\alpha, \beta)$ at $\left(\frac{\pi j}{n}, \frac{\pi k}{n^2}\right)$; $j = 1, \ldots, n$, $k = 1, \ldots, n^2$, to obtain the initial guesses for the frequency and frequency rate parameters, respectively.
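For illustration, the periodogram-type function (3) and its maximisation over a grid can be sketched in R as follows. The helper names chirp_periodogram and grid_search are ours, and the grid sizes are illustrative assumptions chosen only to keep the example fast; the full grid described above is much finer.

```r
# Sketch: the periodogram-type function I(alpha, beta) of equation (3).
chirp_periodogram <- function(y, alpha, beta) {
  n <- length(y)
  t <- 1:n
  phase <- alpha * t + beta * t^2
  (sum(y * cos(phase))^2 + sum(y * sin(phase))^2) / n
}

# Coarse grid search over (0, pi] x (0, pi] for initial guesses of (alpha, beta).
grid_search <- function(y, n_alpha = 200, n_beta = 200) {
  alphas <- pi * seq_len(n_alpha) / n_alpha
  betas  <- pi * seq_len(n_beta) / n_beta
  best <- c(alpha = NA, beta = NA)
  best_val <- -Inf
  for (a in alphas) for (b in betas) {
    val <- chirp_periodogram(y, a, b)
    if (val > best_val) { best_val <- val; best <- c(alpha = a, beta = b) }
  }
  best
}
```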

Consider the periodogram-like function defined in equation (3), which can also be written as:

$I(\alpha, \beta) = \frac{1}{n}\left[\left(\sum_{t=1}^{n} y(t)\cos(\alpha t + \beta t^2)\right)^2 + \left(\sum_{t=1}^{n} y(t)\sin(\alpha t + \beta t^2)\right)^2\right]. \qquad (4)$

The ALSEs of $\alpha$ and $\beta$ are obtained by maximising $I(\alpha, \beta)$ with respect to $\alpha$ and $\beta$ simultaneously. Our primary focus is to estimate the non-linear parameters $\alpha$ and $\beta$; once we estimate these parameters efficiently, the linear parameters $A$ and $B$ can be obtained by the separable linear regression technique of Richards [1961].

In this paper, we prove that the ALSEs are strongly consistent. As a matter of fact, the consistency of the ALSEs of the linear parameters $A^0$ and $B^0$ is obtained under slightly weaker conditions than that of the LSEs, as we do not require their parameter space to be bounded in this case. Also, the rate of convergence of the ALSEs of the linear parameters is $n^{-1/2}$, and those of the frequency and the frequency rate are $n^{-3/2}$ and $n^{-5/2}$, respectively. The convergence rates of the ALSEs are thus the same as those of the corresponding LSEs. We show that the asymptotic distribution of the ALSEs is equivalent to that of the LSEs.

Recently, Lahiri et al., [2013] proposed an efficient algorithm to compute the estimators of the unknown parameters of the chirp model. We perform numerical simulations to compare the proposed ALSEs with the LSEs and with the estimators obtained by the efficient algorithm. We observe that in most cases, although the LSEs provide the best results, the time taken by the ALSEs is comparatively less. Among the three estimators, those computed using the efficient algorithm take the least amount of time, though their biases and MSEs are larger than those of the other two estimators.

The rest of the paper is organised as follows. In Section 2, we prove the consistency of the ALSEs and their asymptotic equivalence to the LSEs. In Section 3, we discuss parameter estimation for the multiple component chirp model. In Section 4, we present some simulation results, and in Section 5, we analyze some real life data sets for illustrative purposes. Finally, in Section 6, we conclude the paper. All the proofs are provided in the appendices.

2 Main Results for the One Component Chirp Model

In this section, we study the asymptotic properties of the following one component chirp model:

$y(t) = A^0\cos(\alpha^0 t + \beta^0 t^2) + B^0\sin(\alpha^0 t + \beta^0 t^2) + X(t), \quad t = 1, \ldots, n. \qquad (5)$

We will use the following notations: $\theta = (A, B, \alpha, \beta)$, $\theta^0 = (A^0, B^0, \alpha^0, \beta^0)$ the true parameter vector, $\hat{\theta} = (\hat{A}, \hat{B}, \hat{\alpha}, \hat{\beta})$ the LSE of $\theta^0$, and $\tilde{\theta} = (\tilde{A}, \tilde{B}, \tilde{\alpha}, \tilde{\beta})$ the ALSE of $\theta^0$. The following assumption is made on the error component of model (5):

Assumption 1. Let $\mathbb{Z}$ be the set of integers. $\{X(t)\}$ is a stationary linear process with the following form:

$X(t) = \sum_{j=-\infty}^{\infty} a(j)\, e(t-j), \qquad (6)$

where $\{e(t);\ t \in \mathbb{Z}\}$ is a sequence of i.i.d. random variables with $E[e(t)] = 0$, $V[e(t)] = \sigma^2$, and the $a(j)$s are real constants such that

$\sum_{j=-\infty}^{\infty} |a(j)| < \infty. \qquad (7)$

This is a standard assumption for a stationary linear process. Any finite dimensional stationary MA, AR or ARMA process can be represented as (6) when the coefficients $a(j)$ satisfy condition (7), and hence this covers a large class of stationary random variables.
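For instance, a stationary AR(1) process $X(t) = \rho X(t-1) + e(t)$ with $|\rho| < 1$ is of the form (6), as the following standard expansion shows:

```latex
X(t) = \sum_{j=0}^{\infty} \rho^{j}\, e(t-j),
\qquad
\sum_{j=0}^{\infty} \lvert a(j) \rvert
  = \sum_{j=0}^{\infty} \lvert \rho \rvert^{j}
  = \frac{1}{1-\lvert \rho \rvert} < \infty,
```

so condition (7) is satisfied with $a(j) = \rho^j$ for $j \geq 0$ and $a(j) = 0$ otherwise.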
Let $\tilde{A}$, $\tilde{B}$, $\tilde{\alpha}$ and $\tilde{\beta}$ be the ALSEs of $A^0$, $B^0$, $\alpha^0$ and $\beta^0$, respectively. First we find $\tilde{\alpha}$ and $\tilde{\beta}$ by maximising $I(\alpha, \beta)$, as defined in (4), with respect to $\alpha$ and $\beta$, and once we obtain $\tilde{\alpha}$ and $\tilde{\beta}$, the ALSEs of the linear parameters $A^0$ and $B^0$ can be obtained as follows:

$\tilde{A} = \frac{2}{n}\sum_{t=1}^{n} y(t)\cos(\tilde{\alpha} t + \tilde{\beta} t^2), \qquad \tilde{B} = \frac{2}{n}\sum_{t=1}^{n} y(t)\sin(\tilde{\alpha} t + \tilde{\beta} t^2). \qquad (8)$
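Given $\tilde{\alpha}$ and $\tilde{\beta}$, (8) is a one-line computation; a minimal R sketch follows, assuming the data vector y and the estimates alpha_t, beta_t from the maximisation above (the function name linear_alse is ours):

```r
# Sketch of equation (8): ALSEs of the linear parameters, given the ALSEs
# of the frequency (alpha_t) and frequency rate (beta_t).
linear_alse <- function(y, alpha_t, beta_t) {
  n <- length(y)
  t <- 1:n
  phase <- alpha_t * t + beta_t * t^2
  c(A = (2 / n) * sum(y * cos(phase)),
    B = (2 / n) * sum(y * sin(phase)))
}
```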

In the following two theorems, we state the consistency of the ALSE $\tilde{\theta}$.

Theorem 1.

Let $(\alpha^0, \beta^0)$ be an interior point of $[0, \pi] \times [0, \pi]$. If $\{X(t)\}$ satisfies Assumption 1, then the ALSEs $\tilde{\alpha}$ and $\tilde{\beta}$ are strongly consistent estimators of $\alpha^0$ and $\beta^0$, respectively.

Proof.

See Appendix A. ∎

Theorem 2.

Under the conditions of Theorem 1, the ALSEs $\tilde{A}$ and $\tilde{B}$ of the linear parameters $A^0$ and $B^0$ are strongly consistent estimators.

Proof.

See Appendix A. ∎

The following theorem shows that the ALSEs asymptotically have the same distribution as the LSEs.

Theorem 3.

Under Assumption 1, the limiting distribution of $(\tilde{\theta} - \theta^0)\mathbf{D}^{-1}$ is the same as that of $(\hat{\theta} - \theta^0)\mathbf{D}^{-1}$, where $\tilde{\theta}$ is the ALSE of $\theta^0$, $\hat{\theta}$ is the LSE of $\theta^0$, and $\mathbf{D} = \operatorname{diag}(n^{-1/2}, n^{-1/2}, n^{-3/2}, n^{-5/2})$.

Proof.

See Appendix B. ∎

3 Main Results for the Multiple Component Chirp Model

In this section, we consider a chirp signal model with multiple components. Mathematically, a multiple-component chirp model is given by:

$y(t) = \sum_{k=1}^{p}\left\{A_k^0\cos(\alpha_k^0 t + \beta_k^0 t^2) + B_k^0\sin(\alpha_k^0 t + \beta_k^0 t^2)\right\} + X(t), \qquad (9)$

where $y(t)$ is the real valued signal observed at $t = 1, 2, \ldots, n$; $A_k^0$ and $B_k^0$ are the amplitudes, and $\alpha_k^0$ and $\beta_k^0$ are the frequencies and frequency rates, respectively, for $k = 1, 2, \ldots, p$.

To estimate the unknown parameters, we propose a sequential procedure to find the ALSEs. This method significantly reduces the computational complexity of the estimators without compromising their efficiency. The following is the algorithm to find the ALSEs through the sequential method (an R sketch is given after the steps):

Step 1: Compute $\tilde{\alpha}_1$ and $\tilde{\beta}_1$ by maximizing the periodogram-like function

$I_1(\alpha, \beta) = \frac{1}{n}\left|\sum_{t=1}^{n} y(t)\, e^{-i(\alpha t + \beta t^2)}\right|^2. \qquad (10)$

Then the linear parameter estimates can be obtained by substituting $\tilde{\alpha}_1$ and $\tilde{\beta}_1$ in (8). Thus we obtain $\tilde{A}_1$ and $\tilde{B}_1$.

Step 2: Now we have the estimates of the parameters of the first component of the observed signal. We subtract the contribution of the first component from the original signal to remove its effect and obtain new data, say

$y_1(t) = y(t) - \tilde{A}_1\cos(\tilde{\alpha}_1 t + \tilde{\beta}_1 t^2) - \tilde{B}_1\sin(\tilde{\alpha}_1 t + \tilde{\beta}_1 t^2).$

Step 3: Now compute $\tilde{\alpha}_2$ and $\tilde{\beta}_2$ by maximizing $I_2(\alpha, \beta)$, which is obtained by replacing the original data vector by the new data vector in (10), and $\tilde{A}_2$ and $\tilde{B}_2$ by substituting $\tilde{\alpha}_2$ and $\tilde{\beta}_2$ in (8).

Step 4: Continue the process up to $p$ steps.
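A minimal R sketch of Steps 1-4, reusing the hypothetical helpers grid_search and linear_alse from the earlier sketches; the optimisation details (grid fineness, local refinement) are simplified assumptions:

```r
# Sketch of the sequential ALSE procedure for a p component model.
sequential_alse <- function(y, p) {
  est <- vector("list", p)
  resid <- y
  for (k in 1:p) {
    nl  <- grid_search(resid)                           # Steps 1/3: alpha_k, beta_k
    lin <- linear_alse(resid, nl["alpha"], nl["beta"])  # linear parameters via (8)
    est[[k]] <- c(lin, nl)
    t <- seq_along(resid)
    phase <- nl["alpha"] * t + nl["beta"] * t^2
    # Step 2: subtract the fitted k-th component before estimating the next one
    resid <- resid - lin["A"] * cos(phase) - lin["B"] * sin(phase)
  }
  est
}
```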

Note that we use the following notation: the parameter vector $\theta_k = (A_k, B_k, \alpha_k, \beta_k)$ and the true parameter vector $\theta_k^0 = (A_k^0, B_k^0, \alpha_k^0, \beta_k^0)$ for all $k = 1, 2, \ldots, p$, and the parameter space $\Theta = [-M, M] \times [-M, M] \times [0, \pi] \times [0, \pi]$ for some $M > 0$.

Next, to establish the asymptotic properties of these estimators, we make the following further model assumptions:

Assumption 2. $\theta_k^0$ is an interior point of $\Theta$ for all $k = 1, \ldots, p$, and the frequencies and the frequency rates are such that $(\alpha_j^0, \beta_j^0) \neq (\alpha_k^0, \beta_k^0)$ for $j \neq k$.

Assumption 3. The $A_k^0$s and $B_k^0$s satisfy the following relationship: $M^2 > A_1^{0\,2} + B_1^{0\,2} > A_2^{0\,2} + B_2^{0\,2} > \cdots > A_p^{0\,2} + B_p^{0\,2} > 0.$

In the following theorems we prove that the ALSEs obtained by the sequential method described above are strongly consistent.

Theorem 4.

Under Assumptions 1, 2 and 3, $\tilde{A}_1$, $\tilde{B}_1$, $\tilde{\alpha}_1$ and $\tilde{\beta}_1$ are strongly consistent estimators of $A_1^0$, $B_1^0$, $\alpha_1^0$ and $\beta_1^0$, respectively, that is, $\tilde{\theta}_1 \to \theta_1^0$ almost surely as $n \to \infty$.

Proof.

See Appendix C. ∎

Theorem 5.

If Assumptions 1, 2 and 3 are satisfied and $p \geq 2$, then $\tilde{A}_2$, $\tilde{B}_2$, $\tilde{\alpha}_2$ and $\tilde{\beta}_2$ are strongly consistent estimators of $A_2^0$, $B_2^0$, $\alpha_2^0$ and $\beta_2^0$, respectively, that is, $\tilde{\theta}_2 \to \theta_2^0$ almost surely as $n \to \infty$.

Proof.

See Appendix C. ∎

The result obtained in the above theorem can be extended up to the $p$-th step. Thus for any $k \leq p$, the ALSEs obtained at the $k$-th step are strongly consistent.

Theorem 6.

If Assumptions 1, 2 and 3 are satisfied, and if $\tilde{A}_k$, $\tilde{B}_k$, $\tilde{\alpha}_k$ and $\tilde{\beta}_k$ are the estimators obtained at the $k$-th step, with $k > p$, then $\tilde{A}_k \to 0$ and $\tilde{B}_k \to 0$ almost surely as $n \to \infty$.

Proof.

See Appendix C. ∎

Lahiri et al., [2015] proved that the ordinary LSEs of the unknown parameters of the $p$-component chirp model have the following asymptotic distribution: for each $k = 1, \ldots, p$, $(\hat{\theta}_k - \theta_k^0)\mathbf{D}^{-1}$ converges in distribution to a four-variate normal distribution with mean zero and the dispersion matrix given in (11), where $\mathbf{D} = \operatorname{diag}(n^{-1/2}, n^{-1/2}, n^{-3/2}, n^{-5/2})$.

(11)

Also, note the relation (12), which is used in the sequel.

(12)

We have the following result regarding the asymptotic distribution of the ALSEs.

Theorem 7.

Under Assumptions 1, 2 and 3, the asymptotic distribution of $\tilde{\theta}_k$ is equivalent to the asymptotic distribution of $\hat{\theta}_k$, for all $k = 1, \ldots, p$, where $\tilde{\theta}_k$ is the ALSE and $\hat{\theta}_k$ is the LSE of the unknown parameter vector $\theta_k^0$ associated with the $k$-th component of the $p$ component model.

Proof.

See Appendix D. ∎

4 Numerical Experiments

In this section, we present simulation studies for one component and two component chirp models. We first consider the following one component chirp model:

$y(t) = A^0\cos(\alpha^0 t + \beta^0 t^2) + B^0\sin(\alpha^0 t + \beta^0 t^2) + X(t), \quad t = 1, \ldots, n,$

with the true parameter values $\alpha^0 = 2.5$ and $\beta^0 = 0.1$, where $\{X(t)\}$ is an MA(1) process, that is, $X(t) = e(t) + \rho\, e(t-1)$, and the $e(t)$s are i.i.d. normal random variables with mean zero and variance $\sigma^2$. For the simulations we consider different values of $\sigma^2$: 0.1, 0.5 and 1. The different sample sizes we use are $n = 250$, $n = 500$ and $n = 1000$, and for each $n$ we replicate the process, that is, generate the data and obtain the estimates, 1000 times. We estimate the parameters by the least squares estimation method, the approximate least squares estimation method, and the efficient algorithm proposed by Lahiri et al., [2013].
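A minimal R sketch of this data generation step; rho and the amplitudes A, B are illustrative placeholders, since the paper's exact MA(1) coefficient and amplitude values are not shown above:

```r
# Sketch of the simulation setup: one component chirp with MA(1) errors.
simulate_one_component <- function(n, sigma2, rho = 0.5, A = 2, B = 3,
                                   alpha = 2.5, beta = 0.1) {
  e <- rnorm(n + 1, sd = sqrt(sigma2))
  x <- e[-1] + rho * e[-(n + 1)]  # MA(1): X(t) = e(t) + rho * e(t-1)
  t <- 1:n
  phase <- alpha * t + beta * t^2
  A * cos(phase) + B * sin(phase) + x
}
```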

For the LSEs, we first minimize the error sum of squares function with respect to $\alpha$ and $\beta$ using the Nelder-Mead method of optimization (via the optim function in the R stats package). For the initial values, it is intuitive to minimize the function over the grid $\left(\frac{\pi j}{n}, \frac{\pi k}{n^2}\right)$; $j = 1, \ldots, n$, $k = 1, \ldots, n^2$, analogous to what is suggested by Rice and Rosenblatt [1988] for the sinusoidal model. For the ALSEs, we maximize the periodogram-like function $I(\alpha, \beta)$, as defined in (4), again using the Nelder-Mead method, with starting values obtained by maximizing $I(\alpha, \beta)$ over the same grid points as used for the corresponding LSEs.
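For illustration, the ALSE computation described above might look as follows in R, with the grid-search initial value refined by optim's Nelder-Mead method; chirp_periodogram and grid_search are the hypothetical helpers from the earlier sketches:

```r
# Sketch: refine the grid-search initial guess with Nelder-Mead.
# optim() minimizes, so we negate I(alpha, beta) from (4).
alse_fit <- function(y) {
  init <- grid_search(y)
  opt <- optim(init,
               fn = function(par) -chirp_periodogram(y, par[1], par[2]),
               method = "Nelder-Mead")
  c(alpha = opt$par[1], beta = opt$par[2])
}
```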

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  17.2249                  15.3530                  2.2450
       Average   2.5000      0.1000       2.4967      0.1000       2.4801      0.0997
       Bias      4.15e-06    -9.36e-09    -3.26e-03   9.28e-06     -1.99e-02   -2.54e-04
       MSE       1.80e-07    2.68e-12     1.10e-05    9.12e-11     8.22e-03    1.67e-07
       Avar      1.26e-07    1.88e-12     1.26e-07    1.88e-12     1.26e-07    1.88e-12
0.5    Time (s)  17.4460                  13.4589                  2.2410
       Average   2.5000      0.1000       2.4968      0.1000       2.5063      0.0999
       Bias      4.63e-05    -1.97e-07    -3.19e-03   8.98e-06     6.27e-03    -1.29e-04
       MSE       8.84e-07    1.33e-11     1.18e-05    1.05e-10     1.96e-02    3.29e-07
       Avar      6.28e-07    9.42e-12     6.28e-07    9.42e-12     6.28e-07    9.42e-12
1      Time (s)  18.2910                  14.2689                  2.4050
       Average   2.5000      0.1000       2.4968      0.1000       2.5021      0.0999
       Bias      -7.89e-06   2.49e-08     -3.25e-03   9.23e-06     2.13e-03    -1.27e-04
       MSE       1.89e-06    2.94e-11     1.40e-05    1.37e-10     4.62e-02    6.47e-07
       Avar      1.26e-06    1.88e-11     1.26e-06    1.88e-11     1.26e-06    1.88e-11

Table 1: Estimates of the non-linear parameters of the one component model when the sample size is 250. True values: α = 2.5, β = 0.1. Time is the total time taken by each method over the entire simulation; Avar denotes the theoretical asymptotic variance.

Tables 1, 2 and 3 provide the results, averaged over 1000 simulation runs, that we obtain for the one component model. In these tables, we observe that the ALSEs have very small bias in absolute value. The MSEs of the LSEs are very close to their asymptotic variances, and the MSEs of the ALSEs also get very close to those of the LSEs as $n$ increases, and hence to the theoretical asymptotic variances of the LSEs, showing that the two are asymptotically equivalent. Also, when we increase the sample size, the MSEs of both estimators decrease, showing that they are consistent. We observe that the estimators obtained by the Efficient Algorithm are close to the true values, but their biases and MSEs are not as small as those of the other two estimators. However, the time taken to compute the estimates by the Efficient Algorithm is much less than the time taken by the ALSEs and the LSEs.

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  30.1289                  25.6280                  4.3600
       Average   2.5000      0.1000       2.4993      0.1000       2.5304      0.1001
       Bias      -1.19e-05   2.56e-08     -6.79e-04   1.77e-06     3.04e-02    7.11e-05
       MSE       2.13e-08    8.09e-14     4.96e-07    3.26e-12     9.89e-03    3.11e-08
       Avar      1.57e-08    5.89e-14     1.57e-08    5.89e-14     1.57e-08    5.89e-14
0.5    Time (s)  34.6510                  30.3080                  5.2440
       Average   2.5000      0.1000       2.4994      0.1000       2.5227      0.1000
       Bias      1.04e-05    -1.44e-08    -6.47e-04   1.71e-06     2.27e-02    4.54e-05
       MSE       1.21e-07    4.45e-13     6.12e-07    3.63e-12     2.35e-02    6.69e-08
       Avar      7.85e-08    2.94e-13     7.85e-08    2.94e-13     7.85e-08    2.94e-13
1      Time (s)  32.2790                  26.5430                  4.4199
       Average   2.5000      0.1000       2.4993      0.1000       2.5189      0.1000
       Bias      -1.61e-05   2.01e-08     -6.77e-04   1.75e-06     1.89e-02    3.15e-05
       MSE       2.18e-07    8.08e-13     8.04e-07    4.33e-12     7.32e-03    2.31e-08
       Avar      1.57e-07    5.89e-13     1.57e-07    5.89e-13     1.57e-07    5.89e-13

Table 2: Estimates of the non-linear parameters of the one component model when the sample size is 500. True values: α = 2.5, β = 0.1.
                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  67.1180                  62.3369                  9.5720
       Average   2.5000      0.1000       2.5002      0.1000       2.4984      0.1000
       Bias      8.16e-07    -9.15e-10    1.86e-04    -9.30e-08    -1.65e-03   -5.34e-06
       MSE       2.95e-09    2.85e-15     3.87e-08    1.21e-14     1.79e-03    1.46e-09
       Avar      1.96e-09    1.84e-15     1.96e-09    1.84e-15     1.96e-09    1.84e-15
0.5    Time (s)  61.2009                  56.3849                  8.2260
       Average   2.5000      0.1000       2.5002      0.1000       2.4981      0.1000
       Bias      1.80e-06    -1.67e-09    1.86e-04    -9.24e-08    -1.87e-03   -4.21e-06
       MSE       1.57e-08    1.55e-14     5.40e-08    2.60e-14     1.58e-03    1.32e-09
       Avar      9.81e-09    9.20e-15     9.81e-09    9.20e-15     9.81e-09    9.20e-15
1      Time (s)  62.5589                  56.3840                  8.2129
       Average   2.5000      0.1000       2.5002      0.1000       2.4948      0.1000
       Bias      3.32e-06    -8.67e-10    1.88e-04    -9.19e-08    -5.20e-03   -6.51e-06
       MSE       3.10e-08    2.95e-14     7.41e-08    4.22e-14     1.40e-03    1.13e-09
       Avar      1.96e-08    1.84e-14     1.96e-08    1.84e-14     1.96e-08    1.84e-14

Table 3: Estimates of the non-linear parameters of the one component model when the sample size is 1000. True values: α = 2.5, β = 0.1.

We also perform simulations for the following two component model using the proposed sequential estimators:

$y(t) = \sum_{k=1}^{2}\left\{A_k^0\cos(\alpha_k^0 t + \beta_k^0 t^2) + B_k^0\sin(\alpha_k^0 t + \beta_k^0 t^2)\right\} + X(t).$

For the simulations, we take the true values as $A_1^0 = 2$, $B_1^0 = 1.75$, $\alpha_1^0 = 1.5$, $\beta_1^0 = 0.1$, $A_2^0 = 3$, $B_2^0 = 2.25$, $\alpha_2^0 = 2.5$ and $\beta_2^0 = 0.2$, and compute both the LSEs and the ALSEs of all the unknown parameters sequentially. The error structure is the same as that for the one component simulation study.

First component (true values: α_1 = 1.5, β_1 = 0.1)

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  31.918                   25.5269                  4.5620
       Average   1.5074      0.1000       1.5044      0.1000       1.4711      0.1001
       Bias      7.43e-03    -2.57e-05    4.40e-03    -1.59e-05    -2.89e-02   7.77e-05
       MSE       5.58e-05    6.70e-10     2.00e-05    2.62e-10     1.63e-02    2.25e-07
       Avar      1.09e-07    1.64e-12     1.09e-07    1.64e-12     1.09e-07    1.64e-12
0.5    Time (s)  32.3660                  26.2630                  4.5389
       Average   1.5075      0.1000       1.5045      0.1000       1.4832      0.1001
       Bias      7.51e-03    -2.60e-05    4.48e-03    -1.62e-05    -1.68e-02   1.18e-04
       MSE       5.80e-05    7.04e-10     2.29e-05    3.15e-10     2.41e-02    2.98e-07
       Avar      5.46e-07    8.19e-12     5.46e-07    8.19e-12     5.46e-07    8.19e-12
1      Time (s)  32.6730                  26.5839                  4.5110
       Average   1.5074      0.1000       1.5043      0.1000       1.4809      0.1001
       Bias      7.37e-03    -2.55e-05    4.33e-03    -1.56e-05    -1.91e-02   1.14e-04
       MSE       5.75e-05    6.98e-10     2.40e-05    3.40e-10     3.29e-02    4.15e-07
       Avar      1.09e-06    1.64e-11     1.09e-06    1.64e-11     1.09e-06    1.64e-11

Second component (true values: α_2 = 2.5, β_2 = 0.2)

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  31.918                   25.5269                  4.5620
       Average   2.4999      0.2000       2.5000      0.2000       2.4548      0.1998
       Bias      -1.22e-04   1.82e-07     -1.40e-05   -3.43e-06    -4.52e-02   -1.67e-04
       MSE       1.96e-07    2.74e-12     2.02e-07    1.46e-11     3.44e-02    3.89e-07
       Avar      2.17e-07    3.26e-12     2.17e-07    3.26e-12     2.17e-07    3.26e-12
0.5    Time (s)  32.3660                  26.2630                  4.5389
       Average   2.5000      0.2000       2.5001      0.2000       2.4744      0.1999
       Bias      -2.84e-05   -1.71e-07    9.07e-05    -3.82e-06    -2.56e-02   -9.02e-05
       MSE       7.73e-07    1.13e-11     8.81e-07    2.68e-11     3.67e-02    4.52e-07
       Avar      1.09e-06    1.63e-11     1.09e-06    1.63e-11     1.09e-06    1.63e-11
1      Time (s)  32.6730                  26.5839                  4.5110
       Average   2.4999      0.2000       2.5000      0.2000       2.4707      0.1999
       Bias      -1.26e-04   1.62e-07     -2.08e-05   -3.43e-06    -2.93e-02   -8.50e-05
       MSE       1.82e-06    2.65e-11     2.03e-06    4.04e-11     2.61e-02    3.63e-07
       Avar      2.17e-06    3.26e-11     2.17e-06    3.26e-11     2.17e-06    3.26e-11

Table 4: Estimates of the non-linear parameters of the two component model when the sample size is 250. The reported times are totals for the full two component fit and are therefore identical in both panels.
First component (true values: α_1 = 1.5, β_1 = 0.1)

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  61.0879                  55.7359                  8.4870
       Average   1.5020      0.1000       1.5011      0.1000       1.4798      0.1000
       Bias      1.98e-03    -4.30e-06    1.13e-03    -2.49e-06    -2.02e-02   -4.34e-05
       MSE       4.01e-06    1.88e-11     1.33e-06    6.40e-12     1.17e-02    2.82e-08
       Avar      1.37e-08    5.12e-14     1.37e-08    5.12e-14     1.37e-08    5.12e-14
0.5    Time (s)  61.8270                  55.9599                  8.4100
       Average   1.5020      0.1000       1.5011      0.1000       1.4840      0.1000
       Bias      1.97e-03    -4.29e-06    1.13e-03    -2.48e-06    -1.60e-02   -3.96e-05
       MSE       4.12e-06    1.92e-11     1.49e-06    7.02e-12     1.19e-02    3.37e-08
       Avar      6.83e-08    2.56e-13     6.83e-08    2.56e-13     6.83e-08    2.56e-13
1      Time (s)  63.3360                  57.1080                  8.7080
       Average   1.5020      0.1000       1.5011      0.1000       1.4832      0.1000
       Bias      1.99e-03    -4.32e-06    1.14e-03    -2.53e-06    -1.68e-02   -2.97e-05
       MSE       4.35e-06    2.03e-11     1.76e-06    8.22e-12     8.34e-03    2.37e-08
       Avar      1.37e-07    5.12e-13     1.37e-07    5.12e-13     1.37e-07    5.12e-13

Second component (true values: α_2 = 2.5, β_2 = 0.2)

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  61.0879                  55.7359                  8.4870
       Average   2.4999      0.2000       2.4987      0.2000       2.4861      0.1999
       Bias      -5.39e-05   1.35e-08     -1.26e-03   2.13e-06     -1.39e-02   -7.72e-05
       MSE       2.32e-08    7.37e-14     1.61e-06    4.66e-12     3.04e-03    1.66e-08
       Avar      2.72e-08    1.02e-13     2.72e-08    1.02e-13     2.72e-08    1.02e-13
0.5    Time (s)  61.8270                  55.9599                  8.4100
       Average   2.5000      0.2000       2.4988      0.2000       2.5017      0.2000
       Bias      -4.13e-05   -1.44e-08    -1.24e-03   2.09e-06     1.65e-03    -4.06e-05
       MSE       9.53e-08    3.53e-13     1.67e-06    4.89e-12     2.89e-03    1.28e-08
       Avar      1.36e-07    5.10e-13     1.36e-07    5.10e-13     1.36e-07    5.10e-13
1      Time (s)  63.3360                  57.1080                  8.7080
       Average   2.5000      0.2000       2.4988      0.2000       2.5102      0.2000
       Bias      -3.38e-05   -1.78e-08    -1.23e-03   2.08e-06     1.02e-02    -1.78e-05
       MSE       1.98e-07    7.29e-13     1.78e-06    5.37e-12     4.49e-03    1.55e-08
       Avar      2.72e-07    1.02e-12     2.72e-07    1.02e-12     2.72e-07    1.02e-12

Table 5: Estimates of the non-linear parameters of the two component model when the sample size is 500. The reported times are totals for the full two component fit and are therefore identical in both panels.

This process of data generation and estimation of the unknown parameters is replicated 1000 times, and we calculate the average values, biases and MSEs of these estimates. We also report the time taken for the entire simulation process by each of the estimation methods, and we compute the asymptotic variances of the estimates for comparison with the MSEs. The simulation results provided in Tables 4, 5 and 6 for the two component model show that the MSEs of the proposed sequential estimators match the MSEs of the LSEs well, and they become closer as $n$ increases. They are also comparable to the asymptotic variances of the LSEs. In many cases, it is observed that the MSEs of the ALSEs of the first component are smaller than those of the corresponding LSEs. In all the tables, it is consistently observed that the computation of the ALSEs takes less time than that of the LSEs.

First component (true values: α_1 = 1.5, β_1 = 0.1)

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  124.913                  114.535                  16.7209
       Average   1.4999      0.1000       1.5002      0.1000       1.5206      0.1000
       Bias      -7.22e-05   -1.91e-09    1.87e-04    -3.52e-07    2.06e-02    1.68e-05
       MSE       1.02e-08    5.16e-15     4.17e-08    1.30e-13     7.69e-03    5.54e-09
       Avar      1.71e-09    1.60e-15     1.71e-09    1.60e-15     1.71e-09    1.60e-15
0.5    Time (s)  118.263                  115.557                  16.3389
       Average   1.4999      0.1000       1.5002      0.1000       1.5168      0.1000
       Bias      -7.51e-05   1.25e-09     1.97e-04    -3.61e-07    1.68e-02    1.20e-05
       MSE       2.91e-08    2.47e-14     6.41e-08    1.54e-13     2.32e-03    1.73e-09
       Avar      8.53e-09    8.00e-15     8.53e-09    8.00e-15     8.53e-09    8.00e-15
1      Time (s)  118.7809                 114.4330                 16.3170
       Average   1.4999      0.1000       1.5002      0.1000       1.5111      0.1000
       Bias      -7.32e-05   7.49e-10     2.04e-04    -3.66e-07    1.11e-02    6.34e-06
       MSE       5.45e-08    5.07e-14     9.13e-08    1.81e-13     1.18e-03    8.62e-10
       Avar      1.71e-08    1.60e-14     1.71e-08    1.60e-14     1.71e-08    1.60e-14

Second component (true values: α_2 = 2.5, β_2 = 0.2)

                 LSEs                     ALSEs                    Efficient Algorithm
σ²     Metric    α           β            α           β            α           β
0.1    Time (s)  124.913                  114.535                  16.7209
       Average   2.5000      0.2000       2.4998      0.2000       2.4958      0.2000
       Bias      2.29e-05    -1.69e-08    -2.44e-04   3.25e-07     -4.15e-03   -1.30e-06
       MSE       3.32e-09    2.90e-15     6.29e-08    1.09e-13     9.38e-04    6.46e-10
       Avar      3.40e-09    3.19e-15     3.40e-09    3.19e-15     3.40e-09    3.19e-15
0.5    Time (s)  118.263                  115.557                  16.3389
       Average   2.5000      0.2000       2.4998      0.2000       2.4979      0.2000
       Bias      2.08e-05    -1.40e-08    -2.45e-04   3.26e-07     -2.13e-03   6.78e-07
       MSE       1.43e-08    1.31e-14     7.47e-08    1.20e-13     6.51e-04    5.28e-10
       Avar      1.70e-08    1.59e-14     1.70e-08    1.59e-14     1.70e-08    1.59e-14
1      Time (s)  118.7809                 114.4330                 16.3170
       Average   2.5000      0.2000       2.4997      0.2000       2.5007      0.2000
       Bias      1.01e-05    -3.41e-09    -2.55e-04   3.37e-07     7.45e-04    2.89e-06
       MSE       2.81e-08    2.65e-14     9.42e-08    1.41e-13     1.43e-03    8.21e-10
       Avar      3.40e-08    3.19e-14     3.40e-08    3.19e-14     3.40e-08    3.19e-14

Table 6: Estimates of the non-linear parameters of the two component model when the sample size is 1000. The reported times are totals for the full two component fit and are therefore identical in both panels.

It is observed that the estimates of the unknown non-linear parameters of the second component of the two component model, that is, of $\alpha_2^0$ and $\beta_2^0$, have very small bias compared to those obtained at the first stage, that is, of $\alpha_1^0$ and $\beta_1^0$, or those obtained for the one component model. Since the proposed ALSEs have desirable properties, it is a good idea to obtain the initial estimates by maximising the periodogram-like function defined in (4) and then carry out the least squares estimation.

5 Real Data Analysis

For illustration, we perform analysis of two speech signal data sets "AHH" and "AAA". These data have been obtained from a sound instrument at the Speech Signal Processing laboratory of the Indian Institute of Technology Kanpur. We have 469 data points in the "AHH" signal data set and 477 data points in the "AAA" signal data set, both sampled at 10 kHz frequency. Figure 1 gives the plot of the observed signal "AHH" and Figure 2 gives the plot of the observed signal "AAA".

Figure 1: AHH: original signal
Figure 2: AAA: original signal

We fit a multiple component chirp model to both the data sets using the proposed sequential estimation procedure, which computes the ALSEs at each stage. At the same time, we compute the sequential LSEs, as proposed by Lahiri et al., [2015], for comparison purposes. To find the initial values of the frequency and frequency rate, at each stage we maximize the periodogram-like function over a fine grid: $\left(\frac{\pi j}{n}, \frac{\pi k}{n^2}\right)$, $j = 1, 2, \ldots, n$; $k = 1, 2, \ldots, n^2$. For the estimation of the number of components, we use a BIC-type criterion based on the residual sum of squares of the fitted $k$-component model.

The model order is estimated as the value of $k$ for which the BIC is minimum. For the "AHH" data, when we estimate the parameters using the sequential least squares estimation procedure, it is evident from Figure 3 that the number of components that fits this data is 8. Using the proposed sequential ALSEs to fit the model gives the same estimated number of components, as can be seen in Figure 4. The number of components when we estimate the parameters of the "AAA" data using the sequential least squares estimation procedure is 9, as can be seen from Figure 5. The proposed sequential ALSEs again give the same estimated number of components, as can be seen in Figure 6.
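As an illustration of this model order selection step, a standard BIC of the form n·log(SSE/n) + q·log(n), with q = 4k estimated parameters for a k-component model, can be computed as below. This exact penalty is our assumption for the sketch, since the paper's BIC formula is not reproduced above.

```r
# Hedged sketch: a standard BIC for choosing the number of components k.
# Each component contributes 4 parameters (A, B, alpha, beta), hence q = 4k;
# the paper may use a different penalty.
bic_chirp <- function(y, fitted_values, k) {
  n <- length(y)
  sse <- sum((y - fitted_values)^2)  # residual sum of squares of the k-component fit
  n * log(sse / n) + 4 * k * log(n)
}
```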

Figure 3: BIC plot: "AHH" data set when estimates are obtained by sequential LSE procedure.
Figure 4: BIC plot: "AHH" data set when estimates are obtained by sequential ALSE procedure.
Figure 5: BIC plot: "AAA" data set when estimates are obtained by sequential LSE procedure.
Figure 6: BIC plot: "AAA" data set when estimates are obtained by sequential ALSE procedure.

Figure 7 and Figure 8 give the observed as well as the fitted signal for the "AHH" data, estimated using the sequential LSEs and the sequential ALSEs, respectively. We observe from these plots that both fits look similar; hence we may conclude from here as well that the ALSEs are equivalent to the LSEs. Figure 9 and Figure 10 give the observed as well as the fitted signal for the "AAA" data, estimated using the sequential LSEs and the sequential ALSEs, respectively.

Figure 7: Observed "AHH" signal and signal fitted using sequential LSEs.
Figure 8: Observed "AHH" signal and signal fitted using sequential ALSEs.
Figure 9: Observed "AAA" signal and signal fitted using sequential LSEs.
Figure 10: Observed "AAA" signal and signal fitted using sequential ALSEs.

We analyze the residuals by performing the augmented Dickey-Fuller (ADF) test and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test to check for their stationarity. This is done using the in-built R functions "adf.test" and "kpss.test" in the "tseries" package. The ADF test tests the null hypothesis that a unit root is present in the time series against the alternative of no unit root, that is, stationarity; the KPSS test tests the null hypothesis that an observable time series is stationary around a deterministic trend against the alternative of a unit root. For the "AHH" data set, the ADF test rejects the null hypothesis and the KPSS test does not reject the null hypothesis, and therefore, from the results of both tests, we conclude that the residuals are stationary. The same conclusions hold for the "AAA" data set.
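For illustration, these stationarity checks can be reproduced with the tseries functions mentioned above; check_residuals is a hypothetical wrapper of ours:

```r
# Sketch: stationarity checks on the residuals with the tseries package.
library(tseries)

check_residuals <- function(res) {
  adf  <- adf.test(res)   # H0: unit root; rejection suggests stationarity
  kpss <- kpss.test(res)  # H0: stationarity; non-rejection agrees with ADF
  c(adf_p = adf$p.value, kpss_p = kpss$p.value)
}
```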

Figures 11-14 provide the residual plots for the two data sets under the two sequential procedures.

Figure 11: Residual plot: of "AHH" data when the estimation is using LSEs.
Figure 12: Residual plot: of "AHH" data when the estimation is using ALSEs.
Figure 13: Residual plot: of "AAA" data when the estimation is using LSEs.
Figure 14: Residual plot: of "AAA" data when the estimation is using ALSEs.

6 Conclusion

In this paper, we proposed periodogram-type estimators, called the approximate least squares estimators (ALSEs), for the parameters of a one-dimensional one component chirp model and studied their asymptotic properties. We showed that they are strongly consistent and asymptotically equivalent to the LSEs. Moreover, we obtained the consistency of the ALSEs under weaker conditions than those required for the LSEs. For the multiple component chirp model, we proposed a sequential procedure based on computing the ALSEs one component at a time, and established that the estimators obtained at each step of the sequential procedure are strongly consistent and asymptotically equivalent to the corresponding sequential LSEs, with the same rates of convergence. The simulation studies presented in the paper also confirm this large sample equivalence. Hence one may use the periodogram-like estimators as initial values for finding the LSEs. We also analysed two speech signal data sets for illustrative purposes, and the performance is quite satisfactory.

Acknowledgement

The authors would like to thank two anonymous reviewers and the associate editor for their constructive comments, which have helped to improve the manuscript significantly.

Appendix A

The following lemmas are required to prove Theorem 1.

Lemma 1.

If {X(t)} satisfies Assumption 1, then:

  (a) $\sup_{\alpha, \beta}\left|\frac{1}{n}\sum_{t=1}^{n} X(t)\cos(\alpha t + \beta t^2)\right| \to 0$ a.s.,

  (b) $\sup_{\alpha, \beta}\left|\frac{1}{n}\sum_{t=1}^{n} X(t)\sin(\alpha t + \beta t^2)\right| \to 0$ a.s.

Proof.

Refer to Kundu and Nandi [2008].

Lemma 2.

If $(\alpha, \beta) \in (0, \pi) \times (0, \pi)$, then except for a countable number of points, the following results are true:

  (a) $\lim_{n \to \infty} \frac{1}{n^{k+1}} \sum_{t=1}^{n} t^k \cos(\alpha t + \beta t^2) = 0$,

  (b) $\lim_{n \to \infty} \frac{1}{n^{k+1}} \sum_{t=1}^{n} t^k \sin(\alpha t + \beta t^2) = 0$,

  (c) $\lim_{n \to \infty} \frac{1}{n^{2k+1}} \sum_{t=1}^{n} t^{2k} \cos^2(\alpha t + \beta t^2) = \frac{1}{2(2k+1)}$,

  (d) $\lim_{n \to \infty} \frac{1}{n^{2k+1}} \sum_{t=1}^{n} t^{2k} \sin^2(\alpha t + \beta t^2) = \frac{1}{2(2k+1)}$,

  (e) $\lim_{n \to \infty} \frac{1}{n^{k+1}} \sum_{t=1}^{n} t^k \cos(\alpha t + \beta t^2)\sin(\alpha t + \beta t^2) = 0$,

for all $k = 0, 1, 2, \ldots$

Proof.

Refer to Lahiri et al., [2015].

Lemma 3.

Suppose $\tilde{\alpha}$ and $\tilde{\beta}$ are the ALSEs of $\alpha^0$ and $\beta^0$, respectively. Let us denote $\tilde{\xi} = (\tilde{\alpha}, \tilde{\beta})$ and $\xi^0 = (\alpha^0, \beta^0)$, and for $\epsilon > 0$, let $S_\epsilon = \{\xi : |\xi - \xi^0| \geq \epsilon\}$. If for any $\epsilon > 0$,

$\limsup_{n \to \infty} \sup_{\xi \in S_\epsilon} \frac{1}{n}\left[I(\xi) - I(\xi^0)\right] < 0 \text{ a.s.,} \qquad (13)$

then $\tilde{\xi} \to \xi^0$ almost surely. Here $I$ is as defined in (4).

Proof.

Let us denote $I(\xi)$ by $I_n(\xi)$ and $\tilde{\xi}$ by $\tilde{\xi}_n$ to emphasize that they depend on $n$. Suppose (13) is true and $\tilde{\xi}_n$ does not converge to $\xi^0$ as $n \to \infty$.

Then there exists an $\epsilon > 0$ and a subsequence $\{n_j\}$ of $\{n\}$ such that $|\tilde{\xi}_{n_j} - \xi^0| \geq \epsilon$ for all $j = 1, 2, \ldots$, that is, $\tilde{\xi}_{n_j} \in S_\epsilon$ for all $j = 1, 2, \ldots$. Since $\tilde{\xi}_{n_j}$ is the ALSE of $\xi^0$ when $n = n_j$, it maximises $I_{n_j}(\xi)$.

Thus, we have $\sup_{\xi \in S_\epsilon} \frac{1}{n_j}\left[I_{n_j}(\xi) - I_{n_j}(\xi^0)\right] \geq \frac{1}{n_j}\left[I_{n_j}(\tilde{\xi}_{n_j}) - I_{n_j}(\xi^0)\right] \geq 0$, which contradicts (13). Hence, the result follows. ∎

Lemma 4.

Suppose $\tilde{\alpha}$ and $\tilde{\beta}$ are the ALSEs of $\alpha^0$ and $\beta^0$, respectively. Let us define $\tilde{\xi} = (\tilde{\alpha}, \tilde{\beta})$ and $\mathbf{D}_1 = \operatorname{diag}(n^{-1}, n^{-2})$; then $(\tilde{\xi} - \xi^0)\mathbf{D}_1^{-1} \to 0$ almost surely as $n \to \infty$.

Proof.

Let us denote by $I'(\xi)$ the $1 \times 2$ first derivative vector, that is, $I'(\xi) = \left(\frac{\partial I(\xi)}{\partial \alpha}, \frac{\partial I(\xi)}{\partial \beta}\right)$, and by $I''(\xi)$ the $2 \times 2$ second derivative matrix of $I(\xi)$, that is, $I''(\xi) = \frac{\partial^2 I(\xi)}{\partial \xi\, \partial \xi^\top}$.

Using multivariate Taylor series expansion of $I'(\tilde{\xi})$ around $\xi^0$, we get:

$I'(\tilde{\xi}) - I'(\xi^0) = (\tilde{\xi} - \xi^0)\, I''(\bar{\xi}), \qquad (14)$

where $\bar{\xi}$ is such that $|\bar{\xi} - \xi^0| \leq |\tilde{\xi} - \xi^0|$. Since $I'(\tilde{\xi}) = 0$, (14) can be re-written as the following:

$(\tilde{\xi} - \xi^0)\mathbf{D}_1^{-1} = -\,I'(\xi^0)\,\mathbf{D}_1 \left[\mathbf{D}_1\, I''(\bar{\xi})\, \mathbf{D}_1\right]^{-1}.$

Let us first consider $I'(\xi^0)\mathbf{D}_1$. Using Lemmas 1 and 2, it can be shown that:

$I'(\xi^0)\,\mathbf{D}_1 \longrightarrow 0 \text{ a.s.}$

Thus we have $I'(\xi^0)\mathbf{D}_1 \to 0$. Now, consider the $2 \times 2$ matrix $\mathbf{D}_1\, I''(\bar{\xi})\, \mathbf{D}_1$. Since $\tilde{\xi} \to \xi^0$ almost surely and $I''(\xi)$ is a continuous function of $\xi$,