1 Introduction
In this paper we consider the following multiple component chirp signal model:
$$y(t) = \sum_{k=1}^{p}\left\{A_k^0\cos(\alpha_k^0 t + \beta_k^0 t^2) + B_k^0\sin(\alpha_k^0 t + \beta_k^0 t^2)\right\} + X(t), \qquad (1)$$
for $t = 1, \ldots, n$. Here $y(t)$ is the real valued signal observed at $t = 1, \ldots, n$; the $A_k^0$s and $B_k^0$s are real valued amplitudes, the $\alpha_k^0$s and $\beta_k^0$s are the frequencies and the frequency rates, respectively, and $p$ is the number of components of the model. Here, $\{X(t)\}$ is a sequence of error random variables with mean zero and finite fourth moment. The explicit assumption on the error structure is provided in Section
2.

Unlike the sinusoidal signal, a chirp signal has a frequency that changes with time. Such signals occur in many physical phenomena of interest in science and engineering. The chirp model has its roots in radar signal modelling and is used in various forms for modelling the trajectories of moving objects. Many estimation procedures have been proposed in the literature for the estimation of the unknown parameters of chirp signals, which is of primary interest. See Bello [1960], Kelly [1961], Abatzoglou [1986], Djuric and Kay [1990], Peleg and Porat [1991], Shamsunder et al. [1995], Ikram et al. [1997], Besson et al. [1999], Saha and Kay [2002], Nandi and Kundu [2004], Kundu and Nandi [2008] and the references cited therein. For recent references, see Lahiri et al. [2014, 2015] and Mazumder [2016].
Least squares estimators (LSEs) are a reasonable choice for estimating the unknown parameters of a linear or a nonlinear model. The theoretical properties of the LSEs for a chirp signal model were first obtained by Nandi and Kundu [2004] under the assumption that the additive errors are independently and identically distributed (i.i.d.) random variables with mean zero and finite variance. They proved that if the errors are i.i.d. normal, the asymptotic variances attain the Cramér-Rao lower bound. Since in practice the errors may not be independent, Kundu and Nandi [2008] assumed stationarity of the error component to incorporate the dependence structure, making the model more realistic, and studied the properties of the LSEs of the same model. It is observed that the dispersion matrix of the asymptotic distribution of the LSEs turns out to be quite complicated. Using a number theoretic result of Vinogradov [1954], Lahiri et al. [2015] provided a simplified structure of this dispersion matrix.

Although the LSEs have nice theoretical properties, finding the least squares estimates is computationally quite demanding. For instance, for the sinusoidal model, it has been observed by Rice and Rosenblatt [1988] that the least squares surface has several local minima near the true parameter value (see Fig. 1, page 481), and for this reason most iterative procedures, even when they converge, often converge to a local minimum rather than the global minimum. The same problem is observed for the chirp model. Thus a very good set of initial values is required for any iterative method to work.
Among the most popular estimators for finding the initial values for the frequencies of the sinusoidal model are the periodogram estimators (PEs). These are obtained by maximizing the following periodogram function:
$$I(\omega) = \frac{1}{n}\left|\sum_{t=1}^{n} y(t)e^{-i\omega t}\right|^2 \qquad (2)$$
at the Fourier frequencies, namely at $\omega_j = \frac{2\pi j}{n}$; $j = 1, \ldots, n-1$. It has been proved that if the periodogram function is maximised over the entire range $(0, \pi)$, the estimators obtained, called the approximate least squares estimators (ALSEs), are consistent and asymptotically equivalent to the least squares estimators (see Whittle [1952], Walker [1971]). In this paper, we study the behaviour of periodogram-type estimators of the unknown parameters of the chirp model and see how they compare with the corresponding least squares estimators theoretically. Analogous to the periodogram function for the sinusoidal model, a periodogram-type function for the chirp model can be defined as follows:
$$I(\alpha, \beta) = \frac{1}{n}\left|\sum_{t=1}^{n} y(t)e^{-i(\alpha t + \beta t^2)}\right|^2 \qquad (3)$$
Corresponding to the Fourier frequencies at which $I(\omega)$ is maximised for the sinusoidal model, it seems reasonable that for the chirp model we maximise $I(\alpha, \beta)$ at $\alpha = \frac{\pi j}{n}$; $j = 1, \ldots, n$ and $\beta = \frac{\pi k}{n^2}$; $k = 1, \ldots, n^2$, to obtain the initial guesses for the frequency and frequency rate parameters, respectively.
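To make the computation concrete, the periodogram-type function (3) and a grid search over $\alpha = \pi j/n$, $\beta = \pi k/n^2$ can be sketched as follows. This is a minimal Python/NumPy sketch; the paper's computations were carried out in R, and the sample size and parameter values below are purely illustrative.

```python
import numpy as np

def periodogram_chirp(y, alpha, beta):
    # I(alpha, beta) = (1/n) | sum_t y(t) exp(-i(alpha*t + beta*t^2)) |^2
    n = len(y)
    t = np.arange(1, n + 1)
    return np.abs(np.sum(y * np.exp(-1j * (alpha * t + beta * t ** 2)))) ** 2 / n

def grid_search(y):
    # Evaluate I over alpha = pi*j/n (j = 1..n) and beta = pi*k/n^2 (k = 1..n^2),
    # vectorising over the beta grid for each alpha, and return the maximiser.
    n = len(y)
    t = np.arange(1, n + 1)
    alphas = np.pi * np.arange(1, n + 1) / n
    betas = np.pi * np.arange(1, n * n + 1) / n ** 2
    best_val, best = -np.inf, (0.0, 0.0)
    for a in alphas:
        E = np.exp(-1j * (a * t[None, :] + betas[:, None] * (t ** 2)[None, :]))
        vals = np.abs(E @ y) ** 2 / n
        k = int(np.argmax(vals))
        if vals[k] > best_val:
            best_val, best = vals[k], (a, betas[k])
    return best

# Toy check on a noiseless chirp: the maximiser should fall in the grid cell
# containing the true (alpha0, beta0), or in the cell of its exact alias
# (pi - alpha0, pi - beta0): for integer t, pi*(t + t^2) is a multiple of
# 2*pi, so a real chirp matches both locations equally well.
n = 50
t = np.arange(1, n + 1)
a0, b0 = 1.5, 0.1
y = 2.0 * np.cos(a0 * t + b0 * t ** 2) + 1.0 * np.sin(a0 * t + b0 * t ** 2)
a_hat, b_hat = grid_search(y)
```

Note that the $\beta$ grid has $n^2$ points, so the full search costs $O(n^3)$ evaluations; this is exactly why a cheap, good initial guess matters for the subsequent local optimization.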
Consider the periodogram-like function defined in equation (3), which can also be written as:
$$I(\alpha, \beta) = \frac{1}{n}\left[\left(\sum_{t=1}^{n} y(t)\cos(\alpha t + \beta t^2)\right)^2 + \left(\sum_{t=1}^{n} y(t)\sin(\alpha t + \beta t^2)\right)^2\right] \qquad (4)$$
The ALSEs of $\alpha$ and $\beta$ are obtained by maximising $I(\alpha, \beta)$ with respect to $\alpha$ and $\beta$ simultaneously. Our primary focus is to estimate the nonlinear parameters $\alpha$ and $\beta$; once we estimate these parameters efficiently, the linear parameters $A$ and $B$ can be obtained by the separable linear regression technique of Richards [1961].

In this paper, we prove that the ALSEs are strongly consistent. As a matter of fact, the consistency of the ALSEs of the linear parameters $A$ and $B$ is obtained under slightly weaker conditions than that of the LSEs, as we do not require their parameter space to be bounded in this case. Moreover, the rate of convergence of the ALSEs of the linear parameters is $O_p(n^{-1/2})$, and those of the frequency and the frequency rate are $O_p(n^{-3/2})$ and $O_p(n^{-5/2})$, respectively. The convergence rates of the ALSEs are thus the same as those of the corresponding LSEs. We show that the asymptotic distribution of the ALSEs is equivalent to that of the LSEs.
Recently, Lahiri et al. [2013] proposed an efficient algorithm to compute the estimators of the unknown parameters of the chirp model. We perform numerical simulations to compare the proposed ALSEs with the LSEs and with the estimators obtained by the efficient algorithm. We observe that in most cases, although the LSEs provide the best results, the time taken to compute the ALSEs is comparatively less. Among the three estimators, those computed using the efficient algorithm take the least amount of time, though their biases and MSEs are larger than those of the other two estimators.
The rest of the paper is organised as follows. In Section 2, we prove the consistency of the ALSEs and their asymptotic equivalence to the LSEs. In Section 3, we discuss parameter estimation for the multiple component chirp model. In Section 4 we present some simulation results, and in Section 5 we analyse some real life data sets for illustrative purposes. Finally, in Section 6 we conclude the paper. All the proofs are provided in the appendices.
2 Main Results for the One Component Chirp Model
In this section, we study the asymptotic properties of the following one component chirp model:
$$y(t) = A^0\cos(\alpha^0 t + \beta^0 t^2) + B^0\sin(\alpha^0 t + \beta^0 t^2) + X(t), \qquad t = 1, \ldots, n. \qquad (5)$$
We will use the following notation: $\theta = (A, B, \alpha, \beta)$, $\theta^0 = (A^0, B^0, \alpha^0, \beta^0)$, $\hat{\theta} = (\hat{A}, \hat{B}, \hat{\alpha}, \hat{\beta})$, the LSE of $\theta^0$, and $\tilde{\theta} = (\tilde{A}, \tilde{B}, \tilde{\alpha}, \tilde{\beta})$, the ALSE of $\theta^0$. The following assumption is made on the error component of model (5):

Assumption 1. Let $\mathbb{Z}$ denote the set of integers. $\{X(t)\}$ is a stationary linear process of the following form:
$$X(t) = \sum_{j=-\infty}^{\infty} a(j)\,e(t-j), \qquad (6)$$
where $\{e(t)\}$ is a sequence of i.i.d. random variables with mean zero and finite variance $\sigma^2$, and the $a(j)$s are real constants such that
$$\sum_{j=-\infty}^{\infty} |a(j)| < \infty. \qquad (7)$$
This is a standard assumption for a stationary linear process. Any finite dimensional stationary MA, AR or ARMA process can be represented as (6) when the coefficients $a(j)$ satisfy condition (7), and hence this covers a large class of stationary error processes.
Let $\tilde{A}$, $\tilde{B}$, $\tilde{\alpha}$ and $\tilde{\beta}$ be the ALSEs of $A^0$, $B^0$, $\alpha^0$ and $\beta^0$, respectively. First we find $\tilde{\alpha}$ and $\tilde{\beta}$ by maximising $I(\alpha, \beta)$, as defined in (4), with respect to $\alpha$ and $\beta$. Once we obtain $\tilde{\alpha}$ and $\tilde{\beta}$, the ALSEs of the linear parameters $A^0$ and $B^0$ can be obtained as follows:
$$\tilde{A} = \frac{2}{n}\sum_{t=1}^{n} y(t)\cos(\tilde{\alpha}t + \tilde{\beta}t^2), \qquad \tilde{B} = \frac{2}{n}\sum_{t=1}^{n} y(t)\sin(\tilde{\alpha}t + \tilde{\beta}t^2). \qquad (8)$$
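Given estimates of the frequency and the frequency rate, the linear estimates in (8) are just two averages of the data against the fitted phase; a minimal NumPy sketch (the sample size and parameter values here are illustrative, not the paper's):

```python
import numpy as np

def linear_alse(y, alpha_hat, beta_hat):
    # (8): A = (2/n) sum y(t) cos(a t + b t^2),  B = (2/n) sum y(t) sin(a t + b t^2)
    n = len(y)
    t = np.arange(1, n + 1)
    phase = alpha_hat * t + beta_hat * t ** 2
    A_hat = 2.0 / n * np.sum(y * np.cos(phase))
    B_hat = 2.0 / n * np.sum(y * np.sin(phase))
    return A_hat, B_hat

# Noiseless check: with the true (alpha, beta) plugged in, the averages
# recover the amplitudes up to a small deterministic error that vanishes
# as n grows.
n = 200
t = np.arange(1, n + 1)
phase = 1.5 * t + 0.1 * t ** 2
y = 2.0 * np.cos(phase) + 1.0 * np.sin(phase)
A_hat, B_hat = linear_alse(y, 1.5, 0.1)
```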
In the following two theorems, we state the consistency of the ALSE $\tilde{\theta}$ of $\theta^0$.
Theorem 1.
Let $(\alpha^0, \beta^0)$ be an interior point of $[0, \pi] \times [0, \pi]$. If $\{X(t)\}$ satisfies Assumption 1, then the ALSEs $\tilde{\alpha}$ and $\tilde{\beta}$ are strongly consistent estimators of $\alpha^0$ and $\beta^0$, respectively.
Proof.
See Appendix A. ∎
Theorem 2.
Under the conditions of Theorem 1, the ALSEs $\tilde{A}$ and $\tilde{B}$ of the linear parameters $A^0$ and $B^0$ are strongly consistent estimators.
Proof.
See Appendix A. ∎
The following theorem shows that the ALSEs asymptotically have the same distribution as the LSEs.
Theorem 3.
Under Assumption 1, the limiting distribution of $(\tilde{\theta} - \theta^0)\mathbf{D}^{-1}$ is the same as that of $(\hat{\theta} - \theta^0)\mathbf{D}^{-1}$, where $\tilde{\theta}$ is the ALSE of $\theta^0$, $\hat{\theta}$ is the LSE of $\theta^0$ and $\mathbf{D} = \text{diag}(n^{-1/2}, n^{-1/2}, n^{-3/2}, n^{-5/2})$.
Proof.
See Appendix B. ∎
3 Main Results for the Multiple Component Chirp Model
In this section, we consider a chirp signal model with multiple components. Mathematically, a multiple-component chirp model is given by:
$$y(t) = \sum_{k=1}^{p}\left\{A_k^0\cos(\alpha_k^0 t + \beta_k^0 t^2) + B_k^0\sin(\alpha_k^0 t + \beta_k^0 t^2)\right\} + X(t), \qquad (9)$$
where $y(t)$ is the real valued signal observed at $t = 1, 2, \ldots, n$; $A_k^0$ and $B_k^0$ are the amplitudes, and $\alpha_k^0$ and $\beta_k^0$ are the frequencies and frequency rates, respectively, for $k = 1, 2, \ldots, p$.
To estimate the unknown parameters, we propose a sequential procedure to find the ALSEs. This method reduces the computational complexity of the estimators significantly without compromising their efficiency. The following is the algorithm to find the ALSEs through the sequential method:
Step 1: Compute $\tilde{\alpha}_1$ and $\tilde{\beta}_1$ by maximizing the periodogram-like function
$$I_1(\alpha, \beta) = \frac{1}{n}\left|\sum_{t=1}^{n} y(t)e^{-i(\alpha t + \beta t^2)}\right|^2. \qquad (10)$$
Then the linear parameter estimates $\tilde{A}_1$ and $\tilde{B}_1$ can be obtained by substituting $\tilde{\alpha}_1$ and $\tilde{\beta}_1$ in (8).
Step 2: Now we have the estimates of the parameters of the first component of the observed signal. We subtract the contribution of the first component from the original signal to remove its effect and obtain new data, say
$$y_1(t) = y(t) - \tilde{A}_1\cos(\tilde{\alpha}_1 t + \tilde{\beta}_1 t^2) - \tilde{B}_1\sin(\tilde{\alpha}_1 t + \tilde{\beta}_1 t^2).$$
Step 3: Now compute $\tilde{\alpha}_2$ and $\tilde{\beta}_2$ by maximizing $I_2(\alpha, \beta)$, which is obtained by replacing the original data vector $\mathbf{y}$ by the new data vector $\mathbf{y}_1$ in (10), and $\tilde{A}_2$ and $\tilde{B}_2$ by substituting $\tilde{\alpha}_2$ and $\tilde{\beta}_2$ in (8).

Step 4: Continue the process up to $p$ steps.
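The steps above can be sketched as follows. This is a Python/NumPy sketch on a noiseless toy signal; the grid $\alpha = \pi j/n$, $\beta = \pi k/n^2$ and the parameter values are illustrative assumptions, and the paper's own computations were done in R.

```python
import numpy as np

def grid_argmax(y, alphas, betas):
    # Maximise I(a, b) = (1/n)|sum_t y(t) exp(-i(a t + b t^2))|^2 over a grid.
    n = len(y)
    t = np.arange(1, n + 1)
    best = (-np.inf, 0.0, 0.0)
    for a in alphas:
        E = np.exp(-1j * (a * t[None, :] + betas[:, None] * (t ** 2)[None, :]))
        vals = np.abs(E @ y) ** 2 / n
        k = int(np.argmax(vals))
        if vals[k] > best[0]:
            best = (vals[k], a, betas[k])
    return best[1], best[2]

def sequential_alse(y, p):
    # Steps 1-4: estimate one component at a time, subtract its fitted
    # contribution from the data, and repeat on the residual signal.
    n = len(y)
    t = np.arange(1, n + 1)
    alphas = np.pi * np.arange(1, n + 1) / n
    betas = np.pi * np.arange(1, n * n + 1) / n ** 2
    resid, est = y.astype(float).copy(), []
    for _ in range(p):
        a, b = grid_argmax(resid, alphas, betas)
        ph = a * t + b * t ** 2
        A = 2.0 / n * np.sum(resid * np.cos(ph))  # linear step, as in (8)
        B = 2.0 / n * np.sum(resid * np.sin(ph))
        est.append((A, B, a, b))
        resid = resid - A * np.cos(ph) - B * np.sin(ph)
    return est

# Noiseless two-component toy signal (amplitudes and frequencies illustrative).
n = 50
t = np.arange(1, n + 1)
ph1 = 1.5 * t + 0.1 * t ** 2
ph2 = 2.5 * t + 0.2 * t ** 2
y = 2.0 * np.cos(ph1) + 1.75 * np.sin(ph1) + 3.0 * np.cos(ph2) + 2.25 * np.sin(ph2)
est = sequential_alse(y, 2)
```

Note that the stronger component is picked up first by the periodogram maximisation, so the order of the recovered components follows the component strengths rather than their indices.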
Note that we use the following notation: $\theta_k = (A_k, B_k, \alpha_k, \beta_k)$ denotes the parameter vector and $\theta_k^0 = (A_k^0, B_k^0, \alpha_k^0, \beta_k^0)$ the true parameter vector for all $k = 1, 2, \ldots, p$, and $\Theta$ denotes the parameter space.
Next, to establish the asymptotic properties of these estimators, we make the following additional model assumptions:

Assumption 2. $\theta_k^0$ is an interior point of $\Theta$ for each $k = 1, \ldots, p$, and the pairs of frequencies and frequency rates $(\alpha_k^0, \beta_k^0)$ are distinct.

Assumption 3. The amplitudes $A_k^0$s and $B_k^0$s satisfy the following relationship:
$${A_1^0}^2 + {B_1^0}^2 > {A_2^0}^2 + {B_2^0}^2 > \cdots > {A_p^0}^2 + {B_p^0}^2 > 0.$$
In the following theorems we prove that the ALSEs obtained by the sequential method described above are strongly consistent.
Theorem 4.
Under Assumptions 1, 2 and 3, $\tilde{\alpha}_1$ and $\tilde{\beta}_1$ are strongly consistent estimators of $\alpha_1^0$ and $\beta_1^0$, respectively, that is, $\tilde{\alpha}_1 \to \alpha_1^0$ and $\tilde{\beta}_1 \to \beta_1^0$ almost surely as $n \to \infty$.
Proof.
See Appendix C. ∎
Theorem 5.
If Assumptions 1, 2 and 3 are satisfied and $p \geq 2$, then $\tilde{\alpha}_2$ and $\tilde{\beta}_2$ are strongly consistent estimators of $\alpha_2^0$ and $\beta_2^0$, respectively, that is, $\tilde{\alpha}_2 \to \alpha_2^0$ and $\tilde{\beta}_2 \to \beta_2^0$ almost surely as $n \to \infty$.
Proof.
See Appendix C. ∎
The result obtained in the above theorem can be extended up to the $p$th step. Thus for any $k \leq p$, the ALSEs obtained at the $k$th step are strongly consistent.
Theorem 6.
If Assumptions 1, 2 and 3 are satisfied, and if $\tilde{A}_k$, $\tilde{B}_k$, $\tilde{\alpha}_k$ and $\tilde{\beta}_k$ are the estimators obtained at the $k$th step with $k > p$, then $\tilde{A}_k \to 0$ and $\tilde{B}_k \to 0$ almost surely as $n \to \infty$.
Proof.
See Appendix C. ∎
Lahiri et al. [2015] proved that the ordinary LSEs of the unknown parameters of the $p$ component chirp model have the following asymptotic distribution:
where,
(11) 
Also, note that
(12) 
We have the following result regarding the asymptotic distribution of the ALSEs.
Theorem 7.
Under Assumptions 1, 2 and 3, the asymptotic distribution of $(\tilde{\theta}_k - \theta_k^0)\mathbf{D}^{-1}$ is equivalent to the asymptotic distribution of $(\hat{\theta}_k - \theta_k^0)\mathbf{D}^{-1}$ for all $k = 1, \ldots, p$, where $\tilde{\theta}_k$ is the ALSE and $\hat{\theta}_k$ is the LSE of the unknown parameter vector associated with the $k$th component of the $p$ component model.
Proof.
See Appendix D. ∎
4 Numerical Experiments
In this section, we present simulation studies for one component and two component chirp models. We first consider the following one component chirp model:
$$y(t) = A^0\cos(\alpha^0 t + \beta^0 t^2) + B^0\sin(\alpha^0 t + \beta^0 t^2) + X(t),$$
with the true values of the nonlinear parameters $\alpha^0 = 2.5$ and $\beta^0 = 0.1$, and $\{X(t)\}$ an MA(1) process, that is, $X(t) = e(t) + \rho\, e(t-1)$, where the $e(t)$s are i.i.d. normal random variables with mean zero and variance $\sigma^2$. For the simulations we consider different values of $\sigma^2$: 0.1, 0.5 and 1. We use three increasing sample sizes, and for each $n$ we replicate the process, that is, generate the data and obtain the estimates, 1000 times. We estimate the parameters by the least squares estimation method, by the approximate least squares estimation method, and by the efficient algorithm proposed by Lahiri et al. [2013].
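A data-generation sketch for this setup is given below (a NumPy stand-in for the paper's R simulations; the MA(1) coefficient, the amplitudes and the sample size used here are placeholders, since their exact values are not restated in this excerpt):

```python
import numpy as np

def simulate_chirp_ma1(n, A, B, alpha, beta, rho, sigma, rng):
    # y(t) = A cos(alpha t + beta t^2) + B sin(alpha t + beta t^2) + X(t),
    # with X(t) = e(t) + rho * e(t-1) and e(t) i.i.d. N(0, sigma^2).
    t = np.arange(1, n + 1)
    e = rng.normal(0.0, sigma, n + 1)
    x = e[1:] + rho * e[:-1]          # MA(1) errors
    phase = alpha * t + beta * t ** 2
    return A * np.cos(phase) + B * np.sin(phase) + x

rng = np.random.default_rng(42)
# placeholder amplitudes A = 2, B = 1 and coefficient rho = 0.5
y = simulate_chirp_ma1(400, 2.0, 1.0, 2.5, 0.1, 0.5, np.sqrt(0.5), rng)
```

The MA(1) errors have variance $\sigma^2(1+\rho^2)$, which is a quick sanity check on the generator.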
For the LSEs, we first minimize the error sum of squares function with respect to $\alpha$ and $\beta$ using the Nelder-Mead method of optimization (via the optim function in the R stats package). For the initial values, it is intuitive to minimize the function over the grid $\left\{\left(\frac{\pi j}{n}, \frac{\pi k}{n^2}\right);\ j = 1, \ldots, n,\ k = 1, \ldots, n^2\right\}$, analogous to what is suggested by Rice and Rosenblatt [1988] for the sinusoidal model.

For the ALSEs, we maximize the periodogram-like function $I(\alpha, \beta)$, as defined in (4), again using the Nelder-Mead method, with starting values obtained by maximizing $I(\alpha, \beta)$ over the same grid points as used for the corresponding LSEs.
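The local refinement step can be sketched with scipy.optimize.minimize as a stand-in for R's optim (an assumption of this sketch, as is the toy signal below). A tiny custom initial simplex is used because $I(\alpha, \beta)$ is extremely narrow in $\beta$, so the default Nelder-Mead simplex would step outside the mainlobe.

```python
import numpy as np
from scipy.optimize import minimize

def I_chirp(y, a, b):
    # periodogram-like function (4)
    n = len(y)
    t = np.arange(1, n + 1)
    return np.abs(np.sum(y * np.exp(-1j * (a * t + b * t ** 2)))) ** 2 / n

def refine(y, a0, b0):
    # Nelder-Mead on -I, started from a grid point; the initial simplex is
    # kept tiny so the search stays inside the mainlobe around (a0, b0).
    f = lambda x: -I_chirp(y, x[0], x[1])
    simplex = np.array([[a0, b0], [a0 + 5e-3, b0], [a0, b0 + 5e-5]])
    res = minimize(f, x0=np.array([a0, b0]), method="Nelder-Mead",
                   options={"initial_simplex": simplex,
                            "xatol": 1e-10, "fatol": 1e-12})
    return res.x

# Noiseless toy chirp with illustrative parameter values.
n = 200
t = np.arange(1, n + 1)
y = 2.0 * np.cos(1.5 * t + 0.1 * t ** 2) + 1.0 * np.sin(1.5 * t + 0.1 * t ** 2)
# start from the grid point nearest to (1.5, 0.1)
a0 = np.pi * round(1.5 * n / np.pi) / n
b0 = np.pi * round(0.1 * n ** 2 / np.pi) / n ** 2
a_hat, b_hat = refine(y, a0, b0)
```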
Table 1. One component model, nonlinear parameters; true values $\alpha^0 = 2.5$, $\beta^0 = 0.1$ for each method (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha$ | LSE $\beta$ | ALSE $\alpha$ | ALSE $\beta$ | EA $\alpha$ | EA $\beta$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 17.2249 | | 15.3530 | | 2.2450 | |
| | Average | 2.5000 | 0.1000 | 2.4967 | 0.1000 | 2.4801 | 0.0997 |
| | Bias | 4.15e-06 | 9.36e-09 | 3.26e-03 | 9.28e-06 | 1.99e-02 | 2.54e-04 |
| | MSE | 1.80e-07 | 2.68e-12 | 1.10e-05 | 9.12e-11 | 8.22e-03 | 1.67e-07 |
| | Avar | 1.26e-07 | 1.88e-12 | 1.26e-07 | 1.88e-12 | 1.26e-07 | 1.88e-12 |
| 0.5 | Time (s) | 17.4460 | | 13.4589 | | 2.2410 | |
| | Average | 2.5000 | 0.1000 | 2.4968 | 0.1000 | 2.5063 | 0.0999 |
| | Bias | 4.63e-05 | 1.97e-07 | 3.19e-03 | 8.98e-06 | 6.27e-03 | 1.29e-04 |
| | MSE | 8.84e-07 | 1.33e-11 | 1.18e-05 | 1.05e-10 | 1.96e-02 | 3.29e-07 |
| | Avar | 6.28e-07 | 9.42e-12 | 6.28e-07 | 9.42e-12 | 6.28e-07 | 9.42e-12 |
| 1 | Time (s) | 18.2910 | | 14.2689 | | 2.4050 | |
| | Average | 2.5000 | 0.1000 | 2.4968 | 0.1000 | 2.5021 | 0.0999 |
| | Bias | 7.89e-06 | 2.49e-08 | 3.25e-03 | 9.23e-06 | 2.13e-03 | 1.27e-04 |
| | MSE | 1.89e-06 | 2.94e-11 | 1.40e-05 | 1.37e-10 | 4.62e-02 | 6.47e-07 |
| | Avar | 1.26e-06 | 1.88e-11 | 1.26e-06 | 1.88e-11 | 1.26e-06 | 1.88e-11 |
Tables 1, 2 and 3 provide the results, averaged over 1000 simulation runs, for the one component model. In these tables, we observe that the ALSEs have very small absolute bias. The MSEs of the LSEs are very close to their asymptotic variances, and the MSEs of the ALSEs also get very close to those of the LSEs as $n$ increases, and hence to the theoretical asymptotic variances of the LSEs, showing that the two estimators are asymptotically equivalent. Moreover, as the sample size increases, the MSEs of both estimators decrease, showing that they are consistent. The estimators obtained by the efficient algorithm are close to the true values, but their biases and MSEs are larger than those of the other two estimators. However, the time taken to compute the estimates by the efficient algorithm is much less than the time taken by the ALSEs and the LSEs.
Table 2. One component model, nonlinear parameters; true values $\alpha^0 = 2.5$, $\beta^0 = 0.1$ for each method (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha$ | LSE $\beta$ | ALSE $\alpha$ | ALSE $\beta$ | EA $\alpha$ | EA $\beta$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 30.1289 | | 25.6280 | | 4.3600 | |
| | Average | 2.5000 | 0.1000 | 2.4993 | 0.1000 | 2.5304 | 0.1001 |
| | Bias | 1.19e-05 | 2.56e-08 | 6.79e-04 | 1.77e-06 | 3.04e-02 | 7.11e-05 |
| | MSE | 2.13e-08 | 8.09e-14 | 4.96e-07 | 3.26e-12 | 9.89e-03 | 3.11e-08 |
| | Avar | 1.57e-08 | 5.89e-14 | 1.57e-08 | 5.89e-14 | 1.57e-08 | 5.89e-14 |
| 0.5 | Time (s) | 34.6510 | | 30.3080 | | 5.2440 | |
| | Average | 2.5000 | 0.1000 | 2.4994 | 0.1000 | 2.5227 | 0.1000 |
| | Bias | 1.04e-05 | 1.44e-08 | 6.47e-04 | 1.71e-06 | 2.27e-02 | 4.54e-05 |
| | MSE | 1.21e-07 | 4.45e-13 | 6.12e-07 | 3.63e-12 | 2.35e-02 | 6.69e-08 |
| | Avar | 7.85e-08 | 2.94e-13 | 7.85e-08 | 2.94e-13 | 7.85e-08 | 2.94e-13 |
| 1 | Time (s) | 32.2790 | | 26.5430 | | 4.4199 | |
| | Average | 2.5000 | 0.1000 | 2.4993 | 0.1000 | 2.5189 | 0.1000 |
| | Bias | 1.61e-05 | 2.01e-08 | 6.77e-04 | 1.75e-06 | 1.89e-02 | 3.15e-05 |
| | MSE | 2.18e-07 | 8.08e-13 | 8.04e-07 | 4.33e-12 | 7.32e-03 | 2.31e-08 |
| | Avar | 1.57e-07 | 5.89e-13 | 1.57e-07 | 5.89e-13 | 1.57e-07 | 5.89e-13 |
Table 3. One component model, nonlinear parameters; true values $\alpha^0 = 2.5$, $\beta^0 = 0.1$ for each method (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha$ | LSE $\beta$ | ALSE $\alpha$ | ALSE $\beta$ | EA $\alpha$ | EA $\beta$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 67.1180 | | 62.3369 | | 9.5720 | |
| | Average | 2.5000 | 0.1000 | 2.5002 | 0.1000 | 2.4984 | 0.1000 |
| | Bias | 8.16e-07 | 9.15e-10 | 1.86e-04 | 9.30e-08 | 1.65e-03 | 5.34e-06 |
| | MSE | 2.95e-09 | 2.85e-15 | 3.87e-08 | 1.21e-14 | 1.79e-03 | 1.46e-09 |
| | Avar | 1.96e-09 | 1.84e-15 | 1.96e-09 | 1.84e-15 | 1.96e-09 | 1.84e-15 |
| 0.5 | Time (s) | 61.2009 | | 56.3849 | | 8.2260 | |
| | Average | 2.5000 | 0.1000 | 2.5002 | 0.1000 | 2.4981 | 0.1000 |
| | Bias | 1.80e-06 | 1.67e-09 | 1.86e-04 | 9.24e-08 | 1.87e-03 | 4.21e-06 |
| | MSE | 1.57e-08 | 1.55e-14 | 5.40e-08 | 2.60e-14 | 1.58e-03 | 1.32e-09 |
| | Avar | 9.81e-09 | 9.20e-15 | 9.81e-09 | 9.20e-15 | 9.81e-09 | 9.20e-15 |
| 1 | Time (s) | 62.5589 | | 56.3840 | | 8.2129 | |
| | Average | 2.5000 | 0.1000 | 2.5002 | 0.1000 | 2.4948 | 0.1000 |
| | Bias | 3.32e-06 | 8.67e-10 | 1.88e-04 | 9.19e-08 | 5.20e-03 | 6.51e-06 |
| | MSE | 3.10e-08 | 2.95e-14 | 7.41e-08 | 4.22e-14 | 1.40e-03 | 1.13e-09 |
| | Avar | 1.96e-08 | 1.84e-14 | 1.96e-08 | 1.84e-14 | 1.96e-08 | 1.84e-14 |
We also perform simulations for the following two component model using the proposed sequential estimators:
$$y(t) = \sum_{k=1}^{2}\left\{A_k^0\cos(\alpha_k^0 t + \beta_k^0 t^2) + B_k^0\sin(\alpha_k^0 t + \beta_k^0 t^2)\right\} + X(t).$$
For the simulations, we take the true values as $A_1^0 = 2$, $B_1^0 = 1.75$, $\alpha_1^0 = 1.5$, $\beta_1^0 = 0.1$, $A_2^0 = 3$, $B_2^0 = 2.25$, $\alpha_2^0 = 2.5$ and $\beta_2^0 = 0.2$, and compute both the LSEs and the ALSEs of all the unknown parameters sequentially. The error structure is the same as that in the one component simulation study.
Table 4 (first component). Nonlinear parameters; true values $\alpha_1^0 = 1.5$, $\beta_1^0 = 0.1$ (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha_1$ | LSE $\beta_1$ | ALSE $\alpha_1$ | ALSE $\beta_1$ | EA $\alpha_1$ | EA $\beta_1$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 31.918 | | 25.5269 | | 4.5620 | |
| | Average | 1.5074 | 0.1000 | 1.5044 | 0.1000 | 1.4711 | 0.1001 |
| | Bias | 7.43e-03 | 2.57e-05 | 4.40e-03 | 1.59e-05 | 2.89e-02 | 7.77e-05 |
| | MSE | 5.58e-05 | 6.70e-10 | 2.00e-05 | 2.62e-10 | 1.63e-02 | 2.25e-07 |
| | Avar | 1.09e-07 | 1.64e-12 | 1.09e-07 | 1.64e-12 | 1.09e-07 | 1.64e-12 |
| 0.5 | Time (s) | 32.3660 | | 26.2630 | | 4.5389 | |
| | Average | 1.5075 | 0.1000 | 1.5045 | 0.1000 | 1.4832 | 0.1001 |
| | Bias | 7.51e-03 | 2.60e-05 | 4.48e-03 | 1.62e-05 | 1.68e-02 | 1.18e-04 |
| | MSE | 5.80e-05 | 7.04e-10 | 2.29e-05 | 3.15e-10 | 2.41e-02 | 2.98e-07 |
| | Avar | 5.46e-07 | 8.19e-12 | 5.46e-07 | 8.19e-12 | 5.46e-07 | 8.19e-12 |
| 1 | Time (s) | 32.6730 | | 26.5839 | | 4.5110 | |
| | Average | 1.5074 | 0.1000 | 1.5043 | 0.1000 | 1.4809 | 0.1001 |
| | Bias | 7.37e-03 | 2.55e-05 | 4.33e-03 | 1.56e-05 | 1.91e-02 | 1.14e-04 |
| | MSE | 5.75e-05 | 6.98e-10 | 2.40e-05 | 3.40e-10 | 3.29e-02 | 4.15e-07 |
| | Avar | 1.09e-06 | 1.64e-11 | 1.09e-06 | 1.64e-11 | 1.09e-06 | 1.64e-11 |
Table 4 (second component). Nonlinear parameters; true values $\alpha_2^0 = 2.5$, $\beta_2^0 = 0.2$ (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha_2$ | LSE $\beta_2$ | ALSE $\alpha_2$ | ALSE $\beta_2$ | EA $\alpha_2$ | EA $\beta_2$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 31.918 | | 25.5269 | | 4.5620 | |
| | Average | 2.4999 | 0.2000 | 2.5000 | 0.2000 | 2.4548 | 0.1998 |
| | Bias | 1.22e-04 | 1.82e-07 | 1.40e-05 | 3.43e-06 | 4.52e-02 | 1.67e-04 |
| | MSE | 1.96e-07 | 2.74e-12 | 2.02e-07 | 1.46e-11 | 3.44e-02 | 3.89e-07 |
| | Avar | 2.17e-07 | 3.26e-12 | 2.17e-07 | 3.26e-12 | 2.17e-07 | 3.26e-12 |
| 0.5 | Time (s) | 32.3660 | | 26.2630 | | 4.5389 | |
| | Average | 2.5000 | 0.2000 | 2.5001 | 0.2000 | 2.4744 | 0.1999 |
| | Bias | 2.84e-05 | 1.71e-07 | 9.07e-05 | 3.82e-06 | 2.56e-02 | 9.02e-05 |
| | MSE | 7.73e-07 | 1.13e-11 | 8.81e-07 | 2.68e-11 | 3.67e-02 | 4.52e-07 |
| | Avar | 1.09e-06 | 1.63e-11 | 1.09e-06 | 1.63e-11 | 1.09e-06 | 1.63e-11 |
| 1 | Time (s) | 32.6730 | | 26.5839 | | 4.5110 | |
| | Average | 2.4999 | 0.2000 | 2.5000 | 0.2000 | 2.4707 | 0.1999 |
| | Bias | 1.26e-04 | 1.62e-07 | 2.08e-05 | 3.43e-06 | 2.93e-02 | 8.50e-05 |
| | MSE | 1.82e-06 | 2.65e-11 | 2.03e-06 | 4.04e-11 | 2.61e-02 | 3.63e-07 |
| | Avar | 2.17e-06 | 3.26e-11 | 2.17e-06 | 3.26e-11 | 2.17e-06 | 3.26e-11 |
Table 5 (first component). Nonlinear parameters; true values $\alpha_1^0 = 1.5$, $\beta_1^0 = 0.1$ (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha_1$ | LSE $\beta_1$ | ALSE $\alpha_1$ | ALSE $\beta_1$ | EA $\alpha_1$ | EA $\beta_1$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 61.0879 | | 55.7359 | | 8.4870 | |
| | Average | 1.5020 | 0.1000 | 1.5011 | 0.1000 | 1.4798 | 0.1000 |
| | Bias | 1.98e-03 | 4.30e-06 | 1.13e-03 | 2.49e-06 | 2.02e-02 | 4.34e-05 |
| | MSE | 4.01e-06 | 1.88e-11 | 1.33e-06 | 6.40e-12 | 1.17e-02 | 2.82e-08 |
| | Avar | 1.37e-08 | 5.12e-14 | 1.37e-08 | 5.12e-14 | 1.37e-08 | 5.12e-14 |
| 0.5 | Time (s) | 61.8270 | | 55.9599 | | 8.4100 | |
| | Average | 1.5020 | 0.1000 | 1.5011 | 0.1000 | 1.4840 | 0.1000 |
| | Bias | 1.97e-03 | 4.29e-06 | 1.13e-03 | 2.48e-06 | 1.60e-02 | 3.96e-05 |
| | MSE | 4.12e-06 | 1.92e-11 | 1.49e-06 | 7.02e-12 | 1.19e-02 | 3.37e-08 |
| | Avar | 6.83e-08 | 2.56e-13 | 6.83e-08 | 2.56e-13 | 6.83e-08 | 2.56e-13 |
| 1 | Time (s) | 63.3360 | | 57.1080 | | 8.7080 | |
| | Average | 1.5020 | 0.1000 | 1.5011 | 0.1000 | 1.4832 | 0.1000 |
| | Bias | 1.99e-03 | 4.32e-06 | 1.14e-03 | 2.53e-06 | 1.68e-02 | 2.97e-05 |
| | MSE | 4.35e-06 | 2.03e-11 | 1.76e-06 | 8.22e-12 | 8.34e-03 | 2.37e-08 |
| | Avar | 1.37e-07 | 5.12e-13 | 1.37e-07 | 5.12e-13 | 1.37e-07 | 5.12e-13 |
Table 5 (second component). Nonlinear parameters; true values $\alpha_2^0 = 2.5$, $\beta_2^0 = 0.2$ (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha_2$ | LSE $\beta_2$ | ALSE $\alpha_2$ | ALSE $\beta_2$ | EA $\alpha_2$ | EA $\beta_2$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 61.0879 | | 55.7359 | | 8.4870 | |
| | Average | 2.4999 | 0.2000 | 2.4987 | 0.2000 | 2.4861 | 0.1999 |
| | Bias | 5.39e-05 | 1.35e-08 | 1.26e-03 | 2.13e-06 | 1.39e-02 | 7.72e-05 |
| | MSE | 2.32e-08 | 7.37e-14 | 1.61e-06 | 4.66e-12 | 3.04e-03 | 1.66e-08 |
| | Avar | 2.72e-08 | 1.02e-13 | 2.72e-08 | 1.02e-13 | 2.72e-08 | 1.02e-13 |
| 0.5 | Time (s) | 61.8270 | | 55.9599 | | 8.4100 | |
| | Average | 2.5000 | 0.2000 | 2.4988 | 0.2000 | 2.5017 | 0.2000 |
| | Bias | 4.13e-05 | 1.44e-08 | 1.24e-03 | 2.09e-06 | 1.65e-03 | 4.06e-05 |
| | MSE | 9.53e-08 | 3.53e-13 | 1.67e-06 | 4.89e-12 | 2.89e-03 | 1.28e-08 |
| | Avar | 1.36e-07 | 5.10e-13 | 1.36e-07 | 5.10e-13 | 1.36e-07 | 5.10e-13 |
| 1 | Time (s) | 63.3360 | | 57.1080 | | 8.7080 | |
| | Average | 2.5000 | 0.2000 | 2.4988 | 0.2000 | 2.5102 | 0.2000 |
| | Bias | 3.38e-05 | 1.78e-08 | 1.23e-03 | 2.08e-06 | 1.02e-02 | 1.78e-05 |
| | MSE | 1.98e-07 | 7.29e-13 | 1.78e-06 | 5.37e-12 | 4.49e-03 | 1.55e-08 |
| | Avar | 2.72e-07 | 1.02e-12 | 2.72e-07 | 1.02e-12 | 2.72e-07 | 1.02e-12 |
This process of data generation and estimation of the unknown parameters is replicated 1000 times, and we calculate the average values, biases and MSEs of these estimates. We also report the time taken for the entire simulation process by each of the estimation methods, and we compute the asymptotic variances of the estimates for comparison with the MSEs. The simulation results provided in Tables 4, 5 and 6 for the two component model show that the MSEs of the proposed sequential estimators are well matched to the MSEs of the LSEs and become close to them as $n$ increases. They are also comparable to the asymptotic variances of the LSEs. In many cases, it is observed that the MSEs of the ALSEs of the first component are smaller than those of the corresponding LSEs. In all the tables, it is consistently observed that the computation of the ALSEs takes less time than that of the LSEs.
Table 6 (first component). Nonlinear parameters; true values $\alpha_1^0 = 1.5$, $\beta_1^0 = 0.1$ (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha_1$ | LSE $\beta_1$ | ALSE $\alpha_1$ | ALSE $\beta_1$ | EA $\alpha_1$ | EA $\beta_1$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 124.913 | | 114.535 | | 16.7209 | |
| | Average | 1.4999 | 0.1000 | 1.5002 | 0.1000 | 1.5206 | 0.1000 |
| | Bias | 7.22e-05 | 1.91e-09 | 1.87e-04 | 3.52e-07 | 2.06e-02 | 1.68e-05 |
| | MSE | 1.02e-08 | 5.16e-15 | 4.17e-08 | 1.30e-13 | 7.69e-03 | 5.54e-09 |
| | Avar | 1.71e-09 | 1.60e-15 | 1.71e-09 | 1.60e-15 | 1.71e-09 | 1.60e-15 |
| 0.5 | Time (s) | 118.263 | | 115.557 | | 16.3389 | |
| | Average | 1.4999 | 0.1000 | 1.5002 | 0.1000 | 1.5168 | 0.1000 |
| | Bias | 7.51e-05 | 1.25e-09 | 1.97e-04 | 3.61e-07 | 1.68e-02 | 1.20e-05 |
| | MSE | 2.91e-08 | 2.47e-14 | 6.41e-08 | 1.54e-13 | 2.32e-03 | 1.73e-09 |
| | Avar | 8.53e-09 | 8.00e-15 | 8.53e-09 | 8.00e-15 | 8.53e-09 | 8.00e-15 |
| 1 | Time (s) | 118.7809 | | 114.4330 | | 16.3170 | |
| | Average | 1.4999 | 0.1000 | 1.5002 | 0.1000 | 1.5111 | 0.1000 |
| | Bias | 7.32e-05 | 7.49e-10 | 2.04e-04 | 3.66e-07 | 1.11e-02 | 6.34e-06 |
| | MSE | 5.45e-08 | 5.07e-14 | 9.13e-08 | 1.81e-13 | 1.18e-03 | 8.62e-10 |
| | Avar | 1.71e-08 | 1.60e-14 | 1.71e-08 | 1.60e-14 | 1.71e-08 | 1.60e-14 |
Table 6 (second component). Nonlinear parameters; true values $\alpha_2^0 = 2.5$, $\beta_2^0 = 0.2$ (EA: efficient algorithm).

| $\sigma^2$ | | LSE $\alpha_2$ | LSE $\beta_2$ | ALSE $\alpha_2$ | ALSE $\beta_2$ | EA $\alpha_2$ | EA $\beta_2$ |
|---|---|---|---|---|---|---|---|
| 0.1 | Time (s) | 124.913 | | 114.535 | | 16.7209 | |
| | Average | 2.5000 | 0.2000 | 2.4998 | 0.2000 | 2.4958 | 0.2000 |
| | Bias | 2.29e-05 | 1.69e-08 | 2.44e-04 | 3.25e-07 | 4.15e-03 | 1.30e-06 |
| | MSE | 3.32e-09 | 2.90e-15 | 6.29e-08 | 1.09e-13 | 9.38e-04 | 6.46e-10 |
| | Avar | 3.40e-09 | 3.19e-15 | 3.40e-09 | 3.19e-15 | 3.40e-09 | 3.19e-15 |
| 0.5 | Time (s) | 118.263 | | 115.557 | | 16.3389 | |
| | Average | 2.5000 | 0.2000 | 2.4998 | 0.2000 | 2.4979 | 0.2000 |
| | Bias | 2.08e-05 | 1.40e-08 | 2.45e-04 | 3.26e-07 | 2.13e-03 | 6.78e-07 |
| | MSE | 1.43e-08 | 1.31e-14 | 7.47e-08 | 1.20e-13 | 6.51e-04 | 5.28e-10 |
| | Avar | 1.70e-08 | 1.59e-14 | 1.70e-08 | 1.59e-14 | 1.70e-08 | 1.59e-14 |
| 1 | Time (s) | 118.7809 | | 114.4330 | | 16.3170 | |
| | Average | 2.5000 | 0.2000 | 2.4997 | 0.2000 | 2.5007 | 0.2000 |
| | Bias | 1.01e-05 | 3.41e-09 | 2.55e-04 | 3.37e-07 | 7.45e-04 | 2.89e-06 |
| | MSE | 2.81e-08 | 2.65e-14 | 9.42e-08 | 1.41e-13 | 1.43e-03 | 8.21e-10 |
| | Avar | 3.40e-08 | 3.19e-14 | 3.40e-08 | 3.19e-14 | 3.40e-08 | 3.19e-14 |
It is observed that the estimates of the unknown nonlinear parameters of the second component of the two component model, that is, of $\alpha_2^0$ and $\beta_2^0$, have very small bias compared to those obtained at the first stage, that is, of $\alpha_1^0$ and $\beta_1^0$, or those obtained for the one component model. Since the proposed ALSEs have desirable properties, it is a good idea to obtain the initial estimates by maximising the periodogram-like function defined in (4) and then carry out the least squares estimation.
5 Real Data Analysis
For illustration, we analyse two speech signal data sets, "AHH" and "AAA". These data were obtained from a sound instrument at the Speech Signal Processing laboratory of the Indian Institute of Technology Kanpur. We have 469 data points in the "AHH" signal data set and 477 data points in the "AAA" signal data set, both sampled at 10 kHz. Figure 1 gives the plot of the observed signal "AHH" and Figure 2 gives the plot of the observed signal "AAA".
We fit a multiple component chirp model to both the data sets, using the proposed sequential estimation procedure which computes ALSEs at each stage. At the same time, we compute the sequential LSEs as proposed by Lahiri et al. [2015] for comparison purposes. To find the initial values of the frequency and the frequency rate at each stage, we maximize the periodogram-like function over the fine grid $\left(\frac{\pi j}{n}, \frac{\pi k}{n^2}\right)$; $j = 1, 2, \ldots, n$, $k = 1, 2, \ldots, n^2$. For the estimation of the number of components, we use the following form of BIC:
The model order is estimated as the value of $k$ at which the BIC attains its minimum. For the "AHH" data, when we estimate the parameters using the sequential least squares estimation procedure, it is evident from Figure 3 that the number of components that fits this data is 8. Using the proposed sequential ALSEs to fit the model gives the same estimated number of components, as can be seen in Figure 4. The number of components for the "AAA" data, using the sequential least squares estimation procedure, is 9, as can be seen from Figure 5. The proposed sequential ALSEs also give the same estimated number of components, as can be seen in Figure 6.
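Since the paper's exact BIC expression is not reproduced in this excerpt, the sketch below uses a generic criterion of the form $n\log(\mathrm{RSS}_k/n)$ plus a penalty proportional to the number of fitted parameters as a stand-in, only to illustrate minimum-BIC order selection:

```python
import numpy as np

def bic(rss, n, k, params_per_component=4):
    # Generic stand-in criterion: n * log(RSS_k / n) + (#parameters) * log(n).
    # Each chirp component contributes 4 parameters (A, B, alpha, beta).
    return n * np.log(rss / n) + params_per_component * k * np.log(n)

def select_order(rss_by_k, n):
    # rss_by_k[k-1]: residual sum of squares after fitting k components
    # sequentially; the selected order minimises the criterion.
    scores = [bic(r, n, k + 1) for k, r in enumerate(rss_by_k)]
    return int(np.argmin(scores)) + 1

# Example: the fit improves sharply up to 2 components, then plateaus,
# so the penalty should stop the selection at 2.
order = select_order([100.0, 10.0, 9.9, 9.8], n=400)
```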
Figures 7 and 8 give the observed as well as the fitted signal for the "AHH" data, estimated using the sequential LSEs and the sequential ALSEs, respectively. We observe from these plots that both fits look similar. Hence we may conclude from here as well that the ALSEs are equivalent to the LSEs. Figures 9 and 10 give the observed as well as the fitted signal for the "AAA" data, estimated using the sequential LSEs and the sequential ALSEs, respectively.
We analyse the residuals by performing the augmented Dickey-Fuller (ADF) test and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test to check for their stationarity, using the built-in R functions "adf.test" and "kpss.test" in the "tseries" package. The ADF test tests the null hypothesis that a unit root is present in the time series against the alternative of no unit root, that is, stationarity, whereas the KPSS test tests the null hypothesis that an observable time series is stationary around a deterministic trend against the alternative of a unit root. For both the "AHH" and the "AAA" data sets, we reject the null hypothesis in the ADF test and do not reject the null hypothesis in the KPSS test; from the results of both tests, we therefore conclude that the residuals are stationary. Figures 11-14 provide the residual plots for the two data sets under the two sequential procedures.

6 Conclusion
In this paper, we proposed periodogram-type estimators, called the approximate least squares estimators (ALSEs), for the parameters of a one-dimensional one component chirp model and studied their asymptotic properties. We showed that they are consistent and asymptotically equivalent to the LSEs. Moreover, we obtained the consistency of the ALSEs under weaker conditions than those required for the LSEs. For the multiple component chirp model, we proposed a sequential procedure based on computing ALSEs, and we established that the estimators obtained at each step of this procedure are strongly consistent and asymptotically equivalent to the corresponding sequential LSEs, with the same rates of convergence. The simulation studies presented in the paper also confirm this large sample equivalence. Hence one may use the periodogram-like estimators as initial values to find the LSEs. We also analysed two speech signal data sets for illustrative purposes, and the performance was quite satisfactory.
Acknowledgement
The authors would like to thank two anonymous reviewers and the associate editor for their constructive comments, which have helped to improve the manuscript significantly.
Appendix A
The following lemmas are required to prove Theorem 1.
Lemma 1.
If $\{X(t)\}$ satisfies Assumption 1, then:
(a) $\displaystyle\sup_{\alpha, \beta}\ \left|\frac{1}{n}\sum_{t=1}^{n} X(t)\, e^{i(\alpha t + \beta t^2)}\right| \to 0$ almost surely,
(b) $\displaystyle\sup_{\alpha, \beta}\ \left|\frac{1}{n^{k+1}}\sum_{t=1}^{n} t^k X(t)\, e^{i(\alpha t + \beta t^2)}\right| \to 0$ almost surely, for $k = 1, 2$.
Proof.
Refer to Kundu and Nandi [2008].
∎
Lemma 2.
If $(\alpha, \beta) \in (0, \pi) \times (0, \pi)$, then, except for a countable number of points, the following results are true:
(a) $\displaystyle\lim_{n \to \infty} \frac{1}{n^{k+1}}\sum_{t=1}^{n} t^k \cos(\alpha t + \beta t^2) = 0$,
(b) $\displaystyle\lim_{n \to \infty} \frac{1}{n^{k+1}}\sum_{t=1}^{n} t^k \sin(\alpha t + \beta t^2) = 0$,
(c) $\displaystyle\lim_{n \to \infty} \frac{1}{n^{k+1}}\sum_{t=1}^{n} t^k \cos^2(\alpha t + \beta t^2) = \frac{1}{2(k+1)}$,
(d) $\displaystyle\lim_{n \to \infty} \frac{1}{n^{k+1}}\sum_{t=1}^{n} t^k \sin^2(\alpha t + \beta t^2) = \frac{1}{2(k+1)}$,
(e) $\displaystyle\lim_{n \to \infty} \frac{1}{n^{k+1}}\sum_{t=1}^{n} t^k \sin(\alpha t + \beta t^2)\cos(\alpha t + \beta t^2) = 0$,
for all $k = 0, 1, 2, \ldots$
Proof.
Refer to Lahiri et al., [2015].
∎
Lemma 3.
Suppose $\tilde{\alpha}$ and $\tilde{\beta}$ are the ALSEs of $\alpha^0$ and $\beta^0$, respectively. Let us denote $\tilde{\xi} = (\tilde{\alpha}, \tilde{\beta})$, $\xi^0 = (\alpha^0, \beta^0)$ and, for $\delta > 0$, $S_\delta = \{(\alpha, \beta) : |\alpha - \alpha^0| \geq \delta \text{ or } |\beta - \beta^0| \geq \delta\}$.
If there exists a $\delta > 0$ such that
$$\limsup_{n \to \infty}\ \sup_{(\alpha, \beta) \in S_\delta} \frac{1}{n}\left[I(\alpha, \beta) - I(\alpha^0, \beta^0)\right] < 0 \quad \text{a.s.}, \qquad (13)$$
then $\tilde{\xi} \to \xi^0$ almost surely. Here $I(\alpha, \beta)$ is as defined in (4).
Proof.
Let us denote $\tilde{\xi}$ by $\tilde{\xi}_n$ and $I(\alpha, \beta)$ by $I_n(\alpha, \beta)$ to emphasize that they depend on $n$. Suppose (13) is true and $\tilde{\xi}_n$ does not converge to $\xi^0$ as $n \to \infty$.
Hence, there exist a $\delta > 0$ and a subsequence $\{n_m\}$ of $\{n\}$ such that $\tilde{\xi}_{n_m} \in S_\delta$ for all $m = 1, 2, \ldots$ Since $\tilde{\xi}_{n_m}$ is the ALSE of $\xi^0$ when $n = n_m$, it maximises $I_{n_m}(\alpha, \beta)$, so that $I_{n_m}(\tilde{\xi}_{n_m}) \geq I_{n_m}(\xi^0)$ for all $m = 1, 2, \ldots$
Thus we have
$$\limsup_{m \to \infty}\ \sup_{(\alpha, \beta) \in S_\delta} \frac{1}{n_m}\left[I_{n_m}(\alpha, \beta) - I_{n_m}(\xi^0)\right] \geq 0,$$
which contradicts (13). Hence, the result follows. ∎
Lemma 4.
Suppose $\tilde{\alpha}$ and $\tilde{\beta}$ are the ALSEs of $\alpha^0$ and $\beta^0$, respectively. Let us define $\tilde{\xi} = (\tilde{\alpha}, \tilde{\beta})$ and $\mathbf{D}_1 = \text{diag}(n^{-1}, n^{-2})$; then $(\tilde{\xi} - \xi^0)\mathbf{D}_1^{-1} \to 0$ almost surely.
Proof.
Let us denote by $I'(\xi)$ the $1 \times 2$ first derivative vector, that is, $I'(\xi) = \left(\frac{\partial I(\xi)}{\partial \alpha}, \frac{\partial I(\xi)}{\partial \beta}\right)$, and by $I''(\xi)$ the $2 \times 2$ second derivative matrix of $I(\xi)$, that is, $I''(\xi) = \left(\frac{\partial^2 I(\xi)}{\partial \xi_i \partial \xi_j}\right)_{i, j = 1, 2}$.
Using the multivariate Taylor series expansion of $I'(\tilde{\xi})$ around $\xi^0$, we get:
$$I'(\tilde{\xi}) - I'(\xi^0) = (\tilde{\xi} - \xi^0)\, I''(\bar{\xi}), \qquad (14)$$
where $\bar{\xi}$ is a point on the line joining $\tilde{\xi}$ and $\xi^0$. Since $I'(\tilde{\xi}) = 0$, (14) can be rewritten as the following:
$$(\tilde{\xi} - \xi^0) = -I'(\xi^0)\left[I''(\bar{\xi})\right]^{-1}.$$
Let us first consider $I'(\xi^0)$. Using Lemmas 1 and 2, it can be shown that each element of $I'(\xi^0)$, suitably normalised, converges to zero almost surely. Now, consider the $2 \times 2$ matrix $I''(\bar{\xi})$. Since $\tilde{\xi} \to \xi^0$ almost surely and $I''(\xi)$ is a continuous function of $\xi$,