We consider the problem of estimating the position of a source emitting Poisson signals which are received by $k$ sensors distributed on the plane. We suppose that the source starts emission at the instant $t=0$ and the $j$-th sensor receives the data, which can be described as an inhomogeneous Poisson process $X_j=(X_j(t),\,0\le t\le T)$, whose intensity function
$$\lambda_j(\vartheta_0,t)=S(t-\tau_j)\,\mathbb{1}_{\{t\ge \tau_j\}}+\lambda_0,\qquad 0\le t\le T,$$
increases at the moment of arrival of the signal. Here $\lambda_0>0$ is the intensity of the Poisson noise and $\tau_j$ is the time needed for the signal to arrive at the $j$-th detector. For the $j$-th detector localized at the point $D_j$ we have $\tau_j=\nu^{-1}\|\vartheta_0-D_j\|$, where $\nu>0$ is the known rate of propagation of the signal and $\|\cdot\|$ is the Euclidean norm on the plane. We suppose that $\tau_j<T$ for $j=1,\dots,k$. Therefore we have $k$ independent inhomogeneous Poisson processes with intensities depending on $\vartheta_0$. We suppose that the position $\vartheta_0\in\Theta$ of the source is unknown and we have to estimate $\vartheta_0$ by the observations $X^{(k)}=(X_1,\dots,X_k)$. Here $\Theta\subset\mathbb{R}^2$ is a convex bounded set.
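For illustration, the observation scheme can be simulated. The sketch below assumes a hypothetical geometry and, for simplicity, a signal of constant intensity $S$ after arrival, so that the number of points registered by the $j$-th detector on $[0,T]$ is Poisson with mean $\lambda_0 T+S(T-\tau_j)$.

```python
import math
import random

# Hypothetical configuration (illustration only): a source at theta0,
# three detectors on the plane, known propagation speed nu, window [0, T].
theta0 = (2.0, 3.0)
detectors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
nu = 5.0        # known rate of propagation of the signal
T = 10.0        # observation time
lambda0 = 1.0   # intensity of the Poisson noise
S = 20.0        # assumed constant intensity of the signal after arrival

def tau(theta, xj):
    """Arrival time at detector xj: distance from the source divided by nu."""
    return math.dist(theta, xj) / nu

def poisson(mean, rng):
    """Poisson sampling by inversion (adequate for moderate means)."""
    u, k, p = rng.random(), 0, math.exp(-mean)
    c = p
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return k

def simulate_counts(theta, rng):
    """Number of points each detector registers on [0, T]: the intensity is
    lambda0 before tau_j and lambda0 + S after, so the mean count is
    lambda0 * T + S * (T - tau_j)."""
    return [poisson(lambda0 * T + S * max(T - tau(theta, xj), 0.0), rng)
            for xj in detectors]

taus = [tau(theta0, xj) for xj in detectors]
counts = simulate_counts(theta0, random.Random(1))
```

With these values every arrival instant satisfies $0<\tau_j<T$, so all detectors register the signal within the observation window.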
Note that the same mathematical model arises in the problem of GPS-localization on the plane. Indeed, in this case we have $k$ emitters with known positions and an object which receives these signals and has to estimate its own position. Therefore we have $k$ observations of inhomogeneous Poisson processes with intensity functions depending on the position of the object, and we have to estimate the coordinates of this object.
Due to the importance of such models in many applied problems there exists a wide literature devoted to different algorithms of localization (see the introduction in the work  and the references therein). It seems that the mathematical study of this class of models is not yet sufficiently developed. The statistical models of inhomogeneous Poisson processes with intensity functions having discontinuities along curves depending on unknown parameters were considered in , Sections 5.2 and 5.3. Statistical inference for point processes can be found in the works ,  and .
We are interested in models of observations which allow estimation with small errors. As usual in such situations, speaking about “small errors” we have to consider some asymptotic statement. The small errors can be obtained, for example, if the intensity of the signal takes large values, or if we observe a periodic Poisson process. Another possibility is to have many sensors. We take the model with large intensity functions, which can be written as follows
$$\lambda_{j,n}(\vartheta_0,t)=n\big[S(t-\tau_j)\,\mathbb{1}_{\{t\ge\tau_j\}}+\lambda_0\big],\qquad 0\le t\le T,$$
or, in equivalent form, the observed process $X_j$ is the sum of $n$ independent Poisson processes with intensity $S(t-\tau_j)\mathbb{1}_{\{t\ge\tau_j\}}+\lambda_0$. Here $n$ is a “large parameter” and we study estimators as $n\to\infty$. For example, such a model is obtained if we have $k$ clusters of detectors and in each cluster there are $n$ detectors.
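The role of the large parameter $n$ can be seen in a toy one-detector version of this scheme (all numerical values below are hypothetical, with constant signal intensity $S$ after the arrival time $\tau$): the total count is Poisson with mean $n(\lambda_0 T+S(T-\tau))$, an estimate of $\tau$ is obtained by inverting this mean, and its standard deviation decays like $1/\sqrt{n}$.

```python
import math
import random

# Toy one-detector model: noise lambda0 on [0, T], constant extra intensity S
# after the arrival time tau_true, everything multiplied by the large
# parameter n.  Hypothetical values for illustration only.
T, lambda0, S, tau_true = 10.0, 0.1, 1.0, 3.0

def sample_poisson(mean, rng):
    """Poisson sampler: split a large mean into chunks and use inversion
    (Poisson(m1) + Poisson(m2) has the Poisson(m1 + m2) distribution)."""
    k = 0
    while mean > 0.0:
        m = min(mean, 30.0)
        mean -= m
        u, j, p = rng.random(), 0, math.exp(-m)
        c = p
        while u > c:
            j += 1
            p *= m / j
            c += p
        k += j
    return k

def estimate_tau(n, rng):
    """Invert N/n = lambda0*T + S*(T - tau) for tau."""
    mean = n * (lambda0 * T + S * (T - tau_true))
    N = sample_poisson(mean, rng)
    return T + lambda0 * T / S - N / (n * S)

def sd_of_estimate(n, reps, rng):
    xs = [estimate_tau(n, rng) for _ in range(reps)]
    mu = sum(xs) / reps
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / reps)

rng = random.Random(42)
sd10 = sd_of_estimate(10, 2000, rng)
sd100 = sd_of_estimate(100, 2000, rng)
ratio = sd10 / sd100   # should be close to sqrt(10)
```

Multiplying the intensity by $10$ reduces the standard deviation of the estimate by a factor close to $\sqrt{10}\approx 3.16$.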
The likelihood ratio function is
$$L\big(\vartheta,X^{(n)}\big)=\exp\Big\{\sum_{j=1}^{k}\int_0^T \ln\frac{\lambda_{j,n}(\vartheta,t)}{n\lambda_0}\,\mathrm{d}X_j(t)-\sum_{j=1}^{k}\int_0^T \big[\lambda_{j,n}(\vartheta,t)-n\lambda_0\big]\,\mathrm{d}t\Big\},\qquad \vartheta\in\Theta.$$
Here $X_j=(X_j(t),\,0\le t\le T)$, $j=1,\dots,k$, are the counting processes from the $k$ detectors. Having this likelihood ratio formula, we define the maximum likelihood estimator (MLE) $\hat\vartheta_n$ and the Bayesian estimator (BE) $\tilde\vartheta_n$ by the “usual” relations
$$L\big(\hat\vartheta_n,X^{(n)}\big)=\sup_{\vartheta\in\Theta}L\big(\vartheta,X^{(n)}\big),\tag{1}$$
$$\tilde\vartheta_n=\frac{\int_\Theta \vartheta\,p(\vartheta)\,L\big(\vartheta,X^{(n)}\big)\,\mathrm{d}\vartheta}{\int_\Theta p(\vartheta)\,L\big(\vartheta,X^{(n)}\big)\,\mathrm{d}\vartheta}.$$
Here $p(\vartheta)$, $\vartheta\in\Theta$, is the prior density. We suppose that it is a positive, continuous function on $\Theta$. If the equation (1) has more than one solution, then any of these solutions can be taken as the MLE. In Section 3 we consider another consistent estimator.
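As a numerical illustration of the maximization defining the MLE, consider a toy version of the model in which only the total counts are used, with a hypothetical constant signal intensity $S$ after arrival; the count of the $j$-th detector is then Poisson with mean $m_j(\vartheta)=\lambda_0 T+S(T-\tau_j(\vartheta))$, and the MLE can be approximated by a crude grid search over $\Theta$.

```python
import math

# Hypothetical geometry and intensities (illustration only).
detectors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
nu, T, lambda0, S = 5.0, 10.0, 1.0, 20.0

def mean_count(theta, xj):
    """Mean count of a detector at xj on [0, T] for a source at theta."""
    tau = math.dist(theta, xj) / nu
    return lambda0 * T + S * max(T - tau, 0.0)

def log_lik(theta, counts):
    """Poisson log-likelihood of the observed counts (up to a constant)."""
    return sum(n * math.log(mean_count(theta, xj)) - mean_count(theta, xj)
               for n, xj in zip(counts, detectors))

def mle_grid(counts, lo=0.0, hi=10.0, steps=101):
    """Crude grid search for the MLE over the square Theta = [lo, hi]^2."""
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return max(((gx, gy) for gx in grid for gy in grid),
               key=lambda th: log_lik(th, counts))

# Noise-free check: counts equal to rounded mean counts at the true position.
true_theta = (2.0, 3.0)
counts = [round(mean_count(true_theta, xj)) for xj in detectors]
theta_hat = mle_grid(counts)
```

With counts generated at their mean values, the grid maximizer lands close to the true position; with random counts it fluctuates around it.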
There are several types of statistical problems, depending on the regularity of the function $S(\cdot)$. In particular, the rate of convergence of the mean square error of the estimator $\bar\vartheta_n$ is
$$\mathbf{E}_{\vartheta_0}\|\bar\vartheta_n-\vartheta_0\|^2=O\big(n^{-\gamma}\big),$$
where the parameter $\gamma>0$ depends on the regularity of the function $S(\cdot)$.
Let us recall some of them using the following intensity functions. We suppose that $\lambda_0>0$ and $\nu>0$ are known, and the set $\Theta$ is such that for all $\vartheta\in\Theta$ the instants $\tau_j=\tau_j(\vartheta)<T$.
See Fig. 1.
- a) Smooth case.
Suppose that , then the problem of parameter estimation is regular, the estimators are asymptotically normal and
- b) Smooth case.
If , then
- c) Cusp-type case.
This case is intermediate between the smooth and change-point cases. Suppose that . Then
- d) Change point case.
Suppose that . Then
- e) Explosion case.
Suppose that . Then
The smooth case a) is studied in this work. See as well the work , where a similar model was considered. The case b) is discussed below in Section 4. For the cusp case c) see , . The change-point case d) is studied in . The explosion case e) can be treated using the technique developed in .
2 Main result
Suppose that there exists a source at some point $\vartheta_0\in\Theta$ and $k$ sensors (detectors) on the same plane located at the points $D_1,\dots,D_k$. The source was activated at the (known) instant $t=0$ and the signals from the source (inhomogeneous Poisson processes) are registered by all $k$ detectors. The signal arrives at the $j$-th detector at the instant $\tau_j$. Of course, $\tau_j$ is the time necessary for the signal to arrive at the $j$-th detector, defined by the relation
$$\tau_j=\nu^{-1}\|\vartheta_0-D_j\|,\qquad j=1,\dots,k,$$
where $\nu>0$ is the known speed of propagation of the signal and $\|\cdot\|$ is the Euclidean norm (distance) in $\mathbb{R}^2$.
The intensity function of the Poisson process registered by the $j$-th detector is
$$\lambda_{j,n}(\vartheta_0,t)=n\,S(t-\tau_j)\,\mathbb{1}_{\{t\ge\tau_j\}}+n\lambda_0,\qquad 0\le t\le T.$$
Here $S(\cdot)$ is the intensity function of the signal and $\lambda_0>0$ is the intensity of the noise. For simplicity of the exposition we suppose that the noise level in all detectors is the same.
Introduce the notations:
Recall that for
and note that they are formally the scalar product and the norm in $\mathbb{R}^2$ of the corresponding vectors with weights, but both depend on $\vartheta$ in a very special way. The Fisher information matrix is
$$\mathrm{I}(\vartheta)=\sum_{j=1}^{k}\int_0^T\frac{\dot\lambda_j(\vartheta,t)\,\dot\lambda_j(\vartheta,t)^{\top}}{\lambda_j(\vartheta,t)}\,\mathrm{d}t,$$
where $\dot\lambda_j(\vartheta,t)=\big(\partial\lambda_j/\partial x,\ \partial\lambda_j/\partial y\big)^{\top}$ is the vector of partial derivatives with respect to the components of $\vartheta=(x,y)$.
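The matrix $\mathrm{I}(\vartheta)$ can be illustrated numerically in the toy count-based version of the model (constant signal intensity $S$ after arrival, hypothetical geometry). For Poisson counts with means $m_j(\vartheta)$ the Fisher information is $\sum_j \dot m_j\dot m_j^{\top}/m_j$, and the gradient of the arrival time is $\dot\tau_j(\vartheta)=(\vartheta-D_j)/(\nu\|\vartheta-D_j\|)$.

```python
import math

# Hypothetical geometry and intensities (illustration only).
detectors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
nu, T, lambda0, S = 5.0, 10.0, 1.0, 20.0

def fisher(theta):
    """Fisher information of the vector of counts: sum_j g_j g_j^T / m_j,
    where m_j = lambda0*T + S*(T - tau_j) and g_j = grad m_j = -S * grad tau_j."""
    info = [[0.0, 0.0], [0.0, 0.0]]
    for xj in detectors:
        d = math.dist(theta, xj)
        m = lambda0 * T + S * (T - d / nu)
        g = [-S * (theta[i] - xj[i]) / (nu * d) for i in range(2)]
        for a in range(2):
            for b in range(2):
                info[a][b] += g[a] * g[b] / m
    return info

info = fisher((2.0, 3.0))
det_info = info[0][0] * info[1][1] - info[0][1] * info[1][0]
```

For three non-collinear detectors the gradients $g_j$ span the plane, so the matrix is positive definite, in line with the non-degeneracy condition below.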
Further, we suppose that and that the functions are defined on the sets .
Regularity conditions.
. For all the functions
. The functions have two continuous derivatives and .
. The Fisher information matrix $\mathrm{I}(\vartheta)$ is uniformly non-degenerate:
$$\inf_{\vartheta\in\Theta}\ \inf_{\|e\|=1}e^{\top}\mathrm{I}(\vartheta)\,e>0.$$
. There are at least three detectors which are not on the same line.
Remark that if all detectors are on the same line, then consistent identification is impossible, because the same signals come from two possible locations of the source symmetric with respect to this line.
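This remark is easy to check directly (hypothetical numbers below): for detectors on one line the mirror image of the source across that line gives identical arrival times, while the non-collinearity of three points can be verified by a cross product.

```python
import math

nu = 5.0  # known propagation speed (hypothetical value)

def taus(theta, sensors):
    """Arrival times at the given sensors for a source at theta."""
    return [math.dist(theta, xj) / nu for xj in sensors]

def non_collinear(p, q, r):
    """True if the three points do not lie on one line (nonzero cross product)."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (q[1] - p[1]) * (r[0] - p[0])) > 1e-12

# All detectors on the x-axis: the source and its mirror image across the
# x-axis produce exactly the same arrival times, hence cannot be distinguished.
collinear = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
theta = (2.0, 3.0)
mirror = (2.0, -3.0)
t_theta = taus(theta, collinear)
t_mirror = taus(mirror, collinear)
```

Adding any detector off the x-axis breaks the symmetry and restores identifiability.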
According to Lemma 1 below, the family of measures induced by the Poisson processes in the space of their realizations is locally asymptotically normal, and therefore we have the following minimax Hajek–Le Cam lower bound on the mean square errors of all estimators $\bar\vartheta_n$: for any $\vartheta_0\in\Theta$
$$\lim_{\delta\to 0}\ \varliminf_{n\to\infty}\ \sup_{\|\vartheta-\vartheta_0\|\le\delta} n\,\mathbf{E}_{\vartheta}\|\bar\vartheta_n-\vartheta\|^2\ \ge\ \mathbf{E}\|\zeta\|^2,\qquad \zeta\sim\mathcal{N}\big(0,\mathrm{I}(\vartheta_0)^{-1}\big).$$
We call the estimator $\bar\vartheta_n$ asymptotically efficient if for all $\vartheta_0\in\Theta$ we have the equality
$$\lim_{\delta\to 0}\ \lim_{n\to\infty}\ \sup_{\|\vartheta-\vartheta_0\|\le\delta} n\,\mathbf{E}_{\vartheta}\|\bar\vartheta_n-\vartheta\|^2=\mathbf{E}\|\zeta\|^2.$$
For the proof of this bound see, e.g., , Theorem 2.12.1.
Let the conditions be fulfilled. Then the MLE $\hat\vartheta_n$ and the BE $\tilde\vartheta_n$ are uniformly consistent and asymptotically normal,
$$\sqrt{n}\,\big(\hat\vartheta_n-\vartheta_0\big)\Longrightarrow\zeta,\qquad \sqrt{n}\,\big(\tilde\vartheta_n-\vartheta_0\big)\Longrightarrow\zeta,$$
where $\zeta\sim\mathcal{N}\big(0,\mathrm{I}(\vartheta_0)^{-1}\big)$, and both estimators are asymptotically efficient.
Proof. The proof of this theorem is based on two general results by Ibragimov and Khasminskii  presented in Theorems 1.10.1 and 1.10.2. We have to check the conditions of these theorems given in terms of the normalized likelihood ratio
$$Z_n(u)=\frac{L\big(\vartheta_0+u/\sqrt{n},X^{(n)}\big)}{L\big(\vartheta_0,X^{(n)}\big)},\qquad u\in U_n=\big\{u:\ \vartheta_0+u/\sqrt{n}\in\Theta\big\}.$$
Introduce the limit likelihood ratio
$$Z(u)=\exp\Big\{\langle u,\Delta\rangle-\tfrac12\,u^{\top}\mathrm{I}(\vartheta_0)\,u\Big\},\qquad \Delta\sim\mathcal{N}\big(0,\mathrm{I}(\vartheta_0)\big).$$
Suppose that we have already proved the weak convergence $Z_n(\cdot)\Longrightarrow Z(\cdot)$. Then the limit distributions of the mentioned estimators are obtained as follows (see ). Below we change the variables $\vartheta=\vartheta_0+u/\sqrt{n}$; the set over which $u$ varies is bounded.
For the MLE we have
It is easy to see that
For the BE we have (once more we change the variables ):
Recall that and note that
The properties of $Z_n(\cdot)$ required in Theorems 1.10.1 and 1.10.2  are checked in the three lemmas below. Recall that this approach to the study of the properties of these estimators was applied in , . Here we use some inequalities obtained there.
Introduce the vector of partial derivatives
The convergence of the finite-dimensional distributions of the random field $Z_n(\cdot)$ to those of the limit random field $Z(\cdot)$ follows from Lemma 1 below.
Let the conditions be fulfilled. Then the family of measures is locally asymptotically normal (LAN), i.e., the random process $Z_n(u)$ for any $u\in\mathbb{R}^2$ admits the representation
$$Z_n(u)=\exp\Big\{\langle u,\Delta_n(\vartheta_0)\rangle-\tfrac12\,u^{\top}\mathrm{I}(\vartheta_0)\,u+r_n(u,\vartheta_0)\Big\},\qquad r_n(u,\vartheta_0)\to 0,$$
where the vector $\Delta_n(\vartheta_0)\Longrightarrow\Delta\sim\mathcal{N}\big(0,\mathrm{I}(\vartheta_0)\big)$.
Proof. Let us denote and put . Then we can write
Using the Taylor formula we obtain the relations
Therefore we can write
These equalities justify the form of the Fisher information matrix $\mathrm{I}(\vartheta)$ introduced above.
We have the representations
where are independent Wiener processes and
The conditions of this theorem can be easily verified for the corresponding vectors given by the presentation .
Let the condition be fulfilled. Then there exists a constant, which does not depend on , such that for any
Proof. The proof of this lemma follows from the proof of Lemma 2.2 in  if we put there . The difference between the models of observations there and here is not essential for the proof presented there.
Let the conditions be fulfilled. Then there exists a constant, which does not depend on , such that
Proof. Let us denote and put
Recall that (see Lemma 2.2 in )
Therefore we have the equality
By the Taylor formula, for  we can write
Hence we can take such (small) that for we have
where from the condition .
Let us denote
and show that . Remark that does not depend on . Indeed,