# Poisson Source Localization on the Plane: Smooth Case

We consider the problem of localization of a Poisson source from observations of inhomogeneous Poisson processes. We suppose that there are k detectors on the plane and that each detector provides observations of a Poisson process whose intensity function depends on the position of the emitter. We describe the properties of the maximum likelihood and Bayesian estimators. We show that under regularity conditions these estimators are consistent, asymptotically normal and asymptotically efficient. Then we propose some simple consistent estimators, which are further used to construct an asymptotically efficient One-step MLE-process.



## 1 Introduction

We consider the problem of estimating the position ϑ0 of a source emitting Poisson signals which are received by k sensors distributed on the plane. We suppose that the source starts emission at the instant t = 0 and that the j-th sensor receives the data, which can be described as an inhomogeneous Poisson process Xj = (Xj(t), 0 ≤ t ≤ T), whose intensity function increases at the moment τj of arrival of the signal. Here λ0 > 0 is the intensity of the Poisson noise and τj is the time needed for the signal to arrive at the j-th detector. For the j-th detector, located at the point ϑj, we have τj = ν⁻¹‖ϑj − ϑ0‖, where ν > 0 is the known rate of propagation of the signal and ‖·‖ is the Euclidean norm on the plane. We suppose that ϑj ≠ ϑi for j ≠ i. Therefore we have k independent inhomogeneous Poisson processes with intensities depending on ϑ0. We suppose that the position ϑ0 ∈ Θ of the source is unknown and we have to estimate it by the observations Xn = (X1,…,Xk). Here Θ ⊂ R² is a convex bounded set.

Note that the same mathematical model arises in the problem of GPS localization on the plane. Indeed, in this case we have k emitters with known positions and an object which receives these signals and has to estimate its own position. Therefore we have k observations of inhomogeneous Poisson processes with intensity functions depending on the position of the object, and we have to estimate the coordinates of this object.

Due to the importance of this type of model in many applied problems, there exists a wide literature devoted to different algorithms of localization (see the introduction in the work  and the references therein). It seems that the mathematical study of this class of models has not yet been sufficiently developed. The statistical models of inhomogeneous Poisson processes with intensity functions having discontinuities along curves depending on unknown parameters were considered in , Sections 5.2 and 5.3. Statistical inference for point processes can be found in the works ,  and .

We are interested in models of observations which allow estimation with small errors. As usual, once we speak of a "small error" we have to consider some asymptotic statement. Small errors can be obtained, for example, if the intensity of the signal takes large values, if we observe a periodic Poisson process, or if we have many sensors. We take the model with large intensity functions, which can be written as follows:

$$\lambda_{j,n}(\vartheta_0,t)=n\lambda_j(t-\tau_j)+n\lambda_0,\qquad 0\le t\le T,$$

or in the equivalent form

$$\lambda_{j,n}(\vartheta_0,t)=n\bigl[\lambda_j(t-\tau_j)+\lambda_0\bigr],\qquad 0\le t\le T.$$

Here n is a "large parameter" and we study estimators as n → ∞. Such a model can be obtained, for example, if we have k clusters of detectors and each cluster contains n detectors.

The log-likelihood ratio function is

$$\ln L(\vartheta,X^n)=\sum_{j=1}^{k}\int_{\tau_j}^{T}\ln\Bigl(1+\frac{\lambda_j(t-\tau_j)}{\lambda_0}\Bigr)\mathrm dX_j(t)-n\sum_{j=1}^{k}\int_{\tau_j}^{T}\lambda_j(t-\tau_j)\,\mathrm dt.$$

Here τj = τj(ϑ) and Xj = (Xj(t), 0 ≤ t ≤ T), j = 1,…,k, are the counting processes from the k detectors. Having this likelihood ratio formula, we define the maximum likelihood estimator (MLE) ^ϑn and the Bayesian estimator (BE) ~ϑn by the "usual" relations

$$L(\hat\vartheta_n,X^n)=\sup_{\vartheta\in\Theta}L(\vartheta,X^n)\tag{1}$$

and

$$\tilde\vartheta_n=\frac{\int_{\Theta}\vartheta\,p(\vartheta)L(\vartheta,X^n)\,\mathrm d\vartheta}{\int_{\Theta}p(\vartheta)L(\vartheta,X^n)\,\mathrm d\vartheta}.\tag{2}$$

Here p(ϑ), ϑ ∈ Θ, is the prior density. We suppose that it is a positive, continuous function on Θ. If equation (1) has more than one solution, then any of these solutions can be taken as the MLE. In Section 3 we consider another consistent estimator.
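The definitions (1) and (2) can be illustrated numerically. The sketch below simulates observations by thinning and then computes the MLE by a grid search over Θ and the BE as a posterior mean with a uniform prior; the signal shape λj(t) = 2t e^(−t), the detector layout and all numerical values are hypothetical choices for illustration, not the paper's numerical study.

```python
import numpy as np

# Hypothetical configuration (illustration only; values are not from the text).
rng = np.random.default_rng(1)
nu, lambda0, T, n = 1.0, 1.0, 10.0, 100
theta0 = np.array([2.0, 3.0])
detectors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
signal = lambda t: np.where(t > 0, 2.0 * t * np.exp(-t), 0.0)  # assumed lambda_j
tau = lambda th: np.linalg.norm(detectors - th, axis=1) / nu

# Simulate the k counting processes by thinning.
X = []
for tj in tau(theta0):
    lam_max = 1.05 * n * (signal(np.linspace(0.0, T, 1000) - tj).max() + lambda0)
    cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
    X.append(cand[rng.uniform(0.0, lam_max, cand.size) < n * (signal(cand - tj) + lambda0)])

s = np.linspace(0.0, T, 2000)
ds = s[1] - s[0]

def loglik(th):
    """ln L(theta, X^n): sum over the jump times minus the compensator term."""
    val = 0.0
    for tj, x in zip(tau(th), X):
        val += np.log1p(signal(x - tj) / lambda0).sum()
        val -= n * signal(s - tj).sum() * ds        # ~ n * int lambda_j(t - tau_j) dt
    return val

# MLE (1): maximize ln L over a grid on Theta = [0, 5] x [0, 5].
g = np.linspace(0.0, 5.0, 51)
grid = np.array([[a, b] for a in g for b in g])
L = np.array([loglik(th) for th in grid])
mle = grid[L.argmax()]

# BE (2) with a uniform prior: posterior mean computed with stabilized weights.
w = np.exp(L - L.max())
be = (grid * w[:, None]).sum(axis=0) / w.sum()
```

The grid search is crude but makes the construction transparent: both estimates concentrate near the true position once n is moderately large, in line with the consistency results below.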

There are several types of statistical problems, depending on the regularity of the function λj(·). In particular, the rate of convergence of the mean square error of an estimator ¯ϑn is

$$E_{\vartheta_0}\|\bar\vartheta_n-\vartheta_0\|^{2}=\frac{C}{n^{\gamma}}\,(1+o(1)),$$

where the parameter γ > 0 depends on the regularity of the function λj(·).

Let us recall several of them using the following intensity functions

 (3)

We suppose that λ0 > 0 and κ are known, and that the set Θ is such that for all ϑ ∈ Θ the instants τj(ϑ) ∈ (0, T).

Figure 1: Intensity functions: a) κ = 5/8, b) κ = 1/2, c) κ = 1/8, d) κ = 0, e) κ = −3/8.
a) Smooth case.

Suppose that κ > 1/2; then the problem of parameter estimation is regular, the estimators are asymptotically normal, and

$$E_{\vartheta_0}\|\tilde\vartheta_n-\vartheta_0\|^{2}=\frac{C}{n}\,(1+o(1)),\qquad\gamma=1.$$
b) Smooth case.

If κ = 1/2, then

$$E_{\vartheta_0}\|\tilde\vartheta_n-\vartheta_0\|^{2}=\frac{C}{n\ln n}\,(1+o(1)).$$
c) Cusp-type case.

This case is intermediate between the smooth and change-point cases. Suppose that κ ∈ (0, 1/2). Then

$$E_{\vartheta_0}\|\tilde\vartheta_n-\vartheta_0\|^{2}=\frac{C}{n^{2/(2\kappa+1)}}\,(1+o(1)),\qquad\gamma=\frac{2}{2\kappa+1}.$$

d) Change point case.

Suppose that κ = 0. Then

$$E_{\vartheta_0}\|\tilde\vartheta_n-\vartheta_0\|^{2}=\frac{C}{n^{2}}\,(1+o(1)),\qquad\gamma=2.$$
e) Explosion case.

Suppose that κ ∈ (−1/2, 0). Then

$$E_{\vartheta_0}\|\tilde\vartheta_n-\vartheta_0\|^{2}=\frac{C}{n^{2/(2\kappa+1)}}\,(1+o(1)),\qquad\gamma=\frac{2}{2\kappa+1}>2.$$

The smooth case a) is studied in this work; see as well the work , where a similar model was considered. The case b) is discussed below in Section 4. For the cusp-type case c) see , . The change-point case d) is studied in . The explosion case e) can be treated using the technique developed in .

## 2 Main result

Suppose that there is a source at some point ϑ0 = (x0, y0) and k sensors (detectors) on the same plane located at the points ϑj = (xj, yj), j = 1,…,k. The source was activated at the (known) instant t = 0, and the signals from the source (inhomogeneous Poisson processes) are registered by all k detectors. The signal arrives at the j-th detector at the instant τj = τj(ϑ0), the time necessary for the signal to reach the j-th detector, defined by the relation

$$\tau_j(\vartheta_0)=\nu^{-1}\|\vartheta_j-\vartheta_0\|,$$

where ν > 0 is the known speed of propagation of the signal and ‖·‖ is the Euclidean norm (distance) in R².

The intensity function of the Poisson process Xj = (Xj(t), 0 ≤ t ≤ T) registered by the j-th detector is

$$\lambda_{j,n}(\vartheta,t)=n\lambda_j(t-\tau_j)+n\lambda_0,\qquad 0\le t\le T.$$

Here λj(·) is the intensity function of the signal and λ0 > 0 is the intensity of the noise. For simplicity of exposition we suppose that the noise level is the same in all detectors.

Introduce the notation

$$\alpha_j=\inf_{\vartheta\in\Theta}\tau_j(\vartheta),\qquad \beta_j=\sup_{\vartheta\in\Theta}\tau_j(\vartheta),\qquad j=1,\dots,k,$$

$$J_j(\vartheta)=\frac{1}{\nu^{2}\|\vartheta_j-\vartheta\|^{2}}\int_{\tau_j(\vartheta)}^{T}\frac{\lambda_j'(t-\tau_j(\vartheta))^{2}}{\lambda_j(t-\tau_j(\vartheta))+\lambda_0}\,\mathrm dt,\qquad \langle a,b\rangle_\vartheta=\sum_{j=1}^{k}a_jb_jJ_j(\vartheta),\qquad \|a\|_\vartheta^{2}=\langle a,a\rangle_\vartheta.$$

Recall that λj(t − τj(ϑ)) = 0 for t ≤ τj(ϑ), and note that ⟨a,b⟩ϑ and ‖a‖ϑ are formally the scalar product and the norm in R^k of the vectors a = (a1,…,ak) and b = (b1,…,bk) with weights Jj(ϑ), but both depend on ϑ in a very special way. The Fisher information matrix is I(ϑ) = (I_{lm}(ϑ)), l, m = 1, 2, where

$$I_{11}(\vartheta)=\sum_{j=1}^{k}(x_j-x)^{2}J_j(\vartheta),\qquad I_{12}(\vartheta)=I_{21}(\vartheta)=\sum_{j=1}^{k}(x_j-x)(y_j-y)J_j(\vartheta),\qquad I_{22}(\vartheta)=\sum_{j=1}^{k}(y_j-y)^{2}J_j(\vartheta).$$

Here ϑ = (x, y) and ϑj = (xj, yj).

Further, we suppose that ϑ0 ∈ Θ and that the functions λj(·) are defined on the intervals [−βj, T − αj].
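A minimal numeric sketch of the weights Jj(ϑ) and of the Fisher information matrix I(ϑ): the signal shape λj(t) = 2t e^(−t), the detector layout and all numerical values are assumptions made for illustration, not values from the text.

```python
import numpy as np

# Hypothetical configuration (illustration only; values are not from the text).
nu, lambda0, T = 1.0, 1.0, 10.0
detectors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
signal = lambda t: np.where(t > 0, 2.0 * t * np.exp(-t), 0.0)            # assumed lambda_j
dsignal = lambda t: np.where(t > 0, 2.0 * np.exp(-t) * (1.0 - t), 0.0)   # lambda_j'

def fisher(theta):
    """I(theta) assembled from the weights J_j(theta) defined above."""
    s = np.linspace(0.0, T, 4000)
    ds = s[1] - s[0]
    I = np.zeros((2, 2))
    for pos in detectors:
        d = pos - theta
        rho2 = d @ d
        tj = np.sqrt(rho2) / nu
        integrand = dsignal(s - tj) ** 2 / (signal(s - tj) + lambda0)
        J = integrand.sum() * ds / (nu ** 2 * rho2)   # J_j(theta)
        I += J * np.outer(d, d)   # entries (x_j - x)^2 J_j, (x_j - x)(y_j - y) J_j, ...
    return I

I0 = fisher(np.array([2.0, 3.0]))
eigvals = np.linalg.eigvalsh(I0)
```

For three non-collinear detectors the matrix comes out symmetric and positive definite, which is exactly the uniform non-degeneracy required by the regularity conditions below.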

Regularity conditions.

1. For all j = 1,…,k the functions λj(·) satisfy

$$\lambda_j(t)=0,\ t\in[-\beta_j,0],\qquad\text{and}\qquad \lambda_j(t)>0,\ t\in(0,T-\alpha_j].$$

2. The functions λj(·) have two continuous derivatives.

3. The Fisher information matrix is uniformly non-degenerate:

$$\kappa_1=\inf_{\vartheta\in\Theta}\inf_{|e|=1}e^{T}I(\vartheta)e>0.$$

4. There are at least three detectors which are not on the same line.

Remark that if all detectors are on the same line, then consistent identification is impossible, because the same signals come from the two possible locations of the source symmetric with respect to this line.

According to Lemma 1 below, the family of measures induced by the Poisson processes in the space of their realizations is locally asymptotically normal, and therefore we have the following minimax Hajek–Le Cam lower bound on the mean square errors of all estimators ¯ϑn: for any ϑ0 ∈ Θ,

$$\lim_{\delta\to0}\varliminf_{n\to\infty}\sup_{\|\vartheta-\vartheta_0\|\le\delta}nE_{\vartheta}\|\bar\vartheta_n-\vartheta\|^{2}\ge E_{\vartheta_0}\|\zeta\|^{2},\qquad \zeta\sim N\bigl(0,I(\vartheta_0)^{-1}\bigr).$$

We call an estimator ¯ϑn asymptotically efficient if for all ϑ0 ∈ Θ we have the equality

$$\lim_{\delta\to0}\lim_{n\to\infty}\sup_{\|\vartheta-\vartheta_0\|\le\delta}nE_{\vartheta}\|\bar\vartheta_n-\vartheta\|^{2}=E_{\vartheta_0}\|\zeta\|^{2}.$$

For the proof of this bound see, e.g., , Theorem 2.12.1.

###### Theorem 1

Let the regularity conditions be fulfilled. Then the MLE ^ϑn and the BE ~ϑn are uniformly consistent and asymptotically normal,

$$\sqrt n\,(\hat\vartheta_n-\vartheta_0)\Longrightarrow\zeta,\qquad \sqrt n\,(\tilde\vartheta_n-\vartheta_0)\Longrightarrow\zeta,$$

for any p > 0

$$\lim_{n\to\infty}n^{p/2}E_{\vartheta_0}\|\hat\vartheta_n-\vartheta_0\|^{p}=E_{\vartheta_0}\|\zeta\|^{p},\qquad \lim_{n\to\infty}n^{p/2}E_{\vartheta_0}\|\tilde\vartheta_n-\vartheta_0\|^{p}=E_{\vartheta_0}\|\zeta\|^{p},$$

where ζ ∼ N(0, I(ϑ0)⁻¹), and both estimators are asymptotically efficient.

Proof. The proof of this theorem is based on two general results by Ibragimov and Khasminskii , presented in Theorems 1.10.1 and 1.10.2. We have to check the conditions of these theorems, given in terms of the normalized likelihood ratio

$$Z_n(u)=\frac{L\bigl(\vartheta_0+\frac{u}{\sqrt n},X^n\bigr)}{L(\vartheta_0,X^n)},\qquad u\in U_n=\Bigl\{u:\vartheta_0+\frac{u}{\sqrt n}\in\Theta\Bigr\}.$$

Introduce the limit likelihood ratio

$$Z(u)=\exp\Bigl\{\langle u,\Delta(\vartheta_0)\rangle-\frac12u^{T}I(\vartheta_0)u\Bigr\},\qquad u\in\mathbb R^{2}.$$

Here Δ(ϑ0) ∼ N(0, I(ϑ0)).

Suppose that we have already proved the weak convergence

$$Z_n(\cdot)\Longrightarrow Z(\cdot).$$

Then the limit distributions of the mentioned estimators are obtained as follows (see ). Below we change the variables ϑ = ϑ0 + u/√n, and B ⊂ R² is a bounded set.

For the MLE we have

$$\begin{aligned}
P_{\vartheta_0}\bigl(\sqrt n(\hat\vartheta_n-\vartheta_0)\in B\bigr)&=P_{\vartheta_0}\Bigl\{\sup_{\sqrt n(\vartheta-\vartheta_0)\in B}L(\vartheta,X^n)>\sup_{\sqrt n(\vartheta-\vartheta_0)\in B^{c}}L(\vartheta,X^n)\Bigr\}\\
&=P_{\vartheta_0}\Bigl\{\sup_{\sqrt n(\vartheta-\vartheta_0)\in B}\frac{L(\vartheta,X^n)}{L(\vartheta_0,X^n)}>\sup_{\sqrt n(\vartheta-\vartheta_0)\in B^{c}}\frac{L(\vartheta,X^n)}{L(\vartheta_0,X^n)}\Bigr\}\\
&=P_{\vartheta_0}\Bigl\{\sup_{u\in B\cap U_n}Z_n(u)>\sup_{u\in B^{c}\cap U_n}Z_n(u)\Bigr\}\\
&\longrightarrow P_{\vartheta_0}\Bigl\{\sup_{u\in B}Z(u)>\sup_{u\in B^{c}}Z(u)\Bigr\}=P_{\vartheta_0}(\zeta\in B).
\end{aligned}$$

It is easy to see that the limit likelihood ratio Z(u) attains its maximum at the unique point

$$\zeta=I(\vartheta_0)^{-1}\Delta(\vartheta_0)\sim N\bigl(0,I(\vartheta_0)^{-1}\bigr).$$

For the BE we have (once more we change the variables, θu = ϑ0 + u/√n):

$$\begin{aligned}
\tilde\vartheta_n&=\frac{\int_{\Theta}\theta\,p(\theta)L(\theta,X^n)\,\mathrm d\theta}{\int_{\Theta}p(\theta)L(\theta,X^n)\,\mathrm d\theta}=\vartheta_0+\frac{1}{\sqrt n}\,\frac{\int_{U_n}u\,p(\theta_u)L(\theta_u,X^n)\,\mathrm du}{\int_{U_n}p(\theta_u)L(\theta_u,X^n)\,\mathrm du}\\
&=\vartheta_0+\frac{1}{\sqrt n}\,\frac{\int_{U_n}u\,p(\theta_u)Z_n(u)\,\mathrm du}{\int_{U_n}p(\theta_u)Z_n(u)\,\mathrm du}.
\end{aligned}$$

Hence

$$\sqrt n\,(\tilde\vartheta_n-\vartheta_0)=\frac{\int_{U_n}u\,p(\theta_u)Z_n(u)\,\mathrm du}{\int_{U_n}p(\theta_u)Z_n(u)\,\mathrm du}\Longrightarrow\frac{\int_{\mathbb R^{2}}uZ(u)\,\mathrm du}{\int_{\mathbb R^{2}}Z(u)\,\mathrm du}=\zeta.$$

Recall that ζ = I(ϑ0)⁻¹Δ(ϑ0) and note that

$$\int_{\mathbb R^{2}}uZ(u)\,\mathrm du=\zeta\int_{\mathbb R^{2}}Z(u)\,\mathrm du.$$

The properties of Zn(·) required in Theorems 1.10.1 and 1.10.2 of  are checked in the three lemmas below. Recall that this approach to the study of the properties of these estimators was applied in , . Here we use some of the inequalities obtained there.

Introduce the vector of partial derivatives

$$\Delta_n(\vartheta_0,X^n)=\frac{1}{\sqrt n}\Bigl(\frac{\partial\ln L(\vartheta_0,X^n)}{\partial x_0},\,\frac{\partial\ln L(\vartheta_0,X^n)}{\partial y_0}\Bigr)^{T}.$$

The convergence of the finite-dimensional distributions of the random field Zn(·) to those of the limit random field Z(·) follows from Lemma 1 below.

###### Lemma 1

Let the regularity conditions be fulfilled. Then the family of measures is locally asymptotically normal (LAN), i.e., the random process Zn(u), u ∈ Un, admits the representation

$$Z_n(u)=\exp\Bigl\{\langle u,\Delta_n(\vartheta_0,X^n)\rangle-\frac12u^{T}I(\vartheta_0)u+r_n\Bigr\},\qquad u\in U_n,\tag{4}$$

where the vector

$$\Delta_n(\vartheta_0,X^n)\Longrightarrow\Delta(\vartheta_0)\sim N\bigl(0,I(\vartheta_0)\bigr)\tag{5}$$

and rn → 0 in probability.

Proof. Let us denote λj(t,u) = λj(t − τj(ϑ0 + u/√n)) and put dπj,n(t) = dXj(t) − n[λj(t,0) + λ0]dt. Then we can write

$$\begin{aligned}
\ln Z_n(u)&=\sum_{j=1}^{k}\int_0^{T}\ln\Bigl(\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}\Bigr)\mathrm d\pi_{j,n}(t)\\
&\quad-n\sum_{j=1}^{k}\int_0^{T}\Bigl[\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}-1-\ln\Bigl(\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}\Bigr)\Bigr]\bigl[\lambda_j(t,0)+\lambda_0\bigr]\mathrm dt.
\end{aligned}$$

Using the Taylor formula (here ρj = ‖ϑj − ϑ0‖) we obtain the relations

$$\begin{aligned}
\tau_j\Bigl(\vartheta_0+\frac{u}{\sqrt n}\Bigr)&=\tau_j(\vartheta_0)-\frac{1}{\nu\sqrt n}\langle m_j,u\rangle+O\Bigl(\frac1n\Bigr),\qquad m_j=\Bigl(\frac{x_j-x_0}{\rho_j},\frac{y_j-y_0}{\rho_j}\Bigr),\quad\|m_j\|=1,\\
\lambda_j\bigl(t-\tau_j(\vartheta_0+n^{-1/2}u)\bigr)-\lambda_j\bigl(t-\tau_j(\vartheta_0)\bigr)&=-n^{-1/2}\lambda_j'(t-\tau_j(\vartheta_0))\Bigl\langle u,\frac{\partial\tau_j(\vartheta_0)}{\partial\vartheta}\Bigr\rangle+n^{-1}O(\|u\|^{2})\\
&=n^{-1/2}\nu^{-1}\lambda_j'(t-\tau_j(\vartheta_0))\langle m_j,u\rangle+n^{-1}O(\|u\|^{2}),\\
\ln\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}&=-\frac{\lambda_j'(t-\tau_j(\vartheta_0))}{\sqrt n\,[\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0]}\Bigl\langle u,\frac{\partial\tau_j(\vartheta_0)}{\partial\vartheta}\Bigr\rangle+O\Bigl(\frac1n\Bigr)\\
&=\frac{\lambda_j'(t-\tau_j(\vartheta_0))}{\nu\sqrt n\,[\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0]}\langle m_j,u\rangle+O\Bigl(\frac1n\Bigr),\\
\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}-1-\ln\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}&=\frac{1}{2n}\,\frac{\lambda_j'(t-\tau_j(\vartheta_0))^{2}}{[\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0]^{2}}\Bigl\langle u,\frac{\partial\tau_j(\vartheta_0)}{\partial\vartheta}\Bigr\rangle^{2}+O\Bigl(\frac{1}{n^{3/2}}\Bigr)\\
&=\frac{1}{2n\nu^{2}}\,\frac{\lambda_j'(t-\tau_j(\vartheta_0))^{2}}{[\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0]^{2}}\langle m_j,u\rangle^{2}+O\Bigl(\frac{1}{n^{3/2}}\Bigr).
\end{aligned}$$

Note that

$$\frac{\partial\tau_j(\vartheta_0)}{\partial x_0}=-\frac{x_j-x_0}{\nu\|\vartheta_j-\vartheta_0\|},\qquad\frac{\partial\tau_j(\vartheta_0)}{\partial y_0}=-\frac{y_j-y_0}{\nu\|\vartheta_j-\vartheta_0\|}.$$

Therefore we can write

$$\frac{\partial\ln L(\vartheta_0,X^n)}{\partial x_0}=\sum_{j=1}^{k}\frac{x_j-x_0}{\nu\|\vartheta_j-\vartheta_0\|}\int_{\tau_j(\vartheta_0)}^{T}\frac{\lambda_j'(t-\tau_j(\vartheta_0))}{\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0}\,\mathrm d\pi_{j,n}(t).$$

Hence

$$E_{\vartheta_0}\Bigl[\Bigl(\frac{\partial\ln L(\vartheta_0,X^n)}{\partial x_0}\Bigr)^{2}\Bigr]=n\sum_{j=1}^{k}(x_j-x_0)^{2}J_j(\vartheta_0)$$

and

$$E_{\vartheta_0}\Bigl[\frac{\partial\ln L(\vartheta_0,X^n)}{\partial x_0}\,\frac{\partial\ln L(\vartheta_0,X^n)}{\partial y_0}\Bigr]=n\sum_{j=1}^{k}(x_j-x_0)(y_j-y_0)J_j(\vartheta_0).$$

These equalities justify the form of the Fisher information matrix I(ϑ0) introduced above.
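The score formula above can be checked on simulated data. In the sketch below, everything numeric (signal shape λj(t) = 2t e^(−t), detector layout, n) is a hypothetical choice; the compensator part uses the fact that ∫ from τj to T of λj′(t − τj) dt telescopes to λj(T − τj).

```python
import numpy as np

# Hypothetical configuration (illustration only; values are not from the text).
rng = np.random.default_rng(2)
nu, lambda0, T, n = 1.0, 1.0, 10.0, 100
theta0 = np.array([2.0, 3.0])
detectors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
signal = lambda t: np.where(t > 0, 2.0 * t * np.exp(-t), 0.0)            # assumed lambda_j
dsignal = lambda t: np.where(t > 0, 2.0 * np.exp(-t) * (1.0 - t), 0.0)   # lambda_j'

# Simulate the k counting processes by thinning.
X = []
for tj in np.linalg.norm(detectors - theta0, axis=1) / nu:
    lam_max = 1.05 * n * (signal(np.linspace(0.0, T, 1000) - tj).max() + lambda0)
    cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
    X.append(cand[rng.uniform(0.0, lam_max, cand.size) < n * (signal(cand - tj) + lambda0)])

def score(theta):
    """Vector (d ln L / dx0, d ln L / dy0): a sum over the jump times minus the
    compensator, which reduces to n * lambda_j(T - tau_j) for each detector."""
    grad = np.zeros(2)
    for pos, x in zip(detectors, X):
        d = pos - theta
        rho = np.linalg.norm(d)
        tj = rho / nu
        jump_sum = (dsignal(x - tj) / (signal(x - tj) + lambda0)).sum()
        grad += d / (nu * rho) * (jump_sum - n * signal(T - tj))
    return grad

Delta_n = score(theta0) / np.sqrt(n)   # approximately N(0, I(theta0)) for large n
```

Evaluated at the true ϑ0, the normalized score is of order one, while away from ϑ0 it grows like √n, which is the mechanism behind the LAN representation below.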

We have the representations

$$\begin{aligned}
\sum_{j=1}^{k}\int_0^{T}\ln\Bigl(\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}\Bigr)\mathrm d\pi_{j,n}(t)&=\frac{1}{\nu\sqrt n}\sum_{j=1}^{k}\langle m_j,u\rangle\int_{\tau_j(\vartheta_0)}^{T}\frac{\lambda_j'(t-\tau_j(\vartheta_0))}{\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0}\,\mathrm d\pi_{j,n}(t)+o(1)\\
&=\langle u,\Delta_n(\vartheta_0,X^n)\rangle+o(1),
\end{aligned}$$

and

$$\begin{aligned}
&n\sum_{j=1}^{k}\int_0^{T}\Bigl[\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}-1-\ln\Bigl(\frac{\lambda_j(t,u)+\lambda_0}{\lambda_j(t,0)+\lambda_0}\Bigr)\Bigr]\bigl[\lambda_j(t,0)+\lambda_0\bigr]\mathrm dt\\
&\qquad=\frac{1}{2\nu^{2}}\sum_{j=1}^{k}\langle m_j,u\rangle^{2}\int_{\tau_j(\vartheta_0)}^{T}\frac{\lambda_j'(t-\tau_j(\vartheta_0))^{2}}{\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0}\,\mathrm dt+o(1)=\frac12u^{T}I(\vartheta_0)u+o(1).
\end{aligned}$$

Therefore we obtain (4). To verify the convergence (5) we introduce the vector In = (I1,n, I2,n)ᵀ, where

$$\begin{aligned}
I_{1,n}&=\frac{1}{\sqrt n}\sum_{j=1}^{k}a_j\int_{\tau_j(\vartheta_0)}^{T}\frac{\lambda_j'(t-\tau_j(\vartheta_0))}{\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0}\,\mathrm d\pi_{j,n}(t),\\
I_{2,n}&=\frac{1}{\sqrt n}\sum_{j=1}^{k}b_j\int_{\tau_j(\vartheta_0)}^{T}\frac{\lambda_j'(t-\tau_j(\vartheta_0))}{\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0}\,\mathrm d\pi_{j,n}(t),
\end{aligned}$$

where aj = (xj − x0)/(ν‖ϑj − ϑ0‖) and bj = (yj − y0)/(ν‖ϑj − ϑ0‖), j = 1,…,k. Then the asymptotic normality of In = Δn(ϑ0, Xn) follows from the central limit theorem for stochastic integrals; see, e.g., Theorem 1.1 in . Moreover, we have

$$I_{1,n}\Longrightarrow\sum_{j=1}^{k}a_j\int_{\tau_j(\vartheta_0)}^{T}\frac{\lambda_j'(t-\tau_j(\vartheta_0))}{\lambda_j(t-\tau_j(\vartheta_0))+\lambda_0}\,\mathrm dW_j\bigl(\Lambda_j(\vartheta_0,t)\bigr),$$

where Wj(·), j = 1,…,k, are independent Wiener processes and Λj(ϑ0,t) = ∫₀ᵗ [λj(s − τj(ϑ0)) + λ0] ds.

The conditions of this theorem are easily verified for the corresponding vectors given by this representation.

###### Lemma 2

Let the regularity conditions be fulfilled. Then there exists a constant C > 0, which does not depend on n, such that for any R > 0

$$\sup_{\vartheta_0\in\Theta}\ \sup_{\|u_1\|+\|u_2\|\le R}\|u_1-u_2\|^{-4}\,E_{\vartheta_0}\Bigl|Z_n^{1/4}(u_1)-Z_n^{1/4}(u_2)\Bigr|^{4}\le C\bigl(1+R^{2}\bigr).\tag{6}$$

Proof. The proof of this lemma follows from the proof of Lemma 2.2 in . The difference between the models of observations there and here is not essential for the proof presented there.

###### Lemma 3

Let the regularity conditions be fulfilled. Then there exists a constant κ > 0, which does not depend on n, such that

$$\sup_{\vartheta_0\in\Theta}E_{\vartheta_0}Z_n^{1/2}(u)\le e^{-\kappa\|u\|^{2}}.\tag{7}$$

Proof. Let us denote θu = ϑ0 + u/√n and put

$$Z_{j,n}(u)=\exp\Bigl\{\int_0^{T}\ln\Bigl(\frac{\lambda_{j,n}(\theta_u,t)}{\lambda_{j,n}(\vartheta_0,t)}\Bigr)\mathrm dX_j(t)-\int_0^{T}\bigl[\lambda_{j,n}(\theta_u,t)-\lambda_{j,n}(\vartheta_0,t)\bigr]\mathrm dt\Bigr\}.$$

Recall that (see Lemma 2.2 in )

$$E_{\vartheta_0}Z_{j,n}^{1/2}(u)=\exp\Bigl\{-\frac12\int_0^{T}\Bigl[\sqrt{\lambda_{j,n}(\theta_u,t)}-\sqrt{\lambda_{j,n}(\vartheta_0,t)}\Bigr]^{2}\mathrm dt\Bigr\}.$$

Therefore we have the equality

$$E_{\vartheta_0}Z_n^{1/2}(u)=\prod_{j=1}^{k}E_{\vartheta_0}Z_{j,n}^{1/2}(u)=\exp\Bigl\{-\frac12\sum_{j=1}^{k}\int_0^{T}\Bigl[\sqrt{\lambda_{j,n}(\theta_u,t)}-\sqrt{\lambda_{j,n}(\vartheta_0,t)}\Bigr]^{2}\mathrm dt\Bigr\}.\tag{8}$$

By the Taylor formula, for ‖h‖ ≤ δ we can write

$$\sum_{j=1}^{k}\int_0^{T}\Bigl[\sqrt{\lambda_{j,n}(\vartheta_0+h,t)}-\sqrt{\lambda_{j,n}(\vartheta_0,t)}\Bigr]^{2}\mathrm dt=\frac{n}{4}\,h^{T}I(\vartheta_0)h\,(1+O(\delta)).$$

Hence we can take δ > 0 so small that for ‖u‖ ≤ δ√n we have

$$\frac12\sum_{j=1}^{k}\int_0^{T}\Bigl[\sqrt{\lambda_{j,n}(\theta_u,t)}-\sqrt{\lambda_{j,n}(\vartheta_0,t)}\Bigr]^{2}\mathrm dt\ge\frac18\,\frac{u^{T}}{\|u\|}\,I(\vartheta_0)\,\frac{u}{\|u\|}\,\|u\|^{2}\ge\frac{\kappa_1}{8}\|u\|^{2},\tag{9}$$

where κ1 > 0 is the constant from the regularity condition.
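The quadratic Hellinger-type expansion used above can be verified numerically for a small shift h. As before, the signal shape λj(t) = 2t e^(−t), the geometry and the value of n are hypothetical choices, and the agreement is only up to the O(δ) correction:

```python
import numpy as np

# Hypothetical configuration (illustration only; values are not from the text).
nu, lambda0, T, n = 1.0, 1.0, 10.0, 100
theta0 = np.array([2.0, 3.0])
detectors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
signal = lambda t: np.where(t > 0, 2.0 * t * np.exp(-t), 0.0)            # assumed lambda_j
dsignal = lambda t: np.where(t > 0, 2.0 * np.exp(-t) * (1.0 - t), 0.0)   # lambda_j'

s = np.linspace(0.0, T, 20001)
ds = s[1] - s[0]

def intensity(theta):
    """lambda_{j,n}(theta, t) = n (lambda_j(t - tau_j) + lambda_0) on the grid, all j."""
    taus = np.linalg.norm(detectors - theta, axis=1) / nu
    return np.array([n * (signal(s - tj) + lambda0) for tj in taus])

def fisher(theta):
    """I(theta) from the weights J_j(theta)."""
    I = np.zeros((2, 2))
    for pos in detectors:
        d = pos - theta
        rho2 = d @ d
        tj = np.sqrt(rho2) / nu
        J = (dsignal(s - tj) ** 2 / (signal(s - tj) + lambda0)).sum() * ds / (nu ** 2 * rho2)
        I += J * np.outer(d, d)
    return I

h = np.array([0.02, -0.01])   # small shift
root0 = np.sqrt(intensity(theta0))
lhs = ((np.sqrt(intensity(theta0 + h)) - root0) ** 2).sum() * ds
rhs = n / 4.0 * h @ fisher(theta0) @ h   # (n/4) h^T I(theta0) h
```

For shifts of this size the two quantities agree to within a few percent, consistent with the (1 + O(δ)) factor in the expansion.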

Let us denote

$$g(\delta)=\frac1n\inf_{\vartheta_0\in\Theta}\ \inf_{\|\vartheta-\vartheta_0\|>\delta}\sum_{j=1}^{k}\int_0^{T}\Bigl[\sqrt{\lambda_{j,n}(\vartheta,t)}-\sqrt{\lambda_{j,n}(\vartheta_0,t)}\Bigr]^{2}\mathrm dt,$$

and show that g(δ) > 0. Remark that g(δ) does not depend on n. Indeed,

 1n∫T0[√λj,n(ϑ,t)−√λj,n(ϑ0,t)]2dt ≥14(λM+λ0)∫T0[λj(t−τj(ϑ))−λj(t−