# Nonparametric estimation of the incubation time distribution

We discuss nonparametric estimators of the distribution of the incubation time of a disease. The classical approach in these models is to use parametric families like Weibull, log-normal or gamma in the estimation procedure. We analyze instead the nonparametric maximum likelihood estimator (MLE) and show that, under some conditions, its rate of convergence is cube root n and that its limit behavior is given by Chernoff's distribution. We also study smooth estimates, based on the MLE. The density estimates, based on the MLE, are capable of catching finer or unexpected aspects of the density, in contrast with the classical parametric methods. R scripts are provided for the nonparametric methods.

05/09/2022


## 1 Introduction

In the original treatment of classical statistical inverse problems such as the current status model, it was assumed that the nonparametric maximum likelihood estimator (MLE) would converge as a process at rate $\sqrt{n}$ and in particular would be "tight". It was also conjectured that the pointwise limit distribution would be normal ([15], [17], [13]). But it was proved in [6] that the process is not tight, does not converge pointwise at rate $\sqrt{n}$, and that the actual pointwise limit distribution is also not normal, but in fact given by Chernoff's distribution (see [4] and [12]). This fact was noticed for example in [20], who refer for the result to [11], where it is also given.

On the other hand, if we consider differentiable functionals of the model, we are back in $\sqrt{n}$ asymptotics, with normal limit distributions. Theorem 3.1 on p. 183 of [19] gives necessary and sufficient conditions for a functional to be differentiable in a very general setting. We give a short account of the facts relevant to us here, also summarized in [5] and [7].

We need the concept of Hellinger differentiability. Let the unknown distribution $P$ be contained in some class $\mathcal{P}$ of probability measures, which is dominated by a $\sigma$-finite measure $\mu$. Let $P$ have density $p$ with respect to $\mu$. We are interested in estimating some real-valued function $\Theta(P)$ of $P$.

Let, for some $\varepsilon>0$, the collection $\{P_t : t\in[0,\varepsilon)\}$ with $P_0=P$ be a 1-dimensional parametric submodel which is smooth in the following sense:

$$\int\left[t^{-1}\bigl(\sqrt{p_t}-\sqrt{p}\,\bigr)-\tfrac12 a\sqrt{p}\right]^2 d\mu\to 0\quad\text{as } t\downarrow 0,\ \text{for some } a\in L_2(P).$$

Such a submodel is called Hellinger differentiable. This property can be seen as an $L_2$ version of the pointwise differentiability of $t\mapsto\sqrt{p_t}$ at $t=0$ (with $p_0=p$), with the function $a$ playing the role of the so-called score function in classical statistics. For pointwise differentiable $t\mapsto p_t$ we have,

$$\lim_{t\downarrow 0}\frac{\sqrt{p_t}-\sqrt{p_0}}{t}=\frac{1}{2\sqrt{p_0}}\,\frac{\partial}{\partial t}p_t\Big|_{t=0}=\frac12\left(\frac{\partial}{\partial t}\log p_t\Big|_{t=0}\right)\sqrt{p_0}=\frac12\,a\sqrt{p_0}.$$

Therefore, $a$ is also called the score function or score. The collection of scores, obtained by considering all possible one-dimensional Hellinger differentiable parametric submodels, is a linear space, the tangent space at $P$, denoted by $T(P)$.

In the models for inverse problems, to be considered in this paper, we work with a so-called hidden space and an observation space. All Hellinger differentiable submodels that can be formed in the observation space, together with the corresponding score functions, are induced by the Hellinger differentiable paths of densities on the hidden space, according to the following theorem:

###### Theorem 1.

Let $\mathcal{P}$ be a class of probability measures on the hidden space, where $P\in\mathcal{P}$ is induced by the random vector $Y$. Suppose that the path $(P_t)$ to $P$ satisfies

$$\int\left[t^{-1}\bigl(\sqrt{p_t}-\sqrt{p}\,\bigr)-\tfrac12 a\sqrt{p}\right]^2 d\mu\to 0\quad\text{as } t\downarrow 0$$

for some $a\in L_2^0(P)$, where the superscript $0$ means that $\int a\,dP=0$.
Let $S$ be a measurable mapping. Suppose that the induced measures $Q_t=P_tS^{-1}$ and $Q=PS^{-1}$ on the observation space are absolutely continuous with respect to a $\sigma$-finite measure $\nu$, with densities $q_t$ and $q$. Then the path $(Q_t)$ is also Hellinger differentiable, satisfying

$$\int\left[t^{-1}\bigl(\sqrt{q_t}-\sqrt{q}\,\bigr)-\tfrac12 b\sqrt{q}\right]^2 d\nu\to 0\quad\text{as } t\downarrow 0,$$

with $b(z)=E_P\bigl(a(Y)\mid S=z\bigr)$.

For a proof, see [2]. Note that $b\in L_2^0(Q)$. The relation between the scores in the hidden tangent space and the induced scores is expressed by the mapping

$$A : a(\cdot)\mapsto E_P\bigl(a(Y)\mid S=\cdot\,\bigr). \tag{1.1}$$

This mapping is called the score operator. It is continuous and linear. Its range is the induced tangent space, which is contained in $L_2^0(Q)$.

Now $\Theta$ is pathwise differentiable at $P$ if for each Hellinger differentiable path $(P_t)$, with corresponding score $a$, we have

$$\lim_{t\downarrow 0}t^{-1}\bigl(\Theta(P_t)-\Theta(P)\bigr)=\Theta'_P(a),$$

where

$$\Theta'_P : T(P)\to\mathbb{R}$$

is continuous and linear.

$\Theta'_P$ can be written in an inner product form. Since the tangent space is a subspace of the Hilbert space $L_2(P)$, the continuous linear functional $\Theta'_P$ can be extended to a continuous linear functional $\bar\Theta'_P$ on $L_2(P)$. By the Riesz representation theorem, to $\bar\Theta'_P$ belongs a unique $\theta_P\in L_2(P)$, called the gradient, satisfying

$$\bar\Theta'_P(h)=\langle\theta_P,h\rangle_P\quad\text{for all } h\in L_2(P).$$

One gradient plays a special role; it is obtained by extending $\Theta'_P$ to the Hilbert space $\overline{T(P)}$, the closure of the tangent space. This extension is unique, yielding the canonical gradient or efficient influence function $\tilde\theta_P$. The canonical gradient is also obtained by taking the orthogonal projection into $\overline{T(P)}$ of any gradient $\theta_P$, obtained after extension of $\Theta'_P$. Hence $\tilde\theta_P$ is the gradient with minimal norm among all gradients and we have (Pythagoras):

$$\|\theta_P\|_P^2=\|\tilde\theta_P\|_P^2+\|\theta_P-\tilde\theta_P\|_P^2.$$

In our censoring model, differentiability of a functional along the induced Hellinger differentiable paths in the observation space can be proved by looking at the structure of the adjoint of the score operator, according to Theorem 2 below. This theorem was first proved in [19] in a more general setting, allowing for Banach space valued estimands, in which case the proof is slightly more elaborate.

Recall that the adjoint of a continuous linear mapping $A:H\to G$, with $H$ and $G$ Hilbert spaces, is the unique continuous linear mapping $A^*:G\to H$ satisfying

$$\langle Ah,g\rangle_G=\langle h,A^*g\rangle_H\quad\forall\, g\in G,\ h\in H.$$

The score operator $A$ from (1.1) plays the role of this mapping. Its adjoint can be written as a conditional expectation as well. If $b\in L_2(Q)$ and $Z=S(Y)$, then:

$$[A^*b](y)=E_P\bigl(b(Z)\mid Y=y\bigr)\quad\text{a.e.-}[P].$$
###### Theorem 2.

Let $\mathcal{Q}$ be a class of probability measures on the image space of the measurable transformation $S$. Suppose the functional $\Theta:\mathcal{Q}\to\mathbb{R}$ can be written as $\Theta(Q)=K(P)$, with $K$ pathwise differentiable at $P$ in the hidden space, having canonical gradient $\tilde\kappa$.
Then $\Theta$ is differentiable at $Q$ along the collection of induced paths in the observation space obtained via Theorem 1 if and only if

$$\tilde\kappa\in\mathcal{R}(A^*), \tag{1.2}$$

where $A$ is the score operator. If (1.2) holds, then the canonical gradient $\tilde\kappa$ of $K$ and the canonical gradient $\tilde\theta$ of $\Theta$ are related by

$$\tilde\kappa=A^*\tilde\theta.$$

We consider the following model, used for estimating the distribution of the incubation time of a disease. In this model there is an infection time $U$, uniformly distributed on an interval $[0,E]$, where $E$ ("exposure time") has an absolutely continuous distribution function $F_E$ on an interval $[0,M_2]$, and where $U$ is uniform on $[0,E]$, conditionally on $E$. Moreover, there is an incubation time $V$ with an absolutely continuous distribution function $F_0$ on an interval $[0,M_1]$ and a time of becoming symptomatic $S=U+V$. We assume that $U$ and $V$ are independent, conditionally on $E$. Our observations consist of the pairs

$$(E_i,S_i),\qquad i=1,\dots,n.$$

The model is for example considered in [16], [3], [1] and [9].

We define the (convolution) density by

$$q_F(e,s)=e^{-1}\{F(s)-F(s-e)\}=e^{-1}\int_{u=(s-e)_+}^{s}dF(u),\qquad e>0,\ s\in[0,M_1+M_2], \tag{1.3}$$

w.r.t. the product of the measure $dF_E$ of the exposure time and Lebesgue measure on $[0,M_1+M_2]$, where $M_1$ is the upper bound for the incubation time and $M_2$ is the upper bound for the exposure time.
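The R scripts accompanying the paper implement these quantities; as a minimal language-agnostic illustration of (1.3), here is a Python sketch (the function name and the callable-`F` interface are ours, not from the paper):

```python
def q_F(e, s, F):
    """Density q_F(e, s) = {F(s) - F(s - e)} / e from (1.3), where F is a
    distribution function vanishing on the negative axis."""
    if e <= 0:
        raise ValueError("exposure length e must be positive")
    return (F(s) - F(max(s - e, 0.0))) / e
```

For example, with $F$ the uniform distribution function on $[0,2]$, `q_F(1.0, 1.5, F)` returns $(0.75-0.25)/1=0.5$.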

For estimating the distribution function $F_0$ of the incubation time, usually parametric distributions are used, like the Weibull, log-normal or gamma distribution. However, in [9] the nonparametric maximum likelihood estimator is used. The maximum likelihood estimator maximizes the function

$$\ell(F)=n^{-1}\sum_{i=1}^{n}\log\{F(S_i)-F(S_i-E_i)\}=\int\log\{F(s)-F(s-e)\}\,d\mathbb{Q}_n(e,s) \tag{1.4}$$

over all distribution functions $F$ on $[0,M_1]$, see [9]. Here $\mathbb{Q}_n$ is the empirical distribution function of the pairs $(E_i,S_i)$, $i=1,\dots,n$. We prove that, under some conditions on the underlying distributions, the MLE of the distribution function of the incubation time has cube root convergence and converges in distribution, after standardization, to Chernoff's distribution, see Theorem 4.
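The log-likelihood (1.4) is straightforward to evaluate for a candidate discrete distribution function. A minimal Python sketch (the paper provides R scripts; all names here are ours):

```python
import numpy as np

def cdf(x, support, masses):
    """Right-continuous CDF of a discrete distribution with point
    masses `masses` at the points `support` (1-d arrays)."""
    x = np.atleast_1d(x)
    return np.array([masses[support <= xi].sum() for xi in x])

def log_likelihood(support, masses, E, S):
    """ell(F) from (1.4): the average of log{F(S_i) - F(S_i - E_i)}."""
    probs = cdf(S, support, masses) - cdf(S - E, support, masses)
    return float(np.mean(np.log(probs)))
```

The MLE maximizes this criterion over the point masses on the support described in Section 2.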

So this is a clear example of a situation where the MLE does not converge pointwise at rate $\sqrt{n}$ and has asymptotic properties similar to those of the MLE in the interval censoring models. But deriving this is considerably more difficult than it is for the current status model and, in contrast with the proof for the current status model, relies heavily on smooth functional theory, as will be clear from the proof of Theorem 4. Apart from this, there also exist differentiable functionals of the model, of which we now give two examples.

Example 1 We can consider the "mean functional":

$$F\mapsto\int_{x\in[0,M_1]}x\,dF(x).$$

The score operator is of the form

$$[Aa](e,s)=E\bigl[a(V)\mid(E,S)=(e,s)\bigr]=\frac{\int_{v\ge 0,\,v\in(s-e,s]}a(v)\,dF(v)}{F(s)-F(s-e)}, \tag{1.5}$$

and its adjoint is given by

$$[A^*b](v)=E\bigl[b(E,S)\mid V=v\bigr]=\int_{e>0}e^{-1}\int_{s\in(v,v+e)}b(e,s)\,ds\,dF_E(e). \tag{1.6}$$

Defining

$$\phi_F(u)=\int_{y\le u}a(y)\,dF(y),$$

we get the following equation for $a$:

$$[A^*Aa](v)=\int_{e>0}e^{-1}\int_{s\in(v,v+e)}\frac{\phi_F(s)-\phi_F(s-e)}{F(s)-F(s-e)}\,ds\,dF_E(e)=v-\int x\,dF(x),\qquad v\in[0,M_1]. \tag{1.7}$$

By differentiating w.r.t. $v$, we find that $\phi=\phi_F$ is also the solution of the following equation in $\phi$:

$$\int_{e>0}e^{-1}\left[\frac{\phi(v+e)-\phi(v)}{F(v+e)-F(v)}-\frac{\phi(v)-\phi(v-e)}{F(v)-F(v-e)}\right]dF_E(e)=1,\qquad v\in[0,M_1]. \tag{1.8}$$

The canonical gradient is in this case given by:

$$\tilde\theta_F(e,s)=\frac{\phi_F(s)-\phi_F(s-e)}{F(s)-F(s-e)}.$$

The solution $\phi_F$ is shown in Figure 1, where we chose $F$ to be a Weibull distribution function, truncated on $[0,M_1]$. The distribution function of the exposure time was chosen to be a uniform distribution function (these distributions were also used in the simulations in [9]).
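Data from this model are easy to simulate. The following Python sketch draws pairs $(E_i,S_i)$ with a uniform exposure distribution and a truncated Weibull incubation time; the parameter values are placeholders of our own choosing, not the values used in the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, M1=20.0, M2=30.0, shape=3.0, scale=8.0):
    """Draw n pairs (E_i, S_i): E is the exposure time, U | E is uniform
    on [0, E], V is a Weibull incubation time truncated to [0, M1], and
    S = U + V. All parameter values are illustrative placeholders."""
    E = rng.uniform(0.0, M2, size=n)          # uniform exposure distribution
    U = rng.uniform(0.0, E)                   # infection time, given E
    V = scale * rng.weibull(shape, size=n)    # incubation time
    while (V > M1).any():                     # truncate to [0, M1] by rejection
        bad = V > M1
        V[bad] = scale * rng.weibull(shape, size=bad.sum())
    return E, U + V
```

Note that $S-E=U-E+V\le V$, so the differences $S_i-E_i$ never exceed the incubation bound $M_1$.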

This leads us to expect the following asymptotic normality result:

$$\sqrt{n}\left\{\int x\,d\hat F_n(x)-\int x\,dF_0(x)\right\}\stackrel{\mathcal{D}}{\longrightarrow}N(0,\sigma^2), \tag{1.9}$$

where $N(0,\sigma^2)$ is a normal distribution with mean zero and variance

$$\sigma^2=\bigl\|\tilde\theta_{F_0}\bigr\|_Q^2=-\int_0^{M_1}\phi_{F_0}(x)\,dx.$$

In fact, using $\tilde\theta_{F_0}=Aa$, we get:

$$\bigl\|\tilde\theta_{F_0}\bigr\|_Q^2=\bigl\langle Aa,\tilde\theta_{F_0}\bigr\rangle_Q=\bigl\langle a,A^*\tilde\theta_{F_0}\bigr\rangle_{F_0}=\bigl\langle a,\tilde\kappa_{F_0}\bigr\rangle_{F_0}=\int a(x)\tilde\kappa_{F_0}(x)\,dF_0(x)=\int_{u=0}^{M_1}\tilde\kappa'_{F_0}(u)\bigl\{\phi_{F_0}(M_1)-\phi_{F_0}(u)\bigr\}\,du=-\int_{u=0}^{M_1}\phi_{F_0}(u)\,du.$$

Example 2
We can also apply the theory to estimators which converge at a lower speed. For example, if we want to estimate the density w.r.t. Lebesgue measure at a point $t$ by a kernel estimator in the model discussed in Example 1, equation (1.7) is replaced by:

$$[A^*Aa](v)=\int_{e>0}e^{-1}\int_{s\in(v,v+e)}\frac{\phi(s)-\phi(s-e)}{F(s)-F(s-e)}\,ds\,dF_E(e)=K_h(t-v)-\int K_h(t-y)\,dF(y),\qquad v\in[0,M_1], \tag{1.10}$$

which becomes, after differentiation w.r.t. $v$:

$$\int_{e>0}e^{-1}\left[\frac{\phi(v+e)-\phi(v)}{F(v+e)-F(v)}-\frac{\phi(v)-\phi(v-e)}{F(v)-F(v-e)}\right]dF_E(e)=-K_h'(t-v),\qquad v\in[0,M_1], \tag{1.11}$$

where $K_h(x)=h^{-1}K(x/h)$ and $K$ is a symmetric kernel with support $[-1,1]$, for example the triweight kernel

$$K(x)=\frac{35}{32}\bigl(1-x^2\bigr)^3\,1_{[-1,1]}(x). \tag{1.12}$$
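The triweight kernel (1.12) and its rescaled version are one-liners; a Python sketch for use in the examples below (names ours):

```python
import numpy as np

def K(x):
    """Triweight kernel (1.12): (35/32)(1 - x^2)^3 on [-1, 1], 0 elsewhere."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1.0, (35.0 / 32.0) * (1.0 - x**2) ** 3, 0.0)

def K_h(x, h):
    """Rescaled kernel K_h(x) = K(x / h) / h."""
    return K(np.asarray(x) / h) / h
```

The constant $35/32$ makes $K$ integrate to one over $[-1,1]$.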

This time, the solution $\phi$ is shown in Figure 2.

If one would keep the bandwidth fixed, one would indeed get $\sqrt{n}$ convergence again, but usually one lets the bandwidth tend to zero in such a way that the squared bias and the variance are of the same order. In this case this means that one takes $h$ of order $n^{-1/7}$, if $n$ is the sample size, which gives a rate of convergence of order $n^{2/7}$ for the estimator itself, see [9]. Note that in [9] the right-hand side of equation (1.11) is $K_h'(t-v)$ instead of $-K_h'(t-v)$, which leads to a picture which is flipped around w.r.t. the $x$-axis. But this makes no difference for the estimate of the variance or the asymptotic distribution result.

This leads us to expect the following asymptotic normality result:

$$n^{2/7}\left\{\int K_{h_n}(t-y)\,d\hat F_n(y)-\int K_{h_n}(t-y)\,dF_0(y)\right\}\stackrel{\mathcal{D}}{\longrightarrow}N(0,\sigma^2),$$

where

$$\sigma^2=\lim_{n\to\infty}n^{-3/7}\bigl\|\tilde\theta_{t,h_n,F_0}\bigr\|_Q^2,\qquad\lim_{n\to\infty}n^{1/7}h_n=c>0,$$

and $\tilde\theta_{t,h_n,F_0}$ is the canonical gradient in the observation space if we evaluate the density estimate at the point $t$ and use bandwidth $h_n$, see [9]. As in Example 1, $\sigma^2$ also has the representation:

$$\sigma^2=\lim_{n\to\infty}n^{-3/7}\int_{u=0}^{M_1}\phi_n(u)K_{h_n}'(t-u)\,du,$$

where $\phi_n$ solves (1.11) for $h=h_n$, with $\lim_{n\to\infty}n^{1/7}h_n=c$ for some $c>0$.
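Given the point masses of a discrete estimate such as the MLE, the smoothed density estimate $\int K_h(t-y)\,d\hat F_n(y)$ of this example is a finite sum; a Python sketch (names ours, using the triweight kernel (1.12)):

```python
import numpy as np

def smooth_density(t, h, support, masses):
    """Kernel density estimate at t: sum over j of
    masses[j] * K_h(t - support[j]), with K the triweight kernel (1.12)."""
    u = (t - np.asarray(support, dtype=float)) / h
    K = np.where(np.abs(u) <= 1.0, (35.0 / 32.0) * (1.0 - u**2) ** 3, 0.0)
    return float(np.sum(np.asarray(masses) * K / h))
```

With a single unit mass at $0$ and $h=1$, the estimate at $t=0$ is $K(0)=35/32$.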

The organization of the paper is as follows. In Section 2 we give necessary and sufficient conditions for $\hat F_n$ to be the MLE. Under some extra conditions, we derive consistency of the MLE in Section 3.

In Section 4 we discuss the limit distribution of the MLE. To our knowledge, this result has not been derived before. It is somewhat analogous to the methods used in [7] for deriving the asymptotic distribution of the MLE for the case of interval censoring, case 2, in Section 4.2 of [7]. Although about 25 years have passed since the publication of this proof, and although it would be nice to have a simpler proof of this result, no other proofs are known to me. So we have to go through similar, but still somewhat different, steps again.

In Section 5 we discuss the behavior of smooth estimates of the distribution function and density, based on the nonparametric MLE. We end with some concluding remarks in Section 6. The Appendix, Section 7, contains technical details of the proof of the convergence of the MLE to Chernoff's distribution.

## 2 Characterization of the nonparametric maximum likelihood estimator (MLE)

Let $\mathbb{Q}_n$ be the empirical distribution function of the pairs $(E_i,S_i)$. Then, for a distribution function $F$ on $[0,M_1]$, which is zero on $(-\infty,0)$, we define the process

$$W_{n,F}(t)=\int\left\{1-\frac{1_{\{s-e<t\le s\}}}{F(s)-F(s-e)}\right\}d\mathbb{Q}_n(e,s), \tag{2.1}$$

defining $0/0=0$. The following lemma characterizes the MLE.

###### Lemma 1.

Let $\mathcal{F}$ be the set of discrete distribution functions with mass concentrated on a set of points $T_1<\dots<T_m$, where the $T_j$ run through the strictly positive points among the $S_i$ and $S_i-E_i$. Then $\hat F_n\in\mathcal{F}$ maximizes

$$\ell(F)=\int\log\{F(s)-F(s-e)\}\,d\mathbb{Q}_n(e,s) \tag{2.2}$$

over $\mathcal{F}$ if and only if

1.  $W_{n,\hat F_n}(T_j)\ge 0,\quad j=1,\dots,m$,
2.  $W_{n,\hat F_n}(T_j)=0$ if $T_j$ is a point of strictly positive mass of $\hat F_n$.
###### Proof.

First suppose $\hat F_n$ satisfies (i) and (ii) and let $F\in\mathcal{F}$. Then we have, using the concavity of the log function and the inequality $\log x-\log y\le(x-y)/y$:

$$\ell(F)-\ell(\hat F_n)\le\int\frac{F(s)-F(s-e)-\{\hat F_n(s)-\hat F_n(s-e)\}}{\hat F_n(s)-\hat F_n(s-e)}\,d\mathbb{Q}_n(e,s)=\int\frac{\int_{t\in(s-e,s]}dF(t)-\int_{t\in(s-e,s]}d\hat F_n(t)}{\hat F_n(s)-\hat F_n(s-e)}\,d\mathbb{Q}_n(e,s)=\int W_{n,\hat F_n}(t)\,d\hat F_n(t)-\int W_{n,\hat F_n}(t)\,dF(t)=-\int W_{n,\hat F_n}(t)\,dF(t)\le 0,$$

using (ii) and next (i) on the last line.

Conversely, suppose $\hat F_n$ maximizes $\ell$ over $\mathcal{F}$. Then we must have $\hat F_n(S_i)-\hat F_n(S_i-E_i)>0$ for all $i$, since otherwise $\ell(\hat F_n)=-\infty$. We have:

$$\lim_{\varepsilon\downarrow 0}\varepsilon^{-1}\bigl\{\ell\bigl((\hat F_n+\varepsilon 1_{[T_j,\infty)})/(1+\varepsilon)\bigr)-\ell(\hat F_n)\bigr\}=\int\frac{1_{\{s-e<T_j\le s\}}}{\hat F_n(s)-\hat F_n(s-e)}\,d\mathbb{Q}_n(e,s)-1=-W_{n,\hat F_n}(T_j)\le 0,$$

which gives (i). If $\hat F_n$ has mass at $T_j$, we also have:

$$\lim_{\varepsilon\downarrow 0}\varepsilon^{-1}\bigl\{\ell\bigl((\hat F_n-\varepsilon 1_{[T_j,\infty)})/(1-\varepsilon)\bigr)-\ell(\hat F_n)\bigr\}=1-\int\frac{1_{\{s-e<T_j\le s\}}}{\hat F_n(s)-\hat F_n(s-e)}\,d\mathbb{Q}_n(e,s)=W_{n,\hat F_n}(T_j)\le 0,$$

and hence:

$$W_{n,\hat F_n}(T_j)=0.$$
∎

The lemma shows that the point process $\{(T_j,W_{n,\hat F_n}(T_j))\}$, where $T_j$ runs through the ordered points $S_i$ and $S_i-E_i$, excluding the nonpositive points, has second coordinates equal to zero at points where the probability distribution, corresponding to $\hat F_n$, has positive mass. A picture of this point process is given in Figure 3.

## 3 Consistency of the MLE

We have the following result.

###### Theorem 3.

Let $F_0$ have a strictly positive density on $[0,M_1]$, for some $M_1>0$. Furthermore, let $F_E$ be zero on an interval $[0,\varepsilon)$, where $\varepsilon>0$, and have a strictly positive continuous density on the interval $[\varepsilon,M_2]$. Let $\hat F_n$ be the MLE over $\mathcal{F}$, where $\mathcal{F}$ is the set of distribution functions with mass at the set of points $T_j$, where the $T_j$ run through the ordered set of points $S_i$ and $S_i-E_i$, excluding the nonpositive points. Then the MLE converges almost surely to $F_0$ on $[0,M_1]$.

There are many different ways to prove consistency, but we prefer the elegant method of [14], which is used in the proof below.

###### Proof.

We start by observing that, by the fact that $\hat F_n$ is the MLE, we must have:

$$\int\frac{F_0(s)-F_0(s-e)}{\hat F_n(s)-\hat F_n(s-e)}\,d\mathbb{Q}_n(e,s)\le 1.$$

On a set of probability one, the empirical probability measure $\mathbb{Q}_n$ converges weakly to the underlying measure. Fixing such a realization, we get by the Helly compactness theorem a subsequence $(\hat F_{n_k})$ converging vaguely to a subdistribution function $F$, for which we get the inequality:

$$\int e^{-1}\left\{\int\frac{\{F_0(s)-F_0(s-e)\}^2}{F(s)-F(s-e)}\,ds\right\}dF_E(e)\le 1. \tag{3.1}$$

The minimum of

$$\int e^{-1}\left\{\int\frac{\{F_0(s)-F_0(s-e)\}^2}{G(s)-G(s-e)}\,ds\right\}dF_E(e) \tag{3.2}$$

over subdistribution functions $G$ is attained by a nondegenerate distribution function, since otherwise (3.2) could be made smaller by multiplying $G$ by a constant bigger than 1. This means that we may assume that the minimizer of (3.2) satisfies

$$\int e^{-1}\left\{\int\{G(s)-G(s-e)\}\,ds\right\}dF_E(e)=1. \tag{3.3}$$

Minimizing (3.2) under the condition (3.3) is the same as minimizing

$$\int e^{-1}\int\left\{\frac{\{F_0(s)-F_0(s-e)\}^2}{G(s)-G(s-e)}+G(s)-G(s-e)\right\}ds\,dF_E(e),$$

without this condition, using a Lagrange multiplier argument (with Lagrange multiplier equal to 1).

For $s-e\le 0$ we have $F_0(s-e)=0$, and the minimum of

$$\frac{F_0(s)^2}{x}+x,\qquad x>0,$$

is attained by taking $x=F_0(s)$. If $s-e>0$, we find that

$$\frac{\{F_0(s)-F_0(s-e)\}^2}{y-x}+y-x,\qquad x\ge 0,\ y-x>0,$$

is minimized by taking $y-x=F_0(s)-F_0(s-e)$; since the minimizing increments coincide with those of $F_0$ on all intervals $(s-e,s]$, we must have $G(s)=F_0(s)$ and $G(s-e)=F_0(s-e)$ for the minimizing function $G$.

So the minimum of (3.2) is equal to 1 and is attained for $G=F_0$. This means that the limit $F$ of the subsequence $(\hat F_{n_k})$ must be equal to $F_0$, since otherwise the left-hand side of (3.1) would be strictly bigger than 1. Since this holds for all such subsequences, the result now follows. ∎

## 4 Asymptotic distribution of the MLE

In this section we discuss the proof of the theorem below.

###### Theorem 4.

Let $F_0$ have a continuous density $f_0$, staying away from zero on its support $[0,M_1]$, $M_1>0$, and let the exposure time have a continuous density $f_E$ on its support $[0,M_2]$, for some $M_2>0$, with a bounded derivative on the interval $(0,M_2)$. Let $\hat F_n$ be the MLE, where the set of distribution functions $\mathcal{F}$ has the same meaning as in Theorem 3. Then we have at a point $t_0\in(0,M_1)$:

$$n^{1/3}\bigl\{\hat F_n(t_0)-F_0(t_0)\bigr\}\big/\bigl(4f_0(t_0)/c_E\bigr)^{1/3}\stackrel{d}{\longrightarrow}\mathop{\mathrm{argmin}}_{t}\bigl\{W(t)+t^2\bigr\}, \tag{4.1}$$

where $W$ is two-sided Brownian motion on $\mathbb{R}$, originating from zero, and where the constant $c_E$ is given by:

$$c_E=\int e^{-1}\left[\frac{1}{F_0(t_0)-F_0(t_0-e)}+\frac{1}{F_0(t_0+e)-F_0(t_0)}\right]dF_E(e). \tag{4.2}$$

The result shows that the limit distribution is given by Chernoff's distribution. This distribution also occurs as the limit distribution in the current status model and, more generally, in the interval censoring model in the so-called separated case. The jump in difficulty of the proof in going from the result for the current status model to the interval censoring, case 2, model is considerable. Similarly, the present proof is not simple. The proofs of the lemmas will be given in the Appendix.
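The limit in (4.1) can be visualized by discretizing two-sided Brownian motion on a grid and locating the minimum of $W(t)+t^2$; a crude Monte Carlo sketch of this (not from the paper, all choices ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def chernoff_argmin(half_width=3.0, step=1e-3):
    """One approximate draw of argmin_t {W(t) + t^2}, with W two-sided
    Brownian motion originating from zero, on a grid over
    [-half_width, half_width]."""
    t = np.arange(step, half_width + step, step)
    right = np.cumsum(rng.normal(0.0, np.sqrt(step), t.size))  # W on (0, hw]
    left = np.cumsum(rng.normal(0.0, np.sqrt(step), t.size))   # W on [-hw, 0)
    ts = np.concatenate([-t[::-1], [0.0], t])
    w = np.concatenate([left[::-1], [0.0], right])
    return float(ts[np.argmin(w + ts**2)])
```

Repeated draws give a histogram that is symmetric around zero, approximating Chernoff's distribution up to discretization error.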

We have the following lemma.

###### Lemma 2.

Let, under the conditions of Theorem 4, the process $X_n$ be defined by:

$$X_n(t)=n^{2/3}\int\frac{1_{\{s-e<t_0+n^{-1/3}t\le s\}}-1_{\{s-e<t_0\le s\}}}{F_0(s)-F_0(s-e)}\,d\mathbb{Q}_n(e,s),\qquad t\in\mathbb{R}.$$

Let $t_0$ be an interior point of the support of $F_0$. Then $X_n$ converges in distribution, in the Skorohod topology, to the process

$$t\mapsto\sqrt{c_E}\,W(t),\qquad t\in\mathbb{R},$$

where $W$ and $c_E$ are the same as in Theorem 4.

We also have the following property of the MLE.

###### Lemma 3.

Let the set $\mathcal{F}$ be as in Lemma 1 and let $\hat F_n\in\mathcal{F}$. We define the weight process $G_n$ by:

$$G_n(t)=\int_{s\le t}\left\{\frac{1_{\{\hat F_n(s)-\hat F_n(s-e)>0\}}}{\{\hat F_n(s)-\hat F_n(s-e)\}^2}+\frac{1_{\{\hat F_n(s+e)-\hat F_n(s)>0\}}}{\{\hat F_n(s+e)-\hat F_n(s)\}^2}\right\}d\mathbb{Q}_n(e,s), \tag{4.3}$$

where, as usual, we define $0/0=0$. If $\hat F_n$ maximizes $\ell$ in (1.4), then $\hat F_n(t)$ is for all $t$ the left-continuous slope, evaluated at $G_n(t)$, of the greatest convex minorant of the "self-induced" cusum diagram, consisting of the point $(0,0)$ and the points

$$\left(G_n(T_i),\ \int_{u\in[0,T_i]}\hat F_n(u-)\,dG_n(u)+W_{n,\hat F_n}(T_i)\right),\qquad i=1,\dots,m, \tag{4.4}$$

where $W_{n,\hat F_n}$ is defined by (2.1).

As explained in [9], after a preliminary reduction, removing points where we know that the MLE can only be equal to $0$ or $1$, one can compute the MLE by the iterative convex minorant algorithm, in which one iteratively computes the greatest convex minorant of the cusum diagram with the point $(0,0)$ and the points

$$\left(G_{n,F}(t),\ \int_{u\in[0,t]}F(u-)\,dG_{n,F}(u)+W_{n,F}(t)\right), \tag{4.5}$$

where $G_{n,F}$ is defined as in (4.3), but with $\hat F_n$ replaced by $F$, the temporary estimate of the distribution function at the current iteration, and where $t$ runs through the order statistics of the observations $S_i$ and $S_i-E_i$ (excluding zero). The MLE corresponds to a stationary point of this algorithm and is given by the left-continuous slope of the greatest convex minorant of the cusum diagram, see Figure 4. See [9] for further remarks on this algorithm.
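The greatest convex minorant step in this algorithm is a generic pool-adjacent-violators computation on the slopes of the cusum diagram. A Python sketch of this building block (a sketch of the standard technique, not the paper's R implementation):

```python
import numpy as np

def gcm_slopes(x, y):
    """Left-continuous slopes of the greatest convex minorant of the points
    (x_0, y_0), ..., (x_m, y_m), returned per segment (x_{i-1}, x_i]:
    pool-adjacent-violators on the segment slopes, weighted by dx."""
    dx = np.diff(np.asarray(x, dtype=float))
    dy = np.diff(np.asarray(y, dtype=float))
    blocks = []  # each block: [pooled slope, total dx weight, #segments]
    for s, w in zip(dy / dx, dx):
        blocks.append([s, w, 1])
        # merge while the slopes fail to be increasing (convexity violated)
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            s2, w2, c2 = blocks.pop()
            s1, w1, c1 = blocks.pop()
            blocks.append([(s1 * w1 + s2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    return np.concatenate([[s] * c for s, w, c in blocks])
```

For the cusum points $(0,0),(1,2),(2,2),(3,3)$ the minorant is the line of slope 1 through the origin, so all three segment slopes are 1.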

A fundamental tool in our proof is the so-called "switch relation", see, e.g., Section 3.8 in [10]. Let the process $V_n$ be defined by

$$V_n(t)=\int_{u\in[0,t]}\hat F_n(u-)\,dG_n(u)+W_{n,\hat F_n}(t), \tag{4.6}$$

where $G_n$ is the function defined by (4.3) at the points $T_i$ and extended to a right-continuous piecewise constant function elsewhere. We define, for $a\in(0,1)$,

$$U_n(a)=\mathop{\mathrm{argmin}}_{t\in[0,\infty)}\bigl\{V_n(t)-aG_n(t)\bigr\}.$$

Then we have the switch relation:

$$\hat F_n(t)\ge a\iff G_n(t)\ge G_n(U_n(a))\iff t\ge U_n(a),$$

see, e.g., (3.35) and Figure 3.7 in Section 3.8 of [10].

We have:

$$P\bigl\{n^{1/3}\{\hat F_n(t_0)-F_0(t_0)\}\ge x\bigr\}=P\bigl\{n^{1/3}\{U_n(a_0+n^{-1/3}x)-t_0\}\le 0\bigr\},$$

where $a_0=F_0(t_0)$. Using the property that the argmin function does not change if we add constants to the objective function, we get:

$$U_n(a_0+n^{-1/3}x)=\mathop{\mathrm{argmin}}_{t\in[0,\infty)}\bigl\{V_n(t)-(a_0+n^{-1/3}x)G_n(t)\bigr\}=\mathop{\mathrm{argmin}}_{t_0+n^{-1/3}t\ge 0}\Bigl\{\int_{u\in(t_0,t_0+n^{-1/3}t]}\bigl\{\hat F_n(u-)-a_0\bigr\}\,dG_n(u)+W_{n,\hat F_n}(t_0+n^{-1/3}t)-W_{n,\hat F_n}(t_0)-n^{-1/3}x\bigl\{G_n(t_0+n^{-1/3}t)-G_n(t_0)\bigr\}\Bigr\},$$

where

$$\int_{u\in(t,v]}\hat F_n(u-)\,dG_n(u)=-\int_{u\in[v,t)}\hat F_n(u-)\,dG_n(u),\qquad\text{if } v<t.$$

We have:

 Wn,^Fn(t