Polynomials under Ornstein-Uhlenbeck noise and an application to inference in stochastic Hodgkin-Huxley systems

03/30/2020
by   Reinhard Höpfner, et al.

We discuss estimation problems where a polynomial is observed under Ornstein-Uhlenbeck noise over a long time interval. We prove local asymptotic normality (LAN) and specify asymptotically efficient estimators. We apply this to the following problem: feeding noise into the classical (deterministic) Hodgkin-Huxley model of neuroscience, we are interested in asymptotically efficient estimation of the parameters of the noise process.


1 Introduction

Problems of parametric inference when we observe over a long time interval a process of type

\[ Y_t \;=\; \sum_{i=1}^{p} \vartheta_i\,\varphi_i(t) \;+\; X_t\,, \qquad t \ge 0, \]

with unknown parameters $\vartheta = (\vartheta_1,\dots,\vartheta_p)$ and with a given set of functions $\varphi_1,\dots,\varphi_p$, where $X$ is an Ornstein-Uhlenbeck process, have been considered in a number of papers; alternatively, such models can be written as

\[ dY_t \;=\; \big( S_\vartheta(t) \,-\, \tau\, Y_t \big)\, dt \;+\; c\, dW_t \]

with related functions $S_\vartheta$; here $\tau > 0$ is the mean-reversion parameter and $c > 0$ the volatility of the Ornstein-Uhlenbeck noise. Also, the driving Brownian motion in the Ornstein-Uhlenbeck type equations has been replaced by certain Lévy processes or by fractional Brownian motion. Many papers focus on orthonormal sets of periodic functions with known periodicity. To determine estimators and limit laws for rescaled estimation errors in this case, periodicity allows one to exploit ergodicity or stationarity with respect to the time grid of multiples of the period. We mention Dehling, Franke and Kott [3], Franke and Kott [6] and Dehling, Franke and Woerner [4], where limit distributions for least squares estimators and maximum likelihood estimators are obtained. Rather than in asymptotic properties, Pchelintsev [26] is interested in methods which allow one to reduce squared risk (i.e. risk defined with respect to one particular loss function) uniformly over determined subsets of the parameter space, at fixed and finite sample size. Asymptotic efficiency of estimators is the topic of Höpfner and Kutoyants [13], where sums as above are replaced by periodic functions of known periodicity whose shape depends on a finite-dimensional parameter. When the parametrization is smooth enough, local asymptotic normality in the sense of LeCam (see LeCam [23], Hajek [7], Davies [2], Pfanzagl [27], LeCam and Yang [24]; with a different notion of local neighbourhood see Ibragimov and Khasminskii [18] and Kutoyants [22]) allows one to identify a limit experiment with the following property: risk (asymptotically as the time of observation tends to infinity, and with some uniformity over small neighbourhoods of the true parameter) is bounded below by a corresponding minimax risk in the limit experiment. This assertion holds with respect to a broad class of loss functions.

With a view to an estimation problem which arises in stochastic Hodgkin-Huxley models and which we explain below, the present paper deals with parameter estimation when one observes a process

\[ Y_t \;=\; \sum_{j=0}^{\nu} \vartheta_j\, t^j \;+\; X_t\,, \qquad t \ge 0, \tag{1} \]

with leading coefficient $\vartheta_\nu > 0$, so that paths of $Y$ almost surely tend to $+\infty$; here $X$ is an Ornstein-Uhlenbeck process with mean-reversion parameter $\tau > 0$. Then good estimators for the parameters based on observation of $Y$ up to time $n$ show the following behaviour: whereas estimation of the parameters $\tau$ and $\vartheta_0$ works at the 'usual' rate $\sqrt{n}$, the parameters $\vartheta_j$ with $1 \le j \le \nu$ can be estimated at rate $n^{\,j+1/2}$ as $n \to \infty$. With rescaled time, we prove local asymptotic normality in the sense of LeCam, with a diagonal local scale matrix collecting these rates and with a deterministic limit information process, at every point of the parameter space. As a consequence of local asymptotic normality, there is a local asymptotic minimax theorem (Ibragimov and Khasminskii [18], Davies [2], LeCam and Yang [24], Kutoyants [22], Höpfner [9]) which allows one to identify optimal limit distributions for rescaled estimation errors in the statistical model (1); the theorem also specifies a particular expansion of rescaled estimation errors (in terms of the central sequence in the local experiments at the true parameter) which characterizes asymptotic efficiency. We can construct asymptotically efficient estimators for the model (1), and these estimators have a simple and explicit form.
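To fix ideas, the following minimal Python sketch simulates a degree-two instance of model (1) and recovers the polynomial part by ordinary least squares. All parameter values, the random seed and the least-squares fit itself are illustrative choices made for this sketch only; the fit merely visualizes the rates $n^{\,j+1/2}$ and is not the asymptotically efficient estimator constructed in section 3.

```python
import numpy as np

rng = np.random.default_rng(0)

tau, c = 1.0, 0.5                     # OU parameters in dX_t = -tau X_t dt + c dW_t
theta = np.array([1.0, -0.3, 0.02])   # illustrative theta_0, theta_1, theta_2 (theta_2 > 0)
T, dt = 1000.0, 0.01
t = np.arange(0.0, T, dt)

# exact Gaussian OU transition:
#   X_{t+dt} = e^{-tau dt} X_t + N(0, c^2 (1 - e^{-2 tau dt}) / (2 tau))
a = np.exp(-tau * dt)
sd = c * np.sqrt((1.0 - a**2) / (2.0 * tau))
X = np.empty_like(t)
X[0] = 0.0
for i in range(1, len(t)):
    X[i] = a * X[i - 1] + sd * rng.standard_normal()

Y = np.polyval(theta[::-1], t) + X    # observed path of model (1) with nu = 2

# illustrative least-squares fit of the polynomial part: the coefficient
# theta_j is recovered with an error of order T^{-(j+1/2)}
design = np.vander(t, 3, increasing=True)
theta_hat, *_ = np.linalg.lstsq(design, Y, rcond=None)
print("true:", theta, "LSE:", theta_hat)
```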

We turn to an application of the results obtained for model (1). Consider the problem of parameter estimation in a stochastic Hodgkin-Huxley model for the spiking behaviour of a single neuron belonging to an active network:

\[ \begin{aligned} dV_t \;&=\; d\xi_t \;-\; F(V_t, n_t, m_t, h_t)\, dt \\ dn_t \;&=\; \big( \alpha_n(V_t)\,(1-n_t) - \beta_n(V_t)\, n_t \big)\, dt \\ dm_t \;&=\; \big( \alpha_m(V_t)\,(1-m_t) - \beta_m(V_t)\, m_t \big)\, dt \\ dh_t \;&=\; \big( \alpha_h(V_t)\,(1-h_t) - \beta_h(V_t)\, h_t \big)\, dt \end{aligned} \tag{2} \]

where the input received by the neuron is modelled by the increments of the stochastic process

\[ \xi_t \;=\; \vartheta\, t \;+\; X_t\,, \qquad dX_t \;=\; -\tau\, X_t\, dt \;+\; c\, dW_t\,, \qquad t \ge 0. \tag{3} \]

The functions $\alpha_x$ and $\beta_x$, $x \in \{n, m, h\}$, are those of Izhikevich [20] pp. 37–39. The stochastic model (2) extends the classical deterministic model with constant rate $a$ of input

\[ dV_t \;=\; a\, dt \;-\; F(V_t, n_t, m_t, h_t)\, dt \tag{4} \]

(with gating equations as in (2)) by taking into account 'noise' in the dendritic tree, where incoming excitatory or inhibitory spike trains emitted by a large number of other neurons in the network add up and decay. See Hodgkin and Huxley [8], Izhikevich [20], Ermentrout and Terman [5] and the literature quoted there for the role of this model in neuroscience. Stochastic Hodgkin-Huxley models have been considered in Höpfner, Löcherbach and Thieullen [10], [11], [12] and Holbach [16]. For suitable data sets, membrane potential data hint at the existence of a non-vanishing quadratic variation, which indicates the need for a stochastic modelization.

In systems (2) or (4), the variable $V$ represents the membrane potential in the neuron; the variables $n$, $m$, $h$ are termed gating variables and represent (in the sense of averages over a large number of channels) opening and closing of ion channels of certain types. The membrane potential can be measured intracellularly in good time resolution, whereas the gating variables in the Hodgkin-Huxley model are not accessible to direct measurement.
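For illustration, here is a minimal Euler-scheme simulation of system (2) with input (3). The rate functions and membrane current below are the classical Hodgkin-Huxley choices in the shifted voltage convention (cf. Izhikevich [20] pp. 37–39); all numerical constants, step sizes and starting values are assumptions made for this sketch only.

```python
import numpy as np

# classical Hodgkin-Huxley rate functions, shifted voltage convention
# (V in mV, resting potential near 0); assumed here for illustration
def a_n(V): return 0.01 * (10.0 - V) / (np.exp((10.0 - V) / 10.0) - 1.0)
def b_n(V): return 0.125 * np.exp(-V / 80.0)
def a_m(V): return 0.1 * (25.0 - V) / (np.exp((25.0 - V) / 10.0) - 1.0)
def b_m(V): return 4.0 * np.exp(-V / 18.0)
def a_h(V): return 0.07 * np.exp(-V / 20.0)
def b_h(V): return 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)

def F(V, n, m, h):
    # membrane current with the standard conductances / reversal potentials
    return (36.0 * n**4 * (V + 12.0)           # potassium
            + 120.0 * m**3 * h * (V - 115.0)   # sodium
            + 0.3 * (V - 10.6))                # leak

rng = np.random.default_rng(1)
dt, steps = 0.01, 200_000             # 2 s of trajectory at 0.01 ms resolution
vartheta, tau, c = 4.0, 0.5, 0.3      # illustrative values for the input (3)

V, n, m, h, X = 0.0, 0.32, 0.05, 0.60, 0.0
Vs = np.empty(steps)
for i in range(steps):
    dX = -tau * X * dt + c * np.sqrt(dt) * rng.standard_normal()
    dxi = vartheta * dt + dX          # increment of the input process (3)
    V += dxi - F(V, n, m, h) * dt     # first equation of (2)
    n += (a_n(V) * (1.0 - n) - b_n(V) * n) * dt
    m += (a_m(V) * (1.0 - m) - b_m(V) * m) * dt
    h += (a_h(V) * (1.0 - h) - b_h(V) * h) * dt
    X += dX
    Vs[i] = V
print("membrane potential range:", Vs.min(), Vs.max())
```

Only the trajectory `Vs` of the membrane potential would be available to the statistician; the gating variables and the input path are latent.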

In a sense of equivalence of experiments as in Holbach [15], the stochastic Hodgkin-Huxley model (2)+(3) corresponds to a submodel of (1). This is of biological importance. Under the assumption that the stochastic model admits a fixed starting point which does not depend on the unknown parameter, we can estimate the components $\vartheta$ and $\tau$ of the unknown parameter in equations (2)+(3) from the evolution of the membrane potential alone, and we have at our disposal simple and explicit estimators with the two properties stated below.

First, with a local parameter parametrizing shrinking neighbourhoods of the true parameter, the rescaled risks

(5)

converge as the time of observation tends to infinity to a limit

(6)

in which the limit laws are expressed in terms of a two-dimensional standard Brownian motion. Here the constant bounding the local neighbourhoods is arbitrary, and the loss function may be any function which is continuous, subconvex and bounded.

Second, we can compare this sequence of estimators to arbitrary estimator sequences which can be defined from observation of the membrane potential up to time $T$, provided their rescaled estimation errors (using the same norming as in (5)) are tight. For all such estimator sequences, the corresponding limiting risk is always greater than or equal to the limit in (6). This is the assertion of the local asymptotic minimax theorem. It makes sure that asymptotically as $T \to \infty$, it is impossible to outperform the simple and explicit estimator sequence which we have at hand.

The paper is organized as follows. Section 2 collects for later use convergence results for certain functionals of the Ornstein-Uhlenbeck process. Section 3 deals with local asymptotic normality (LAN) for the model (1): proposition 1 and theorem 1 in section 3.1 prove LAN, and the local asymptotic minimax theorem is corollary 1 in section 3.1; we introduce and investigate estimators in sections 3.2 and 3.3; theorem 2 in section 3.4 states their asymptotic efficiency. The application to parameter estimation in the stochastic Hodgkin-Huxley model (2)+(3) based on observation of the membrane potential is the topic of the final section 4: see theorem 3 and corollary 2 there.

2 Functionals of the Ornstein-Uhlenbeck process

We state for later use properties of some functionals of the Ornstein-Uhlenbeck process

\[ dX_t \;=\; -\tau\, X_t\, dt \;+\; c\, dW_t\,, \qquad t \ge 0, \tag{7} \]

with fixed starting point $X_0 = x_0$. The constants $\tau > 0$ and $c > 0$ are fixed, and $\mu = \mathcal{N}\big(0, \tfrac{c^2}{2\tau}\big)$ is the invariant measure of the process in (7); $X$ is defined on some probability space $(\Omega, \mathcal{A}, P)$.
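Since the transition of (7) is Gaussian, $X$ can be simulated exactly on a grid. The following sketch (all parameter values are illustrative) checks the empirical long-run variance of a path against the invariant variance $c^2/(2\tau)$:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, c, x0 = 2.0, 1.0, 5.0   # illustrative values
dt, steps = 0.01, 500_000

# exact Gaussian transition of (7):
#   X_{t+dt} = e^{-tau dt} X_t + N(0, c^2 (1 - e^{-2 tau dt}) / (2 tau))
a = np.exp(-tau * dt)
sd = c * np.sqrt((1.0 - a**2) / (2.0 * tau))
X = np.empty(steps)
X[0] = x0
for i in range(1, steps):
    X[i] = a * X[i - 1] + sd * rng.standard_normal()

# long-run empirical variance against the invariant variance c^2 / (2 tau)
print(X[steps // 10:].var(), c**2 / (2.0 * tau))
```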

Lemma 1: For $X$ defined by (7), for every $k \in \mathbb{N}_0$ and every $f \in L^1(\mu)$, we have almost sure convergence as $t \to \infty$:

\[ \frac{1}{t^{k+1}} \int_0^t s^k\, f(X_s)\, ds \;\longrightarrow\; \frac{1}{k+1}\,\mu(f)\,. \]

Proof: ([14] lemma 2.2, [15] lemma 2.5, compare to [1] thm. 1.6.4 p. 33)
1) We first consider bounded functions $f$. The case $k = 0$ is the well known ratio limit theorem for additive functionals of the ergodic diffusion ([22], [9] p. 214). Assuming that the assertion holds for $k-1$, define $A_t := \int_0^t f(X_s)\, ds$. The Stieltjes product formula for semimartingales with paths of locally bounded variation yields

\[ \int_0^t s^k\, f(X_s)\, ds \;=\; t^k A_t \;-\; k \int_0^t s^{k-1} A_s\, ds\,. \]

Under our assumption, both terms on the right hand side are of stochastic order $O(t^{k+1})$: since $\frac{1}{t} A_t$ converges to $\mu(f)$ almost surely as $t \to \infty$, the second term on the right hand side behaves as $-\frac{k}{k+1}\, t^{k+1}\, \mu(f)$ as $t \to \infty$, while the first term on the right hand side behaves as $t^{k+1}\, \mu(f)$. This proves the assertion for $k$.
2) We consider nonnegative functions $f$ with $\mu(f) < \infty$. For $m$ arbitrarily large but fixed, step 1) applied to the bounded functions $f \wedge m$ yields almost sure convergence

\[ \frac{1}{t^{k+1}} \int_0^t s^k\, (f \wedge m)(X_s)\, ds \;\longrightarrow\; \frac{1}{k+1}\,\mu(f \wedge m) \]

as $t \to \infty$. Since $f \wedge m \uparrow f$ and $\mu(f \wedge m) \uparrow \mu(f)$ as $m \to \infty$, comparison of trajectories gives the result in this case. For general $f \in L^1(\mu)$, decompose $f = f^+ - f^-$.
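A quick numerical check of lemma 1 as stated above, with the illustrative choices $f(x) = x^2$ (so that $\mu(f) = c^2/(2\tau)$) and $k \in \{0,1,2\}$; the IIR filter below is only a fast way of running the exact Gaussian OU recursion, and all numerical values are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
tau, c = 2.0, 1.0
dt, steps = 0.01, 1_000_000
a = np.exp(-tau * dt)
sd = c * np.sqrt((1.0 - a**2) / (2.0 * tau))

# OU recursion X_i = a X_{i-1} + sd eps_i run as an IIR filter (X_0 = 0)
X = lfilter([sd], [1.0, -a], rng.standard_normal(steps))

t = dt * np.arange(steps)
T = t[-1]
mu_f = c**2 / (2.0 * tau)                 # mu(f) for f(x) = x^2
for k in range(3):
    lhs = np.sum(t**k * X**2) * dt / T**(k + 1)
    print(k, lhs, mu_f / (k + 1))         # lemma 1: lhs -> mu(f) / (k+1)
```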

Lemma 2: For $X$ as above we have for every $k \in \mathbb{N}$, almost surely as $t \to \infty$,

\[ \frac{1}{t^k} \left( \int_0^t s^k\, dX_s \;-\; t^k X_t \right) \;\longrightarrow\; 0\,. \]

Proof: This is integration by parts,

\[ \int_0^t s^k\, dX_s \;=\; t^k X_t \;-\; k \int_0^t s^{k-1} X_s\, ds\,, \tag{8} \]

and lemma 1 (with $k-1$ in place of $k$ and with $f = \mathrm{id}$, for which $\mu(f) = 0$) applied to the right hand side.

Lemma 3: For $X$ defined by (7), for every $k \in \mathbb{N}_0$, we have convergence in law of

\[ \frac{1}{\sqrt{t^{2k+1}}}\ \tau \int_0^t s^k\, X_s\, ds \]

as $t \to \infty$ to the limit

\[ c \int_0^1 s^k\, dW_s \;\sim\; \mathcal{N}\Big( 0\,,\ \frac{c^2}{2k+1} \Big)\,, \]

where $W$ is standard Brownian motion.

Proof: Rearranging SDE (7) we write $c\, dW_s = dX_s + \tau X_s\, ds$ and have for every $k \in \mathbb{N}_0$

\[ c \int_0^t s^k\, dW_s \;=\; \int_0^t s^k\, dX_s \;+\; \tau \int_0^t s^k\, X_s\, ds\,. \tag{9} \]

In case $k = 0$, the right hand side is $(X_t - x_0) + \tau \int_0^t X_s\, ds$, and the scaling property of Brownian motion combined with ergodicity of $X$ yields weak convergence as asserted. In case $k \ge 1$, lemma 2 transforms the first term on the right hand side of (9), and we have

\[ \tau \int_0^t s^k\, X_s\, ds \;=\; -\, t^k X_t \;+\; o(t^k) \;+\; c \int_0^t s^k\, dW_s \quad \text{almost surely as } t \to \infty. \tag{10} \]

The martingale convergence theorem (Jacod and Shiryaev [21], VIII.3.24) shows that

\[ \Big( \frac{1}{\sqrt{n^{2k+1}}}\ c \int_0^{ns} v^k\, dW_v \Big)_{s \ge 0} \tag{11} \]

converges weakly in the Skorohod path space as $n \to \infty$ to a continuous limit martingale with angle bracket $s \mapsto c^2\, \frac{s^{2k+1}}{2k+1}$, i.e. to

\[ \Big( c \int_0^s v^k\, d\widetilde W_v \Big)_{s \ge 0} \qquad \text{with } \widetilde W \text{ a standard Brownian motion.} \]

Scaled in the same way, the first term on the right hand side of (10) is negligible in comparison to (11), uniformly on compact $s$-intervals, by ergodicity of $X$.

Lemma 4: a) For every $k \in \mathbb{N}_0$ we have an expansion

\[ \frac{1}{\sqrt{n^{2k+1}}}\ \tau \int_0^n s^k\, X_s\, ds \;=\; \frac{1}{\sqrt{n^{2k+1}}}\ c \int_0^n s^k\, dW_s \;+\; R_n^{(k)} \tag{12} \]

where $R_n^{(k)} \to 0$ almost surely. In case $k = 0$ we have $R_n^{(0)} = \frac{1}{\sqrt n}\,(x_0 - X_n)$.

b) For every $\nu \in \mathbb{N}_0$, we have joint weak convergence as $n \to \infty$ of

\[ \Big( \frac{1}{\sqrt{n^{2k+1}}}\ \tau \int_0^n s^k\, X_s\, ds \Big)_{0 \le k \le \nu} \tag{13} \]

with limit law $\mathcal{N}(0, \Sigma)$, $\Sigma_{k,\ell} = \frac{c^2}{k + \ell + 1}$, $0 \le k, \ell \le \nu$.

Proof: Part a) is (10) plus scaling as in the proof of lemma 3. For different $k$, the expansions (12) hold with respect to the same driving Brownian motion $W$ from SDE (7): this gives b).

3 The statistical model of interest

Consider now a more general problem of parameter estimation from continuous-time observation of

\[ Y_t \;=\; f(\vartheta, t) \;+\; X_t\,, \qquad t \ge 0, \tag{14} \]

where $f$ is a sufficiently smooth deterministic function which depends on some finite-dimensional parameter $\vartheta$, and where the Ornstein-Uhlenbeck process $X$, unique strong solution to

\[ dX_t \;=\; -\tau\, X_t\, dt \;+\; c\, dW_t\,, \qquad X_0 = x_0\,, \tag{15} \]

depends on a parameter $\tau > 0$. The starting point $x_0$ is deterministic. Then $Y$ solves the SDE

\[ dY_t \;=\; \big( S_\theta(t) \,-\, \tau\, Y_t \big)\, dt \;+\; c\, dW_t \tag{16} \]

where $S_\theta$, depending on $\theta = (\vartheta, \tau)$, is given by

\[ S_\theta(t) \;=\; \frac{\partial}{\partial t} f(\vartheta, t) \;+\; \tau\, f(\vartheta, t)\,. \tag{17} \]

Conversely, if a process $Y$ is solution to an SDE of type (16), then solving the ordinary differential equation $\frac{\partial}{\partial t} f(\vartheta, t) = S_\theta(t) - \tau\, f(\vartheta, t)$ we get a representation (14) for $Y$ where

\[ f(\vartheta, t) \;=\; e^{-\tau t}\, f(\vartheta, 0) \;+\; \int_0^t e^{-\tau (t-s)}\, S_\theta(s)\, ds\,. \tag{18} \]
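The passage from (16)+(17) back to the representation (14)+(18) amounts to solving a linear first-order ODE; a small symbolic check (sympy, purely illustrative) is:

```python
import sympy as sp

t, tau = sp.symbols("t tau", positive=True)
f = sp.Function("f")
S = sp.Function("S")

# (17) reads S(t) = f'(t) + tau f(t); solving this linear ODE for f
# reproduces the variation-of-constants representation (18)
sol = sp.dsolve(sp.Eq(f(t).diff(t) + tau * f(t), S(t)), f(t))
print(sol)   # f(t) = (C1 + Integral(S(t)*exp(tau*t), t)) * exp(-tau*t)
```

The integration constant corresponds to the value $f(\vartheta, 0)$ in (18).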

For examples of parametric models of this type, see e.g. [3], [6], [13], [26], and example 2.3 in [14]. The constant $c > 0$ in (15) is fixed and known: the quadratic variation $\langle Y \rangle_t = c^2\, t$ of the semimartingale $Y$, to be calculated from the trajectory observed in continuous time, cannot be considered as a parameter.
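Indeed, on a fine grid the realized quadratic variation of the observed trajectory recovers $c$ directly, since any smooth trend drops out of the quadratic variation. A minimal sketch (all numerical values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
tau, c = 1.0, 0.5               # illustrative values
dt, steps = 0.001, 500_000      # fine grid mimicking continuous observation

a = np.exp(-tau * dt)
sd = c * np.sqrt((1.0 - a**2) / (2.0 * tau))
X = np.empty(steps)
X[0] = 0.0
for i in range(1, steps):
    X[i] = a * X[i - 1] + sd * rng.standard_normal()
t = dt * np.arange(steps)
Y = 1.0 + 0.5 * t + 0.1 * t**2 + X   # smooth trend plus OU noise

qv = np.sum(np.diff(Y) ** 2)         # realized quadratic variation on [0, T]
print("sqrt(QV/T):", np.sqrt(qv / t[-1]), "true c:", c)
```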

We wish to estimate the unknown parameter $\theta = (\vartheta, \tau)$ based on time-continuous observation of $Y$ in (14) over a long time interval, in the model

\[ f(\vartheta, t) \;=\; \sum_{j=0}^{\nu} \vartheta_j\, t^j\,, \qquad \vartheta = (\vartheta_0, \dots, \vartheta_\nu)\,, \quad \vartheta_\nu > 0\,, \tag{19} \]

where trajectories of $Y$ tend to $+\infty$ almost surely as $t \to \infty$. Thus the parametrization is

\[ \theta \;=\; (\vartheta_0, \dots, \vartheta_\nu, \tau) \;\in\; \Theta \;=\; \mathbb{R}^{\nu} \times (0,\infty) \times (0,\infty)\,, \tag{20} \]

and $S_\theta$ in SDE (16) which governs the observation $Y$, depending on $\theta$, has the form

\[ S_\theta(t) \;=\; \sum_{j=1}^{\nu} j\, \vartheta_j\, t^{j-1} \;+\; \tau \sum_{j=0}^{\nu} \vartheta_j\, t^j\,. \tag{21} \]
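As a consistency check of the two representations, the following sketch (illustrative parameters and step sizes) builds one trajectory of $Y$ directly from (14)+(19) and once more through the SDE (16)+(21) with the same Brownian increments; up to the Euler discretization error the two paths coincide.

```python
import numpy as np

rng = np.random.default_rng(5)
theta = np.array([1.0, -0.3, 0.02])   # illustrative vartheta_0, vartheta_1, vartheta_2
tau, c = 1.0, 0.5
dt, steps = 0.001, 100_000
t = dt * np.arange(steps + 1)
dW = np.sqrt(dt) * rng.standard_normal(steps)

# route 1: direct construction (14)+(19), Euler scheme for the OU part (15)
X = np.zeros(steps + 1)
for i in range(steps):
    X[i + 1] = X[i] - tau * X[i] * dt + c * dW[i]
Y_direct = np.polyval(theta[::-1], t) + X

# route 2: SDE (16) with drift (21), S_theta(t) = f'(theta, t) + tau f(theta, t)
f = np.polyval(theta[::-1], t)
fprime = np.polyval(np.polyder(theta[::-1]), t)
S = fprime + tau * f
Y_sde = np.empty(steps + 1)
Y_sde[0] = f[0]
for i in range(steps):
    Y_sde[i + 1] = Y_sde[i] + (S[i] - tau * Y_sde[i]) * dt + c * dW[i]

print(np.max(np.abs(Y_direct - Y_sde)))   # small: both routes give the same path
```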

3.1 Local asymptotic normality for the model (14)+(19)

Let $(C, \mathcal{C})$ denote the canonical path space for continuous processes; with the canonical process $\eta = (\eta_t)_{t \ge 0}$ (i.e. $\eta_t(g) = g(t)$ for $g \in C$, $t \ge 0$) and $\mathcal{G}_t := \sigma( \eta_s : 0 \le s \le t )$, $\mathbb{G} = (\mathcal{G}_t)_{t \ge 0}$ is the canonical filtration. Let $Q^\theta$ denote the law on $(C, \mathcal{C})$ of the process $Y$ in (14) under $\theta$, cf. (20). By (14)–(16) and (19)+(21), the canonical process $\eta$ on $(C, \mathcal{C})$ under $Q^\theta$ solves

\[ d\eta_t \;=\; \big( S_\theta(t) \,-\, \tau\, \eta_t \big)\, dt \;+\; c\, dW_t\,. \tag{22} \]

For pairs $\theta \ne \theta'$ in $\Theta$, the probability measures $Q^\theta$, $Q^{\theta'}$ are locally equivalent relative to $\mathbb{G}$, and we write $L^{\theta'/\theta} = (L^{\theta'/\theta}_t)_{t \ge 0}$ for the likelihood ratio process. With $m^\theta$ the martingale part of $\eta$ under $Q^\theta$, the likelihood ratio process of $Q^{\theta'}$ w.r.t. $Q^\theta$ relative to $\mathbb{G}$ ([25], [18], [21], [22]; [9] p. 162) is

\[ L^{\theta'/\theta}_t \;=\; \exp\left( \int_0^t \frac{(S_{\theta'} - S_\theta)(s) - (\tau' - \tau)\,\eta_s}{c^2}\ dm^\theta_s \;-\; \frac{1}{2} \int_0^t \frac{\big[ (S_{\theta'} - S_\theta)(s) - (\tau' - \tau)\,\eta_s \big]^2}{c^2}\ ds \right). \tag{23} \]

In the integrand, the difference of the drifts of (22) under $\theta'$ and under $\theta$ depends on the canonical process only through its Ornstein-Uhlenbeck fluctuations, so we exploit (14) to write for short

(24)

where $X$ under $Q^\theta$ is the Ornstein-Uhlenbeck process (15), and where the remaining terms are deterministic functions of time. Localization at $\theta$ will be as follows: with a suitable diagonal matrix of norming constants, we insert localized parameter values in place of $\theta'$ into (23); finally we rescale time. Define the local scale

(25)

With local parameter $u$ and local scale (25) at $\theta$, we obtain from (23)+(24)

(26)

where $R^{(n)}$ is some process of remainder terms, $M^{(n)}$ a martingale with respect to $Q^\theta$ given by

(27)

(again by (14), $X$ stands for the Ornstein-Uhlenbeck fluctuations of the canonical process under $Q^\theta$), and $\langle M^{(n)} \rangle$ the angle bracket of $M^{(n)}$ under $Q^\theta$.

Proposition 1: a) For fixed $\theta$, the components of $\langle M^{(n)} \rangle$ converge $Q^\theta$-almost surely as $n \to \infty$ to those of the deterministic process

(28)

For every rescaled time $s > 0$, the limit matrix is invertible.

b) Let $B$ denote a two-dimensional standard Brownian motion with components $B'$ and $B''$. In the cadlag path space ([21], chapters VI and VIII), the martingales $M^{(n)}$ under $Q^\theta$ converge weakly as $n \to \infty$ to the limit martingale

(29)

Proof: The proof is in several steps.
1) We specify the angle bracket process $\langle M^{(n)} \rangle$ under $Q^\theta$. Its state at a given rescaled time is a symmetric matrix of size $(\nu+2) \times (\nu+2)$. Taking into account the norming factor in front of the martingale in (27), we work in rescaled time throughout. The entries are given as follows. Part of the entries are deterministic, with explicit limits: in the first line this concerns the entries in first and last position and those in-between; for the last column, the first entry has been given above. It remains to consider the three integrals which are not deterministic: here lemma 1 establishes almost sure convergence as $n \to \infty$ under $Q^\theta$. This proves almost sure convergence of the components of $\langle M^{(n)} \rangle$.