# Asymptotic expansion for the Hartman-Watson distribution

The Hartman-Watson distribution with density $f_r(t)$ is a probability distribution defined on $t \ge 0$ which appears in several problems of applied probability. The density of this distribution is expressed in terms of an integral $\theta(r,t)$ which is difficult to evaluate numerically as $t \to 0$. Using saddle point methods, we obtain the first two terms of the $t\to0$ expansion of $\theta(\rho/t, t)$ at fixed $\rho > 0$. As an application we derive, under a uniformity assumption in $\rho$, the leading asymptotics of the density of the time average of the geometric Brownian motion as $t\to0$. This has the form $P(\frac{1}{t}\int_0^t e^{2(B_s+\mu s)}\,ds \in da) = (2\pi t)^{-1/2}\, g(a,\mu)\, e^{-\frac{1}{t}J(a)}\,(1+O(t))\,da$, with an exponent $J(a)$ which reproduces the known result obtained previously using Large Deviations theory.


## 1. Introduction

The Hartman-Watson distribution was introduced in the context of directional statistics [15] and was studied further in relation to the first hitting time of certain diffusion processes [17]. This distribution has received considerable attention due to its relation to the law of the time integral of the geometric Brownian motion, see Eq. (3) below [25].

The Hartman-Watson distribution is given in terms of the function $\theta(r,t)$ defined as

 (1) $\theta(r,t) = \frac{r}{\sqrt{2\pi^3 t}}\, e^{\frac{\pi^2}{2t}} \int_0^\infty e^{-\frac{\xi^2}{2t}}\, e^{-r\cosh\xi}\, \sinh\xi\, \sin\frac{\pi\xi}{t}\, d\xi.$
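For moderate values of $t$ the integral in (1) can be evaluated by direct numerical quadrature. The following stdlib-only sketch (our own illustration; the function names are not from the paper) uses composite Simpson quadrature and checks the result against the large-$t$ asymptotics $\theta(r,t) \approx K_0(r)/\sqrt{2\pi}\, t^{-3/2}$ quoted later from Barrieu et al [3]; it becomes unreliable for small $t$, where the factor $\sin(\pi\xi/t)$ oscillates rapidly.

```python
import math

def theta(r, t, xi_max=12.0, n=4000):
    """Evaluate the Hartman-Watson integral theta(r,t) of Eq. (1) by
    composite Simpson quadrature. Reliable only for moderate t: as t -> 0
    the factor sin(pi*xi/t) oscillates too rapidly for a fixed grid."""
    h = xi_max / n

    def f(xi):
        return (math.exp(-xi * xi / (2 * t) - r * math.cosh(xi))
                * math.sinh(xi) * math.sin(math.pi * xi / t))

    s = f(0.0) + f(xi_max)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    integral = s * h / 3
    return r / math.sqrt(2 * math.pi ** 3 * t) * math.exp(math.pi ** 2 / (2 * t)) * integral

def K0(r, u_max=12.0, n=4000):
    """Modified Bessel function K_0(r) = int_0^inf exp(-r*cosh(u)) du,
    computed with the same quadrature."""
    h = u_max / n
    s = math.exp(-r) + math.exp(-r * math.cosh(u_max))
    for k in range(1, n):
        s += (4 if k % 2 else 2) * math.exp(-r * math.cosh(k * h))
    return s * h / 3
```

For example, at $r = 1$ and $t = 200$ the quadrature value agrees with the tail approximation $K_0(1)/\sqrt{2\pi}\cdot 200^{-3/2}$ to within a few percent.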

The normalized function $f_r(t) = \theta(r,t)/I_0(r)$, with $I_0$ the modified Bessel function of the first kind, defines the density of a random variable taking values along the positive real axis $t \ge 0$, distributed according to the Hartman-Watson law [15].

The function $\theta(r,t)$ appears in the law of the additive functional of a standard Brownian motion $B_t$:

 (2) $A^{(\mu)}_t = \int_0^t e^{2(B_s + \mu s)}\, ds.$

Functionals of this type appear in the pricing of Asian options in the Black-Scholes model [10], in the study of diffusion processes in random media [7], and in actuarial science [5]. This integral appears also in the distributional properties of stochastic volatility models with log-normally distributed volatility, such as the log-normal SABR model [1] and the Hull-White model [13, 14].

An explicit result for the joint distribution of $(A^{(\mu)}_t, B_t + \mu t)$ was given by Yor [25]:

 (3) $P(A^{(\mu)}_t \in du,\, B_t + \mu t \in dx) = e^{\mu x - \frac12\mu^2 t}\, \exp\left(-\frac{1+e^{2x}}{2u}\right) \theta(e^x/u, t)\, \frac{du\, dx}{u},$

where the function $\theta(r,t)$ is given by (1).

A precise evaluation of $\theta(r,t)$ is required for the exact simulation of the time integral of the geometric Brownian motion $A^{(\mu)}_t$, conditional on the terminal value of the Brownian motion $B_t$. This problem appears for example in the simulation of the SABR model, see [6]. The paper [6] proposed an exact simulation method based on Laplace transform inversion.

The Yor formula yields also the density of $A^{(\mu)}_t$ by integration over $x$. The usefulness of this approach is limited by the difficulty of evaluating numerically the integral in (1) for small $t$, due to the rapidly oscillating factor $\sin(\pi\xi/t)$ [3, 5]. Alternative numerical approaches which avoid this issue were presented in [4, 16].

For this reason considerable effort has been devoted to applying analytical methods to simplify Yor's formula. For particular cases, simpler expressions as single integrals have been obtained for the density of $A^{(\mu)}_t$ by Comtet et al [7] and Dufresne [9]. See the review article by Matsumoto and Yor [19] for an overview of the literature.

In the absence of simple exact analytical results, it is important to have analytical expansions of $\theta(r,t)$ in the small-$t$ region. Such an expansion has been derived by Gerhold in [12], by a saddle point analysis of the Laplace inversion integral of the density $f_r(t)$.

In this paper we derive the $t\to0$ asymptotics of the function $\theta(\rho/t, t)$ at fixed $\rho > 0$ using the saddle point method. This regime is important for the study of the small-$t$ asymptotics of the density of $\frac1t A^{(\mu)}_t$ which follows from the Yor formula (3). The resulting expansion turns out to give also a good numerical approximation of $\theta(r,t)$ over a wide range of $t$. The expansion has the form

 (4) $\theta(\rho/t,t) = e^{-\frac{1}{t}\left(F(\rho)-\frac{\pi^2}{2}\right)}\left(\frac{1}{2\pi t}\, G(\rho) + G_1(\rho) + O(t)\right).$

The leading order term, proportional to $G(\rho)$, is given in Proposition 1, and the subleading correction $G_1(\rho)$ is given in the Appendix.

The paper is structured as follows. In Section 2 we present the leading order asymptotic expansion of the integral $\theta(\rho/t,t)$ as $t\to0$ at fixed $\rho$. The main result is Proposition 1. The leading term in this expansion is shown to give a reasonably good approximation for $\theta(r,t)$ over the entire range of $t$. Surprisingly, this approximation has also the correct $t\to\infty$ asymptotics, with a coefficient which approaches the exact result in the $r\to\infty$ limit. In Section 3 we compare this asymptotic result with existing results in the literature on the small-$t$ expansion of the Hartman-Watson distribution [3, 12]. In Section 4 we present the application of this expansion to obtain the leading asymptotics of the density of the time averaged geometric Brownian motion $\frac1t A^{(\mu)}_t$. Assuming that the error term in the asymptotic expansion of $\theta(\rho/t,t)$ is uniformly bounded in $\rho$, we obtain the leading asymptotics of this density in the form $(2\pi t)^{-1/2}\, g(a,\mu)\, e^{-\frac1t J(a)}(1+O(t))$. The exponential factor reproduces a known result obtained previously using Large Deviations theory [2, 21]. An Appendix gives an explicit result for the subleading correction $G_1(\rho)$.

## 2. Asymptotic expansion of θ(ρ/t,t) as t→0

We study here the asymptotics of $\theta(\rho/t, t)$ as $t\to0$ at fixed $\rho > 0$. This regime is different from that considered in [12], which studied the asymptotics of $\theta(r,t)$ as $t\to0$ at fixed $r$. We derive this asymptotics in this section.

###### Proposition 1.

The asymptotics of the Hartman-Watson integral defined in (1) as $t\to0$ at fixed $\rho > 0$ is

 (5) $\theta(\rho/t,t) = e^{-\frac{1}{t}\left(F(\rho)-\frac{\pi^2}{2}\right)}\left(\frac{1}{2\pi t}\, G(\rho) + G_1(\rho) + O(t)\right).$

The function $F(\rho)$ is given by

 (6) $F(\rho) = \begin{cases} \frac12 x_1^2 - \rho\cosh x_1 + \frac{\pi^2}{2}, & 0 < \rho < 1 \\ -\frac12 y_1^2 + \rho\cos y_1 + \pi y_1, & \rho > 1 \end{cases}$

and the function $G(\rho)$ is given by

 (7) $G(\rho) = \begin{cases} \dfrac{\rho\sinh x_1}{\sqrt{\rho\cosh x_1 - 1}}, & 0 < \rho < 1 \\ \dfrac{\rho\sin y_1}{\sqrt{1+\rho\cos y_1}}, & \rho > 1 \end{cases}$

Here $x_1$ is the solution of the equation

 (8) $\rho\, \frac{\sinh x_1}{x_1} = 1$

and $y_1 \in (0,\pi)$ is the solution of the equation

 (9) $y_1 + \rho\sin y_1 = \pi.$

The subleading correction is $G_1(\rho)$, which is given in explicit form in the Appendix.

Plots of the functions $F(\rho)$ and $G(\rho)$ are shown in Figure 1. The properties of these functions are studied in more detail in Section 2.1.
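Proposition 1 is easy to evaluate numerically. The sketch below (our own stdlib-only illustration; the helper names are not from the paper) solves (8) and (9) by bisection and evaluates $F(\rho)$ and $G(\rho)$. A useful consistency check is that the two branches match at $\rho = 1$, where $F \to \pi^2/2 - 1$ and $G \to \sqrt3$, and that $F$ attains its minimum value $3\pi^2/8$ at $\rho = \pi/2$ (facts derived in Section 2.1).

```python
import math

def _bisect(f, lo, hi, n=200):
    # simple bisection; assumes f(lo) < 0 <= f(hi)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def x1(rho):
    """Solve rho*sinh(x)/x = 1, Eq. (8); unique positive root for 0 < rho < 1."""
    return _bisect(lambda x: rho * math.sinh(x) / x - 1.0, 1e-12, 60.0)

def y1(rho):
    """Solve y + rho*sin(y) = pi on (0, pi), Eq. (9), for rho > 1; we take the
    root with 1 + rho*cos(y) > 0, which lies below y* = arccos(-1/rho)."""
    ystar = math.acos(-1.0 / rho)
    return _bisect(lambda y: y + rho * math.sin(y) - math.pi, 1e-12, ystar)

def F(rho):
    # Eq. (6)
    if rho < 1.0:
        x = x1(rho)
        return 0.5 * x * x - rho * math.cosh(x) + math.pi ** 2 / 2
    y = y1(rho)
    return -0.5 * y * y + rho * math.cos(y) + math.pi * y

def G(rho):
    # Eq. (7)
    if rho < 1.0:
        x = x1(rho)
        return rho * math.sinh(x) / math.sqrt(rho * math.cosh(x) - 1.0)
    y = y1(rho)
    return rho * math.sin(y) / math.sqrt(1.0 + rho * math.cos(y))
```

The root of (9) is bracketed by $(0, \arccos(-1/\rho))$ because the left-hand side of (9) is increasing there and already exceeds $\pi$ at the right endpoint.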

###### Proof of Proposition 1.

The function $\theta(\rho/t,t)$ can be written as

 (10) $\theta(\rho/t,t) = \frac{\rho/t}{\sqrt{2\pi^3 t}}\, e^{\frac{\pi^2}{2t}} \int_0^\infty e^{-\frac1t\left[\frac12\xi^2 + \rho\cosh\xi\right]} \sinh\xi\, \sin\frac{\pi\xi}{t}\, d\xi = \frac{\rho/t}{\sqrt{2\pi^3 t}}\, e^{\frac{\pi^2}{2t}}\, \frac12 \int_{-\infty}^{\infty} e^{-\frac1t\left[\frac12\xi^2 + \rho\cosh\xi\right]} \sinh\xi\, \sin\frac{\pi\xi}{t}\, d\xi = \frac{\rho}{\sqrt{2\pi^3 t^3}}\, e^{\frac{\pi^2}{2t}}\, \frac{1}{4i}\left(I_+(\rho,t) - I_-(\rho,t)\right),$

where the second equality uses that the integrand is even in $\xi$,

with

 (11) $I_\pm(\rho,t) := \int_{-\infty}^{\infty} e^{-\frac1t\left[\frac12\xi^2 + \rho\cosh\xi \mp i\pi\xi\right]} \sinh\xi\, d\xi.$

These integrals have the form $\int_{-\infty}^\infty e^{-\frac1t h(\xi)} \sinh\xi\, d\xi$ with $h(\xi) = \frac12\xi^2 + \rho\cosh\xi \mp i\pi\xi$.

The asymptotics of such integrals as $t\to0$ can be obtained using the saddle-point method, see for example Sec. 4.6 in Erdélyi [11] and Sec. 4.7 of Olver [20].

We present in detail the asymptotic expansion for $t\to0$ of the integral

 (12) $I_+(\rho,t) = \int_{-\infty}^{\infty} e^{-\frac1t h(\xi)} \sinh\xi\, d\xi,$

where we denote for simplicity $h(\xi) = \frac12\xi^2 + \rho\cosh\xi - i\pi\xi$. The integral $I_-(\rho,t)$ is treated analogously.

i) $0 < \rho < 1$. The saddle points are given by the solutions of the equation $h'(\xi) = \xi + \rho\sinh\xi - i\pi = 0$. There are three saddle points, at $\xi = i\pi$ and at $\xi = \pm x_1 + i\pi$, with $x_1$ the solution of the equation (8). The second derivative of $h$ at these points is $h''(\pm x_1 + i\pi) = 1 - \rho\cosh x_1 < 0$, and $h''(i\pi) = 1 - \rho > 0$.

The contour of integration is deformed from the real axis to the contour shown in the left panel of Fig. 2, consisting of three arcs of curves of steepest descent passing through the three saddle points. Along these arcs we have $\operatorname{Im} h(\xi) = \text{const}$. Denoting $\xi = x + iy$, the path is given by

 (13) $y = \begin{cases} \pi, & |x| \le x_1 \\ y_0(x), & |x| > x_1 \end{cases}$

where $y_0(x)$ is the positive solution of the equation $x(y - \pi) + \rho\sinh x \sin y = 0$.

The integral is a sum of three integrals, one along each piece of the path, with the saddle points $B = -x_1 + i\pi$ and $A = x_1 + i\pi$ as endpoints. The real part of the integrand is odd and the imaginary part is even under $x \to -x$. This follows from noting that we have $h(-\bar\xi) = \overline{h(\xi)}$ and $\sinh(-\bar\xi) = -\overline{\sinh\xi}$. This implies that i) the integral from B to A vanishes because the integrand is odd, and ii) the real parts of the integrals along $(-\infty, B]$ and $[A, \infty)$ are equal and of opposite sign, while their imaginary parts are equal. This gives

 (14) $\theta(\rho/t,t) = \frac{\rho}{\sqrt{2\pi^3 t^3}}\, e^{\frac{\pi^2}{2t}}\, \frac{1}{2i}\left(I_{(-\infty,B]} + I_{[A,\infty)}\right) = \frac{\rho}{\sqrt{2\pi^3 t^3}}\, e^{\frac{\pi^2}{2t}}\, \operatorname{Im} I_{[A,\infty)}.$

Thus it is sufficient to evaluate only $I_{[A,\infty)}$. This integral is written as

 (15) $I_{[A,\infty)}(\rho,t) = \int_A^\infty e^{-\frac1t h(\xi)} \sinh\xi\, d\xi = e^{-\frac1t h(A)} \int_0^\infty e^{-\frac{\tau}{t}}\, \frac{\sinh\xi}{h'(\xi)}\, d\tau,$

where we defined $\tau = h(\xi) - h(A)$. This is expanded around $\xi = A$ as

 (16) $\tau = h(\xi) - h(A) = a_2(\xi - A)^2 + a_3(\xi - A)^3 + a_4(\xi - A)^4 + O((\xi - A)^5)$

with $a_k = \frac{1}{k!}\, h^{(k)}(A)$. Noting $h''(A) = 1 - \rho\cosh x_1 < 0$, this series is inverted as

 (17) $\xi - A = -i\sqrt{\frac{2\tau}{|h''(A)|}} + O(\tau).$

The second factor in the integrand of (15) is also expanded in $\xi - A$ as

 (18) $\frac{\sinh\xi}{h'(\xi)} = \frac{\sinh A + \cosh A\, (\xi - A) + O((\xi - A)^2)}{h''(A)(\xi - A) + O((\xi - A)^2)} = \frac{\sinh A}{h''(A)}\, \frac{1}{\xi - A}\left(1 + O(\xi - A)\right).$

Substituting here the expansion (17) and using $\sinh A = -\sinh x_1$ gives

 (19) $\frac{\sinh\xi}{h'(\xi)} = i\, \frac{\sinh x_1}{|h''(A)|}\sqrt{\frac{|h''(A)|}{2}}\, \frac{1}{\sqrt{\tau}} + O(\tau^0).$

More generally, the expansion in powers of $\sqrt{\tau}$ has the form

 (20) $\frac{\sinh\xi}{h'(\xi)} = \frac{g_0}{\sqrt{\tau}} + g_1 + g_2\sqrt{\tau} + O(\tau).$

The inequality $h''(A) < 0$ implies that $g_0$ and $g_2$ are imaginary, while $g_1$ is real.

By Watson’s lemma [20], the resulting expansion can be integrated term by term. The leading asymptotic contribution to $I_{[A,\infty)}$ is

 (21) $I_{[A,\infty)}(\rho,t) = e^{-\frac1t h(A)} \int_0^\infty e^{-\frac{\tau}{t}}\, \frac{\sinh\xi}{h'(\xi)}\, d\tau = i\, \frac{\sinh x_1}{\sqrt{2|h''(A)|}}\, e^{-\frac1t h(A)} \left(\int_0^\infty e^{-\frac{\tau}{t}}\, \frac{d\tau}{\sqrt{\tau}} + O(t)\right).$

The integral is $\int_0^\infty e^{-\frac{\tau}{t}}\frac{d\tau}{\sqrt{\tau}} = \sqrt{\pi t}$. Substituting into (14) reproduces the quoted result, upon identifying $F(\rho) = h(A) = \frac12 x_1^2 - \rho\cosh x_1 + \frac{\pi^2}{2}$. Since $g_1$ is real, the corresponding term in (20) does not contribute to $\operatorname{Im} I_{[A,\infty)}$, and the leading correction comes from the $g_2$ term. This is given in explicit form in the Appendix.
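For reference, the integrals supplied by Watson's lemma are elementary Gamma-function integrals, which is why the $g_0$ term produces the leading $\sqrt{\pi t}$ factor while the $g_2$ term contributes only at relative order $t$:

```latex
\int_0^\infty e^{-\tau/t}\,\tau^{s-1}\,d\tau = \Gamma(s)\,t^{s},
\qquad\text{so}\qquad
\int_0^\infty e^{-\tau/t}\,\frac{d\tau}{\sqrt{\tau}} = \sqrt{\pi t},
\qquad
\int_0^\infty e^{-\tau/t}\,\sqrt{\tau}\,d\tau = \frac{\sqrt{\pi}}{2}\,t^{3/2}.
```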

ii) $\rho > 1$. There are several saddle points along the imaginary axis. We are interested in the saddle point at $S = i y_1$, with $y_1 \in (0,\pi)$ the solution of (9). At this point the second derivative of $h$ is $h''(S) = 1 + \rho\cos y_1 > 0$.

Deform the integration contour into the curve in the right panel of Fig. 2. This is a steepest descent curve $\operatorname{Im} h(\xi) = 0$, given by $y = y_0(x)$, the positive solution of the equation $x(y - \pi) + \rho\sinh x \sin y = 0$. The contour passes through the saddle point at $S = iy_1$.

The integral $I_+$ is the sum of the two integrals on the two halves of the contour. As in the previous case, the sum is imaginary, since along the contour $\tau = h(\xi) - h(S)$ is real and even in $x$, and $\sinh\xi = \sinh x\cos y + i\cosh x\sin y$. Noting that the first term is odd in $x$, it cancels out, and only the second (imaginary) term gives a contribution. We have a result similar to (14):

 (22) $\theta(\rho/t,t) = \frac{\rho}{\sqrt{2\pi^3 t^3}}\, e^{\frac{\pi^2}{2t}}\, \frac{1}{2i}\left(I_{(-\infty,S]} + I_{[S,\infty)}\right) = \frac{\rho}{\sqrt{2\pi^3 t^3}}\, e^{\frac{\pi^2}{2t}}\, \operatorname{Im} I_{[S,\infty)}.$

As before, it is sufficient to evaluate only the $I_{[S,\infty)}$ integral, which is

 (23) $\int_S^\infty e^{-\frac1t h(\xi)} \sinh\xi\, d\xi = e^{-\frac1t h(S)} \int_0^\infty e^{-\frac{\tau}{t}}\, \frac{\sinh\xi}{h'(\xi)}\, d\tau,$

where we introduced $\tau = h(\xi) - h(S)$. This difference is expanded around $\xi = S$ as

 (24) $\tau = h(\xi) - h(S) = \frac12 h''(S)(\xi - S)^2 + O((\xi - S)^3),$

which is inverted as (recall that $h''(S) > 0$)

 (25) $\xi - S = \sqrt{\frac{2\tau}{h''(S)}} + O(\tau).$

The integrand is also expanded in $\xi - S$ as

 (26) $\frac{\sinh\xi}{h'(\xi)} = \frac{\sinh S + \cosh S\, (\xi - S) + O((\xi - S)^2)}{h''(S)(\xi - S) + O((\xi - S)^2)} = \frac{\sinh S}{h''(S)}\, \frac{1}{\xi - S}\left(1 + O(\xi - S)\right).$

Substituting here the inversion (25), this can be expanded in powers of $\sqrt{\tau}$ as

 (27) $\frac{\sinh\xi}{h'(\xi)} = \frac{g_0}{\sqrt{\tau}} + g_1 + g_2\sqrt{\tau} + O(\tau).$

Again, similar to the previous case, $g_0$ and $g_2$ are imaginary (note that $\sinh S = i\sin y_1$), while $g_1$ is real, such that only the $g_0$ and $g_2$ terms contribute to the imaginary part.

By Watson’s lemma [20] we can exchange expansion and integration, and we get the leading asymptotic term

 (28) $\int_0^\infty e^{-\frac{\tau}{t}}\, \frac{\sinh\xi}{h'(\xi)}\, d\tau = i\, \frac{\sin y_1}{\sqrt{2 h''(S)}} \int_0^\infty e^{-\frac{\tau}{t}}\, \frac{d\tau}{\sqrt{\tau}}\, \left(1 + O(t)\right),$

which is evaluated exactly as in case i) and gives the $\rho > 1$ case of (5), upon identifying $F(\rho) = h(S) = -\frac12 y_1^2 + \rho\cos y_1 + \pi y_1$. The subleading term is given in explicit form in the Appendix. ∎

### 2.1. Properties of the functions F(ρ) and G(ρ)

Let us examine in more detail the properties of the functions $F(\rho)$ and $G(\rho)$ appearing in Proposition 1. We start by studying the behavior of the solutions $x_1, y_1$ of the equations (8) and (9) for $0 < \rho < 1$ and $\rho > 1$, respectively. For $0 < \rho < 1$ the solution $x_1$ of (8) approaches zero as $\rho\to1$ and increases to infinity as $\rho\to0$. For $\rho > 1$, the solution $y_1$ of (9) starts at $\pi$ for $\rho = 1$ and decreases to zero as $\rho\to\infty$.

The derivative $F'(\rho)$ can be computed exactly:

 (29) $F'(\rho) = \begin{cases} -\cosh x_1, & 0 < \rho < 1 \\ \cos y_1, & \rho > 1 \end{cases}$

This shows that the minimum of $F(\rho)$ is reached for that value of $\rho$ for which $\cos y_1 = 0$, that is $y_1 = \pi/2$. From (9) we get that this is $\rho = \pi/2$, and at this point we have $F(\pi/2) = \frac{3\pi^2}{8}$.

We can obtain also the asymptotics of $F(\rho)$ for very small and very large arguments.

###### Proposition 2.

i) As $\rho\to0$ the function $F(\rho)$ has the asymptotics

 (30) $F(\rho) = \frac12 L^2 + L\log(2L) - L + \frac12\log^2(2L) + \frac{\pi^2}{2} + o(1)$

with $L := \log(1/\rho)$.

ii) As $\rho\to\infty$ the function $F(\rho)$ has the asymptotics

 (31) $F(\rho) = \rho + \frac{\pi^2}{2(1+\rho)} + O(\rho^{-3}).$

iii) Around $\rho = 1$, the function $F(\rho)$ has the expansion

 (32) $F(\rho) = \frac{\pi^2}{2} - 1 - (\rho - 1) + \frac{3}{2}(\rho-1)^2 - \frac{6}{5}(\rho-1)^3 + \frac{351}{350}(\rho-1)^4 - \frac{108}{125}(\rho-1)^5 + \frac{372903}{490000}(\rho-1)^6 + O((\rho-1)^7).$
###### Proof.

i) As $\rho\to0$, the solution $x_1$ of the equation (8) grows as

 (33) $x_1 = L + \log(2L) + o(1)$

with $L = \log(1/\rho)$. This follows from writing Eq. (8) as

 (34) $\frac{1}{\rho} = \frac{\sinh x_1}{x_1} = \frac{e^{x_1}}{2 x_1}\left(1 - e^{-2x_1}\right).$

Taking the logarithm of both sides gives

 (35) $x_1 = L + \log(2x_1) - \log\left(1 - e^{-2x_1}\right).$

This can be iterated starting with the zeroth approximation $x_1^{(0)} = L$, which gives (33).
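The iteration converges quickly; a small stdlib-only sketch (our own illustration, not the paper's code):

```python
import math

def x1_iter(rho, n_iter=60):
    """Iterate Eq. (35): x = L + log(2x) - log(1 - exp(-2x)), starting from
    the zeroth approximation x = L = log(1/rho). The fixed point solves
    Eq. (8) exactly; the contraction factor is roughly 1/x, so convergence
    is fast for small rho."""
    L = math.log(1.0 / rho)
    x = L
    for _ in range(n_iter):
        x = L + math.log(2.0 * x) - math.log1p(-math.exp(-2.0 * x))
    return x
```

For example, at $\rho = 10^{-6}$ the converged iterate solves (8) to machine accuracy and differs from the two-term approximation $L + \log(2L)$ of (33) only by the $o(1)$ remainder.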

Eliminating $\rho$ in terms of $x_1$ using (8) gives

 (36) $F(\rho) = \frac12 x_1^2 - \frac{x_1}{\tanh x_1} + \frac{\pi^2}{2}.$

Substituting (33) into this expression gives the quoted result.

ii) As $\rho\to\infty$, the solution $y_1$ of the equation (9) approaches zero. Expanding $\sin y_1$, the equation is approximated as

 (37) $(1+\rho)\, y_1 - \frac16 \rho\, y_1^3 + \rho\, O(y_1^5) = \pi,$

which is inverted as

 (38) $y_1 = \frac{\pi}{1+\rho} + \frac{\pi^3 \rho}{6(1+\rho)^4} + O(\rho^{-5}).$

Substituting into $F(\rho) = -\frac12 y_1^2 + \rho\cos y_1 + \pi y_1$ gives the result quoted above.

iii) Consider first the case $\rho < 1$. As $\rho\to1$, we have $x_1\to0$. From (8) we get an expansion of $x_1^2$ in powers of $1-\rho$, with leading term $x_1^2 = 6(1-\rho) + O((1-\rho)^2)$. Substituting into (36) gives the result quoted in point (iii). The same result is obtained for the $\rho > 1$ case, when $\rho = 1$ is approached from above. ∎

We give next the asymptotics of $G(\rho)$.

###### Proposition 3.

i) As $\rho\to0$ the asymptotics of $G(\rho)$ is

 (39) $G(\rho) = \sqrt{L} + \frac{\log(2L) + 1}{2\sqrt{L}} + o(L^{-1/2}), \qquad L := \log(1/\rho).$

ii) As $\rho\to\infty$ we have

 (40) $G(\rho) = \frac{\pi\rho}{(1+\rho)^{3/2}} + O(\rho^{-5/2}).$

iii) Around $\rho = 1$ the function $G(\rho)$ has the following expansion:

 (41) $G(\rho) = \sqrt{3}\left\{1 + \frac15\left(\frac1\rho - 1\right) - \frac{4}{35}\left(\frac1\rho - 1\right)^2 + \frac{2}{25}\left(\frac1\rho - 1\right)^3 + O\left(\left(\frac1\rho - 1\right)^4\right)\right\}.$
###### Proof.

i) The asymptotic expansion of $x_1$ for $\rho\to0$ was already obtained in (33). Substitute this into

 (42) $G(\rho) = \frac{x_1}{\sqrt{\dfrac{x_1}{\tanh x_1} - 1}},$

which follows from (7) by eliminating $\rho$ using (8). Expanding the result in $L$ gives the result quoted.

ii) Eliminating $\rho = (\pi - y_1)/\sin y_1$ from the expression (7) for $G(\rho)$ for $\rho > 1$ gives

 (43) $G(\rho) = \frac{\pi - y_1}{\sqrt{1 + \dfrac{\pi - y_1}{\tan y_1}}}.$

Using here the expansion of $y_1$ from (38) gives the result quoted.

iii) The proof is similar to that of point (iii) in Proposition 2. ∎
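The large-$\rho$ expansions in point (ii) of Propositions 2 and 3 can be verified numerically; a stdlib-only sketch (the helper names are ours, not from the paper), using the $\rho > 1$ branches of (6) and (7):

```python
import math

def y1(rho):
    """Solve y + rho*sin(y) = pi on (0, pi) (Eq. (9)) for rho > 1 by bisection;
    the relevant root lies below arccos(-1/rho), where 1 + rho*cos(y) > 0."""
    lo, hi = 1e-12, math.acos(-1.0 / rho)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + rho * math.sin(mid) < math.pi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def F_large(rho):
    # Eq. (6), rho > 1 branch
    y = y1(rho)
    return -0.5 * y * y + rho * math.cos(y) + math.pi * y

def G_large(rho):
    # Eq. (7), rho > 1 branch
    y = y1(rho)
    return rho * math.sin(y) / math.sqrt(1.0 + rho * math.cos(y))
```

At $\rho = 50$ the exact values already agree with the leading terms of (31) and (40) to better than $10^{-3}$.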

## 3. Numerical tests

The leading term of the expansion in Proposition 1 can be considered as an approximation for $\theta(r,t)$ for all $r, t$, by substituting $\rho = rt$. Consider the approximation for $\theta(r,t)$ defined as

 (44) $\hat\theta(r,t) := \frac{1}{2\pi t}\, G(rt)\, e^{-\frac1t\left(F(rt) - \frac{\pi^2}{2}\right)}.$
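A compact sketch of this approximation (our own stdlib-only implementation, with (8) and (9) solved by bisection; not the paper's code). As a check, the test verifies the large-$t$ tail behavior (51) derived in Section 3.1.

```python
import math

def _bisect(f, lo, hi, n=200):
    # bisection; assumes f(lo) < 0 <= f(hi)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def F(rho):
    # Eq. (6), with x1, y1 from Eqs. (8), (9)
    if rho < 1.0:
        x = _bisect(lambda x: rho * math.sinh(x) / x - 1.0, 1e-12, 60.0)
        return 0.5 * x * x - rho * math.cosh(x) + math.pi ** 2 / 2
    y = _bisect(lambda y: y + rho * math.sin(y) - math.pi,
                1e-12, math.acos(-1.0 / rho))
    return -0.5 * y * y + rho * math.cos(y) + math.pi * y

def G(rho):
    # Eq. (7)
    if rho < 1.0:
        x = _bisect(lambda x: rho * math.sinh(x) / x - 1.0, 1e-12, 60.0)
        return rho * math.sinh(x) / math.sqrt(rho * math.cosh(x) - 1.0)
    y = _bisect(lambda y: y + rho * math.sin(y) - math.pi,
                1e-12, math.acos(-1.0 / rho))
    return rho * math.sin(y) / math.sqrt(1.0 + rho * math.cos(y))

def theta_hat(r, t):
    """Leading-order approximation of Eq. (44)."""
    rho = r * t
    return G(rho) / (2 * math.pi * t) * math.exp(-(F(rho) - math.pi ** 2 / 2) / t)
```

For $r = 1$ and $t = 200$ this evaluates to within about 2% of the tail $e^{-r}/(2\sqrt{r})\, t^{-3/2}$ of Eq. (51).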

We would like to compare this approximation with the leading asymptotic expansion of [12], which is obtained by taking the limit $t\to0$ at fixed $r$. This is given by Theorem 1 in [12]:

 (45) $\theta_G(r,t) = \frac{\sqrt{e}}{\pi}\, \sqrt{\frac{u_0(t)}{\log u_0(t) - 2 - 2\rho}}\; e^{-t\, u_0(t) + \sqrt{2 u_0(t)}},$

where $u_0(t)$ is the solution of the equation

 (46) $t = \frac{\log u}{2\sqrt{2u}} - \frac{\rho}{\sqrt{2u}} + \frac{1}{4u}, \qquad \rho := \frac{\log r}{2\sqrt{2}}.$

Eq. (45) holds up to a multiplicative correction which tends to 1 as $t\to0$.

Table 1 shows the numerical evaluation of $\hat\theta(r,t)$ for fixed $r$ and several values of $t$, comparing also with the small-$t$ expansion $\theta_G(r,t)$ in Eq. (45). They agree well for sufficiently small $t$, but start to diverge for larger $t$. In the last column of Table 1 we show also the result of a direct numerical evaluation of $\theta(r,t)$ by numerical integration in Mathematica.

Figure 3 compares the approximation $\hat\theta(r,t)$ (black curves) against $\theta_G(r,t)$ (blue curves) and direct numerical integration (red curves). We note the same pattern of good agreement between $\hat\theta$ and $\theta_G$ at small $t$, and increasing discrepancy for larger $t$; $\theta_G$ explodes to infinity for sufficiently large $t$. The reason for this explosive behavior is the fact that the denominator in Eq. (45) approaches zero as $t$ approaches a certain maximum value. For $t$ larger than this value the denominator becomes negative and the approximation $\theta_G$ ceases to exist.

This maximum value is determined by the condition $\log u_0(t) - 2 - 2\rho \ge 0$. For fixed $r$, this condition imposes an upper bound on $t$.

We show in Figure 4 plots of $\hat\theta(r,t)$ vs $r$ at fixed $t$. The vertical scale of the plot is chosen as in Figure 1 (left) of Bernhart and Mai [4], which shows the plots of $\theta(r,t)$ for the same parameters, obtained using a precise numerical Laplace transform inversion. The shapes of the curves are very similar to those shown in [4].

### 3.1. Asymptotics for t→0 and t→∞ of the approximation $\hat\theta(r,t)$

We study in this section the asymptotics of the approximation $\hat\theta(r,t)$ defined in (44) for $t\to0$ and $t\to\infty$, and compare with the exact asymptotics of $\theta(r,t)$ in these limits, obtained by Barrieu et al [3] and Gerhold [12].

As expected, the $t\to0$ asymptotics matches precisely the exact asymptotics of $\theta(r,t)$. Although the approximation was derived in the small-$t$ limit, it is somewhat surprising that it matches also the correct asymptotics of $\theta(r,t)$ for $t\to\infty$, up to a multiplicative coefficient, which however becomes exact as $r\to\infty$.

**Small-$t$ asymptotics.** The leading asymptotics of $\theta(r,t)$ as $t\to0$ was obtained in Barrieu et al [3]:

 (47) $\theta(r,t) \sim e^{-\frac{1}{2t}\log^2 t}.$

This was refined further by Gerhold as in Eq. (45). Using the small-$t$ approximation for $u_0(t)$ (see Eq. (6) in Gerhold's paper [12]), the improved expansion (45) gives

 (48) $\theta_G(r,t) \sim \frac{\log(1/t)}{t}\, \frac{1}{\sqrt{2\log\log(1/t) + 2\log(2/t)}}\; e^{-\frac{1}{2t}\log^2(1/t)}.$

The $t\to0$ expansion of $\hat\theta(r,t)$ can be obtained using the $\rho\to0$ asymptotics of $F(\rho)$ and $G(\rho)$ in points (i) of Propositions 2 and 3, respectively. This gives

 (49) $\hat\theta(r,t) \sim \frac{1}{2\pi t}\sqrt{\log(1/(rt))}\; e^{-\frac{1}{2t}\log^2(rt) - \frac{1}{t}\log(1/(rt))\log(2\log(1/(rt))) + \frac{1}{t}\log(1/(rt))}, \qquad t\to0.$

The exponential factor agrees with the asymptotics of the exact result (47). Also the leading $t$ dependence of the multiplying factor reproduces the improved expansion (48) following from [12].

**Large-$t$ asymptotics.** The $t\to\infty$ asymptotics of $\theta(r,t)$ is given in Remark 3 in Barrieu et al [3] as

 (50) $\theta(r,t) \sim c_r\, \frac{1}{t^{3/2}}, \qquad c_r = \frac{1}{\sqrt{2\pi}}\, K_0(r).$

The large-$t$ asymptotics of $\hat\theta(r,t)$ is related to the $\rho\to\infty$ asymptotics of $F(\rho)$ and $G(\rho)$. As $\rho\to\infty$ we have $F(\rho) = \rho + \frac{\pi^2}{2(1+\rho)} + O(\rho^{-3})$ from Prop. 2 point (ii), and $G(\rho) = \pi\rho(1+\rho)^{-3/2} + O(\rho^{-5/2})$ from Prop. 3 point (ii). Substituting into (44) and keeping only the leading contributions as $t\to\infty$ gives

 (51) $\hat\theta(r,t) \sim \frac{r}{2(1+rt)^{3/2}}\, e^{-r} \sim \frac{1}{2\sqrt{r}}\, t^{-3/2}\, e^{-r}.$

The $t$ dependence has the same form as the exact asymptotics (50).

Let us examine also the coefficient of $t^{-3/2}$ in (51) and compare with the exact coefficient $c_r$ in (50). The leading asymptotics of the function $K_0(r)$ for $r\to\infty$ is $K_0(r) = \sqrt{\frac{\pi}{2r}}\, e^{-r}\left(1 + O(1/r)\right)$. The exact coefficient becomes, for $r\to\infty$,

 (52) $c_r = \frac{1}{2\sqrt{r}}\, e^{-r}\left(1 + O(1/r)\right).$

The leading term in this expansion matches precisely the coefficient obtained from (51). We conclude that the right tail asymptotics of $\hat\theta(r,t)$ approaches the same form as the exact asymptotics in the limit $r\to\infty$.
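This matching of coefficients can be checked numerically, computing $K_0(r)$ from its integral representation $K_0(r) = \int_0^\infty e^{-r\cosh u}\, du$ (a stdlib-only sketch; the function names are ours):

```python
import math

def K0(r, u_max=12.0, n=4000):
    """K_0(r) = int_0^inf exp(-r*cosh(u)) du by composite Simpson quadrature."""
    h = u_max / n
    s = math.exp(-r) + math.exp(-r * math.cosh(u_max))
    for k in range(1, n):
        s += (4 if k % 2 else 2) * math.exp(-r * math.cosh(k * h))
    return s * h / 3

def c_exact(r):
    # exact coefficient c_r of Eq. (50)
    return K0(r) / math.sqrt(2 * math.pi)

def c_hat(r):
    # coefficient e^{-r} / (2 sqrt(r)) obtained from Eq. (51)
    return math.exp(-r) / (2.0 * math.sqrt(r))
```

At $r = 10$ the two coefficients already differ by only about 1%, and the agreement improves as $r$ grows, consistent with the $O(1/r)$ correction in (52).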

### 3.2. Error estimates

We examine here the error of the approximation for $\theta(\rho/t,t)$ obtained by keeping only the leading asymptotic term in Proposition 1. Define the normalized error

 (53) $\varepsilon(\rho,t) := e^{\frac1t\left[F(\rho) - \frac{\pi^2}{2}\right]}\left[\theta(\rho/t,t) - \hat\theta(\rho/t,t)\right].$

Using the subleading correction to the asymptotic expansion of Proposition 1, this error term has the leading term

 (54) $\varepsilon(\rho,t) = \frac{1}{4\pi}\, G(\rho)\, \tilde g_2(\rho) + O(t),$

where $\tilde g_2(\rho)$ is given in Eq. (89). The plot of this function is given in the right panel of Figure 6. Using the asymptotics of $G(\rho)$ from Proposition 3 and of $\tilde g_2(\rho)$ from the Appendix, we get that the asymptotic error (54) vanishes as $\rho\to0$ and $\rho\to\infty$. From the plot of this function in Figure 6 we see that it is uniformly bounded for all $\rho \ge 0$.

Figure 5 shows the error term $\varepsilon(\rho,t)$ obtained by numerical evaluation of $\theta(\rho/t,t)$, for several values of $t$. The results are in good qualitative agreement with the asymptotic result (54), which they approach as $t$ decreases. We highlighted the region of $\rho$ shown, as the error is maximal in this region. Numerical stability issues in the evaluation of $\theta(\rho/t,t)$ limited these tests to moderately small values of $t$. The error shows a decreasing trend as $t$ decreases. Assuming that this trend continues all the way down to $t = 0$, these numerical tests suggest an error bound uniform in $\rho$ of the form

 (55) $\sup_{\rho \ge 0} |\varepsilon(\rho,t)| \le C$

for all $t$ sufficiently small.

## 4. Application: The asymptotic distribution of $\frac1t A^{(\mu)}_t$

As an application of the result of Proposition 1, we give here the leading $t\to0$ asymptotics of the distribution of the time average of the geometric Brownian motion, $\frac1t A^{(\mu)}_t$.

The distribution of $\frac1t A^{(\mu)}_t$ can be obtained from Yor's formula (3) by integration over $x$. Introducing a new variable $a = u/t$, this is expressed as the integral

 (56) $P\left(\frac1t A^{(\mu)}_t \in da\right) = \int_{x\in\mathbb{R}} e^{\mu x - \frac12\mu^2 t}\, \exp\left(-\frac{1 + e^{2x}}{2at}\right)$