Simulation of non-Lipschitz stochastic differential equations driven by α-stable noise: a method based on deterministic homogenisation

21 April 2020 · Georg A. Gottwald et al.

We devise an explicit method to integrate α-stable stochastic differential equations (SDEs) with non-Lipschitz coefficients. To mitigate the numerical instabilities caused by unbounded increments of the Lévy noise, we use a deterministic map which has the desired SDE as its homogenised limit. Moreover, our method naturally overcomes difficulties in expressing the Marcus integral explicitly. We present an example of an SDE with a natural boundary, showing that our method respects the boundary whereas the Euler-Maruyama discretisation fails to do so. As a by-product, we devise an entirely deterministic method to construct α-stable laws.


1 Introduction

Stochastic differential equations (SDEs) are frequently used to capture model uncertainty in areas as diverse as finance, engineering, biology and physics. The noise driving the SDE is often introduced heuristically, based on the experience of the modeller. In certain cases, the driving noise is derived by means of functional limit theorems, e.g. in the context of fast-slow systems or weakly coupled systems of distinguished degrees of freedom with an infinite reservoir [25]. Recently, SDEs driven by non-Gaussian noise, in particular by Lévy processes, which involve discontinuous jumps of all sizes, have attracted attention. Anomalous diffusion and Lévy flights are found in diverse systems ranging from biology [15, 71, 57, 23, 7], chemistry [61, 58] and fluid dynamics [65] to climate science [16, 63, 35].

We consider here SDEs of the form

(1.1)   $dX = a(X)\,dt + b(X) \diamond dL_t$,

where $X$ takes values in $\mathbb{R}^d$ and $L_t$ denotes an $m$-dimensional Lévy process. The diamond $\diamond$ denotes that stochastic integrals are to be interpreted in the Marcus sense [50]. (We refer to [4, p. 272] for a discussion of the Marcus integral.) The drift term $a$ and diffusion term $b$ are assumed to be smooth, but we are particularly interested in situations where they are not globally Lipschitz. As is standard in the literature on the numerical analysis of SDEs, we use the word "non-Lipschitz" when referring to terms that are smooth but not globally Lipschitz.

The Marcus interpretation of the stochastic integral in (1.1) is known to arise naturally in SDEs driven by Lévy processes, since it is the integral that transforms according to the usual laws of calculus [4, Theorem 4.4.28]. As such, it plays the same role for Lévy processes as the Stratonovich integral does for Brownian motion. Accordingly, if an SDE driven by a Lévy process is to model a physical system and is therefore derived as a rough limit of an inherently smooth underlying microscopic dynamical system, then one anticipates that the driving noise should be interpreted in the sense of Marcus. Indeed, for deterministic fast-slow systems converging to an SDE driven by a Lévy process, the Marcus interpretation has been proved to prevail [10, 27]. (However, if more than one time-scale is involved, then the noise may be Marcus, Itô or neither [44, 9].)

The numerical simulation of SDEs of the form (1.1) poses three challenges: (i) the Marcus integral, (ii) non-Lipschitz drift and diffusion terms, (iii) the nonexplicit nature of the densities of the increments of the driving Lévy process. These challenges are unrelated and typically require separate attention; some are better understood than others. We present here a method which naturally addresses all three problems simultaneously. Before presenting the ideas behind our method, we discuss each challenge in turn.

(i) Marcus integrals are well-defined but involve cumbersome expressions and sums over infinitely many jumps [4, 12, 9]. In particular, the situation is quite different from the Itô-Stratonovich correction, where one can pass between the Itô and Stratonovich integrals by modifying the drift term. Several methods exist to discretise the Marcus integral (see [5, 31, 20] and references therein). These methods typically exploit the fact that a symmetric Lévy process can be approximated by the sum of a compound Poisson process and a Brownian motion [6]. However, for nonsymmetric Lévy processes, Brownian motion is not able to capture the skewness of the small jumps, presenting further difficulties for the numerical simulation of the corresponding Marcus SDE.
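A concrete way to state the jump behaviour: in the generic notation of (1.1) (the symbols here are ours, not reproduced from the paper's displays), the Marcus solution responds to a jump of the driving process by flowing for unit fictitious time along the vector field scaled by the jump amplitude [4]:

```latex
% Marcus jump rule: if the driving process jumps by \Delta L_t at time t,
% the solution is updated by X_t = \phi(1), where \phi solves
\frac{d\phi}{ds} = b(\phi)\,\Delta L_t, \qquad \phi(0) = X_{t^-}, \qquad s \in [0,1].
% Between jumps the solution follows the drift and the continuous part
% of the noise in the Stratonovich sense.
```

This is why Marcus solutions obey the ordinary chain rule, whereas Itô solutions jump linearly by $b(X_{t^-})\Delta L_t$ and do not.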

(ii) A well-known problem arises when numerically simulating SDEs with non-Lipschitz drift and diffusion terms. To illustrate why this may present a problem, consider the SDE $dX = a(X)\,dt + dW_t$ with constant diffusion and non-Lipschitz drift term $a$, where $W_t$ is Brownian motion. (The nature of the noise is not relevant in the following argument, just that the increments are unbounded.) Its Euler-Maruyama discretisation [51, 54, 40] is given by $X_{n+1} = X_n + a(X_n)\,h + \Delta W_n$ with normally distributed increments $\Delta W_n \sim N(0, h)$. Since such increments are unbounded, for each fixed time step $h$ there is a non-zero probability that an increment is so large as to trigger a numerical instability whereby the iterates explode with alternating sign. In particular, Euler-Maruyama fails to converge strongly in the mean-square sense and also fails to converge weakly to solutions of the SDE [36]. Recently, several numerical methods were designed to overcome the problem of non-Lipschitz drift terms [33, 55, 37, 60, 69, 14, 49, 41, 38]. However, to the best of our knowledge, no methods have been designed to treat non-Lipschitz diffusion terms.
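The instability described above is easy to reproduce. The sketch below is an illustration with a cubic drift (our choice, not an example from the paper): plain Euler iteration of $dX = -X^3\,dt$ explodes with alternating sign at a fixed step size, while the tamed increment $h\,a(x)/(1+h|a(x)|)$ used by taming methods stays bounded by construction.

```python
# Illustration (not from the paper): Euler iteration of dX = -X^3 dt can
# explode with alternating sign at a fixed step size, while the tamed
# update x + h a(x) / (1 + h |a(x)|) keeps every increment bounded by 1.

def drift(x):
    return -x**3

def euler(x0, h, nsteps):
    x = x0
    for _ in range(nsteps):
        if abs(x) > 1e100:      # diverged: stop before float overflow
            break
        x = x + h * drift(x)
    return x

def tamed_euler(x0, h, nsteps):
    x = x0
    for _ in range(nsteps):
        a = drift(x)
        x = x + h * a / (1.0 + h * abs(a))   # increment magnitude < 1
    return x

x_plain = euler(10.0, 0.1, 20)
x_tamed = tamed_euler(10.0, 0.1, 20)
print(abs(x_plain) > 1e10, abs(x_tamed) < 50)
```

Starting from the same point, the plain scheme overshoots the origin by a growing amount at each step, while the tamed scheme relaxes slowly towards the stable equilibrium.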

(iii) The increments of Brownian motion are normally distributed with density function given by the well-known Gaussian formula. For the increments of Lévy processes, the densities are not given explicitly in general. Various methods have been devised that numerically generate the desired probability densities [8]. Of the three issues we have mentioned, this is the only one that could be said to be completely resolved, though even here there is the question of combining it with methods dealing with issues (i) and (ii).
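The direct simulation route mentioned here can be made concrete: the Chambers-Mallows-Stuck transform generates an α-stable sample from one uniform and one exponential variate. The following sketch implements the symmetric case only; it is a standard alternative, not the paper's deterministic construction.

```python
# Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables
# (standard method; a stand-in for, not part of, the deterministic
# construction developed in this paper).
import math, random

def symmetric_stable(alpha, rng=random.Random(0)):
    """One draw from a standard symmetric alpha-stable law, alpha != 1."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)   # uniform phase
    e = rng.expovariate(1.0)                     # unit-mean exponential
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos(u - alpha * u) / e) ** ((1.0 - alpha) / alpha))

samples = [symmetric_stable(1.5) for _ in range(100_000)]
samples.sort()
median = samples[len(samples) // 2]
print(abs(median) < 0.05)   # symmetric law: sample median is near 0
```

The heavy tails show up immediately: unlike Gaussian samples, occasional draws are orders of magnitude larger than the bulk, which is precisely the source of the numerical difficulties in challenge (ii).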

To bypass the cumbersome direct approximation of the Marcus integral and the difficulties associated with nonsymmetric Lévy processes mentioned above, and to avoid the problem of unbounded noise increments, we propose an entirely deterministic method, based on homogenisation, to integrate SDEs of the form (1.1). In particular, we use the fact that a discrete deterministic fast-slow system reduces, in the limit of infinite time-scale separation, to an SDE [27, 39, 11, 10]. In the case of intermittent fast dynamics, the resulting SDE is driven by a Lévy process; moreover, the noise is of Marcus type [27, 10]. We employ statistical limit theorems to design an explicit fast intermittent map and an explicit observable of the fast dynamics that yield α-stable increments with user-specified parameters for the driving Lévy process. The jumps of the Lévy process are approximated by many small jumps generated by the fast dynamics. Since the fast dynamics evolves on a compact set, these increments are naturally bounded, which mitigates the numerical instability caused by the non-Lipschitz terms.

The paper is organised as follows. We review the definitions of α-stable laws in Section 2 and provide algorithms to deterministically generate α-stable laws, together with numerical illustrations of their accuracy. Section 3 contains the corresponding material for Lévy processes. Section 4 constitutes the main result of our work and introduces the numerical method to integrate SDEs driven by a Lévy process using deterministic homogenisation. Two examples of scalar SDEs are used to illustrate the method. In Example 1, our results are in line with Euler-Maruyama discretisation (with taming). However, Example 2 has a natural boundary at the origin, which is treated correctly by our method but not by the Euler-Maruyama method. The proofs for our methods are provided in Section 5. We conclude with a discussion and an outlook in Section 6.

2 Generating α-stable laws

In this section, we show how to generate stable laws deterministically. In Subsection 2.1, we review the definitions. In Subsection 2.2, we describe the Thaler map which will be used to generate the fast intermittent dynamics. Our numerical algorithm for generating stable laws is presented in Subsection 2.3. Numerical illustrations of its accuracy are given in Subsection 2.4.

2.1 Definition of stable laws

A random variable $X$ is called a (strictly) stable law if for each $n$ there exist constants $c_n > 0$ such that independent copies $X_1, \dots, X_n$ of $X$ satisfy $X_1 + \cdots + X_n =_d c_n X$. Stable laws are completely classified, see [21, 4]. If $X$ is normally distributed with mean zero, then $X$ is stable and we can take $c_n = n^{1/2}$. We are interested here in the non-Gaussian case.

There are various parameters (with various notational conventions). The most important is the stability parameter or scaling exponent $\alpha \in (0,2]$. A suitable choice of the constants is then given by $c_n = n^{1/\alpha}$. The case $\alpha = 2$ corresponds to the normal distribution described above, while $\alpha = 1$ corresponds to the Cauchy distribution, which is a special case that we do not consider in this paper. We restrict attention to the remaining stable laws, with $\alpha \in (0,1) \cup (1,2)$, whose characteristic function is given by

$\mathbb{E}\,e^{itX} = \exp\bigl\{-b\,|t|^{\alpha}\bigl(1 - i\beta \operatorname{sgn}(t) \tan\tfrac{\pi\alpha}{2}\bigr)\bigr\},$

where $b > 0$ is a scale parameter and $\beta \in [-1,1]$ a skewness parameter. Such stable laws have finite moments of order $p$ for $p < \alpha$ and infinite moments of order $p \ge \alpha$. In the case $\alpha \in (1,2)$, the stable law is centered, i.e. $\mathbb{E}X = 0$. A stable law is called one-sided (or totally skewed) if $\beta = \pm 1$ and symmetric if $\beta = 0$.

Remark 2.1

It follows from the definitions that if $X$ is an α-stable law with scale $b$ and skewness $\beta$, then $-X$ is α-stable with the same $\alpha$ and $b$ and with skewness $-\beta$.

2.2 The Thaler map

In this section, we show how to generate all stable laws of the type described above, with $\alpha \in (0,1) \cup (1,2)$, $b > 0$, $\beta \in [-1,1]$, using a deterministic dynamical system. In particular, we shall use observables of maps introduced in the study of intermittency by Pomeau & Manneville [56]. Particularly convenient for our purposes is the family of maps considered by Thaler [67, 68]

(2.1)

The map depends on a real parameter. Let $x^\ast \in (0,1)$ be the unique solution to the equation

(2.2)

There are two branches, defined on the intervals $[0, x^\ast]$ and $[x^\ast, 1]$. See Figure 1 for a depiction of the Thaler map.

Figure 1: Thaler map, showing the two branches with domains $[0, x^\ast]$ and $[x^\ast, 1]$, where $x^\ast$ is the solution of (2.2).
Remark 2.2

A useful alternative expression for the Thaler map is

From this it is clear that the map has two increasing full branches and that $x^\ast$ is given by the formula mentioned above.

Unlike other intermittent maps, such as the map considered by Manneville [48] or the Liverani-Saussol-Vaienti map [46], the Thaler map admits analytic expressions both for the map itself and for its invariant density. In particular, for each admissible parameter value there exists a unique invariant probability density given by

(2.3)

Outside this parameter range, the density is still well-defined and invariant, but it is nonintegrable, so the corresponding invariant measure is infinite. At one extreme of the parameter range, the Thaler map reduces to the uniformly expanding doubling map, with (2.3) corresponding to Lebesgue measure on the unit interval; there, correlations decay exponentially. Otherwise, the Thaler map is nonuniformly expanding with a neutral fixed point at $x = 0$, and correlations decay algebraically [34, 73]; the rate is sharp by [29, 62]. This slowing of the decay of correlations as the parameter increases is caused by the trajectory spending prolonged times near the neutral fixed point at $x = 0$. Figure 2 shows a trajectory in which the laminar dynamics near $x = 0$ is clearly visible.
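The laminar behaviour near the neutral fixed point can be illustrated numerically. Since the explicit formula (2.1) is not reproduced above, the sketch uses the closely related Manneville map $x \mapsto x + x^{1+\gamma} \pmod 1$ [48], which has the same type of neutral fixed point at $x = 0$; the parameter $\gamma = 0.7$ and the threshold $0.05$ are illustrative choices.

```python
# Laminar ("intermittent") dynamics near the neutral fixed point x = 0,
# illustrated with the Manneville map x -> x + x^(1+gamma) mod 1 [48];
# the Thaler map (2.1) used in the paper behaves similarly near x = 0
# but additionally has an explicit invariant density.

def manneville(x, gamma=0.7):
    return (x + x ** (1.0 + gamma)) % 1.0

# Escape time from the laminar region [0, 0.05) starting close to 0:
x, steps = 1e-4, 0
while x < 0.05:
    x = manneville(x)
    steps += 1

# The continuous-time approximation dx/dn = x^(1+gamma) predicts an
# escape time of order x0^(-gamma)/gamma, i.e. hundreds of iterates here.
print(steps)
```

This trapping for hundreds of iterates near $x = 0$ is exactly the mechanism behind the slow, algebraic decay of correlations described above.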

Figure 2: Time series for the Thaler map, showing prolonged laminar episodes near the neutral fixed point at $x = 0$.

The above discussion shows that correlations are summable precisely when the decay rate is fast enough, leading to the following central limit theorem (CLT). Let $v$ be a Hölder observable with mean zero with respect to the invariant probability measure $\mu$. Define the Birkhoff sum $v_n = \sum_{j=0}^{n-1} v \circ T^j$ and the variance $\sigma^2$ (typically nonzero) via the Green-Kubo formula $\sigma^2 = \int v^2 \, d\mu + 2 \sum_{j=1}^{\infty} \int v \, (v \circ T^j) \, d\mu$. Regarding $\{v_n\}$ as a family of random variables on the probability space $([0,1], \mu)$ (where the randomness lies solely in the initial condition), it follows from Liverani [45] that the CLT holds: $n^{-1/2} v_n \to_d N(0, \sigma^2)$.

When correlations are not summable, the CLT breaks down for observables that "see" the neutral fixed point at $x = 0$, i.e. those with $v(0) \neq 0$. Heuristically, the reason for this is that the Birkhoff sum experiences ballistic, almost linear behaviour in the laminar region near $x = 0$, and the many small contributions accumulated there form a single large jump incompatible with the CLT. Indeed, Gouezel [28] (see also [74]) proved that in this regime the CLT is replaced by a one-sided stable limit law.

For parameter values where the density is not integrable, the normalised Birkhoff sums do not converge in distribution to a stable law. However, the method in [30] reduces, via inducing [52], to an "induced" system on a subset $Y$. A calculation using (2.2) shows that the density is integrable on $Y$ for all parameter values. Hence we can define a probability measure $\mu_Y$ on $Y$ by normalising the density there. For the induced system on the probability space $(Y, \mu_Y)$, convergence to stable laws was studied in [2] and holds in the full range of $\alpha$.

To prove convergence to stable laws in this section and to Lévy processes in Section 3, we use the induced system on $Y$, and hence are able to deterministically generate α-stable random variables and processes for the full range $\alpha \in (0,1) \cup (1,2)$. However, for our main application to SDEs in Section 4, we have to work with the full system on the unit interval, and hence our results there are restricted to $\alpha \in (1,2)$.

The aim in this section is to specify appropriate observables of the Thaler map leading to stable laws as limits in distribution.

2.3 Numerical algorithm for generating stable laws

We begin by describing how to generate one-sided stable laws, i.e. those with $\beta = \pm 1$, for which all jumps are in the same direction (positive or negative).

Fix the parameters and consider the Thaler map (2.1) with the corresponding parameter value. Define the set $Y$ in terms of the solution $x^\ast$ of (2.2). Starting with a randomly chosen initial condition $y_0 \in Y$ (random with respect to the invariant density in (2.3) restricted to $Y$), we compute the iterates of the map, noting the return times to $Y$. More precisely, let $\tau_1$ be least such that $T^{\tau_1} y_0 \in Y$. Then let $\tau_2$ be least such that $T^{\tau_1 + \tau_2} y_0 \in Y$. Inductively, once $\tau_1, \dots, \tau_{j-1}$ are defined, we let $\tau_j$ be least such that $T^{\tau_1 + \cdots + \tau_j} y_0 \in Y$. Note that $\tau_1, \tau_2, \dots$ is a sequence of random variables, where the randomness originates from the choice of $y_0$.

Define

(2.4)

where

(2.5)
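The return-time construction above can be sketched as follows. The Manneville map and the set $Y = [0.05, 1)$ below are illustrative stand-ins of our choosing for the Thaler map and the inducing set of this section.

```python
# Sketch of the return-time construction: iterate an intermittent map and
# record successive return times tau_j to a reference set Y. The
# Manneville map and Y = [0.05, 1) are stand-ins for the Thaler map and
# the inducing set of Section 2.3.

def manneville(x, gamma=0.7):
    return (x + x ** (1.0 + gamma)) % 1.0

def return_times(x0, n_returns, lo=0.05):
    taus, x = [], x0
    while len(taus) < n_returns:
        tau = 0
        while True:
            x = manneville(x)
            if x == 0.0:
                x = 1e-12   # guard against landing exactly on the fixed point
            tau += 1
            if x >= lo:     # returned to Y
                break
        taus.append(tau)
    return taus

taus = return_times(0.3, 200)
print(len(taus), max(taus))
```

Most returns take a single iterate; the occasional long excursion into the laminar region produces the heavy-tailed return times that drive the stable limit law.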
Theorem 2.3

Fix . Then

That is,

for all .

Remark 2.4

By Remark 2.1, we can use Theorem 2.3 to generate all one-sided α-stable laws.

We now extend to the case of general (two-sided) stable laws with $\alpha \in (0,1) \cup (1,2)$, $b > 0$, $\beta \in [-1,1]$. Again, we can suppose without loss of generality that the scale is normalised.

Let $\tau_1, \tau_2, \dots$ be the sequence of random variables defined above. Also, define the random variable $\delta$ taking the values $\pm 1$ with probabilities determined by the skewness parameter $\beta$, and let $\delta_1, \delta_2, \dots$ be a sequence of independent copies of $\delta$.

Theorem 2.5

Fix , . Then

Theorems 2.3 and 2.5 are proved in Section 5.

2.4 Numerical results for stable laws

We now illustrate that the algorithms described in Theorems 2.3 and 2.5 are able to reliably construct α-stable laws. The Thaler map is iterated for as many times as it takes to produce $N$ return times for some specified $N$. These data are then fed into the constructions of Theorems 2.3 and 2.5. Note that the required number of iterates of the map is $\tau_1 + \cdots + \tau_N$, which depends on the initial condition $y_0$; the latter is chosen randomly using the invariant density given by (2.3), restricted to $Y$.

In Figure 3, we compare the results of our deterministic algorithm for approximating the probability density of α-stable laws with a direct numerical routine (we used the function stblpdf from the software package STABLE [59]), for three values of the skewness parameter. The two methods agree very well. The deterministically generated stable law was estimated from a large number of realisations (i.e. different initial conditions $y_0$) with a fixed data length $N$. To achieve data of the desired length $N$, the Thaler map had to be iterated many more times on average, with the largest number of iterations varying considerably across realisations.

Next we consider an example with a smaller value of $\alpha$. Figure 4 shows the probability density of α-stable laws for three values of the skewness parameter. Due to the higher probability of experiencing large jumps for smaller $\alpha$, the number of iterations of the Thaler map needed to generate an induced time series of a given length is much larger than in the previous example, and the largest number of iterations across realisations is correspondingly larger.

Figure 3: Probability density functions for α-stable laws with three choices of the skewness parameter (top, middle, bottom). The blue curve (open circles) uses the function stblpdf from the software package STABLE [59]; the red continuous line shows the splined empirical histogram of the deterministic induced dynamics.
Figure 4: Probability density functions for α-stable laws (smaller value of $\alpha$) with three choices of the skewness parameter (top, middle, bottom). The blue curve (open circles) uses the function stblpdf from the software package STABLE [59]; the red continuous line shows the splined empirical histogram of the deterministic induced dynamics.
Remark 2.6

We expect that rigorous error rates can be obtained in Theorems 2.3 and 2.5, and that these rates will be poorest as $\alpha$ approaches $1$ and $2$ from below. Indeed, it is well known, even for sums of i.i.d. random variables, that rates of convergence to an α-stable law are slow for $\alpha$ close to $1$ and close to $2$. Indicative upper bounds on rates of convergence for the distribution functions (ignoring logarithmic factors) are given in [13, 32], with improvements in special cases. Similar estimates in a deterministic setting that is almost the same as the one here can be found in [66]. Further work would be required to estimate the implied "big O" constant. We do not address these issues further here.

3 Generating α-stable Lévy processes

Given an α-stable law $X$, we define the corresponding α-stable Lévy process $L = \{L_t\}$ to be the càdlàg process with independent stationary increments such that $L_1 =_d X$.

The next result shows how to generate α-stable Lévy processes with $\alpha \in (0,1) \cup (1,2)$, $b > 0$, $\beta \in [-1,1]$. For the proof, see Section 5.

Theorem 3.1

Assume the setup of Theorem 2.5. Define

Then the process so defined converges weakly to the α-stable Lévy process $L$ in the space of càdlàg paths as $N \to \infty$.

By Remark 2.1, we can obtain all α-stable Lévy processes in this way. In particular, taking $\beta = \pm 1$, we obtain the processes corresponding to the one-sided stable laws in Theorem 2.3.

As in Section 2.3, weak convergence is understood with respect to the probability measure on the inducing set $Y$. Convergence holds in the Skorohod topology on the space of càdlàg paths [64, 72].
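The path construction behind Theorem 3.1 (a càdlàg process built from normalised partial sums of heavy-tailed increments) can be sketched as follows. The i.i.d. Pareto-tailed increments here are a stand-in of our choosing; in the paper the increments come from the induced deterministic dynamics.

```python
# Building a cadlag approximation of a Levy path from increments: the
# process t -> W_N(t) is the normalised partial sum of the first [Nt]
# increments, held constant between jumps. The i.i.d. increments below
# are an illustrative stand-in for the paper's deterministic increments.
import math, random

rng = random.Random(1)
alpha, N = 1.5, 1000
# Symmetrised Pareto increments lie in the domain of attraction of an
# alpha-stable law:
incs = [rng.paretovariate(alpha) * rng.choice((-1, 1)) for _ in range(N)]

def W(t):
    """Value of the cadlag partial-sum process at time t in [0, 1]."""
    k = min(int(N * t), N)
    return sum(incs[:k]) / N ** (1.0 / alpha)

path = [W(j / 200) for j in range(201)]
print(path[0], len(path))
```

The $N^{-1/\alpha}$ normalisation is the stable-law analogue of the $N^{-1/2}$ scaling in Donsker's theorem; the occasional large increment survives the scaling and appears as a macroscopic jump in the path.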

Figure 5 shows sample trajectories of Lévy processes for fixed stability and scale parameters and various values of the skewness, using the induced deterministic dynamics.

Figure 5: Sample paths of Lévy processes for three choices of the skewness parameter (top, middle, bottom).

4 Numerical integration of SDEs using homogenisation

In this section we show how to simulate Marcus SDEs of the form (1.1) with non-Lipschitz drift and diffusion terms driven by multiplicative Lévy noise.

The case of "exact" multiplicative noise, in which the diffusion term is such that a change of coordinates $Y = g(X)$ transforms (1.1) into an SDE for $Y$ with constant diffusion term, was studied in [27]. In principle, $Y$ can then be computed by existing methods [33, 55, 37, 60, 69, 14, 49, 41, 38], and $X$ is recovered via the inverse transformation $X = g^{-1}(Y)$.

In higher dimensions, exactness is a very restrictive condition. Even in the scalar case, the method above is not useful when the diffusion term vanishes, as in the examples below. Hence our aim is to devise a numerical method that does not rely on exactness.

Our method in this section uses the full Thaler map, for which the density in (2.3) defines a finite measure only for part of the parameter range. Theorem 4.1 below does not hold in the infinite measure setting, and hence in this section we restrict to the range $\alpha \in (1,2)$. (In contrast, our methods in Sections 2 and 3 involve returns to the set $Y$, on which the measure restricts to a finite measure for all parameter values.)

Throughout this section we work with the invariant probability measure corresponding to the normalised density

(4.1)

4.1 Numerical algorithm for solving SDEs

In this paper, we focus on solving SDEs of the type (1.1) in the scalar case. The theoretical basis [10] behind the method applies in general dimensions. However, in practice one would need to consider Thaler-type maps with multiple fixed points and to construct higher-dimensional processes converging to the appropriate driving Lévy process as in Section 3. Since these preliminary steps have so far been carried out only in the scalar case, we restrict to that case here.

Consider the SDE (1.1) in the scalar case, with drift term $a$ and diffusion term $b$, driven by an α-stable Lévy process with prescribed parameters. Let $T$ be the Thaler map (2.1) with the corresponding parameter value. We define a sequence of observables, where the first is the mean zero observable given by

and

Here, the return times are as in (2.4), and the signs are independent copies of the random variable introduced in Section 2.3. (In particular, the sign random variable is updated only when the trajectory visits the set $Y$ and is unchanged during the laminar phase.)

We can now state our main result (see Section 5 for the proof).

Theorem 4.1

Let $a$ and $b$ be sufficiently smooth, and fix an initial condition. Let

(4.2)

where the observables are as defined above, and define the corresponding càdlàg process on the slow time scale. Then this process converges weakly, on the probability space $([0,1], \mu)$, to the solution $X$ of the Marcus SDE (1.1).

Remark 4.2

We refer to equation (4.2) as a fast-slow map. Indeed, in the simplest case, Theorem 4.1 ensures that solutions of the fast-slow system

converge weakly to solutions of the SDE (1.1) on the slow time scale. In the general case, there is a similar but more complicated interpretation that is used in the proof of Theorem 4.1 in Section 5.3.
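The structure of the fast-slow coupling can be sketched as follows. This is schematic only: the Manneville fast map, the bounded observable $v(y) = y - 0.5$ and all parameter values below are illustrative stand-ins of our choosing for the Thaler map and the observables of Theorem 4.1.

```python
# Schematic fast-slow integration of dX = a(X) dt + b(X) <> dL: the slow
# variable is advanced with bounded increments supplied by a deterministic
# fast map. A sketch of the structure of (4.2) only; the paper's observable
# and the Thaler map are as specified in Sections 2-4.
import math

def manneville(y, gamma=0.7):
    return (y + y ** (1.0 + gamma)) % 1.0

def a(x):            # non-Lipschitz drift (double-well style)
    return x - x ** 3

def b(x):            # multiplicative diffusion
    return 0.5 * x

def fast_slow(x0, y0, n, alpha=1.5, t_final=1.0):
    x, y = x0, y0
    for _ in range(int(n * t_final)):      # n steps per unit slow time
        v = y - 0.5                        # bounded stand-in observable
        # drift enters at order 1/n, noise at order n^(-1/alpha):
        x = x + a(x) / n + b(x) * v / n ** (1.0 / alpha)
        y = manneville(y)
    return x

x_end = fast_slow(1.0, 0.3, n=10_000)
print(math.isfinite(x_end))
```

The key point of the construction survives even in this toy version: because the fast variable lives on a compact set, every increment of the slow variable is bounded, so the non-Lipschitz terms cannot trigger the explosion seen with unbounded Gaussian or Lévy increments.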

Remark 4.3

The topology used for the weak convergence in Theorem 4.1 is too technical to define here and we refer to [10]. It is weaker than the Skorohod topology, but sufficiently strong to guarantee convergence in the sense of joint distributions. That is, the approximating process evaluated at any finite collection of fixed times converges jointly in distribution to the SDE solution evaluated at those times.

Remark 4.4

By results of [19, 75] (see in particular [11, Example 1.1]), the initial conditions can be equally well (from the theoretical point of view of Theorem 4.1) chosen using the invariant probability measure $\mu$ or the uniform Lebesgue measure. We have checked numerically, in the case of trivial drift and constant diffusion (corresponding to generation of a Lévy process), that convergence of the probability density at a fixed time is faster if the initial conditions are drawn using $\mu$.

Hence throughout this section, when applying the fast-slow map (4.2), we work with initial conditions drawn using the invariant probability measure $\mu$. The explicit formula for the density in (4.1) is less helpful here due to its singular behaviour near the neutral fixed point at $x = 0$. To circumvent this, we propagate uniformly distributed initial conditions under a large number of iterations of the Thaler map and then work with the resulting points as initial conditions.

4.2 Numerical results for solving SDEs

To illustrate our method, we consider the dynamics of a particle in a double-well potential driven by a Lévy process

(4.3)

with drift term given by the negative derivative of a double-well potential $V$. We consider two specific examples with non-Lipschitz drift and diffusion terms. In the first example, our approach is in good agreement with conventional methods. The second example possesses a natural boundary which is better treated by the deterministic method presented in this paper.

Example 1: Consider the SDE (4.3) with potential and diffusion terms

This example was considered in [43], where the stochastic forcing was a compound Poisson process. Note that both the drift and the diffusion terms are non-Lipschitz. We use the potential parameters from [43] together with a fixed strength for the diffusion, and prescribed stability, scale and skewness parameters for the driving Lévy process.

Theorem 4.1 implies in particular convergence in distribution of the fast-slow approximation to the stochastic process at each fixed time. We test this numerically by generating the probability density function of the solution at a fixed time via (i) existing methods based on Euler-Maruyama discretisation and (ii) our theorem. The results are shown in Figure 6.

First we describe method (i). The non-Lipschitz diffusive term can be removed by a change of coordinates. The transformed SDE is

(4.4)

where the transformed drift term now has a singularity corresponding to the zero of the diffusion term. For the parameter values above, it turns out that the singularity lies outside the range where the probability density function is significantly different from zero, and is relatively harmless. The transformed SDE (4.4) can now be solved with an Euler-Maruyama type scheme with time step $h$. To account for the non-Lipschitz drift term, we apply the taming method [37, 60] and discretise according to

$Y_{n+1} = Y_n + \frac{h\,\tilde a(Y_n)}{1 + h\,|\tilde a(Y_n)|} + \Delta L_n,$

where $\tilde a$ denotes the transformed drift and $\Delta L_n$ the increments of the driving Lévy process over one time step. Finally, we transform back to recover the solution to the original SDE (4.3). In Figure 6, we applied the Euler-Maruyama method with a small time step, averaged over many realisations of the driving Lévy noise, starting from a fixed initial condition.

Method (ii) consists of applying Theorem 4.1 directly to the non-transformed SDE. Figure 6 shows the empirical distribution, averaged again over many realisations (as explained in Remark 4.4), for various values of the time-scale parameter with a fixed initial condition. The convergence of the probability density function obtained by iterating the fast-slow map (4.2) and applying Theorem 4.1 is clearly seen.

Example 2: Consider now the SDE (4.3) with potential and diffusion terms

We take prescribed stability, scale and skewness parameters for the driving Lévy process. There is a natural boundary at $x = 0$: for positive initial conditions, the stochastic process remains strictly positive for all times with probability $1$. This is readily seen by writing the SDE in the form $dX = X\,(\tilde a(X)\,dt + \tilde b(X) \diamond dL_t)$ for suitable functions $\tilde a$, $\tilde b$. Since the Marcus integral satisfies the standard laws of calculus, solutions satisfy $X_t = X_0 \exp\{\int_0^t \tilde a(X_s)\,ds + \int_0^t \tilde b(X_s) \diamond dL_s\}$. Hence the sign of the initial condition is preserved.
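As a sketch in generic notation (the functions $\tilde a$, $\tilde b$ here are illustrative placeholders, not taken from the paper's displays), the chain-rule argument behind the positivity reads:

```latex
% Writing the SDE in the multiplicative form
%   dX = X\,\bigl(\tilde a(X)\,dt + \tilde b(X) \diamond dL_t\bigr),
% the Marcus chain rule applied to \log X gives
d(\log X_t) = \tilde a(X_t)\,dt + \tilde b(X_t) \diamond dL_t ,
% so that
X_t = X_0 \exp\Bigl\{ \int_0^t \tilde a(X_s)\,ds
      + \int_0^t \tilde b(X_s) \diamond dL_s \Bigr\},
% and the sign of X_0 is preserved for all t.
```

Note that this argument uses the Marcus interpretation in an essential way: for an Itô integral the chain rule acquires correction terms and the exponential representation fails.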

Again, we compare the two methods (i) Euler-Maruyama and (ii) Theorem 4.1. As shown below, Euler-Maruyama fails to deal adequately with the natural boundary at , whereas Theorem 4.1 respects this boundary.

To apply Euler-Maruyama, we again start by removing the non-Lipschitz diffusion term via a change of coordinates. The transformed SDE is

(4.5)

When discretising the transformed SDE (4.5) using an Euler-Maruyama scheme, however, large increments lead to spurious crossings of the natural boundary at $x = 0$. This does not occur for our deterministic method, which applies Theorem 4.1 directly to the non-transformed SDE. We show in Figure 7 the probability density function of the solution at a fixed time, obtained from the empirical distribution of the fast-slow approximation for several values of the time-scale parameter, averaged over many realisations with a fixed positive initial condition. The corresponding probability density function for an Euler-Maruyama discretisation with small time step is shown as well. Whereas the empirical density obtained from the fast-slow map converges to a unimodal probability density function, the probability density function obtained from the Euler-Maruyama discretisation exhibits significant leakage into the region $x < 0$.

Figure 6: Probability density function for the solution to the SDE in Example 1 at a fixed time. Results for the fast-slow map (4.2) are shown for several values of the time-scale parameter and are compared with the Euler-Maruyama discretisation.
Figure 7: Probability density function for the solution to the SDE in Example 2 at a fixed time. Results for the fast-slow map (4.2) are shown for several values of the time-scale parameter and are compared with the Euler-Maruyama discretisation. The inset shows a zoom near the natural boundary at $x = 0$ for the probability density function obtained from the fast-slow map (4.2).

We end with a few comments on numerical issues arising when iterating the fast-slow map (4.2). The smallness of the time-scale parameter requires long simulations, as the convergence is on the slow time scale. As a result, the fast dynamics may get trapped on a spurious periodic orbit caused by the discreteness of floating-point numbers. To avoid this, we occasionally add a normally distributed random number with mean zero and small variance. This perturbation is added each time the fast orbit enters the hyperbolic region and has undergone a prescribed minimal number of iterations since the previous perturbation; this ensures that the superdiffusive statistics are not altered by the addition of the small perturbation.

4.3 Numerical results on the stationary density and the auto-correlation function

Moving beyond the theoretical justification provided by Theorem 4.1, in this subsection we show that our method also provides a good approximation of the stationary density, as estimated from long simulations, and captures temporal statistics.

Figure 8 shows the stationary density for the SDE in Example 1. Again we compare (i) the Euler-Maruyama discretisation and (ii) Theorem 4.1. For Euler-Maruyama, we generate a long time series with a small time step, sampled at regular intervals. The results from the deterministic fast-slow map (4.2) are shown to converge as the time-scale parameter decreases, although there are spurious narrow peaks to the left and right of the large peaks associated with the minima of the potential $V$. The spurious peaks decrease in size and move further away from the relevant part of the stationary measure as the time-scale parameter decreases. They are caused by unstable fixed points of the fast-slow map (4.2), which leave the relevant region in the limit.

Figure 9 shows the stationary density in Example 2, obtained using the fast-slow map (4.2) from long simulations for several values of the time-scale parameter, sampled at regular intervals. We show the relevant part of the stationary density as well as the tails. We again observe spurious narrow peaks in the tails, caused by fixed points of the slow map associated with the values of the slow variable that are frozen while the fast dynamics lingers near the neutral fixed point, where it spends most of its time. These spurious fixed points move away from the relevant region as the time-scale parameter decreases. We remark that the Euler-Maruyama discretisation leads to a bimodal stationary density, rather than to a unimodal stationary density supported on the positive half-line.

Figure 8: Stationary density for the SDE in Example 1. Results for the fast-slow map (4.2) are shown for several values of the time-scale parameter and are compared with the Euler-Maruyama discretisation.
Figure 9: Stationary density for the SDE in Example 2. Results for the fast-slow map (4.2) are shown for several values of the time-scale parameter. Left: relevant range. Right: close-up of the spurious peaks of the stationary density computed using (4.2).

Moreover, our method is able to resolve temporal statistics of the underlying SDE. In Figure 10, we compute the normalised auto-correlation function

of solutions to the SDE in Example 1 using the fast-slow map (4.2) for various values of the time-scale parameter. It is seen that the auto-correlation function converges to the reference auto-correlation function estimated from the time series obtained using the Euler-Maruyama method. The auto-correlation function is estimated using the same data used to obtain Figure 8. We remark that whereas a relatively large time step was sufficient to obtain the stationary density shown in Figure 8 using the Euler-Maruyama discretisation, the estimation of the auto-correlation function requires a considerably smaller time step, making Euler-Maruyama schemes more costly if temporal statistics must be resolved.
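The normalised auto-correlation estimator used for figures of this kind can be sketched as follows; the deterministic stand-in series below is ours, chosen only so that the sketch is self-contained.

```python
# Estimating the normalised auto-correlation C(k) = cov(X_0, X_k)/var(X)
# from a time series (illustrative stand-in data, not SDE output).
def autocorr(series, k):
    n = len(series)
    m = sum(series) / n
    var = sum((s - m) ** 2 for s in series) / n
    cov = sum((series[i] - m) * (series[i + k] - m)
              for i in range(n - k)) / (n - k)
    return cov / var

# Deterministic stand-in series on [0, 1):
data = [((i * 761) % 1000) / 1000 for i in range(5000)]
print(autocorr(data, 0))   # lag 0 gives exactly 1 by construction
```

For SDE output, `series` would be the sampled trajectory; the lag $k$ is then converted to physical time by multiplying by the sampling interval.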

Figure 10: Auto-correlation function of solutions of the SDE in Example 1, estimated from the fast-slow map (4.2) for several values of the time-scale parameter, and from a direct discretisation using the Euler-Maruyama method as a reference.

5 Proof of convergence of the algorithms

In this section we prove Theorems 2.3, 2.5, 3.1 and 4.1.

5.1 Background on Gibbs-Markov maps

We begin by defining the notion of a Gibbs-Markov map, following [1, 2, 3]. Suppose that $(Y, \mu)$ is a probability space with an at most countable measurable partition $\{Y_j\}$, and let $F : Y \to Y$ be a measure-preserving transformation. We say that $F$ is full-branch if $F|_{Y_j} : Y_j \to Y$ is a measurable bijection for each $j$.

For $y, y' \in Y$, define the separation time $s(y, y')$ to be the least integer $n \ge 0$ such that $F^n y$ and $F^n y'$ lie in distinct partition elements. It is assumed that the partition separates trajectories, so $s(y, y') = \infty$ if and only if $y = y'$.

Definition 5.1

A full-branch measure-preserving transformation $F$ is called a Gibbs-Markov map if it satisfies the following bounded distortion condition: there exist constants $C > 0$, $\theta \in (0,1)$ such that the potential function $p = \log \frac{d\mu}{d(\mu \circ F)}$ satisfies

$|p(y) - p(y')| \le C\,\theta^{s(y, y')}$

for all $y, y'$ lying in the same partition element.

An observable is called locally constant if it is constant on partition elements.

Theorem 5.2 (Aaronson & Denker)

Let $F$ be a Gibbs-Markov map with invariant probability measure $\mu$ and let $v$ be a locally constant observable. Suppose that

where , , .

Then

on the probability space , where

with as in (2.5), and

Proof.

This is a special case of [2]. ∎

Remark 5.3

The constraints in Theorem 5.2 guarantee that the observable lies in the domain of attraction of the stable law. That is, if the summands were i.i.d. copies of the observable, then the normalised Birkhoff sums would converge in distribution to the stable law. Theorem 5.2 guarantees that this remains true even though the increments are not independent in general.

Theorem 5.4 (Tyran-Kamińska)

Assume the setup of Theorem 5.2 and define the corresponding sequence of càdlàg processes on the probability space $(Y, \mu)$. Then these processes converge weakly to the α-stable Lévy process in the Skorokhod topology as $n \to \infty$.

Proof.

This is a special case of [70]. ∎

5.2 Induced Thaler maps

Let $T$ be a Thaler map as defined in (2.1). For each parameter value, there is a unique (up to scaling) $\sigma$-finite absolutely continuous invariant measure with density as in (2.3); this measure is finite if and only if the parameter lies in the finite-measure range identified in Section 2.2.

Let $Y$ be the inducing set from Section 2.3. We consider the first return time $\tau(y) = \inf\{n \ge 1 : T^n y \in Y\}$ and the first return map $F = T^{\tau} : Y \to Y$. We refer to $F$ as the induced Thaler map. The normalised restriction $\mu_Y$ of the invariant measure to $Y$ is $F$-invariant and ergodic.

Proposition 5.5

For each parameter value, the tail of the return time satisfies a power law, $\mu_Y(\tau > n) \sim c\,n^{-\alpha}$ as $n \to \infty$, where $c$ is an explicit constant.

Proof.

Step 1: Let $(x_n)$ be the decreasing sequence in the unit interval determined by the left branch, with the left branch mapping $x_{n+1}$ to $x_n$. Note that the return time is determined by the position relative to this sequence. Let $g$ be the inverse of this branch and write

Then