# On the limit behavior of iterated equilibrium distributions for the Gamma and Weibull families

In this paper, we study the evolution of iterated equilibrium distributions for the Gamma and Weibull families of distributions as the iteration step increases. We characterize their moments and the pointwise limit of the distribution functions corresponding to the iterated distributions. As a byproduct, we obtain approximations for higher order moments of the residual lifetime.

07/30/2018


## 1 Introduction

Iterated distributions were introduced by Averous and Meste [2] in order to construct a classification of lifetime distributions, and were first studied in a systematic way by Fagiuoli and Pellerey [6]. The iteration procedure described below produces what are also known as equilibrium distributions in economics or actuarial science, since they describe the distribution of the first drop below the initial reserve. Moreover, equilibrium distributions play an important role in ageing relations (see, for example, Chatterjee and Mukherjee [3]) and in renewal theory (see Cox [5]).

A suitable representation of the iterated distributions describes their tails as normalized moments of stop-loss risk premiums with a given deductible, a common type of contract in actuarial practice, relevant to the characterization of ruin probabilities and insolvency. We obtain a characterization of the asymptotics of these moments, with respect to their order, for initial random variables with distribution in the Gamma or Weibull families. These results provide simple numerical approximations for the stop-loss premiums.

Let us describe our framework and introduce the basic notation and transformations. We will assume throughout that $X$ is a nonnegative random variable with density function $f_X$, distribution function $F_X$, and tail function $\overline{T}_X = 1 - F_X$.

###### Definition 1

For each $x \ge 0$, define

$$\overline{T}_{X,0}(x) = f_X(x) \qquad\text{and}\qquad \tilde\mu_{X,0} = \int_0^\infty \overline{T}_{X,0}(t)\,dt = 1. \tag{1}$$

For each $s \ge 1$, define the $s$-iterated distribution induced by $X$, through its tail, as follows:

$$\overline{T}_{X,s}(x) = \frac{1}{\tilde\mu_{X,s-1}}\int_x^\infty \overline{T}_{X,s-1}(t)\,dt, \qquad\text{where}\qquad \tilde\mu_{X,s} = \int_0^\infty \overline{T}_{X,s}(t)\,dt, \tag{2}$$

assuming the integrals above are finite.

The distribution with tail $\overline{T}_{X,2}$ is known as the equilibrium distribution of $X$. Hence, the iteration process above defines, for each $s \ge 1$, the equilibrium distribution of a random variable with tail $\overline{T}_{X,s}$ as the distribution with tail $\overline{T}_{X,s+1}$. Thus, the iteration procedure given in Definition 1 may be restated in terms of the iterated equilibrium distribution.
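Before proceeding, the iteration in (2) is easy to experiment with numerically. The sketch below is ours, not from the paper: it discretizes the tail on a grid, applies the normalized integrated-tail step of (2) a few times starting from a standard exponential, and confirms that the exponential tail is reproduced up to discretization error.

```python
import numpy as np

# Numerical sketch (ours, not from the paper): iterate the transform (2)
# on a grid and check that the exponential tail exp(-x) is reproduced,
# i.e. the exponential distribution is a fixed point of the iteration.
x = np.linspace(0.0, 30.0, 3001)

def integrate_right(y, x):
    """I[i] = trapezoidal approximation of the integral of y from x[i] to x[-1]."""
    seg = (y[1:] + y[:-1]) / 2 * np.diff(x)
    return np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

def equilibrium_tail(tail, x):
    """One step of (2): integrated tail, normalized so the new tail equals 1 at 0."""
    integ = integrate_right(tail, x)
    return integ / integ[0]

tail = np.exp(-x)  # tail of a standard exponential variable
for _ in range(5):
    tail = equilibrium_tail(tail, x)
print(np.max(np.abs(tail - np.exp(-x))))  # small discretization error
```

The truncation point and grid size are arbitrary; any grid fine enough to resolve the tail gives the same qualitative picture.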

###### Definition 2

The $s$-iterated equilibrium distribution of $X$ has tail given by $\overline{T}_{X,s+1}$, that is, it is the $(s+1)$-iterated distribution induced by $X$ described in Definition 1.

Taking into account this identification, we shall refer throughout to iterated distributions instead of iterated equilibrium distributions, as these only differ in the count of the iteration steps.

It is easily verified that the iterated distribution induced by an exponential random variable $X$ is exactly the same exponential distribution as that of $X$. That is, the exponential distributions are fixed points with respect to the iteration procedure introduced in Definition 1. Moreover, it is also easily verified that explicit identification of the iterated distributions may, in general, become quite complex once we leave the case of $X$ being exponentially distributed.

As already mentioned, iterated distributions were used by Averous and Meste [2] to classify the distribution of $X$ with respect to its tail behavior. General properties of the iterated distributions were studied in Fagiuoli and Pellerey [6] and Nanda et al. [10], but to the best of our knowledge, the behavior of the iterated distributions with respect to the iteration step has not been studied before. In this paper, we will be mainly interested in this behavior of the iterated distributions as the iteration step grows.

Although the iterated distributions are defined in a recursive way, the following theorem gives a useful closed form representation.

###### Theorem 3 (Lemma 2 and Remark 3 in Arab and Oliveira [1])

Assume $X$ is an absolutely continuous nonnegative random variable with finite moment of order $s-1$, for some $s \ge 2$. Then the iterated tail may be represented as

$$\overline{T}_{X,s}(x) = \frac{1}{EX^{s-1}}\int_x^\infty f_X(t)\,(t-x)^{s-1}\,dt = \frac{1}{EX^{s-1}}\,E(X-x)_+^{s-1}, \tag{3}$$

where $(X-x)_+ = \max(X-x,0)$ is the residual lifetime at age $x$.

###### Remark 4

It follows from (3) that the iterated distribution may be interpreted as a normalized survival moment of order $s-1$. It is also worth mentioning that (3) means that, up to a normalizing factor, the tail of the iterated distribution is the stop-loss transform of order $s-1$, $E(X-x)_+^{s-1}$, a common quantity of interest in actuarial models (see, for example, Cheng and Pai [4], Nair et al. [8], Tsai [12] or Rachev and Rüschendorf [11], among many other references).
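Representation (3) can be checked symbolically in small cases. The snippet below is our verification, with an arbitrarily fixed step $s = 4$: for a standard exponential variable the normalized stop-loss transform returns the exponential tail itself, in line with the fixed-point property mentioned earlier.

```python
import sympy as sp

# Symbolic check (ours) of representation (3) for X ~ Exp(1):
# E(X-x)_+^{s-1} / E X^{s-1} should equal the tail exp(-x).
x, t = sp.symbols('x t', positive=True)
s = 4  # arbitrary iteration step
f = sp.exp(-t)  # density of X
num = sp.integrate(sp.expand(f * (t - x)**(s - 1)), (t, x, sp.oo))  # E(X-x)_+^{s-1}
den = sp.integrate(f * t**(s - 1), (t, 0, sp.oo))                   # E X^{s-1} = (s-1)!
tail = sp.simplify(num / den)
print(tail)  # -> exp(-x)
```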

Although Theorem 3 provides a closed representation for $\overline{T}_{X,s}$, it does not seem very helpful for actual calculations. An illustrative example, which we will explore in detail later, is obtained by assuming that $X$ has a distribution in the Gamma or Weibull families, which are important classes of distributions in many different research fields, such as reliability theory.

The paper is organized as follows. In Section 2, we provide a formula for the higher order moments of the iterated distributions. In Section 3, a recursive representation for the higher order iterated distributions of a convolution is obtained, and this result is used to find an expression for the iterated distributions of a random variable that is Gamma distributed with integer shape parameter. In Section 4, we derive an explicit expression for the iterated distribution of a Gamma distributed random variable, still assuming the shape parameter is an integer, and, furthermore, we study the limit behavior of the iterated distribution as the iteration step tends to infinity, now for a general shape parameter. Finally, in Section 5, we identify the limit behavior of the iterated distributions induced by the Weibull family of distributions.

## 2 Moments of iterated distributions

We start with a characterization of the moments of the iterated distributions. First, remark that the normalizing constants in (2) are obviously the mathematical expectations of the iterated distributions. These first order moments have been characterized, in terms of moments of the inducing random variable, in Corollary 2.1 of Nanda et al. [9]:

$$\tilde\mu_{X,s} = \frac{1}{s}\,\frac{EX^s}{EX^{s-1}}. \tag{4}$$

The following result extends (4), characterizing higher order moments of the iterated distributions.

###### Proposition 5

Assume $X$ is an absolutely continuous nonnegative random variable with finite moment of order $m+s-1$, where $m \ge 1$ and $s \ge 2$. Then the $s$-iterated distribution induced by $X$ has a finite moment of order $m$ given by

$$\mu_{s,m} = \binom{m+s-1}{m}^{-1}\frac{EX^{m+s-1}}{EX^{s-1}}.$$

Proof. It follows from (2) that the iterated distribution has density $f_s(x) = \overline{T}_{X,s-1}(x)/\tilde\mu_{X,s-1}$. So, using the first representation in (3), we have

$$\mu_{s,m} = \frac{s-1}{EX^{s-1}}\int_0^\infty\int_x^\infty x^m (t-x)^{s-2} f(t)\,dt\,dx.$$

By inverting the integration order, we easily find that

$$\mu_{s,m} = \frac{s-1}{EX^{s-1}}\int_0^\infty\int_0^t x^m (t-x)^{s-2} f(t)\,dx\,dt = \frac{s-1}{EX^{s-1}}\,\frac{\Gamma(m+1)\Gamma(s-1)}{\Gamma(m+s)}\int_0^\infty t^{m+s-1} f(t)\,dt,$$

where $\Gamma$ is the Euler Gamma function, which leads to the desired result. ∎
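Proposition 5 can be verified symbolically for concrete parameter choices. The check below is ours; the shape $a = 3$, step $s = 4$ and order $m = 2$ are arbitrary, and the iterated density is built directly from representation (3).

```python
import sympy as sp

# Symbolic check (ours) of Proposition 5 for X ~ Gamma(a, 1):
# the m-th moment of the s-iterated density, computed from (3),
# should match binom(m+s-1, m)^{-1} EX^{m+s-1} / EX^{s-1}.
t, x = sp.symbols('t x', positive=True)
a, s, m = 3, 4, 2                                 # arbitrary concrete choices
f = t**(a - 1) * sp.exp(-t) / sp.gamma(a)         # Gamma(a,1) density
EX = lambda k: sp.gamma(a + k) / sp.gamma(a)      # E X^k
# density of the s-iterated distribution: (s-1) E(X-x)_+^{s-2} / EX^{s-1}
fs = (s - 1) / EX(s - 1) * sp.integrate(sp.expand(f * (t - x)**(s - 2)), (t, x, sp.oo))
moment = sp.integrate(sp.expand(x**m * fs), (x, 0, sp.oo))
expected = EX(m + s - 1) / (EX(s - 1) * sp.binomial(m + s - 1, m))
print(sp.simplify(moment - expected))  # -> 0
```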

The following description for the variance of iterated distributions is now straightforward.

###### Corollary 6

Assume $X$ is an absolutely continuous nonnegative random variable with finite moment of order $s+1$. The variance of the $s$-iterated distribution induced by X is

$$\sigma_s^2 = \frac{1}{s}\,\frac{EX^s}{EX^{s-1}}\left(\frac{2}{s+1}\,\frac{EX^{s+1}}{EX^s} - \frac{1}{s}\,\frac{EX^s}{EX^{s-1}}\right).$$

As mentioned earlier, the exponential distributions are fixed points for the iteration procedure, so one could expect the iterated distributions to converge to an exponential. The above corollary shows that this may not happen. Indeed, it follows immediately that if the quotient $EX^{s+1}/EX^s$ is bounded with respect to $s$, then $\sigma_s^2 \to 0$, implying that the equilibrium distributions are, in such cases, converging to a degenerate distribution. We will prove later that the Weibull distributions satisfy this asymptotic degeneracy.
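As a numerical illustration of this degeneracy (ours; the Weibull shape $k = 2$ is an arbitrary choice), recall that a Weibull variable with shape $k$ and scale 1 has $EX^s = \Gamma(1+s/k)$, so the variance of Corollary 6 can be evaluated directly:

```python
from math import gamma

# Variance of the s-iterated distribution (Corollary 6) for a
# Weibull(k) variable with scale 1, using E X^s = Gamma(1 + s/k).
k = 2.0  # arbitrary Weibull shape
EX = lambda s: gamma(1 + s / k)

def sigma2(s):
    mean = EX(s) / (s * EX(s - 1))  # first moment, formula (4)
    return mean * (2 / (s + 1) * EX(s + 1) / EX(s) - mean)

vals = [sigma2(s) for s in (2, 10, 50, 200)]
print(vals)  # decreases toward zero as s grows
```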

## 3 Iterated distribution of convolutions

Summing independent random variables is a common way to introduce new families of distributions, which appear as convolution powers built upon some initial distribution. With this in mind, we shall derive a characterization of the iterated distribution of the $n$-th convolution power based on expressions for the $(n-1)$-th convolution power. For this purpose, it is convenient to describe the distributions in terms of their densities rather than their distribution functions. Of course, as follows from Definition 1, the density of the $s$-iterated distribution is, up to multiplication by a constant, the tail of the $(s-1)$-iterated distribution:

$$f_s(x) = \frac{1}{\tilde\mu_{X,s-1}}\,\overline{T}_{X,s-1}(x) = \frac{(s-1)\,E(X-x)_+^{s-2}}{EX^{s-1}} = \frac{s-1}{EX^{s-1}}\int_x^\infty (t-x)^{s-2} f(t)\,dt. \tag{5}$$

In the sequel, we will represent the convolution of density functions $f$ and $g$ by $f * g$, defined as follows:

$$f*g(x) = \int_0^x f(t)\,g(x-t)\,dt.$$

Moreover, $f^{n*}$ will represent the $n$-th convolution power of a density function $f$.

###### Theorem 7

Let $X_1, \dots, X_n$ be nonnegative i.i.d. random variables with the same distribution as $X$, and define $S_n = X_1 + \cdots + X_n$. For every $s \ge 2$, the density of the $s$-iterated distribution induced by $S_n$ is given by

$$f^{n*}_s(x) = \frac{\mu^{(n-1)*}_{s-1}}{\mu^{n*}_{s-1}}\,f * f^{(n-1)*}_s(x) + \frac{1}{\mu^{n*}_{s-1}}\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu^{(n-1)*}_{s-\ell-1}\,\mu_\ell\,f_{\ell+1}(x), \tag{6}$$

where $\mu^{k*}_m = ES_k^m$ and $\mu_m = EX^m$.

Proof. Using (5) for the $n$-th convolution power and reversing the integration order, we find that

$$\begin{aligned}
f^{n*}_s(x) &= \frac{s-1}{\mu^{n*}_{s-1}}\int_x^\infty (t-x)^{s-2} f^{n*}(t)\,dt \\
&= \frac{s-1}{\mu^{n*}_{s-1}}\int_x^\infty (t-x)^{s-2}\left(\int_0^t f(u)\,f^{(n-1)*}(t-u)\,du\right)dt \\
&= \frac{s-1}{\mu^{n*}_{s-1}}\int_0^x f(u)\left(\int_x^\infty f^{(n-1)*}(t-u)\,(t-x)^{s-2}\,dt\right)du + \frac{s-1}{\mu^{n*}_{s-1}}\int_x^\infty f(u)\left(\int_u^\infty f^{(n-1)*}(t-u)\,(t-x)^{s-2}\,dt\right)du \\
&= \frac{s-1}{\mu^{n*}_{s-1}}\int_0^x f(u)\left(\int_{x-u}^\infty f^{(n-1)*}(v)\,(v+u-x)^{s-2}\,dv\right)du + \frac{s-1}{\mu^{n*}_{s-1}}\int_x^\infty f(u)\left(\int_0^\infty f^{(n-1)*}(t)\,(t+u-x)^{s-2}\,dt\right)du \\
&= I_1 + I_2.
\end{aligned} \tag{7}$$

We rewrite $I_1$ as

$$I_1 = \frac{s-1}{\mu^{n*}_{s-1}}\int_0^x f(u)\,\frac{\mu^{(n-1)*}_{s-1}}{s-1}\,f^{(n-1)*}_s(x-u)\,du = \frac{\mu^{(n-1)*}_{s-1}}{\mu^{n*}_{s-1}}\,f * f^{(n-1)*}_s(x). \tag{8}$$

As regards $I_2$, we have that

$$\begin{aligned}
I_2 &= \frac{s-1}{\mu^{n*}_{s-1}}\int_x^\infty f(u)\left(\int_0^\infty f^{(n-1)*}(t)\,(t+u-x)^{s-2}\,dt\right)du \\
&= \frac{s-1}{\mu^{n*}_{s-1}}\int_x^\infty f(u)\left(\sum_{\ell=0}^{s-2}\binom{s-2}{\ell}(u-x)^\ell\int_0^\infty t^{s-2-\ell}\,f^{(n-1)*}(t)\,dt\right)du \\
&= \frac{s-1}{\mu^{n*}_{s-1}}\int_x^\infty f(u)\left(\sum_{\ell=0}^{s-2}\binom{s-2}{\ell}(u-x)^\ell\,\mu^{(n-1)*}_{s-2-\ell}\right)du \\
&= \frac{s-1}{\mu^{n*}_{s-1}}\sum_{\ell=0}^{s-2}\binom{s-2}{\ell}\mu^{(n-1)*}_{s-2-\ell}\int_x^\infty (u-x)^\ell f(u)\,du \\
&= \frac{s-1}{\mu^{n*}_{s-1}}\sum_{\ell=0}^{s-2}\binom{s-2}{\ell}\mu^{(n-1)*}_{s-2-\ell}\,\frac{\mu_{\ell+1}}{\ell+1}\,f_{\ell+2}(x) \\
&= \frac{1}{\mu^{n*}_{s-1}}\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu^{(n-1)*}_{s-1-\ell}\,\mu_\ell\,f_{\ell+1}(x).
\end{aligned} \tag{9}$$

The conclusion (6) now follows by combining (7), (8) and (9). ∎

The result above provides a recursive formula for the higher order iterated distributions of the convolution $S_n$. A simple application is given next, identifying explicitly the iterated distributions induced by Gamma distributions with integer shape parameter.

###### Example 8

The $\mathrm{Gamma}(2,\lambda)$ distribution is the 2-nd convolution power of the exponential distribution with hazard rate $\lambda$, whose density function is $f(x) = \lambda e^{-\lambda x}$, for $x \ge 0$. With the notation introduced above, the density function of the $\mathrm{Gamma}(2,\lambda)$ distribution is represented as $f^{2*}$, hence the $s$-iterated distribution induced by the $\mathrm{Gamma}(2,\lambda)$ distribution has density $f^{2*}_s$, and we may use (6) to obtain a recursive representation.

As the exponential is a fixed point of the iterative procedure introduced in Definition 1, we have $f^{1*}_s = f$; therefore $\mu^{1*}_{s-1}$ is the moment of order $s-1$ of an exponential random variable with hazard rate $\lambda$, that is, $\mu^{1*}_{s-1} = (s-1)!/\lambda^{s-1}$. Moreover, as the 2-nd convolution power is the $\mathrm{Gamma}(2,\lambda)$, we also know that $\mu^{2*}_{s-1} = s!/\lambda^{s-1}$. Furthermore,

$$\int_0^x f(t)\,f^{1*}_s(x-t)\,dt = \int_0^x \lambda e^{-\lambda t}\,\lambda e^{-\lambda(x-t)}\,dt = \lambda^2\int_0^x e^{-\lambda x}\,dt = \lambda^2 x\,e^{-\lambda x}$$

and

$$\frac{1}{\mu^{2*}_{s-1}}\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu^{1*}_{s-\ell-1}\,\mu_\ell\,f_{\ell+1}(x) = \frac{\lambda^{s-1}}{s!}\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\frac{(s-\ell-1)!}{\lambda^{s-\ell-1}}\,\frac{\ell!}{\lambda^\ell}\,\lambda e^{-\lambda x} = \frac{\lambda(s-1)}{s}\,e^{-\lambda x}.$$

Hence, the $s$-iterated distribution induced by the $\mathrm{Gamma}(2,\lambda)$ distribution has density

$$g_s(x) = f^{2*}_s(x) = \frac{1}{s}\,\lambda^2 x\,e^{-\lambda x} + \frac{s-1}{s}\,f(x). \tag{10}$$
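The closed form (10) can be verified symbolically against representation (3). The snippet below is ours; $\lambda = 1$ and $s = 5$ are arbitrary concrete choices: we differentiate the iterated tail of a $\mathrm{Gamma}(2,1)$ variable and compare it with (10).

```python
import sympy as sp

# Check (ours) of the closed form (10) for Gamma(2, 1) and s = 5:
# density from representation (3) vs. g_s(x) = x e^{-x}/s + (s-1) e^{-x}/s.
t, x = sp.symbols('t x', positive=True)
s = 5
f2 = t * sp.exp(-t)                     # Gamma(2,1) density
EX = lambda k: sp.gamma(2 + k)          # E X^k = Gamma(2+k)/Gamma(2)
tail = sp.integrate(sp.expand(f2 * (t - x)**(s - 1)), (t, x, sp.oo)) / EX(s - 1)
gs = sp.simplify(-sp.diff(tail, x))     # density = minus derivative of the tail
target = x * sp.exp(-x) / s + (s - 1) * sp.exp(-x) / s
print(sp.simplify(gs - target))  # -> 0
```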

This approach may be extended to general Gamma distributions with integer valued shape parameter. The argument goes along the same lines, requiring some extra effort in the manipulation of combinatorial sums. We first state an auxiliary combinatorial lemma that will be useful in obtaining the results that follow.

###### Lemma 9

For every integers $k, m \ge 0$, we have that

$$\sum_{j=0}^m \binom{k+j}{k} = \binom{k+m+1}{m}.$$

Proof. This follows easily by induction on $m$. ∎
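Lemma 9 is the classical hockey-stick identity; here is a brute-force check (ours) over a small range of parameters:

```python
from math import comb

# Brute-force verification (ours) of Lemma 9 for small k and m:
# sum_{j=0}^{m} C(k+j, k) == C(k+m+1, m).
for k in range(8):
    for m in range(8):
        assert sum(comb(k + j, k) for j in range(m + 1)) == comb(k + m + 1, m)
print("Lemma 9 holds for all k, m < 8")
```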

###### Proposition 10

Let $X$ be a random variable with distribution $\mathrm{Gamma}(n,\lambda)$, $n \in \mathbb{N}$, and let $f$ be the density of an exponential distribution with hazard rate $\lambda$. For every $s \ge 2$, the density function of the $s$-iterated distribution induced by $X$ is

$$g_s(x) = f^{n*}_s(x) = \frac{n-1}{n+s-2}\,f * f^{(n-1)*}_s(x) + \frac{s-1}{n+s-2}\,f(x). \tag{11}$$

Proof. As the $\mathrm{Gamma}(n,\lambda)$ distribution is the $n$-th convolution power of the exponential distribution with hazard rate $\lambda$, we will use the recursive representation from Theorem 7. Note that the moments of the convolutions are just the moments of the corresponding Gamma distribution. Hence

$$\mu^{n*}_{s-1} = \frac{1}{\lambda^{s-1}}\,\frac{(n+s-2)!}{(n-1)!} \qquad\text{and}\qquad \frac{\mu^{(n-1)*}_{s-1}}{\mu^{n*}_{s-1}} = \frac{n-1}{n+s-2}. \tag{12}$$

Furthermore,

$$\begin{aligned}
\frac{1}{\mu^{n*}_{s-1}}\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu^{(n-1)*}_{s-\ell-1}\,\mu_\ell\,f_{\ell+1}(x) &= \frac{(n-1)!\,\lambda^{s-1}}{(n+s-2)!}\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\frac{\Gamma(n+s-\ell-2)}{\Gamma(n-1)\,\lambda^{s-\ell-1}}\,\frac{\ell!}{\lambda^\ell}\,\lambda e^{-\lambda x} \\
&= (n-1)\,\lambda e^{-\lambda x}\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\,\frac{\ell!\,(n+s-\ell-3)!}{(n+s-2)!} \\
&= \frac{(n-1)\,\lambda e^{-\lambda x}}{n+s-2}\sum_{\ell=1}^{s-1}\frac{(s-1)!}{(s-1-\ell)!}\,\frac{(n+s-\ell-3)!}{(n+s-3)!} \\
&= \frac{(n-1)\,\lambda e^{-\lambda x}}{n+s-2}\,\binom{n+s-3}{n-2}^{-1}\sum_{\ell=1}^{s-1}\binom{n+s-\ell-3}{n-2} \\
&= \frac{s-1}{n+s-2}\,f(x),
\end{aligned} \tag{14}$$

where the last equality follows by applying Lemma 9. The proof is concluded by rewriting (6), taking into account (12) and (14). ∎

The representation just proved in Proposition 10 allows a first characterization of the behavior of the iterated distributions as the iteration step goes to infinity.

###### Corollary 11

Let $X$ be a random variable with distribution $\mathrm{Gamma}(n,\lambda)$. Then, for each fixed $x > 0$, $g_s(x) \longrightarrow f(x)$ as $s \to \infty$, where $f$ is the density of an exponential distribution with hazard rate $\lambda$.

Proof. The proof follows easily by observing that the convolution that appears in the first term on the right-hand side of (11) is bounded by $\lambda$, while its coefficient $(n-1)/(n+s-2)$ vanishes. Letting $s$ tend to $\infty$, we have the desired result. ∎
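The convergence stated in Corollary 11 can be observed numerically through the recursion (11). The sketch below is ours; $\lambda = 1$, shape $n = 3$, and the grid are arbitrary choices. It builds the iterated density by discrete convolution and measures its sup-distance to the exponential density for growing $s$.

```python
import numpy as np

# Numerical sketch (ours) of Corollary 11: via the recursion (11),
# the s-iterated density of Gamma(3, 1) approaches exp(-x) as s grows.
dx, xmax = 0.01, 40.0
x = np.arange(0.0, xmax, dx)
f = np.exp(-x)  # exponential density, hazard rate 1

def conv(a, b):
    """Discrete approximation of the convolution of two densities."""
    return np.convolve(a, b)[:len(x)] * dx

def iterated_density(n, s):
    g = f  # n = 1: the exponential is a fixed point of the iteration
    for j in range(2, n + 1):  # recursion (11) in the convolution order j
        g = (j - 1) / (j + s - 2) * conv(f, g) + (s - 1) / (j + s - 2) * f
    return g

errs = [float(np.max(np.abs(iterated_density(3, s) - f))) for s in (2, 10, 100)]
print(errs)  # sup-distance to the exponential density, shrinking in s
```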

One might prefer a recursive characterization for the density of the iterated distribution involving only elementary operations, that is, avoiding convolutions in the recursive expression. This can be obtained by iteratively using (6) to describe the $(n-1)$-th convolution power that appears on the right-hand side of (6). We apply this technique to random variables that are Gamma distributed, and a long algebraic manipulation proves the representation given below. Note that the iterated distributions induced by the Gamma family will be studied with a more explicit approach in the next section.

###### Proposition 12

Let $X$ be a random variable with distribution $\mathrm{Gamma}(n,\lambda)$, with $n \ge 3$, and let $f$ be the density of an exponential distribution with hazard rate $\lambda$. Then,

$$\mu^{n*}_{s-1} f^{n*}_s(x) - \mu^{(n-1)*}_{s-1} f^{(n-1)*}_s(x) = \frac{(s-1)!}{\lambda^{s-1}}\left(\frac{\lambda^{n-1}x^{n-2}}{(n-2)!}\,e^{-\lambda x}\left(\frac{\lambda x}{n-1}-1\right) + 1 - \binom{n+s-4}{s-2} + e^{-\lambda x}\sum_{k=2}^{n-2}\binom{s+k-2}{k}\frac{\lambda^{n-k}x^{n-k-1}}{(n-k-1)!}\right).$$

Proof. In order to obtain an alternative to the characterization in Proposition 10, without convolutions, let us start from the expression in Theorem 7, which can be written as

$$\mu^{n*}_{s-1} f^{n*}_s(x) = \mu^{(n-1)*}_{s-1}\, f * f^{(n-1)*}_s(x) + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu^{(n-1)*}_{s-\ell-1}\,\mu_\ell\, f_{\ell+1}(x). \tag{15}$$

We may rewrite this for the $(n-1)$-fold convolution, to get

$$\mu^{(n-1)*}_{s-1} f^{(n-1)*}_s(x) = \mu^{(n-2)*}_{s-1}\, f * f^{(n-2)*}_s(x) + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu^{(n-2)*}_{s-\ell-1}\,\mu_\ell\, f_{\ell+1}(x).$$

The first term on the right in (15) may be written as $f * \bigl(\mu^{(n-1)*}_{s-1} f^{(n-1)*}_s\bigr)(x)$, so it can be replaced using the previous expression to find, after a rearrangement of the terms:

$$\mu^{n*}_{s-1} f^{n*}_s(x) = \mu^{(n-2)*}_{s-1}\, f^{2*} * f^{(n-2)*}_s(x) + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\mu^{(n-2)*}_{s-\ell-1}\, f * f_{\ell+1}(x) + \mu^{(n-1)*}_{s-\ell-1}\, f_{\ell+1}(x)\right).$$

Again, we apply (15) to the first term on the right-hand side above. Iterating this substitution, we finally get the representation:

$$\mu^{n*}_{s-1} f^{n*}_s(x) = \mu_{s-1}\, f^{(n-1)*} * f_s(x) + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\mu_{s-\ell-1} f^{(n-2)*}(x) + \mu^{2*}_{s-\ell-1} f^{(n-3)*}(x) + \cdots + \mu^{(n-2)*}_{s-\ell-1} f(x) + \mu^{(n-1)*}_{s-\ell-1}\right) * f_{\ell+1}(x). \tag{16}$$

We may, of course, rewrite the representation above for the $(n-1)$-fold convolution:

$$\mu^{(n-1)*}_{s-1} f^{(n-1)*}_s(x) = \mu_{s-1}\, f^{(n-2)*} * f_s(x) + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\mu_{s-\ell-1} f^{(n-3)*}(x) + \mu^{2*}_{s-\ell-1} f^{(n-4)*}(x) + \cdots + \mu^{(n-3)*}_{s-\ell-1} f(x) + \mu^{(n-2)*}_{s-\ell-1}\right) * f_{\ell+1}(x), \tag{17}$$

from which it follows that

$$\begin{aligned}
\mu^{n*}_{s-1} f^{n*}_s(x) - \mu^{(n-1)*}_{s-1} f^{(n-1)*}_s(x) &= \mu_{s-1}\left(f^{(n-1)*} - f^{(n-2)*}\right) * f_s(x) \\
&\quad + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\mu_{s-\ell-1} f^{(n-2)*} - \mu^{(n-2)*}_{s-\ell-1}\right) * f_{\ell+1}(x) \\
&\quad + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\sum_{k=2}^{n-2}\left(\mu^{k*}_{s-\ell-1} - \mu^{(k-1)*}_{s-\ell-1}\right) f^{(n-k-1)*}\right) * f_{\ell+1}(x).
\end{aligned}$$

Since $f$ is the exponential density, the iterated densities satisfy $f_s = f_{\ell+1} = f$, hence we may rewrite:

$$\begin{aligned}
\mu^{n*}_{s-1} f^{n*}_s(x) - \mu^{(n-1)*}_{s-1} f^{(n-1)*}_s(x) &= \mu_{s-1}\left(f^{n*} - f^{(n-1)*}\right) \\
&\quad + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\mu_{s-\ell-1} f^{(n-1)*} - \mu^{(n-2)*}_{s-\ell-1} f\right) \\
&\quad + \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\sum_{k=2}^{n-2}\left(\mu^{k*}_{s-\ell-1} - \mu^{(k-1)*}_{s-\ell-1}\right) f^{(n-k)*}\right) \\
&= A_1 + A_2 + A_3.
\end{aligned}$$

We may now compute each of these three terms. Recalling that a convolution power of exponentials is a Gamma density, and the expressions for the moments, it follows easily that

$$A_1 = \frac{(s-1)!}{\lambda^{s-1}}\,\frac{\lambda^{n-1}x^{n-2}}{(n-2)!}\,e^{-\lambda x}\left(\frac{\lambda x}{n-1} - 1\right).$$

Recall that

$$A_2 = \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\mu_{s-\ell-1} f^{(n-1)*} - \mu^{(n-2)*}_{s-\ell-1} f\right) = \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\frac{\ell!}{\lambda^{s-1}}\left((s-\ell-1)!\, f^{(n-1)*} - \frac{(n+s-\ell-4)!}{(n-3)!}\, f\right).$$

So, we need to compute two summations:

$$\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\frac{\ell!}{\lambda^{s-1}}(s-\ell-1)! = \frac{(s-1)!}{\lambda^{s-1}},$$

and, using Lemma 9,

$$\sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\frac{\ell!}{\lambda^{s-1}}\,\frac{(n+s-\ell-4)!}{(n-3)!} = \frac{(s-1)!}{\lambda^{s-1}}\sum_{\ell=1}^{s-1}\binom{n+s-\ell-4}{n-3} = \frac{(s-1)!}{\lambda^{s-1}}\binom{n+s-4}{s-2}.$$

We now compute

$$A_3 = \sum_{\ell=1}^{s-1}\binom{s-1}{\ell}\mu_\ell\left(\sum_{k=2}^{n-2}\left(\mu^{k*}_{s-\ell-1} - \mu^{(k-1)*}_{s-\ell-1}\right) f^{(n-k)*}\right).$$

Note first that $\mu^{k*}_0 - \mu^{(k-1)*}_0 = 0$, so the summation in $\ell$ effectively ranges only from $1$ up to $s-2$. Further, for $\ell \ge 1$,

$$\mu^{k*}_\ell - \mu^{(k-1)*}_\ell = \frac{\ell}{\lambda^\ell}\,\frac{(k+\ell-2)!}{(k-1)!}.$$

Replacing these expressions and inverting the summations, we find

$$A_3 = \sum_{\ell=1}^{s-2}\frac{(s-1)!}{\ell!\,(s-\ell-1)!}\,\frac{\ell!}{\lambda^\ell}\left(\sum_{k=2}^{n-2}\left(\mu^{k*}_{s-\ell-1}-\mu^{(k-1)*}_{s-\ell-1}\right)f^{(n-k)*}\right) = \sum_{k=2}^{n-2}\sum_{\ell=1}^{s-2}\frac{(s-1)!}{(s-\ell-2)!}\,\frac{1}{\lambda^{s-1}}\,\frac{(s+k-\ell-3)!}{(k-1)!}\,f^{(n-k)*} = \frac{(s-1)!}{\lambda^{s-1}}\sum_{k=2}^{n-2}\sum_{\ell=1}^{s-2}\binom{s+k-\ell-3}{k-1}f^{(n-k)*} = \frac{(s-1)!}{\lambda^{s-1}}\sum_{k=2}^{n-2}\binom{s+k-2}{k}f^{(n-k)*},$$

using Lemma 9 for the final equality. ∎

## 4 Iterated distributions induced by Gamma distributions

Given a random variable $X$, the iterated distributions induced by $cX$, where $c > 0$, are easily related to the iterated distributions induced by $X$. Indeed, it follows immediately from (3) that, for every $s \ge 1$ and $x \ge 0$, $\overline{T}_{cX,s}(x) = \overline{T}_{X,s}(x/c)$. Therefore, in order to characterize the iterated distributions induced by a Gamma distributed random variable, it is enough to treat the case where $X$ has distribution $\mathrm{Gamma}(\alpha,1)$. We first derive an explicit expression for the iterated distribution when the shape parameter is an integer.
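The scaling relation can be confirmed symbolically in a small case. In the snippet below (ours; the $\mathrm{Gamma}(2,1)$ distribution, the step $s = 3$ and the scale $3/2$ are arbitrary choices), the iterated tail of the rescaled variable coincides with the iterated tail of the original variable evaluated at the rescaled argument.

```python
import sympy as sp

# Check (ours) of the scaling relation T_{cX,s}(x) = T_{X,s}(x/c)
# for X ~ Gamma(2,1), using representation (3).
t, x = sp.symbols('t x', positive=True)
c = sp.Rational(3, 2)  # arbitrary positive scale
s = 3

def tail(dens, y):
    """Iterated tail from (3): E(Y-y)_+^{s-1} / E Y^{s-1}."""
    num = sp.integrate(sp.expand(dens * (t - y)**(s - 1)), (t, y, sp.oo))
    den = sp.integrate(dens * t**(s - 1), (t, 0, sp.oo))
    return num / den

fX = t * sp.exp(-t)                  # Gamma(2,1) density of X
fcX = (t / c) * sp.exp(-t / c) / c   # density of cX
print(sp.simplify(tail(fcX, x) - tail(fX, x / c)))  # -> 0
```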

###### Theorem 13

Assume $X$ is $\mathrm{Gamma}(\alpha,1)$ distributed with integer shape parameter $\alpha \ge 1$. For every $s \ge 1$, the iterated distribution induced by $X$ has tail given by

$$\overline{T}_{X,s}(x) = e^{-x} + e^{-x}\binom{\alpha+s-2}{\alpha-1}^{-1}\sum_{\ell=1}^{\alpha-1}\binom{s+\alpha-\ell-2}{\alpha-\ell-1}\,\frac{x^\ell}{\ell!}. \tag{18}$$

Proof. We start by calculating the stop-loss transform of order $s$ for the random variable $X$:

$$E(X-x)_+^s = \frac{1}{\Gamma(\alpha)}\int_x^\infty (t-x)^s\, t^{\alpha-1} e^{-t}\,dt = \frac{e^{-x}}{\Gamma(\alpha)}\int_0^\infty u^s (u+x)^{\alpha-1} e^{-u}\,du = e^{-x}\,\frac{(s+\alpha-1)!}{(\alpha-1)!} + e^{-x}\sum_{k=1}^{\alpha-1}\frac{(s+\alpha-1-k)!}{(\alpha-1-k)!}\,\frac{x^k}{k!},$$

where the last equality follows by applying the binomial expansion.

where the last equality follows by applying the binomial expansion. The result follows by writing

 ¯¯¯¯TX,s(x)=1EXs−1E(X−x)s−1+=(α−1