## 1 Introduction

In various applications researchers often encounter dependent observations over time or space. The dependence properties of a random process are usually characterised by the asymptotic behaviour of its covariance function. In particular, a stationary random process is called weakly (short-range) dependent if its covariance function is integrable. On the other hand, it possesses strong (long-range) dependence if its covariance function decays slowly and is non-integrable. An alternative definition of long-range dependence is based on singular properties of the spectral density of a random process, such as unboundedness at zero (see Doukhan et al. (2002); Souza (2008); Leonenko and Olenko (2013)).
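The integrability dichotomy above can be illustrated numerically: an exponentially decaying covariance has a finite integral, while a power-law covariance with exponent less than one does not. A minimal sketch; the two covariance models are illustrative choices, not ones from this paper.

```python
import numpy as np
from scipy.integrate import quad

# Short-range dependence: exponentially decaying covariance -- integrable.
I_short, _ = quad(lambda r: np.exp(-r), 0, np.inf)

# Long-range dependence: power-law covariance r**(-alpha) with 0 < alpha < 1;
# the integral over [1, R] grows without bound as R increases.
alpha = 0.5
tails = [quad(lambda r: r ** (-alpha), 1, R)[0] for R in (1e2, 1e4, 1e6)]

print(I_short)  # 1.0: finite total "memory"
print(tails)    # 2*(sqrt(R) - 1): roughly 18, 198, 1998 -- diverges with R
```

The first integral stabilises at a finite value, while the second keeps growing with the integration range, which is exactly the non-integrability that defines long-range dependence here.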

Long-range dependent processes play a significant role in a wide range of areas, including finance, geophysics, astronomy, hydrology, climate and engineering (see Leonenko (1999); Ivanov and Leonenko (1989); Doukhan et al. (2002)). In statistical applications, long-range dependent models require new statistical methodologies, limit theorems and parameter estimates compared to the weakly dependent case (see Ivanov and Leonenko (1989); Worsley (1994); Leonenko and Olenko (2013); Beran et al. (2013)).

In statistical inference for random fields, limit theorems are the central topic. These theorems play a crucial role in developing asymptotic tests in large sample theory. The central limit theorem (CLT) holds under the classical normalisation when the summands or integrands are weakly dependent random processes or fields. This result was proved by Breuer and Major (1983) for nonlinear functionals of Gaussian random fields. It was generalised to stationary Gaussian vector processes in de Naranjo (1995), to integral functionals of Gaussian processes or fields in Chambers and Slud (1989), Hariz (2002) and Leonenko and Olenko (2014), and to quasi-associated random fields under various conditions in Bulinski et al. (2012) and Demichev (2015). Some other CLTs for functionals of Gaussian processes or fields can be found in Doukhan and Louhichi (1999); Coulon-Prieur and Doukhan (2000) and Kratz and Vadlamani (2018).

Non-central limit theorems arise in the presence of long-range dependence. They use normalising coefficients different from those in the CLT and have non-Gaussian limits, known as Hermite-type distributions. A non-Gaussian asymptotic distribution was first obtained in Rosenblatt (1961) as a limit for quadratic functionals of stationary Gaussian sequences. The article Taqqu (1975) continued this research and investigated weak limits of partial sums of Gaussian processes using characteristic functions; the Hermite processes of the first two orders were used. Later on, Dobrushin and Major (1979) and Taqqu (1979) established pioneering results in which asymptotics were presented in terms of multiple Wiener-Itô stochastic integrals. A generalisation to stationary sequences of Gaussian vectors was obtained in Arcones (1994) and Major (2019). Multivariate limit theorems for functionals of stationary Gaussian series under long-range dependence, short-range dependence and a mixture of both were addressed in Bai and Taqqu (2013). The asymptotics for Minkowski functionals of stationary isotropic Gaussian random fields with dependent structures were studied in Ivanov and Leonenko (1989). Leonenko and Olenko (2014) obtained limit theorems for sojourn measures of heavy-tailed (Student and Fisher-Snedecor) random fields under short- or long-range dependence assumptions. Excellent surveys of limit theorems for weakly and strongly dependent random fields can be found in Anh et al. (2015); Doukhan et al. (2002); Ivanov and Leonenko (1989); Leonenko (1999); Spodarev (2014).

Reduction theorems play an important role in studying the asymptotics of random processes and fields. These theorems show that the asymptotic distributions of functionals of random processes or fields coincide with the distributions of other functionals that are much simpler and easier to analyse. The CLT can be considered as the “extreme” reduction case when, due to weak dependence and regardless of the type of functionals and component distributions, the asymptotics reduce to Gaussian behaviour. The classical non-central limit theorems are based on another “proper” reduction principle, where the asymptotic behaviour is reduced only to the leading Hermite term of nonlinear functionals. Recently, Olenko and Omari (2019) proved the reduction principle for functionals of strongly dependent vector random fields, whose components can possess different long-range dependences. It was shown that, in contrast to the scalar case, the limits can be degenerate or can include not all leading Hermite terms.

The available literature, except for a few publications, addresses limit theorems and reduction principles for functionals of weakly or strongly dependent random fields separately. For scalar-valued random fields this is sufficient, as such fields can exhibit only one type of dependence. However, for vector random fields there are various cases with different dependence structures of the components. Such scenarios are important when one aggregates spatial data with different properties, for example, brain images of different patients or GIS data from different regions. Another reason for studying such models is the construction of scalar random fields by a nonlinear transformation of a vector field. This approach was used to obtain non-Gaussian fields with some desirable properties, for example, skewed or heavy-tailed marginal distributions, see Example 1, Theorem 5 and Leonenko and Olenko (2014).

This paper considers functionals of vector random fields which have both strongly and weakly dependent components. The results in the literature dealt with cases where the interplay between the terms at the Hermite rank level and the memory parameter (covariance decay rate) of a Gaussian field completely determines the asymptotic behaviour. This paper shows that in more general settings terms at non-Hermite rank levels can interplay with the memory parameter to determine the limit. As an application of the new reduction principle we provide some limit theorems for vector random fields. In particular, we show that it is possible to obtain non-Gaussian behaviour for the first Minkowski functional of the Student random field built on components with different memory types. This contrasts with the known results for the case of same-type memory components in Leonenko and Olenko (2014), where, regardless of short- or long-range dependence, only a Gaussian limit is possible.

The remainder of the paper is organised as follows. In Section 2 we outline basic notations and definitions that are required in the subsequent sections. Section 3 presents assumptions and main results for functionals of vector random fields with strongly and weakly dependent components. Section 4 gives the proofs. Section 5 demonstrates some numerical studies. Short conclusions and some new problems are presented in Section 6.

## 2 Notations

This section presents basic notations and definitions of random field theory and multidimensional Hermite expansions. Also, we introduce the definition and basic properties of the first Minkowski functional (see Adler and Taylor (2009)). Denote by and the Lebesgue measure and the Euclidean distance in , respectively. The symbol denotes constants that are not important for our exposition; moreover, the same symbol may be used for different constants appearing in the same proof. We assume that all random fields are defined on the same probability space .

###### Definition 1.

Bingham et al. (1989) A measurable function is slowly varying at infinity if for all ,
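Definition 1 refers to the standard notion from Bingham et al. (1989): a function L is slowly varying at infinity when L(λt)/L(t) tends to 1 as t grows, for every fixed λ > 0. A quick numerical check of this property; the test functions (the logarithm, which is slowly varying, and a power, which is not) are our illustrative choices.

```python
import math

def slow_variation_ratio(L, lam, t):
    """Ratio L(lam * t) / L(t); it tends to 1 as t grows iff L is slowly varying."""
    return L(lam * t) / L(t)

lam = 10.0
for t in (1e3, 1e6, 1e12):
    r_log = slow_variation_ratio(math.log, lam, t)            # -> 1: slowly varying
    r_pow = slow_variation_ratio(lambda s: s ** 0.3, lam, t)  # stays at 10**0.3: not
    print(t, r_log, r_pow)
```

The ratio for the logarithm approaches 1 (it equals log(λt)/log(t)), while for the power function it stays fixed at λ^0.3, so the power is excluded by the definition.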

A real-valued random field , satisfying is said to be homogeneous and isotropic if its mean function is a constant and the covariance function depends only on the Euclidean distance between and .

Let , , be a measurable mean square continuous zero-mean homogeneous isotropic real-valued random field (see Ivanov and Leonenko (1989); Leonenko (1999)) with the covariance function

where and is the Bessel function of the first kind of order . The finite measure is called the isotropic spectral measure of the random field , .

The spectrum of the random field is absolutely continuous if there exists a function , , such that

The function is called the isotropic spectral density of the random field .

A random field with an absolutely continuous spectrum has the following isonormal spectral representation

where

is the complex Gaussian white noise random measure on .

Let be a Jordan-measurable compact connected set with that contains the origin in its interior. Also, assume that , , is the homothetic image of the set , with the centre of homothety at the origin and coefficient , that is, .

###### Definition 2.

The first Minkowski functional is defined as

where is the indicator function and is a constant.
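The first Minkowski functional above is the sojourn measure of an excursion set: the measure of the part of the observation window where the field exceeds a fixed level. A simplified numerical sketch, assuming a unit-variance Gaussian field; the cosine-wave generator, grid and level are our illustrative choices, not the models studied later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate a unit-variance homogeneous isotropic Gaussian field on a
# 2-d grid by a sum of random cosine waves (spectral randomisation method).
K = 200                                  # number of random waves
freqs = rng.normal(size=(K, 2))          # random 2-d frequencies
phases = rng.uniform(0, 2 * np.pi, K)

side = 50.0
x = np.linspace(0.0, side, 200)
X1, X2 = np.meshgrid(x, x)
pts = np.stack([X1.ravel(), X2.ravel()], axis=1)

field = np.sqrt(2.0 / K) * np.cos(pts @ freqs.T + phases).sum(axis=1)

a = 1.0                                  # excursion level
frac = np.mean(field > a)                # fraction of the window above the level
M = frac * side ** 2                     # first Minkowski (sojourn) functional
print(frac)  # for a standard Gaussian field, E[frac] = 1 - Phi(1), about 0.159
```

By homogeneity, the expected fraction of the window in the excursion set equals the marginal tail probability, which the Monte-Carlo estimate above reproduces approximately.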

In the following we will use integrals of the form with various integrable Borel functions . Let two independent random vectors and in be uniformly distributed inside the set . Consider a function . Then we have the following representation

(2.1)

where , , denotes the density function of the distance between and .
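Representation (2.1) replaces a double integral over the product set by a one-dimensional integral against the density of the distance between two independent uniform points. As a check of this reduction, for the unit disk the mean inter-point distance is known to be 128/(45π) ≈ 0.9054; a Monte-Carlo draw reproduces it. The choice of the disk and of the distance as the integrand are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400_000

def uniform_in_unit_disk(n, rng):
    """Sample n points uniformly in the unit disk (radius ~ sqrt(U) for uniform area)."""
    r = np.sqrt(rng.uniform(size=n))
    theta = rng.uniform(0, 2 * np.pi, n)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

U = uniform_in_unit_disk(n, rng)
V = uniform_in_unit_disk(n, rng)
mean_dist = np.linalg.norm(U - V, axis=1).mean()
print(mean_dist)  # close to 128 / (45 * pi), about 0.9054
```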

Using (2.1) for and one obtains for

(2.2)

Let be a -dimensional zero-mean Gaussian vector and , , , be the Hermite polynomials, see Taqqu (1977).

Consider

where , and all for

The polynomials form a complete orthogonal system in the Hilbert space

where

An arbitrary function admits an expansion with Hermite coefficients given as follows:

where and

###### Definition 3.

The smallest integer such that for all , , but for some is called the Hermite rank of and is denoted by .
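The Hermite rank in Definition 3 can be computed numerically: with probabilists' Hermite polynomials and a standard Gaussian variable, the coefficients are expectations that Gauss quadrature evaluates exactly for polynomial functions. A sketch; the test functions are our examples.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Gauss quadrature nodes/weights for the weight exp(-x^2 / 2).
nodes, weights = He.hermegauss(60)
norm = sqrt(2 * pi)

def hermite_coeff(G, k):
    """C_k = E[G(xi) He_k(xi)] / k! for xi ~ N(0, 1)."""
    Hk = He.hermeval(nodes, [0] * k + [1])   # He_k evaluated at the nodes
    return np.sum(weights * G(nodes) * Hk) / norm / factorial(k)

def hermite_rank(G, k_max=10, tol=1e-10):
    """Smallest k >= 1 with a non-zero Hermite coefficient (None if none found)."""
    for k in range(1, k_max + 1):
        if abs(hermite_coeff(G, k)) > tol:
            return k
    return None

print(hermite_rank(lambda x: x ** 2))  # 2: x^2 - 1 = He_2(x), so C_1 = 0, C_2 != 0
print(hermite_rank(lambda x: x ** 3))  # 1: E[x^3 * x] = 3 != 0
```

For instance, the square function has rank 2 because its projection onto He_1(x) = x is the third moment of a standard Gaussian, which vanishes.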

In this paper, we consider Student random fields, which are an example of heavy-tailed random fields. To define such fields, we use a vector random field , , with , where , , are independent homogeneous isotropic unit-variance Gaussian random fields.

###### Definition 4.

The Student random field (t-random field) , is defined by
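A pointwise numerical sketch of Definition 4, assuming the standard t-field construction (consistent with Leonenko and Olenko (2014)): at each location the field value is a standard Gaussian component divided by the normalised root of the sum of squares of the remaining independent Gaussian components, so each marginal is Student-t. The grid size and degrees of freedom are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5                    # degrees of freedom
n = 500_000              # number of spatial locations, treated pointwise here

# Independent unit-variance Gaussian components Y_1, ..., Y_{m+1} at each location.
Y = rng.normal(size=(m + 1, n))

# t-field value: T = Y_1 / sqrt((Y_2^2 + ... + Y_{m+1}^2) / m).
T = Y[0] / np.sqrt(np.sum(Y[1:] ** 2, axis=0) / m)

# The marginal is Student-t with m degrees of freedom: variance m / (m - 2).
print(T.var())  # close to 5 / 3 for m = 5
```

This pointwise check confirms only the heavy-tailed marginal; the dependence structure of the t-field is inherited from the underlying Gaussian components.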

## 3 Reduction principles and limit theorems

In this section we present some assumptions and the main results. We prove a version of the reduction principle for vector random fields with weakly and strongly dependent components.

In the following we will use the notation

for a vector random field with components.

###### Assumption 1.

Let be a vector homogeneous isotropic Gaussian random field with independent components, and a covariance matrix such that and

where , and are unit matrices of size , and , respectively, , , are slowly varying functions at infinity.

###### Remark 1.

If Assumption 1 holds true, then the diagonal elements of the covariance matrix are integrable for the first components of , which corresponds to the case of short-range dependence, and non-integrable for the other components, which corresponds to the case of long-range dependence. For simplicity, this paper investigates only the case of uncorrelated components.

###### Remark 2.

Consider the following random variables:

and

where are the Hermite coefficients and is the Hermite rank of the function . Then

###### Remark 3.

The random variable is correctly defined, finite with probability 1 and in the mean square sense, see §3, Chapter IV in Gihman and Skorokhod (2004).

We will use the following notations. Consider the set

Let

Note that and there are cases when can be reached at multiple . Therefore, we define the sets

and

Also, we define the random variable

The random variable if and only if .

Theorem 1 in Olenko and Omari (2019) gives a reduction principle for vector random fields with strongly dependent components. The following result complements it for the case of random fields with strongly and weakly dependent components.

###### Theorem 1.

Suppose that the vector random field , , satisfies Assumption 1, and there is at least one such that . If for a limit distribution exists for at least one of the random variables

then the limit distribution of the other random variable exists as well, and the limit distributions coincide. Moreover, the limit distributions of

are the same.

###### Remark 4.

It will be shown in the proof that the assumptions of Theorem 1 guarantee that .

###### Remark 5.

###### Assumption 2.

Components , , of have the spectral density , , such that

where

Denote the Fourier transform of the indicator function of the set by .

Let us define the following random variable

(3.2)

where is the Wiener measure on and denotes the multiple Wiener-Itô integral.

###### Theorem 2.

A popular recent approach to modelling skew-distributed random variables is the convolution , where is Gaussian and is a continuous positive-valued random variable independent of . In this case the probability density of has the form , where is the pdf of and is the cdf of , which controls the skewness, see Arellano-Valle and Genton (2005); Azzalini and Capitanio (2014) and Amiri et al. (2019). This approach can be extended to the case of random fields as , , resulting in random fields with skewed marginal distributions. In the example below we use and show that, contrary to the reduction principle for strongly dependent vector random fields in Olenko and Omari (2019), it is not enough to require . The assumption in Theorem 1 on the existence of satisfying is essential.

###### Example 1.

Let , and . In this case and , but . So, the assumption of Theorem 1 does not hold and

which is indeed different from the Gaussian limit that is expected for the case .

To address situations similar to Example 1 and to investigate wider classes of vector fields, we introduce the following modification of Assumption 1.

###### Assumption 1.

Let , , be a vector homogeneous isotropic Gaussian random field with independent components, and a covariance matrix such that and

where , and , .

###### Remark 6.

Under Assumption 1 the components are still strongly dependent, but , , do not necessarily preserve strong dependence. If the Hermite polynomials of become weakly dependent.

The following modifications of , , and will be used to match Assumption 1:

and

In the following we consider only the cases . The case when the sum equals requires additional assumptions, see Section 6, and will be covered in other publications.

Now, we are ready to formulate a generalization of Theorem 1.

###### Theorem 3.

Suppose that a vector random field , , satisfies Assumption 1 and . If a limit distribution exists for at least one of the random variables

then the limit distribution of the other random variable exists as well, and the limit distributions coincide when .

###### Assumption 2.

Components , , of have spectral densities , , such that

###### Theorem 4.

###### Corollary 1.

###### Remark 7.

It is possible to obtain general versions of Theorems 2 and 4 by removing the assumptions about and and requesting only or respectively. However, it requires an extension of the known non-central limit theorems for vector fields from the discrete to continuous settings, see Section 6. Also, in such general cases the summands in the limit random variables analogous to (3.3) would be dependent.

As an example we consider the first Minkowski functional of Student random fields. The special cases of only weakly or only strongly dependent components were studied in Leonenko and Olenko (2014). It was shown that in both cases the asymptotic distribution is , but with different normalisations, see Theorems 3 and 6 in Leonenko and Olenko (2014). Figure 1 gives a two-dimensional excursion set above the level for a realisation of a long-range dependent Cauchy model. The excursion set is shown in black. More details are provided in Section 5.

The next result shows that for the first Minkowski functional of t-fields obtained from vector random fields with both weakly and strongly dependent components the limit distributions can be non-Gaussian.

###### Theorem 5.

Let Assumption 2 hold true, , , . Then the random variable

converges in distribution to the random variable

where are independent copies of the random variable

and is the signum function.

###### Remark 8.

Random variables have the Rosenblatt-type distribution, see Anh et al. (2015).

###### Remark 9.

As , the first component is weakly dependent and the remaining components , , are strongly dependent.

## 4 Proofs of the results from Section 3

###### Proof of Theorem 1.

First we study the behaviour of . Note that

Let us denote the sets as follows

and

Then and can be written as

Note that all components , , in the first term are weakly dependent and the variance equals

Let and . The Jacobian of this transformation is . Denoting , , one can rewrite as

It follows from and Remark 5 that for weakly dependent components we get

Noting that

one obtains the following asymptotic behaviour of

(4.1)

In contrast, the components , , in the second term are strongly dependent and can be obtained as follows
