Exploration of a Cosine Expansion Lattice Scheme

07/05/2019 ∙ by Ki Wai Chau, et al.

In this article, we combine a lattice sequence from Quasi-Monte Carlo rules with the philosophy of the Fourier-cosine method to design an approximation scheme for computing expectations. We study the error of this scheme, compare it with our previous work on wavelets, and perform some numerical experiments.


1 Introduction

In this work, we derive a numerical integration formula for a function $g$ with respect to a probability measure $\mu$. Namely, we aim to approximate the following quantity:

(1.1)   $\mathbb{E}^{\mu}[g] = \int_{\mathbb{R}^d} g(\boldsymbol{x}) \, \mathrm{d}\mu(\boldsymbol{x}).$
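For concreteness, the quantity in (1.1) can always be estimated by plain Monte Carlo sampling; the following minimal Python sketch (with an arbitrary choice of $g$ and $\mu$, assumed purely for illustration) serves as a simple baseline for the quadrature schemes discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline: estimate E^mu[g] with g(x) = ||x||^2 and mu the standard normal law on R^2.
# The exact value in this toy example is 2.
g = lambda x: np.sum(x**2, axis=1)
samples = rng.standard_normal((100_000, 2))   # i.i.d. draws from mu
print(g(samples).mean())                      # plain Monte Carlo estimate, close to 2.0
```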

The inspiration of this work comes from two promising methods in numerical integration, the COS method and lattice rules.

The COS method is a numerical integration method developed in the context of option pricing, see [10]. Within this framework, the fair value of a financial option can be expressed as the expectation of the discounted payoff function. Using the fact that the payoff function can be expressed as a cosine series, an approximation of the integral in terms of the Fourier transform of the risk-neutral measure and the cosine coefficients of the payoff function was presented. The method is computationally efficient, involving only simple summation operations. It also makes use of the characteristic function of a probability measure, which is available more often than the density function. Since then, there have been numerous extensions of the COS method, including pricing Bermudan options [11], finding ruin probabilities [4], solving backward stochastic differential equations [22] and computing valuation adjustments [2].

There are also several applications for which it is necessary to extend the COS method beyond the one-dimensional situation, for example, in [21]. However, the tensor-product extension used in previous studies suffers from the curse of dimensionality, with the number of summation terms increasing exponentially with the dimension. Therefore, further ideas are required for such extensions to be feasible in practice.

Thus, we take insight from Quasi-Monte Carlo (QMC) rules, which approximate integrals of the form $\int_{[0,1]^d} f(\boldsymbol{x}) \, \mathrm{d}\boldsymbol{x}$ with equal-weight quadrature rules of the form $\frac{1}{N} \sum_{k=0}^{N-1} f(\boldsymbol{x}_k)$. Readers are referred to [8], [18] and the references therein for further details.

In particular, we are interested in the rank-1 lattice rule construction of quadrature points: for a given number of points $N$ in dimension $d$, one chooses a vector $\boldsymbol{z} \in \mathbb{Z}^d$, called the generating vector, and generates the point set

$$\left\{ \left\{ \frac{k \boldsymbol{z}}{N} \right\} : k = 0, 1, \ldots, N-1 \right\},$$

where the notation $\{\cdot\}$ denotes taking the fractional part of a real number in each dimension. The quadrature points are the result of applying some fixed transformation to this point set.
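As an illustration, a rank-1 lattice point set can be generated in a few lines of Python; the generating vector below is a placeholder chosen for illustration, not a vector taken from the literature.

```python
import numpy as np

def rank1_lattice(n_points, gen_vector):
    """Rank-1 lattice point set {frac(k * z / N) : k = 0, ..., N-1} in [0, 1)^d."""
    k = np.arange(n_points).reshape(-1, 1)      # k = 0, ..., N-1
    z = np.asarray(gen_vector).reshape(1, -1)   # generating vector z
    return np.mod(k * z / n_points, 1.0)        # fractional part, taken in each dimension

# Example: 64 points in dimension 2 with a placeholder generating vector.
points = rank1_lattice(64, [1, 19])
print(points.shape)   # (64, 2)
```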

There have been numerous research papers supporting the implementation of lattice-rule QMC: on identifying suitable generating vectors [6, 16, 24, 23], on efficient algorithms to generate such vectors [19, 20], and on extensible lattice rules [5, 9, 13, 14]. In [7], the authors derived an error bound for QMC with tent-transformed lattice rules for functions within the half-period cosine space. Noting the connection between the half-period cosine space and the tensor extension of the cosine series, we are inspired to combine the two approaches and transfer the rich results from the lattice literature to Fourier expansion schemes.

This work is organized as follows. In Section 2, we present the two components of the cosine expansion lattice scheme: the half-period cosine expansion and the tent-transformed lattice. We also present the full scheme in this section. In Section 3, we give further details on our results by providing an alternative formulation of the error bound and connecting the scheme to periodic wavelets. Numerical experiments are presented in Section 4 and we conclude our findings in Section 5.

Before we begin, we mention some conventions used in this work. We assume all integrals appearing in the computations to be finite, so that Fubini's theorem can be applied and we may exchange the order of integration without further notice. Operations on vectors are applied component-wise.

Finally, some notation we use in this article:

  • The natural number set $\mathbb{N}$;

  • The positive integer set $\mathbb{N}_{+}$, and similarly for other sets restricted to positive elements;

  • The truncated integer set: for $M \in \mathbb{N}_{+}$ we write $\mathbb{Z}_{M} := \{0, 1, \ldots, M-1\}$;

  • The indicator function $\mathbb{1}_{A}$, which equals one on the set $A$ and zero otherwise;

  • The ceiling function $\lceil x \rceil$, the smallest integer greater than or equal to $x$.

2 Cosine Expansion Lattice Scheme

In this section, we introduce the cosine expansion lattice scheme, whose construction consists of two parts: a projection of the original integrand on a reproducing kernel space and a numerical integration technique based on a tent-transformed lattice rule. We will briefly describe the intuition behind our derivation.

2.1 The half-period cosine space

In a framework similar to that of the COS method from [10], we would like to define a cosine-based periodic expansion of the integrand $g$. This allows us to connect the expectation problem (1.1) to a Fourier transform.

The common practice for transforming Equation (1.1) into a finite problem for computational purposes is to restrict the domain of integration to a predefined box $[\boldsymbol{a}, \boldsymbol{b}]$. This step is justified as long as the box contains the majority of the mass of the measure. In fact, it is equivalent to replacing the original integrand $g$ by $g \mathbb{1}_{[\boldsymbol{a}, \boldsymbol{b}]}$, that is, setting the function values outside the box to zero.

The COS method uses this concept to its advantage by replacing an originally non-periodic one-dimensional function $g$ by a cosine series projection that coincides with $g$ on the domain $[a, b]$, but is periodic on the whole real line.

To be precise, the integrand $g$ is replaced by¹

$$g_{\mathrm{cos}}(x) := {\sum_{k=0}^{N-1}}{}' \, G_k \cos\left(k\pi \frac{x-a}{b-a}\right),$$

where

$$G_k := \frac{2}{b-a} \int_a^b g(x) \cos\left(k\pi \frac{x-a}{b-a}\right) \mathrm{d}x.$$

In this case, the original 1-D expectation is approximated by

$$\mathbb{E}^{\mu}[g] \approx {\sum_{k=0}^{N-1}}{}' \, G_k \, \Re\!\left( \varphi_{\mu}\!\left(\frac{k\pi}{b-a}\right) e^{-\mathrm{i} \frac{k\pi a}{b-a}} \right),$$

where $\varphi_{\mu}$ denotes the Fourier transform (characteristic function) of the measure.

¹The notation $\sum{}'$ denotes that the first term of the summation is weighted by one half.

Note that, within $[a, b]$, $g_{\mathrm{cos}}$ is a projection of $g$ onto a finite cosine series space and converges to $g$ in norm as $N$ tends to infinity. However, when considered on the whole domain $\mathbb{R}$, $g_{\mathrm{cos}}$ is a periodic function and thus deviates from $g$ itself. Once again, the error of this transformation is kept under control as long as the probability mass outside $[a, b]$ is small, and we make use of the Fourier transform $\varphi_{\mu}$, which is typically available, in the approximation.
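To make the one-dimensional recipe concrete, here is a minimal Python sketch in the spirit of the COS method: it approximates $\mathbb{E}[g(X)]$ for a standard normal $X$ and $g(x) = x^2$, pairing numerically computed cosine coefficients with the normal characteristic function. The truncation range, number of terms and quadrature resolution are arbitrary choices made for illustration.

```python
import numpy as np

def cos_expectation(g, char_fn, a, b, n_terms, n_quad=2000):
    """Approximate E[g(X)] by pairing a truncated cosine expansion of g on [a, b]
    with the characteristic function of X (in the spirit of the COS method)."""
    dx = (b - a) / n_quad
    x = a + (np.arange(n_quad) + 0.5) * dx                  # midpoint grid on [a, b]
    k = np.arange(n_terms)
    basis = np.cos(np.outer(k, np.pi * (x - a) / (b - a)))  # cos(k*pi*(x-a)/(b-a))
    G = 2.0 / (b - a) * np.sum(g(x) * basis, axis=1) * dx   # cosine coefficients of g
    u = k * np.pi / (b - a)
    F = np.real(char_fn(u) * np.exp(-1j * u * a))           # cosine transform of the measure
    weights = np.full(n_terms, 1.0)
    weights[0] = 0.5                                        # first term weighted by one half
    return np.sum(weights * G * F)

# Example: X ~ N(0, 1) and g(x) = x^2; the exact expectation is 1.
approx = cos_expectation(lambda x: x**2,
                         lambda u: np.exp(-0.5 * u**2),
                         a=-10.0, b=10.0, n_terms=64)
print(approx)   # close to 1.0
```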

The main goal of this work is to apply the same idea in a higher-dimensional setting. Thus, we will consider a functional space built on cosine functions and use the series projection of $g$ onto this space as our replacement integrand. The space we pick is a modification of the half-period cosine space introduced in Section 2.3 of [7], and that article serves as an inspiration for the current work.

We define an inner product as

for some real number and vector , where the multi-dimensional cosine coefficients are given by

Here, we also use the number of non-zero components of the index vector. Note that only the portion of $g$ within the predefined domain $[\boldsymbol{a}, \boldsymbol{b}]$ is used here.

For and , the one-dimensional function is defined as:

and the multidimensional function is set as,

for the relevant ranges of the arguments. This function is introduced into the norm to capture the decay rate of the cosine coefficients, as this decay is closely related to the approximation error.

We define the corresponding norm as the square root of the inner product of a function with itself. The dummy variable of the summation is renamed here in preparation for later computations. We say that a function belongs to the half-period cosine space whenever this norm is finite.

The half-period cosine space is an example of a reproducing kernel Hilbert space, with a corresponding reproducing kernel $K$ given by

and for the one-dimensional version,

For any function $f$ in the space and any point $\boldsymbol{y}$ in the domain, we have the reproducing property $f(\boldsymbol{y}) = \langle f, K(\cdot, \boldsymbol{y}) \rangle$.

Readers are referred to [1] and [8] for further information on reproducing kernel Hilbert spaces.

In this work, we use an alternative kernel which drops the function. It is defined as

We suppress the part here to simplify our notation.

Using the reproducing property, we have the following equation:

(2.1)

for any function in the space and any point in the domain. We use the expansion on the right-hand side of (2.1) as the replacement integrand in Equation (1.1); this is the half-period cosine expansion defined below.

Definition 2.1 (Half-period Cosine Expansion).

For any given function $g$ and a given domain $[\boldsymbol{a}, \boldsymbol{b}]$, the half-period cosine expansion of $g$ is defined as

where the cosine coefficients are defined as
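To make the definition concrete, the sketch below computes tensor-product cosine coefficients of a function on a box by midpoint quadrature. The normalisation used here, $G_{\boldsymbol{k}} = \big(\prod_j \frac{2}{b_j - a_j}\big) \int_{[\boldsymbol{a},\boldsymbol{b}]} g(\boldsymbol{x}) \prod_j \cos\big(k_j \pi \frac{x_j - a_j}{b_j - a_j}\big) \, \mathrm{d}\boldsymbol{x}$, is an assumption made for illustration; the exact convention of the definition above may differ.

```python
import itertools
import numpy as np

def cosine_coefficients(g, a, b, n_terms, n_quad=200):
    """Tensor-product cosine coefficients of g on the box [a, b], computed by
    midpoint quadrature (standard normalisation; the paper's convention may differ)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = a.size
    # Midpoint quadrature grid in each dimension.
    grids = [a[j] + (np.arange(n_quad) + 0.5) * (b[j] - a[j]) / n_quad for j in range(d)]
    mesh = np.meshgrid(*grids, indexing="ij")
    values = g(*mesh)                               # g evaluated on the full tensor grid
    cell = np.prod((b - a) / n_quad)                # volume of one quadrature cell
    norm = np.prod(2.0 / (b - a))                   # normalisation of the coefficients
    coeffs = np.empty((n_terms,) * d)
    for k in itertools.product(range(n_terms), repeat=d):
        basis = np.ones_like(values)
        for j in range(d):
            basis = basis * np.cos(k[j] * np.pi * (mesh[j] - a[j]) / (b[j] - a[j]))
        coeffs[k] = norm * np.sum(values * basis) * cell
    return coeffs

# Example: g(x, y) = x * y on [0, 1]^2; the (0, 0) coefficient equals 1 under this convention.
C = cosine_coefficients(lambda x, y: x * y, a=[0.0, 0.0], b=[1.0, 1.0], n_terms=4)
print(C[0, 0])   # close to 1.0
```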

2.2 Lattice rule approximations

There are essentially two drawbacks when extending the COS method to higher dimensions. First, the cosine coefficients may be difficult to calculate, especially when they need to be computed repeatedly. In our previous work [3], we aimed to remedy this shortcoming by adopting a wavelet basis, such that Equation (1.1) can be approximated by a weighted sum of local values built from the wavelet basis functions. We will compare our current scheme to that work in Section 3.2.

The second point of concern is that if we simply apply the tensor product of cosine spaces in a higher-dimensional case, as we have done in the last section, we will face the curse of dimensionality.

Here, we aim to address the above two issues by constructing quadrature rules for the integration of

(2.2)

In particular, the approximant takes the specific form,

(2.3)

where the evaluation points come from some predetermined quadrature point set.

For the half-period cosine space that we adopted in the previous section, Dick et al. showed in [7] that a combination of rank-1 lattice rules and tent transformations converges well in the context of quasi-Monte Carlo methods. We will now introduce the quadrature points proposed in [7] for the half-period cosine space, which we adapt for the approximation in Equation (2.3).

A lattice point set with $N$ points and generating vector $\boldsymbol{z} \in \mathbb{Z}^d$ is given by

$$\left\{ \left\{ \frac{k \boldsymbol{z}}{N} \right\} : k = 0, 1, \ldots, N-1 \right\}.$$

The tent transformation $\phi: [0,1] \to [0,1]$ is defined as

$$\phi(x) := 1 - |2x - 1|,$$

and the higher-dimensional version is obtained by applying the function component-wise. The tent-transformed lattice point set in [7] is given by

$$\left\{ \phi\!\left( \left\{ \frac{k \boldsymbol{z}}{N} \right\} \right) : k = 0, 1, \ldots, N-1 \right\}.$$

However, since we have to apply the quadrature points on a general box $[\boldsymbol{a}, \boldsymbol{b}]$, instead of just on $[0,1]^d$, we transform the lattice rule by mapping the tent-transformed points onto the box component-wise.
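The following sketch combines the pieces just described: rank-1 lattice points in $[0,1)^d$, the tent transformation applied component-wise, and a mapping to a general box. The generating vector and the affine rescaling to $[\boldsymbol{a}, \boldsymbol{b}]$ are assumptions made for illustration.

```python
import numpy as np

def tent_transformed_lattice(n_points, gen_vector, a, b):
    """Rank-1 lattice points in [0,1)^d, tent-transformed component-wise
    (phi(x) = 1 - |2x - 1|), then affinely rescaled to the box [a, b] (assumed mapping)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    k = np.arange(n_points).reshape(-1, 1)
    z = np.asarray(gen_vector).reshape(1, -1)
    lattice = np.mod(k * z / n_points, 1.0)        # fractional part of k*z/N
    tented = 1.0 - np.abs(2.0 * lattice - 1.0)     # tent transformation, component-wise
    return a + (b - a) * tented                    # rescale to the box [a, b]

# Example: 128 tent-transformed lattice points on the box [-5, 5] x [-5, 5].
pts = tent_transformed_lattice(128, [1, 47], a=[-5, -5], b=[5, 5])
print(pts.min(axis=0), pts.max(axis=0))
```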

We wish to quantify the integration error of applying Equation (2.3) to approximate the expectation in the form of Equation (2.2) for functions in the half-period cosine space. In addition to this condition, we also enforce some convergence requirements on the cosine transform of the measure $\mu$.

Theorem 2.2.

Consider any function $g$ in the half-period cosine space and any probability measure $\mu$ such that

for some real number and vector. Furthermore, we assume that the cosine transform of the measure decays at least algebraically and quicker than the cosine coefficients of $g$. The error of the numerical integration using the tent-transformed lattice rule is then bounded by

for some constant, where the bound involves a sum over the dual lattice.
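For orientation, in the lattice rule literature the dual lattice of a rank-1 lattice with $N$ points and generating vector $\boldsymbol{z}$ is the standard set

$$\Lambda^{\perp} := \left\{ \boldsymbol{h} \in \mathbb{Z}^d : \boldsymbol{h} \cdot \boldsymbol{z} \equiv 0 \ (\mathrm{mod}\ N) \right\},$$

where $\boldsymbol{h}$ denotes a generic integer frequency vector; the symbol $\Lambda^{\perp}$ is chosen here for illustration and need not match the notation used in the theorem.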

Proof.

The general flow of this proof follows that of Theorem 2 in [7]. Let $g$ be as in the statement of the theorem and consider its half-period cosine expansion.

For any , we have

and hence

The reproducing kernel can be rewritten as

Therefore, we obtain

The sum is a character sum over the group of integers modulo $N$, which is equal to one if the corresponding integer is a multiple of $N$ and zero otherwise. From this, we get

(2.4)
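For reference, the character sum identity invoked here is the standard one for rank-1 lattice rules: writing $\boldsymbol{h}$ for a generic integer frequency vector,

$$\frac{1}{N} \sum_{k=0}^{N-1} e^{2\pi \mathrm{i}\, k\, (\boldsymbol{h} \cdot \boldsymbol{z}) / N} = \begin{cases} 1, & \text{if } \boldsymbol{h} \cdot \boldsymbol{z} \equiv 0 \ (\mathrm{mod}\ N), \\ 0, & \text{otherwise}, \end{cases}$$

which is exactly the dichotomy used to pass to the sum over the dual lattice.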

Based on this formula and an application of the Cauchy-Schwarz inequality, we obtain

(2.5)

Making use of the smoothness assumption on the probability measure, we find

(2.6)

In order to control the error, we need to control the sum in Equation (2.6) for each component of the dual lattice points. We have three different cases to consider.

Case 1:

In this case,

in which $\zeta(\cdot)$ denotes the Riemann zeta function.

Case 2:

In this case,

The above inequality simply follows from the definition and the orderings , and . Note that we use the convention here, so the inequality also holds for .

Case 3:

Finally, in this case,

The derivation of Case 3 is similar to Case 2. The only difference is that we have the following orderings: for and .

Finally, let , which is independent of . Then,

In this proof, we break down the sum in Equation (2.5) and derive a bound for each dual lattice point along each direction, namely, for the terms in (2.6). By considering the three possible cases for each component, positive, negative and zero, we provide a bound in each case. It is necessary to separate the three cases because we have to identify the regions where both quantities involved stay small as the argument moves along the real line, and to apply a separate bound on those regions.

Taking the maximum over the three cases and substituting it back into the sum in Equation (2.5) finishes the proof. ∎

Remark 2.3.

Integrals over the box $[\boldsymbol{a}, \boldsymbol{b}]$ can be seen as special cases of Equation (2.2), where the probability measure is a uniform distribution over $[\boldsymbol{a}, \boldsymbol{b}]$. In this case, we may simplify the square of the term in Equation (2.5) as follows