# Exploration of a Cosine Expansion Lattice Scheme

In this article, we combine lattice sequences from quasi-Monte Carlo rules with the philosophy of the Fourier-cosine (COS) method to design an approximation scheme for computing expectations. We study the error of this scheme, compare it with our previous work on wavelets, and perform some numerical experiments.


## 1 Introduction

In this work, we derive a numerical integration formula for a function $f$ with respect to a probability measure $\mu$. Namely, we aim to approximate the following quantity:

$$\mathbb{E}[f(Y)]=\int_{\mathbb{R}^s}f(y)\,\mu(dy). \tag{1.1}$$

The inspiration for this work comes from two promising methods in numerical integration: the COS method and lattice rules.

The COS method is a numerical integration method developed in the context of option pricing; see [10]. Within this framework, the fair value of a financial option can be expressed as the expectation of the discounted payoff function. Using the fact that the payoff function can be expressed as a cosine series, an approximation of the integral in terms of the Fourier transform of the risk-neutral measure and the cosine coefficients of the payoff function was presented. This method is computationally efficient, requiring only simple addition operations. It also makes use of the characteristic function of a probability measure, which is available more often than the density function. Ever since, there have been numerous extensions of the COS method, including pricing Bermudan options [11], finding ruin probabilities [4], solving backward stochastic differential equations [22] and computing valuation adjustments [2].

There are also several applications for which it is necessary to extend the COS method beyond the one-dimensional situation, for example, in [21]. However, the tensor extension used in previous studies suffers from the curse of dimensionality, with the number of summation terms increasing exponentially with the dimension. Therefore, further input is required for such extensions to be feasible in practice.

Thus, we take insight from quasi-Monte Carlo (QMC) rules, which approximate integrals of the form

$$\int_{[0,1]^s}f(y)\,dy$$

by equal-weight quadrature sums

$$\frac{1}{N}\sum_{n=0}^{N-1}f(y_n).$$

Readers are referred to [8], [18] and references therein for further details.

In particular, we are interested in the rank-1 lattice rule construction of quadrature points, where, for a given number of points $N$ in dimension $s$, one chooses a vector $g\in\mathbb{Z}^s$, called the generating vector, and generates the point set

$$\left(\left\{\frac{ng}{N}\right\}\right)_{0\le n<N},$$

where the notation $\{\cdot\}$ denotes taking the fractional part of the real number in each dimension. The quadrature points are the result of applying some fixed function to this point set.
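As a concrete illustration, such a point set can be generated in a few lines; the generating vector used below is an arbitrary placeholder for demonstration, not one obtained from a component-by-component search.

```python
import numpy as np

def rank1_lattice(g, N):
    """Rank-1 lattice points ({n g / N})_{0 <= n < N} in [0, 1)^s."""
    g = np.asarray(g, dtype=np.int64)
    n = np.arange(N, dtype=np.int64).reshape(-1, 1)
    # The fractional part is computed exactly in integer arithmetic.
    return (n * g % N) / N

# Example: 8 points in dimension 2 with an illustrative generating vector.
P = rank1_lattice([1, 5], 8)
```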

There have been numerous research papers supporting the implementation of lattice-rule QMC: in identifying suitable generating vectors [6, 16, 24, 23], in efficient algorithms to generate such vectors [19, 20], and in extensible lattice rules [5, 9, 13, 14]. In [7], the authors derived an error bound for QMC with tent-transformed lattice rules for functions within the half-period cosine space. Noting the connection between the half-period cosine space and the tensor extension of the cosine series, we are inspired to combine the two approaches and transfer the rich results from the lattice literature to Fourier expansion schemes.

This work is organized as follows. In Section 2, we present the two components of the cosine expansion lattice scheme: the half-period cosine expansion and the tent-transformed lattice. We also present the full scheme in this section. In Section 3, we give further details regarding our current results by providing an alternative formulation of its error bound and connecting the scheme to periodic wavelets. Numerical experiments are presented in Section 4 and we conclude our findings in Section 5.

Before we begin, we mention some conventions used in this work. We assume all the integrals in the computation to be finite; therefore Fubini's theorem can be applied and we exchange the order of integration without notice. Operations on vectors act component-wise. We use the following notation:

• The natural number set $\mathbb{N}$;

• The positive integer set $\mathbb{N}_+$, and similarly $\mathbb{R}_+$;

• The truncated integer set $\mathbb{Z}_N:=\{0,1,\dots,N-1\}$ for $N\in\mathbb{N}_+$;

• The indicator function
$$\mathbf{1}_D(y)=\begin{cases}1, & \text{if }y\in D;\\ 0, & \text{otherwise};\end{cases}$$

• The ceiling function $\lceil\cdot\rceil$, mapping $x$ to the smallest integer not less than $x$.

## 2 Cosine Expansion Lattice Scheme

In this section, we introduce the cosine expansion lattice scheme, whose construction consists of two parts: a projection of the original integrand on a reproducing kernel space and a numerical integration technique based on a tent-transformed lattice rule. We will briefly describe the intuition behind our derivation.

### 2.1 The half-period cosine space

In a framework similar to that of the COS method from [10], we would like to define a cosine-based periodic expansion of a function on a bounded domain. This allows us to connect the expectation problem (1.1) to a Fourier transform.

The common practice to transform Equation (1.1) into a finite problem for computational purposes is to restrict the domain of integration to a predefined box $D=[a,b]$. This step is justified as long as $D$ contains the majority of the mass of the measure. In fact, this is equivalent to replacing the original integrand $f$ by $f\mathbf{1}_D$, or setting the function values outside $D$ to zero.

The COS method uses this concept to its advantage by replacing an originally non-periodic one-dimensional function $f$ by a cosine series projection that coincides with $f$ on the domain $[a,b]$, but is periodic throughout the whole real line.

To be precise, the integrand is replaced by¹

$$\bar f(y):={\sum_{k=0}^{N-1}}'\,\tilde f_{\cos}(k)\cos\left(k\pi\frac{y-a}{b-a}\right),$$

where

$$\tilde f_{\cos}(k):=\frac{2}{b-a}\int_a^b f(y)\cos\left(k\pi\frac{y-a}{b-a}\right)dy.$$

¹The notation $\sum{}'$ denotes that the first term of the summation is weighted by one half.

In this case, the original one-dimensional expectation is approximated by

$$\begin{aligned}\mathbb{E}[f(Y)]&\approx\int_{\mathbb{R}}{\sum_{k=0}^{N-1}}'\,\tilde f_{\cos}(k)\cos\left(k\pi\frac{y-a}{b-a}\right)\mu(dy)\\&={\sum_{k=0}^{N-1}}'\,\tilde f_{\cos}(k)\int_{\mathbb{R}}\cos\left(k\pi\frac{y-a}{b-a}\right)\mu(dy)\\&={\sum_{k=0}^{N-1}}'\,\tilde f_{\cos}(k)\,\Re\left\{F\left(\frac{k\pi}{b-a}\right)\exp\left(-\imath\frac{k\pi a}{b-a}\right)\right\}.\end{aligned}$$

Note that within $[a,b]$, $\bar f$ is a projection of $f$ onto a finite cosine series space and converges to $f$ in the $L^2$ norm as $N$ tends to infinity. However, when considered on the whole domain $\mathbb{R}$, $\bar f$ is a periodic function and thus deviates from $f$ itself. Once again, the error of this transformation is kept under control as the probability mass outside $[a,b]$ is small, and we make use of the Fourier transform $F$ of the measure, which is typically available, in the approximation.
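To make the one-dimensional recipe concrete, the following sketch approximates $\mathbb{E}[f(Y)]$ for a standard normal $Y$, whose characteristic function $\exp(-t^2/2)$ plays the role of $F$, with the illustrative choice $f(y)=y^2$ (exact expectation $1$); the cosine coefficients are computed here by simple trapezoidal quadrature rather than in closed form.

```python
import numpy as np

def trapz(vals, x):
    """Composite trapezoidal rule (kept local to avoid version-specific NumPy names)."""
    return float(np.sum((vals[1:] + vals[:-1]) * (x[1:] - x[:-1]) / 2))

def cos_expectation(f, char_fn, a, b, N, m=20001):
    """COS approximation of E[f(Y)] on the truncation interval [a, b]."""
    y = np.linspace(a, b, m)
    fy = f(y)
    total = 0.0
    for k in range(N):
        # Cosine coefficient of f on [a, b], by numerical quadrature.
        fk = 2.0 / (b - a) * trapz(fy * np.cos(k * np.pi * (y - a) / (b - a)), y)
        # Re{ F(k pi / (b - a)) exp(-i k pi a / (b - a)) }.
        mk = (char_fn(k * np.pi / (b - a)) * np.exp(-1j * k * np.pi * a / (b - a))).real
        total += (0.5 if k == 0 else 1.0) * fk * mk  # first term weighted by one half
    return total

# Standard normal Y: characteristic function exp(-t^2 / 2); E[Y^2] = 1.
approx = cos_expectation(lambda y: y**2, lambda t: np.exp(-t**2 / 2),
                         a=-10.0, b=10.0, N=64)
```

With $[a,b]=[-10,10]$ the truncated mass of the standard normal is negligible, so the result agrees with the exact value to within the quadrature and truncation error.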

The main goal of this work is to apply the same idea in a higher-dimensional setting. Thus, we will consider a function space built on cosine functions and use the series projection of $f$ onto such a space as our replacement integrand. The space we pick is a modification of the half-period cosine space introduced in Section 2.3 of [7], and that article serves as an inspiration for the current work.

We define an inner product as

$$\langle f,g\rangle_{K^{\cos}_{\alpha,\gamma,s}(D)}:=\sum_{k\in\mathbb{N}^s}\tilde f_{\cos}(k)\,\tilde g_{\cos}(k)\,r^{-1}_{\alpha,\gamma,s}(k),$$

for some real number $\alpha$ and vector $\gamma$ defined below, where the multi-dimensional cosine coefficients are given by

$$\tilde f_{\cos}(k):=\int_{[0,1]^s}f(y\times(b-a)+a)\,2^{|k|_0/2}\prod_{j=1}^s\cos(\pi k_jy_j)\,dy,\quad\text{for }k=(k_1,\cdots,k_s)\in\mathbb{N}^s.$$

Here we define $|k|_0$ to be the number of non-zero components in $k$. Note that only the portion of $f$ within the predefined domain $D$ is used here.

For $\alpha\in\mathbb{R}_+$ and $\gamma\in\mathbb{R}_+$, the one-dimensional function $r_{\alpha,\gamma}$ is defined as

$$r_{\alpha,\gamma}(k):=\begin{cases}1 & \text{if }k=0;\\ \gamma|k|^{-2\alpha} & \text{if }k\neq0,\end{cases}$$

and the multidimensional function is set as

$$r_{\alpha,\gamma,s}(k):=\prod_{j=1}^s r_{\alpha,\gamma_j}(k_j),$$

for $k\in\mathbb{Z}^s$ and $\gamma=(\gamma_1,\dots,\gamma_s)\in\mathbb{R}^s_+$. The function $r_{\alpha,\gamma,s}$ is introduced into the norm here to assess the decay rate of the cosine coefficients, as it is closely related to the approximation error.

We define the corresponding norm as $\langle f,f\rangle^{1/2}$ and denote it by $\|f\|_{K^{\cos}_{\alpha,\gamma,s}(D)}$. The dummy variable is changed from $x$ to $y$ here in preparation for future computations. We write $f\in K^{\cos}_{\alpha,\gamma,s}(D)$ for any function $f$ such that $\|f\|_{K^{\cos}_{\alpha,\gamma,s}(D)}<\infty$.

The half-period cosine space is an example of a reproducing kernel Hilbert space with the corresponding reproducing kernel

$$\begin{aligned}K^{\cos}_{\alpha,\gamma,s}(D,x,y):=&\ \prod_{j=1}^s\left(\sum_{k_j\in\mathbb{Z}}r_{\alpha,\gamma_j}(k_j)\cos\left(\pi k_j\frac{x_j-a_j}{b_j-a_j}\right)e^{\imath\pi k_j\frac{y_j-a_j}{b_j-a_j}}\right)\\=&\ \sum_{k\in\mathbb{Z}^s}r_{\alpha,\gamma,s}(k)\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{x_j-a_j}{b_j-a_j}\right)\right)e^{\imath\pi k\cdot\frac{y-a}{b-a}},\end{aligned}$$

and, for the one-dimensional version,

$$\begin{aligned}K^{\cos}_{\alpha,\gamma}(D,x,y):=&\ 1+\sum_{k=1}^{\infty}r_{\alpha,\gamma}(k)\,\sqrt2\cos\left(\pi k\frac{x-a}{b-a}\right)\sqrt2\cos\left(\pi k\frac{y-a}{b-a}\right)\\=&\ 1+\sum_{k=1}^{\infty}r_{\alpha,\gamma}(k)\cos\left(\pi k\frac{x-a}{b-a}\right)\left(e^{\imath\pi k\frac{y-a}{b-a}}+e^{-\imath\pi k\frac{y-a}{b-a}}\right)\\=&\ \sum_{k\in\mathbb{Z}}r_{\alpha,\gamma}(k)\cos\left(\pi k\frac{x-a}{b-a}\right)e^{\imath\pi k\frac{y-a}{b-a}}.\end{aligned}$$

For any $f\in K^{\cos}_{\alpha,\gamma,s}(D)$ and $y\in D$, we have the reproducing property

$$f(y)=\langle f,K^{\cos}_{\alpha,\gamma,s}(D,\cdot,y)\rangle_{K^{\cos}_{\alpha,\gamma,s}(D)}.$$

Readers are referred to [1] and [8] for further information on reproducing kernel Hilbert spaces.

In this work, we use an alternative kernel which drops the $r_{\alpha,\gamma,s}$ function. It is defined as

$$K^{\cos}_s(x,y):=\prod_{j=1}^s\left(\sum_{k_j\in\mathbb{Z}}\cos\left(\pi k_j\frac{x_j-a_j}{b_j-a_j}\right)e^{\imath\pi k_j\frac{y_j-a_j}{b_j-a_j}}\right)=\sum_{k\in\mathbb{Z}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{x_j-a_j}{b_j-a_j}\right)\right)e^{\imath\pi k\cdot\frac{y-a}{b-a}}.$$

We suppress the dependence on $D$ here to simplify our notation.

Using the reproducing property, we have the following equation:

$$f(y)=\sum_{k\in\mathbb{N}^s}\tilde f_{\cos}(k)(\sqrt2)^{|k|_0}\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right), \tag{2.1}$$

for any $f\in K^{\cos}_{\alpha,\gamma,s}(D)$ and $y\in D$. We use the expansion on the right-hand side of (2.1) as the replacement integrand in Equation (1.1), and denote it by $\bar f$.

###### Definition 2.1 (Half-period Cosine Expansion).

For any given function $f$ and given domain $D$, the half-period cosine expansion, $\bar f$, is defined as

$$\bar f(y):=\sum_{k\in\mathbb{N}^s}\tilde f_{\cos}(k)(\sqrt2)^{|k|_0}\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right),$$

where the cosine coefficients are defined as

$$\tilde f_{\cos}(k):=\int_{[0,1]^s}f(y\times(b-a)+a)\,2^{|k|_0/2}\prod_{j=1}^s\cos(\pi k_jy_j)\,dy.$$
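As a quick sanity check on this definition, specialized to $s=1$ with the illustrative choice $f(y)=y$ on $D=[0,1]$, the numerically computed coefficients can be compared against the closed form $\tilde f_{\cos}(0)=1/2$ and $\tilde f_{\cos}(k)=\sqrt2\,((-1)^k-1)/(\pi k)^2$ for $k\ge1$:

```python
import numpy as np

def trapz(vals, x):
    """Composite trapezoidal rule."""
    return float(np.sum((vals[1:] + vals[:-1]) * (x[1:] - x[:-1]) / 2))

# s = 1, D = [0, 1], f(y) = y, so f(y * (b - a) + a) = y on the unit interval.
t = np.linspace(0.0, 1.0, 4001)
kmax = 6
numeric = []
for k in range(kmax):
    scale = np.sqrt(2.0) if k > 0 else 1.0  # the factor 2^{|k|_0 / 2}
    numeric.append(scale * trapz(t * np.cos(np.pi * k * t), t))

exact = [0.5] + [np.sqrt(2.0) * ((-1) ** k - 1) / (np.pi * k) ** 2
                 for k in range(1, kmax)]
```

The multi-dimensional coefficients are tensor products of such one-dimensional integrals, with one factor of $\sqrt2$ per non-zero index.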

### 2.2 Lattice rule approximations

There are essentially two drawbacks when extending the COS method to higher dimensions. First, the cosine coefficients may be difficult to calculate, especially in a recurring situation. In our previous work [3], we aimed to remedy this shortcoming by adopting a wavelet basis so that Equation (1.1) can be approximated by a weighted sum of local function values built from the wavelet basis functions. We will compare our current scheme with that work in Section 3.2.

The second point of concern is that if we simply apply the tensor product of cosine spaces in higher dimensions, as we did in the last section, we will face the curse of dimensionality.

Here, we aim to address the above two issues by constructing quadrature rules for the integration of

$$\int_{\mathbb{R}^s}\bar f(y)\,\mu(dy). \tag{2.2}$$

In particular, the approximant takes the specific form

$$\int_{\mathbb{R}^s}\frac1N\sum_{n=0}^{N-1}f(p_n)K^{\cos}_s(p_n,y)\,\mu(dy), \tag{2.3}$$

where $(p_n)_{0\le n<N}$ is some predetermined quadrature point set.

For the half-period cosine space adopted in the previous section, Dick et al. showed in [7] that a combination of rank-1 lattice rules and tent transformations converges well in the context of quasi-Monte Carlo methods. We now introduce the quadrature points proposed in [7] for the half-period cosine space, which we shall adapt for the approximation in Equation (2.3).

A lattice point set with $N$ points and generating vector $g$ is given by

$$P(g,N):=\left\{\left\{\frac{ng}{N}\right\}:0\le n<N\right\}.$$

The tent transformation $\phi$ is defined as

$$\phi(x):=1-|2x-1|,$$

and the higher-dimensional version is obtained by applying the function component-wise. The tent-transformed lattice point set in [7] is given by $P_\phi(g,N):=\{\phi(p):p\in P(g,N)\}$. However, since we have to apply the quadrature points on a general box $D$, instead of just on $[0,1]^s$, we have to transform the lattice rules:

$$P_\phi(g,N,D):=\{\phi(p)\times(b-a)+a:p\in P(g,N)\}.$$
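The construction above can be sketched as follows; the generating vector and box are illustrative placeholders. The final check verifies numerically the identity $\cos(\pi k\phi(y))=\cos(2\pi ky)$ on $[0,1]$, which underpins the error analysis below.

```python
import numpy as np

def tent(x):
    """Tent transformation phi(x) = 1 - |2x - 1|, applied component-wise."""
    return 1.0 - np.abs(2.0 * x - 1.0)

def tent_lattice(g, N, a, b):
    """Tent-transformed rank-1 lattice points, mapped onto the box [a, b]."""
    g = np.asarray(g, dtype=np.int64)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    lattice = (np.arange(N).reshape(-1, 1) * g % N) / N  # {n g / N}
    return tent(lattice) * (b - a) + a

# Illustrative generating vector and box D = [-2, 2] x [0, 1].
P = tent_lattice([1, 5], 16, a=[-2.0, 0.0], b=[2.0, 1.0])

# The identity cos(pi k phi(y)) == cos(2 pi k y) on [0, 1].
y = np.linspace(0.0, 1.0, 101)
for k in range(4):
    assert np.allclose(np.cos(np.pi * k * tent(y)), np.cos(2 * np.pi * k * y))
```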

We wish to quantify the integration error of applying Equation (2.3) to approximate the expectation in Equation (2.2) for functions in the half-period cosine space $K^{\cos}_{\alpha,\gamma,s}(D)$. In addition, we also enforce some convergence requirements on the cosine transform of the measure $\mu$.

###### Theorem 2.2.

Consider any function $f\in K^{\cos}_{\alpha,\gamma,s}(D)$ and any probability measure $\mu$ such that

$$\left(\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)\right)^2\le r_{\beta,\rho,s}(k),$$

for some real number $\beta$ and vector $\rho\in\mathbb{R}^s_+$. Furthermore, we assume that $\beta>\alpha+1/2$ and $\alpha>1/2$; namely, the cosine transform of the measure decays at least algebraically and quicker than the cosine coefficients of $f$. The error for the numerical integration using the tent-transformed lattice rule on $f$ is then bounded by

$$\begin{aligned}\epsilon_2(f,\mu,P_\phi(g,N,D)):=&\ \left|\int_{\mathbb{R}^s}\frac1N\sum_{n=0}^{N-1}\bar f(p_{\phi,n})K^{\cos}_s(p_{\phi,n},y)\,\mu(dy)-\int_{\mathbb{R}^s}\bar f(y)\,\mu(dy)\right|\\ \le&\ \left(\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}r_{\alpha,\gamma,s}(h)\right)^{\frac12}\left(\prod_{j=1}^s C_j\right)^{\frac12}\|f\|_{K^{\cos}_{\alpha,\gamma,s}(D)}\end{aligned}$$

for some constants $C_j$ independent of $h$, where $\mathcal{L}^\perp:=\{h\in\mathbb{Z}^s:h\cdot g\equiv0\ (\mathrm{mod}\ N)\}$ is the dual lattice.

###### Proof.

The general flow of this proof follows the proof of Theorem 2 in [7]. Let $f\in K^{\cos}_{\alpha,\gamma,s}(D)$ and let $\bar f$ be its half-period cosine expansion.

For any $k\in\mathbb{Z}$, we have

$$\cos(\pi k\phi(y))=\cos(2\pi ky)\quad\text{for all }y\in[0,1],$$

and hence

$$\begin{aligned}\bar f(p_{\phi,n})=&\ \sum_{k\in\mathbb{N}^s}\tilde f_{\cos}(k)(\sqrt2)^{|k|_0}\prod_{j=1}^s\cos\left(\pi k_j\phi\left(\left\{\frac{ng_j}{N}\right\}\right)\right)\\=&\ \sum_{k\in\mathbb{N}^s}\tilde f_{\cos}(k)(\sqrt2)^{|k|_0}\prod_{j=1}^s\cos\left(2\pi k_j\frac{ng_j}{N}\right)\\=&\ \sum_{h\in\mathbb{Z}^s}(\sqrt2)^{-|h|_0}\tilde f_{\cos}(h)\,e^{2\pi\imath n(h\cdot g)/N}.\end{aligned}$$

The reproducing kernel can be rewritten as

$$K^{\cos}_s(p_{\phi,n},y)=\sum_{k\in\mathbb{Z}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)e^{2\pi\imath n(k\cdot g)/N}.$$

Therefore, we obtain

$$\int_{\mathbb{R}^s}\frac1N\sum_{n=0}^{N-1}\bar f(p_{\phi,n})K^{\cos}_s(p_{\phi,n},y)\,\mu(dy)=\int_{\mathbb{R}^s}\sum_{h\in\mathbb{Z}^s}\sum_{k\in\mathbb{Z}^s}(\sqrt2)^{-|h|_0}\tilde f_{\cos}(h)\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\frac1N\sum_{n=0}^{N-1}e^{2\pi\imath n[(h+k)\cdot g]/N}\,\mu(dy).$$

The sum $\frac1N\sum_{n=0}^{N-1}e^{2\pi\imath n[(h+k)\cdot g]/N}$ is a character sum over the group $\mathbb{Z}_N$, which is equal to one if $(h+k)\cdot g$ is a multiple of $N$ and zero otherwise. From this, we get

$$\int_{\mathbb{R}^s}\frac1N\sum_{n=0}^{N-1}\bar f(p_{\phi,n})K^{\cos}_s(p_{\phi,n},y)\,\mu(dy)-\int_{\mathbb{R}^s}\bar f(y)\,\mu(dy)=\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}\sum_{k\in\mathbb{Z}^s}(\sqrt2)^{-|h-k|_0}\tilde f_{\cos}(h-k)\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy). \tag{2.4}$$

Based on this formula and an application of the Cauchy-Schwarz inequality, we obtain

$$\begin{aligned}&\left|\int_{\mathbb{R}^s}\frac1N\sum_{n=0}^{N-1}\bar f(p_{\phi,n})K^{\cos}_s(p_{\phi,n},y)\,\mu(dy)-\int_{\mathbb{R}^s}\bar f(y)\,\mu(dy)\right|\\&=\left|\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}\sum_{k\in\mathbb{Z}^s}(\sqrt2)^{-|h-k|_0}\tilde f_{\cos}(h-k)\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)\right|\\&\le\left(\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}\left(\sum_{k\in\mathbb{Z}^s}(\sqrt2)^{-|h-k|_0}\tilde f_{\cos}(h-k)\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)\right)^2\right)^{\frac12}\\&\le\left(\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}\sum_{k\in\mathbb{Z}^s}r_{\alpha,\gamma,s}(h-k)\left(\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)\right)^2\right)^{\frac12}\|f\|_{K^{\cos}_{\alpha,\gamma,s}(D)}\\&=:\|f\|_{K^{\cos}_{\alpha,\gamma,s}(D)}\left(\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}I(h)\right)^{\frac12}. \tag{2.5}\end{aligned}$$

Making use of the smoothness assumption on the probability measure, we find

$$\begin{aligned}I(h)=&\ \sum_{k\in\mathbb{Z}^s}r_{\alpha,\gamma,s}(h-k)\left(\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)\right)^2\\ \le&\ \sum_{k\in\mathbb{Z}^s}r_{\alpha,\gamma,s}(h-k)\,r_{\beta,\rho,s}(k)=\sum_{k\in\mathbb{Z}^s}\prod_{j=1}^s r_{\alpha,\gamma_j}(h_j-k_j)\,r_{\beta,\rho_j}(k_j)\\=&\ \prod_{j=1}^s\sum_{k_j=-\infty}^{\infty}r_{\alpha,\gamma_j}(h_j-k_j)\,r_{\beta,\rho_j}(k_j)=:\prod_{j=1}^s I_j(h_j). \tag{2.6}\end{aligned}$$

In order to control the error, we need to control the sum in Equation (2.6) for every $h_j\in\mathbb{Z}$. We have three different cases to consider.

##### Case 1: $h_j=0$.

In this case,

$$I_j(0)=\sum_{k_j=-\infty}^{\infty}r_{\alpha,\gamma_j}(-k_j)\,r_{\beta,\rho_j}(k_j)=1+2\gamma_j\rho_j\sum_{k_j=1}^{\infty}\frac{1}{k_j^{2\alpha+2\beta}}=1+2\gamma_j\rho_j\,\zeta(2\alpha+2\beta),$$

in which $\zeta$ denotes the Riemann zeta function.

##### Case 2: $h_j>0$.

In this case,

$$\begin{aligned}I_j(h_j)=&\ \sum_{k_j=-\infty}^{\infty}r_{\alpha,\gamma_j}(h_j-k_j)\,r_{\beta,\rho_j}(k_j)\\=&\ \sum_{k_j=1}^{\infty}r_{\alpha,\gamma_j}(h_j+k_j)\,r_{\beta,\rho_j}(k_j)+r_{\alpha,\gamma_j}(h_j)+\sum_{k_j=1}^{h_j-1}r_{\alpha,\gamma_j}(h_j-k_j)\,r_{\beta,\rho_j}(k_j)+r_{\beta,\rho_j}(h_j)\\&+\sum_{k_j=1}^{\infty}r_{\alpha,\gamma_j}(k_j)\,r_{\beta,\rho_j}(h_j+k_j)\\ \le&\ \gamma_j|h_j|^{-2\alpha}\sum_{k_j=1}^{\infty}\rho_j|k_j|^{-2\beta}+r_{\alpha,\gamma_j}(h_j)+\sum_{k_j=1}^{h_j-1}\gamma_j|h_j-k_j|^{-2\alpha}\rho_j|k_j|^{-2\beta}+\rho_j\gamma_j^{-1}r_{\alpha,\gamma_j}(h_j)\\&+\rho_j\gamma_j^{-1}r_{\alpha,\gamma_j}(h_j)\sum_{k_j=1}^{\infty}r_{\alpha,\gamma_j}(k_j)\\ \le&\ \rho_j\zeta(2\beta)\,r_{\alpha,\gamma_j}(h_j)+r_{\alpha,\gamma_j}(h_j)+\gamma_j|h_j|^{-2\alpha}2^{2\alpha}\sum_{k_j=1}^{h_j-1}\rho_j|k_j|^{-2(\beta-\alpha)}+\rho_j\gamma_j^{-1}r_{\alpha,\gamma_j}(h_j)\\&+\rho_j\,r_{\alpha,\gamma_j}(h_j)\sum_{k_j=1}^{\infty}|k_j|^{-2\alpha}\\ \le&\ \rho_j\zeta(2\beta)\,r_{\alpha,\gamma_j}(h_j)+r_{\alpha,\gamma_j}(h_j)+2^{2\alpha}\rho_j\zeta(2(\beta-\alpha))\,r_{\alpha,\gamma_j}(h_j)+\rho_j\gamma_j^{-1}r_{\alpha,\gamma_j}(h_j)+\rho_j\zeta(2\alpha)\,r_{\alpha,\gamma_j}(h_j).\end{aligned}$$

The above inequalities simply follow from the definition of $r_{\alpha,\gamma}$ and the orderings $|h_j+k_j|\ge|h_j|$, $|h_j+k_j|\ge|k_j|$ and $|h_j|^{-2\beta}\le|h_j|^{-2\alpha}$. Note that we use the convention $\sum_{k_j=1}^{0}=0$ here, so the inequality also holds for $h_j=1$.

##### Case 3: $h_j<0$.

Finally, in this case,

$$\begin{aligned}I_j(h_j)=&\ \sum_{k_j=-\infty}^{\infty}r_{\alpha,\gamma_j}(h_j-k_j)\,r_{\beta,\rho_j}(k_j)\\=&\ \sum_{k_j=1}^{\infty}r_{\alpha,\gamma_j}(k_j)\,r_{\beta,\rho_j}(k_j-h_j)+r_{\beta,\rho_j}(h_j)+\sum_{k_j=1}^{-h_j-1}r_{\alpha,\gamma_j}(h_j+k_j)\,r_{\beta,\rho_j}(k_j)+r_{\alpha,\gamma_j}(h_j)\\&+\sum_{k_j=1}^{\infty}r_{\alpha,\gamma_j}(h_j-k_j)\,r_{\beta,\rho_j}(k_j)\\ \le&\ \rho_j\zeta(2\alpha)\,r_{\alpha,\gamma_j}(h_j)+\rho_j\gamma_j^{-1}r_{\alpha,\gamma_j}(h_j)+\sum_{k_j=1}^{-h_j-1}\gamma_j|h_j+k_j|^{-2\alpha}\rho_j|k_j|^{-2\beta}+r_{\alpha,\gamma_j}(h_j)+\rho_j\zeta(2\beta)\,r_{\alpha,\gamma_j}(h_j)\\ \le&\ \rho_j(\zeta(2\alpha)+\zeta(2\beta))\,r_{\alpha,\gamma_j}(h_j)+\rho_j\gamma_j^{-1}r_{\alpha,\gamma_j}(h_j)+r_{\alpha,\gamma_j}(h_j)+2^{2\alpha}\rho_j\zeta(2(\beta-\alpha))\,r_{\alpha,\gamma_j}(h_j).\end{aligned}$$

The derivation of Case 3 is similar to Case 2. The only difference is that the orderings $|k_j-h_j|\ge|h_j|$ and $|k_j-h_j|\ge|k_j|$ now hold for $h_j<0$ and $k_j\ge1$.

Finally, let $C_j$ be the maximum of the three bounding constants above (the factors multiplying $r_{\alpha,\gamma_j}(h_j)$, with $r_{\alpha,\gamma_j}(0)=1$ in Case 1), which is independent of $h_j$, so that $I_j(h_j)\le C_j\,r_{\alpha,\gamma_j}(h_j)$. Then,

$$\left|\int_{\mathbb{R}^s}\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}\sum_{k\in\mathbb{Z}^s}(\sqrt2)^{-|h-k|_0}\tilde f_{\cos}(h-k)\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)\right|\le\left(\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}\prod_{j=1}^s C_j\,r_{\alpha,\gamma_j}(h_j)\right)^{\frac12}\|f\|_{K^{\cos}_{\alpha,\gamma,s}(D)}=\left(\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}r_{\alpha,\gamma,s}(h)\right)^{\frac12}\left(\prod_{j=1}^s C_j\right)^{\frac12}\|f\|_{K^{\cos}_{\alpha,\gamma,s}(D)}.$$

In this proof, we break down the sum in Equation (2.5) and derive a bound for each dual lattice point along each direction, namely, the terms in (2.6). By considering the three possible cases for $h_j$ (positive, negative and zero), we provide a bound in each case. Separating the three cases is necessary because we have to identify the region where both $|h_j-k_j|$ and $|k_j|$ are smaller than $|h_j|$ as $k_j$ moves along the real line, and apply a separate bound on that region.

Taking the maximum over the three cases and putting it back into the sum in Equation (2.5) finishes the proof. ∎
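The character-sum identity invoked in the proof, $\frac1N\sum_{n=0}^{N-1}e^{2\pi\imath nm/N}=1$ if $N$ divides $m$ and $0$ otherwise, is straightforward to confirm numerically:

```python
import numpy as np

def character_sum(m, N):
    """(1 / N) * sum_{n=0}^{N-1} exp(2 pi i n m / N)."""
    n = np.arange(N)
    return np.exp(2j * np.pi * n * m / N).sum() / N

# The sum is 1 exactly when m is a multiple of N, and 0 otherwise.
N = 8
for m in range(-16, 17):
    expected = 1.0 if m % N == 0 else 0.0
    assert abs(character_sum(m, N) - expected) < 1e-9
```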

###### Remark 2.3.

Integrals of the form $\int_{[0,1]^s}f(y)\,dy$ can be seen as special cases of Equation (2.2), where the probability measure $\mu$ is the uniform distribution over $D=[0,1]^s$. In this case,

$$\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)=\int_{[0,1]^s}\left(\prod_{j=1}^s\cos(\pi k_jy_j)\right)dy=\begin{cases}1 & \text{if }k\equiv0;\\ 0 & \text{otherwise.}\end{cases}$$

Therefore, we may simplify the square of the term in Equation (2.5) as follows:

$$\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}\left(\sum_{k\in\mathbb{Z}^s}(\sqrt2)^{-|h-k|_0}\tilde f_{\cos}(h-k)\int_{\mathbb{R}^s}\left(\prod_{j=1}^s\cos\left(\pi k_j\frac{y_j-a_j}{b_j-a_j}\right)\right)\mu(dy)\right)^2=\sum_{h\in\mathcal{L}^\perp\setminus\{0\}}2^{-|h|_0}\,\tilde f_{\cos}(h)^2,$$

since only the $k=0$ term survives the integration.
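The orthogonality used in this remark can be confirmed numerically through its one-dimensional factors (the multi-dimensional integral is their product); the quadrature resolution below is an arbitrary choice:

```python
import numpy as np

def cos_integral(k, m=20001):
    """Trapezoidal approximation of int_0^1 cos(pi k y) dy."""
    y = np.linspace(0.0, 1.0, m)
    vals = np.cos(np.pi * k * y)
    return float(np.sum((vals[1:] + vals[:-1]) / 2) / (m - 1))

# Exact values: 1 for k = 0 and 0 for every k >= 1.
results = [cos_integral(k) for k in range(5)]
```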