# Efficient implementations of the Multivariate Decomposition Method for approximating infinite-variate integrals

In this paper we focus on efficient implementations of the Multivariate Decomposition Method (MDM) for approximating integrals of ∞-variate functions. Such ∞-variate integrals occur for example as expectations in uncertainty quantification. Starting with the anchored decomposition f = ∑_u⊂N f_u, where the sum is over all finite subsets of N and each f_u depends only on the variables x_j with j∈u, our MDM algorithm approximates the integral of f by first truncating the sum to some `active set' and then approximating the integral of the remaining functions f_u term-by-term using Smolyak or (randomized) quasi-Monte Carlo (QMC) quadratures. The anchored decomposition allows us to compute f_u explicitly by function evaluations of f. Given the specification of the active set and theoretically derived parameters of the quadrature rules, we exploit structures in both the formula for computing f_u and the quadrature rules to develop computationally efficient strategies to implement the MDM in various scenarios. In particular, we avoid repeated function evaluations at the same point. We provide numerical results for a test function to demonstrate the effectiveness of the algorithm.


## 1 Introduction

The Multivariate Decomposition Method (MDM) is an algorithm for approximating the integral of an ∞-variate function f defined on D^N for some domain D ⊆ R, and this paper presents the first results on the implementation of the MDM. The general idea of the MDM, see [5, 8, 14, 17, 18] (as well as [10, 13] under the name of Changing Dimension Algorithm), goes as follows. Assume that f admits a decomposition

 f(x) = ∑_{u⊂N} f_u(x_u),  (1)

where the sum is taken over all finite subsets u of

 N := {1, 2, 3, …},

and where each function f_u depends only on the variables x_j with j ∈ u. With ρ a given probability density function on D and ρ_u(x_u) := ∏_{j∈u} ρ(x_j), we define the integral of f by

 I(f) := ∑_{u⊂N} I_u(f_u),  with  I_u(f_u) := ∫_{D^{|u|}} f_u(x_u) ρ_u(x_u) dx_u,  (2)

and let I_∅(f_∅) := f_∅. The MDM algorithm for approximating the integral (2) is

 A(f) := ∑_{u∈U} A_u(f_u),  (3)

where U is the “active set”; for ∅ ≠ u ∈ U, each A_u is a |u|-dimensional quadrature rule, and A_∅(f_∅) := f_∅.

The error of the MDM algorithm satisfies the trivial bound

 |I(f) − A(f)| ≤ ∑_{u∉U} |I_u(f_u)| + ∑_{u∈U} |I_u(f_u) − A_u(f_u)|.  (4)

Given an error demand ε > 0, the strategy is to first choose an active set U = U(ε) such that the first sum in (4) is at most ε/2, and then specify the quadrature rules A_u such that the second sum in (4) is also at most ε/2, giving a total error of at most ε. It is known that the sets U(ε) have cardinalities increasing very slowly with decreasing ε,

 max_{u∈U(ε)} |u| = O( ln(1/ε) / ln(ln(1/ε)) )  as ε → 0,

see, e.g., [5, 13].

We would need to impose additional conditions on the class of functions to ensure that the sum (1) is absolutely convergent, the integral (2) is well defined, and the quadrature rules in (3) converge appropriately to the corresponding integrals. The precise details will depend on the mathematical setting within which we choose to analyze the problem. We will outline some variants below, but this is not the focus of the present paper.

The main focus of this paper is on the implementation of the MDM algorithm, which involves the following two steps. The first step is to construct the active set given an abstract definition of U from the theory. Then in the second step, supposing we are given an active set U and the choice of quadrature rules A_u, we develop computationally efficient strategies to evaluate (3) in certain scenarios by exploiting specific structures in the MDM algorithm and the quadrature rules of choice. Specifically,

• we assume a product and order dependent (POD) structure in the definition of the active set U;

• we utilize the anchored decomposition of functions; and

• we consider quasi-Monte Carlo methods and Smolyak’s methods as two alternatives for the quadrature rules A_u.

In Section 2 we explain the structure of our active set and provide an efficient strategy to construct it. Once the active set U has been constructed, we need to evaluate the quadrature rules A_u(f_u) for each u ∈ U; this is formulated in Section 3. In this paper we use the anchored decomposition [11] of f (with anchor 0) so that the terms f_u can be computed explicitly via

 f_u(x_u) = ∑_{v⊆u} (−1)^{|u|−|v|} f(x_v; 0),  (5)

where (x_v; 0) indicates that we evaluate the function f at the point whose components are x_j for j ∈ v and 0 for j ∉ v. Throughout this paper, by a “naive” implementation of the MDM algorithm, we mean an implementation which computes the sum in (3) term by term, with each f_u evaluated using (5).
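As a small illustration of (5), the term f_u can be assembled from 2^{|u|} evaluations of f by inclusion-exclusion; the dict-based interface for f below is a hypothetical stand-in for "evaluate f with all variables outside v set to the anchor 0".

```python
from itertools import combinations

def f_term(f, u, x):
    """Compute the anchored-decomposition term f_u(x_u) via the
    inclusion-exclusion formula (5).  For illustration, f is assumed
    to take a dict {index: value} and to treat every missing variable
    as anchored at 0 (a hypothetical interface)."""
    total = 0.0
    for r in range(len(u) + 1):
        for v in combinations(u, r):
            # f(x_v; 0): components x_j for j in v, anchor 0 elsewhere
            total += (-1) ** (len(u) - r) * f({j: x[j] for j in v})
    return total
```

For instance, for f(x) = 1 + x_1 + x_1 x_2 this reproduces f_{{1}}(x) = x_1 and f_{{1,2}}(x) = x_1 x_2, and returns 0 for terms such as f_{{2}} that vanish in the anchored decomposition.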

We consider only linear algorithms as the quadrature rules A_u, and our MDM algorithm can therefore be expressed as

 A(f) = ∑_{u∈U} ∑_{v⊆u} (−1)^{|u|−|v|} A_u(f(·_v; 0)).  (6)

Notice inside the double sum in (6) that we would be applying a |u|-dimensional quadrature rule to a function which depends only on the subset v of the variables. Moreover, the same evaluations of f could be repeated for different combinations of u and v, while in practice the cost of evaluating f could be quite expensive. We will exploit structures in the quadrature rules to save on repeated evaluations in (6).

In Section 3.1 we first consider Smolyak quadrature to be used as the quadrature rules A_u (see, e.g., [4, 16]). Then in Section 3.2 we consider instead an extensible quasi-Monte Carlo (QMC) sequence to be used for the quadrature rules (see, e.g., [2, 3]). In both sections we explain how to regroup the terms by making use of the recursive structure and how to store some intermediate calculations for the specific quadrature rules to evaluate (6) efficiently.

Section 4 considers two different approaches to implement the Smolyak quadratures: the direct method and the combination technique. In Section 5 we consider a randomized quasi-Monte Carlo sequence for the quadrature rules. This enables us to obtain an unbiased result and a practical estimate of the quadrature error for the MDM algorithm.

Each variant of our MDM algorithm involves three stages, as outlined in the pseudocodes; a summary is given as follows:

• Pseudocodes 1 + 2A + 3A: Smolyak MDM – direct implementation
• Pseudocodes 1 + 2A + 3A: Smolyak MDM – combination technique
• Pseudocodes 1 + 2B + 3B: Extensible QMC MDM
• Pseudocodes 1 + 2B + 3B: Extensible randomized QMC MDM

In Section 6 we derive a computable expression for estimating an infinite series that may appear in the definition of the active set, which is another novel and significant contribution of this paper. Finally in Section 7 we combine all ingredients and follow the mathematical setting of [8] to construct the active set and choose the quadrature rules. We then apply the MDM algorithm to an example integrand that mimics the characteristics of the integrands arising from some parametrized PDE problems (see, e.g., [9]).

## 2 Constructing the active set

Letting w(u) be a measure of the “significance” of the subset u, we assume that the mathematical analysis yields the definition of an active set of the general form

 U := {u ⊂ N : w(u) > T},  (7)

where T is a “threshold” parameter that depends on the overall error demand ε and possibly on all of the w(u). For example, w(u) can be related to the weight parameters from a weighted function space setting (as in [17, 18, 14, 5]), or it can be related to bounds on the norms of the f_u (as in [8]).

In this section we will treat w(u) and T as input parameters (ignoring the mathematical details of where they come from), and focus on the efficient construction of the active set given these parameters. Then in Section 6 we will consider a special form of w(u) (arising from analysis) which requires numerical estimation of an infinite series.

We assume w(∅) > T so that we always have ∅ ∈ U. Furthermore, we assume specifically for u ≠ ∅ that w(u) takes the product and order dependent (POD) form (a structure that first appeared in [9]):

 w(u) := Ω_{|u|} ∏_{j∈u} ω_j,  (8)

where (ω_j)_{j≥1} is a non-increasing sequence of nonnegative real numbers controlling the “product aspect”, and (Ω_ℓ)_{ℓ≥1} is a second sequence of nonnegative real numbers controlling the “order dependent aspect”, with the restriction that the growth of the Ω_ℓ is controlled by the decay of the ω_j, i.e., Ω_{ℓ+1} ω_{ℓ+1} ≤ Ω_ℓ for all ℓ ≥ 1, so that w({1, …, ℓ}) is non-increasing in ℓ. This assumption is satisfied in all practical cases that we are aware of. Further, in the theoretical framework for the MDM (see, e.g., [8]), a sufficient condition for the infinite-dimensional integral to be well-defined is for the parameters w(u) to be summable, ∑_{u⊂N} w(u) < ∞, which will not hold unless the ω_j decay suitably (at least asymptotically in j).

With the active set defined by (7) and (8), we make a couple of obvious remarks:

1. If w(u) ≤ T then v ∉ U for all sets v satisfying w(v) ≤ w(u).

2. If w(u) > T then v ∈ U for all sets v satisfying w(v) ≥ w(u).

We identify any finite set u ⊂ N with a vector containing the elements of u in increasing order, i.e., if |u| = ℓ then

 u := (u_1, u_2, …, u_ℓ),  u_1 < u_2 < ⋯ < u_ℓ.

Then, due to our assumed POD structure in (8), we note that

3. w(u) ≥ w(v) if |u| = |v| and u_k ≤ v_k for all k = 1, …, |u|; in particular, {1, …, ℓ} has the largest value of w among all sets of cardinality ℓ.

4. w({1, …, ℓ}) ≥ w({1, …, ℓ+1}) for all ℓ ≥ 1.

5. For any u ∈ U, a subset of u need not be included in U.

Note that if the opposite of Item 5 were true, i.e., every subset of every u ∈ U also belongs to U, then the set U is said to be “downward closed” in some papers; we do not impose this condition.

Combining the above, we deduce the following simple lemma.

###### Lemma 1.

Assume that the active set U is defined by (7) and (8).

• (“Superposition dimension”) Let σ∗ be the largest possible value of ℓ for which {1, …, ℓ} ∈ U, i.e., σ∗ := max{ℓ : w({1, …, ℓ}) > T}. Then for all u ∈ U we have |u| ≤ σ∗.

• (“Truncation dimension for sets of order ℓ”) For any 1 ≤ ℓ ≤ σ∗, let τ_ℓ be the largest possible value of k for which {1, …, ℓ−1, k} ∈ U. That is, τ_ℓ := max{k ≥ ℓ : w({1, …, ℓ−1, k}) > T}. Then for all u ∈ U with |u| = ℓ, we have u_ℓ ≤ τ_ℓ; and consequently, u ⊆ {1, …, τ_ℓ} for all such u.

• (“Truncation dimension”) Let τ∗ be the largest possible value of τ_ℓ, i.e., τ∗ := max_{1≤ℓ≤σ∗} τ_ℓ. Then u ⊆ {1, …, τ∗} for all u ∈ U.

###### Proof.

For the first point, suppose on the contrary that there is some u ∈ U with ℓ := |u| > σ∗. Then letting v := {1, …, ℓ}, by property 3 we have w(v) ≥ w(u) > T, which indicates that {1, …, ℓ} ∈ U, contradicting the definition of σ∗.

To demonstrate the second point, suppose on the contrary that there is some u ∈ U with |u| = ℓ and u_ℓ = k > τ_ℓ. Then by property 3 we have w({1, …, ℓ−1, k}) ≥ w(u) > T, which indicates that {1, …, ℓ−1, k} ∈ U with k > τ_ℓ, but this contradicts the definition of τ_ℓ. The bound u ⊆ {1, …, τ_ℓ} then follows easily.

The third point is straightforward. ∎

We construct the active set as outlined in Pseudocode 1. The algorithm adds the qualifying sets to the collection in the order of increasing cardinality. For each cardinality ℓ, starting from the set {1, …, ℓ}, the algorithm incrementally generates and checks sets to be added to the collection. The algorithm terminates when it reaches a value of ℓ for which w({1, …, ℓ}) ≤ T, i.e., ℓ = σ∗ + 1.

The assumptions on the structure of w and properties 1–5 above ensure that this stopping criterion is valid, and hence that Pseudocode 1 does indeed construct the active set (7). In particular, if w({1, …, ℓ}) ≤ T then property 3 implies w(u) ≤ T for all sets u with |u| = ℓ, and then property 4 implies w(u) ≤ T for all sets u with |u| > ℓ. Thus, if w({1, …, ℓ}) ≤ T then no set with cardinality ℓ or higher is in U.

We recommend storing the active set as an array of hash tables, with one table for each cardinality, since in the next section we will have to iterate over all subsets v of each u ∈ U and be able to update a table entry stored with each such v.
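The construction just described can be sketched as follows for POD weights: sets are generated per cardinality, candidates are pruned using properties 3 and 4, and the loop stops once w({1, …, ℓ}) ≤ T. The callables omega and Omega (returning ω_j and Ω_ℓ) are assumed inputs, with ω non-increasing and decaying to zero; this is an illustrative sketch of Pseudocode 1, not a transcription of it.

```python
def build_active_set(omega, Omega, T):
    """Sketch of an active-set construction for POD weights
    w(u) = Omega(|u|) * prod(omega(j) for j in u).  Assumes omega is
    non-increasing and decays to zero, so the pruning (properties 3-4)
    and the stopping criterion via the sets {1,...,l} are valid.
    Returns the qualifying sets grouped by cardinality."""

    def w(u):
        val = Omega(len(u))
        for j in u:
            val *= omega(j)
        return val

    U = [[()]]          # cardinality 0: the empty set, w(emptyset) > T assumed
    ell = 1
    while w(tuple(range(1, ell + 1))) > T:   # stop at ell = sigma* + 1
        sets_ell = []

        def grow(prefix, start):
            j = start
            while True:
                # best possible completion uses the smallest remaining indices
                best = prefix + tuple(range(j, j + ell - len(prefix)))
                if w(best) <= T:
                    return       # larger j only decreases w (omega non-increasing)
                cand = prefix + (j,)
                if len(cand) == ell:
                    sets_ell.append(cand)    # here w(cand) = w(best) > T
                else:
                    grow(cand, j + 1)
                j += 1

        grow((), 1)
        U.append(sets_ell)
        ell += 1
    return U
```

For example, with ω_j = 2^{−j}, Ω_ℓ ≡ 1 and T = 0.05 this yields the cardinality-1 sets {1}, {2}, {3}, {4} and the cardinality-2 sets {1,2}, {1,3}, and stops at cardinality 3.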

###### Remark 1.

The paper [5] provides an efficient algorithm to construct the optimal active set, i.e., an active set that has the smallest cardinality among all active sets with the same error demand ε. The construction principle is based on sorting, so it is quite different to Pseudocode 1, and it works only for weight parameters of product form. Once the active set is constructed, the remaining steps for implementing the MDM algorithm will be the same as we discuss below.

## 3 Formulating the MDM algorithm

In this section we outline how to formulate the MDM algorithm (6) in a way that is specific to the quadrature rules used, so that the implementation can be as efficient as possible. We do this by exploiting the structure in the anchored decomposition (5), and also in the quadrature rules, which will be Smolyak’s methods (also known as sparse grid methods) and quasi-Monte Carlo rules.

In each case we treat the parameters of the quadrature rules (i.e., the number of quadrature points or levels) as input, and focus on the efficient implementation of the MDM given these parameters. Specific choices of parameters for a test integrand following the theoretical analysis in [8] are given in Section 7.

Recall from (6) that the MDM algorithm using the anchored decomposition is given by

 A(f) = ∑_{u∈U} ∑_{v⊆u} (−1)^{|u|−|v|} A_u(f(·_v; 0)).

Clearly there will be subsets v that occur many times over, so implementing the MDM in this way could be severely inefficient, because it would evaluate the same functions at the same quadrature points over and over again. The goal of this section is to detail how to implement the quadrature approximations in such a way that each function f(·_v; 0) is evaluated at each quadrature point once only.

The first step is to introduce the extended active set:

 U_ext := {v ⊂ N : v ⊆ u for some u ∈ U},

that is, it includes all subsets of the sets in the active set. Then we can swap the sums above to give

 A(f) = ∑_{v∈U_ext} ∑_{u∈U, u⊇v} (−1)^{|u|−|v|} A_u(f(·_v; 0))
      = c_∅ f(0) + ∑_{∅≠v∈U_ext} ∑_{u∈U, u⊇v} (−1)^{|u|−|v|} A_u(f(·_v; 0)),  (9)

where we separated out the v = ∅ terms, with

 c_∅ := ∑_{u∈U} (−1)^{|u|}.

After constructing the active set U, we go through it again to construct the extended active set U_ext, and at the same time store information regarding the superset structure of each element in U_ext. We would like to store just enough details so that for each v ∈ U_ext we can compute its overall contribution without the need to access the supersets of v. Specific details on how this is done will depend on the quadrature rule used.
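For instance, a minimal sketch of building U_ext together with the constant c_∅ from (9), with sets represented as sorted tuples:

```python
from itertools import combinations

def extended_active_set(U):
    """Build the extended active set (all nonempty subsets of sets in
    the active set U) together with the constant
    c_empty = sum_{u in U} (-1)^{|u|} from (9)."""
    U_ext = set()
    c_empty = 0
    for u in U:
        c_empty += (-1) ** len(u)
        for r in range(1, len(u) + 1):
            U_ext.update(combinations(u, r))
    return U_ext, c_empty
```

The superset bookkeeping that each quadrature variant additionally requires is attached to these subsets as described in the following two subsections.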

### 3.1 Quadrature rules based on Smolyak’s method

For a nonempty set u and an integer m ≥ 1, Smolyak’s method (see, e.g., [4, 16, 19]) applied to a function g_u of the variables x_j with j ∈ u takes the form

 Q_{u,m}(g_u) := ∑_{i_u ∈ N^{|u|}, |i_u| ≤ |u|+m−1} ⨂_{j∈u} (U_{i_j} − U_{i_j−1})(g_u),  (10)

where i_u := (i_j)_{j∈u}, |i_u| := ∑_{j∈u} i_j, and (U_i)_{i≥1} is a sequence of one-dimensional quadrature rules, not necessarily nested, with U_0 := 0 denoting the zero algorithm. Furthermore we assume that constant functions are integrated exactly, so that U_i(1) = 1 for all i ≥ 1.

For a nonempty subset v ⊆ u, suppose now that the function g_v depends only on the variables x_j with j ∈ v. Then we have

 Q_{u,m}(g_v) = ∑_{i_u ∈ N^{|u|}, |i_u| ≤ |u|+m−1} ( ⨂_{j∈v} (U_{i_j} − U_{i_j−1})(g_v) ) ( ∏_{j∈u∖v} (U_{i_j} − U_{i_j−1})(1) )
              = ∑_{i_u ∈ N^{|u|}, |i_u| ≤ |u|+m−1, i_{u∖v} = 1} ⨂_{j∈v} (U_{i_j} − U_{i_j−1})(g_v).  (11)

In the second equality above we used the assumption that the one-dimensional quadrature rules integrate constant functions exactly, and thus (U_{i_j} − U_{i_j−1})(1) is 1 if i_j = 1 and is 0 otherwise. The derivation (11) indicates how a Smolyak quadrature rule is projected down when it is applied to a lower dimensional function. This property is important in our efficient evaluation of (3).

In (3) we take

 A_u ≡ Q_{u,m_u},

where the level m_u determines the number of quadrature points used by Q_{u,m_u}. The exact relationship between m_u and the number of points will depend on the choice of the one-dimensional quadrature rules U_i.

Here we treat the levels m_u, hence also the numbers of points, as input parameters to our MDM algorithm. Then we define the maximum level to occur as

 m_max := max{m_u : ∅ ≠ u ∈ U}.

For Smolyak grids based on one-dimensional rules whose numbers of points grow geometrically with the level (e.g., trapezoidal rules with repeated halving of the step size), the value of m_max is roughly the logarithm of the total number of quadrature points (see, e.g., [4]). Hence in practice we observe that m_max is relatively small.

Using (11) we can rewrite (3) as follows (note the change from Q_{u,m_u} to Q_{v,m}):

 A^S(f) = c_∅ f(0) + ∑_{∅≠v∈U_ext} ∑_{u∈U, u⊇v} (−1)^{|u|−|v|} Q_{u,m_u}(f(·_v; 0))
        = c_∅ f(0) + ∑_{∅≠v∈U_ext} ∑_{u∈U, u⊇v} (−1)^{|u|−|v|} Q_{v,m_u}(f(·_v; 0))
        = c_∅ f(0) + ∑_{∅≠v∈U_ext} ∑_{m=1,…,m_max; c(v,m)≠0} c(v,m) Q_{v,m}(f(·_v; 0)),  (12)

where for ∅ ≠ v ∈ U_ext and 1 ≤ m ≤ m_max we define

 c(v,m) := ∑_{u∈U, u⊇v, m_u=m} (−1)^{|u|−|v|}.  (13)

The values of c(v,m) can be computed and stored while we construct the extended active set as follows. We work through the sets u in the active set in order of increasing cardinality. For each nonempty set u ∈ U with required level m_u, we generate all nonempty subsets v ⊆ u, add the missing subsets to U_ext, and update the values c(v, m_u) as we go. This procedure is given in Pseudocode 2A.

The formulation (12)–(13) allows us to compute each quadrature approximation Q_{v,m}(f(·_v; 0)) only once, even when it is needed for different supersets u with the same value of m_u = m. If the Smolyak MDM algorithm is implemented in this way then there is no need to access the superset structure. Obviously, if c(v,m) = 0 then we do not perform the quadrature approximation.
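The bookkeeping of (13) can be sketched in a few lines; here `levels` maps each nonempty u ∈ U to its level m_u (an assumed input), and the coefficients are accumulated while all nonempty subsets are generated, in the spirit of Pseudocode 2A.

```python
from itertools import combinations
from collections import defaultdict

def smolyak_coefficients(U, levels):
    """Accumulate c(v, m) = sum over u in U with u ⊇ v and m_u = m of
    (-1)^{|u|-|v|}, as in (13), by generating every nonempty subset v
    of every u in U.  `levels` maps each nonempty u to m_u."""
    c = defaultdict(int)                       # keys are pairs (v, m)
    for u in U:
        if not u:
            continue                           # the empty set has no level
        m = levels[u]
        for r in range(1, len(u) + 1):
            for v in combinations(u, r):
                c[(v, m)] += (-1) ** (len(u) - r)
    return {key: val for key, val in c.items() if val != 0}
```

Coefficients that cancel to zero are dropped, matching the remark that no quadrature approximation is performed when c(v,m) = 0.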

Note that in practice calculating the Smolyak levels m_u (or the numbers of QMC points in the next subsection) normally requires knowledge of the entire active set U, see, e.g., [8, Section 4.3], hence we compute them when constructing U.

Note also that we do not need a separate data structure for U_ext: we can simply extend U to U_ext, since Step 9 in Pseudocode 2A only adds subsets with lower cardinalities and would not interfere with Step 3, since we iterate in increasing cardinality. As we explained in the previous section, we store the active set U, and by extension the extended active set U_ext, as an array of hash tables to easily retrieve the table for each cardinality.

A direct implementation of the MDM algorithm with Smolyak quadratures is given in Pseudocode 3A. The different formulas (20)–(21) or (22)–(23) for implementing the Smolyak quadrature, which depend on whether we have a non-nested or nested rule, will be discussed in Section 4.

### 3.2 Quadrature rules based on quasi-Monte Carlo methods

Here we assume for simplicity that D = [0,1] and ρ ≡ 1. A d-dimensional quasi-Monte Carlo (QMC) rule with n points t^(i) ∈ [0,1]^d, i = 0, …, n−1, approximates the integral of a function g by the equal-weight average

 ∫_{[0,1]^d} g(x) dx ≈ (1/n) ∑_{i=0}^{n−1} g(t^(i)).  (14)

For more details on QMC methods we refer to [12, 15] and [3].

In (3) each A_u could be a different |u|-dimensional QMC rule with n_u points, but in that case we would not be able to reuse any function evaluation. Instead, we consider here an “extensible quasi-Monte Carlo sequence”. By “extensible” we mean that we can take just the initial dimensions and the initial points in the sequence. By “quasi-Monte Carlo” we mean that a quadrature rule based on the first n points of this sequence has equal quadrature weights 1/n. We choose to use a QMC rule with |u| dimensions instead of u_{|u|} (the largest index in u) dimensions, since u_{|u|} can be really large while |u| is rather small, and QMC rules with fewer dimensions are of better quality.
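As a concrete stand-in for extensibility in the number of points, consider the one-dimensional base-2 radical inverse (van der Corput) sequence: the first 2^m points are always a prefix of the first 2^{m+1} points, so a rule can be refined without discarding earlier evaluations. (This is for illustration only; in practice one would use a good multi-dimensional extensible sequence, e.g., a digital sequence or lattice sequence.)

```python
def van_der_corput(i, base=2):
    """Radical inverse of the integer i in the given base: the i-th
    point of an extensible one-dimensional low-discrepancy sequence."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x
```

The first points are 0, 1/2, 1/4, 3/4, 1/8, …, so doubling n simply appends new points between the existing ones.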

Then for any nonempty set u ∈ U and nonempty subset v ⊆ u we have

 A_u(f(·_v; 0)) = (1/n_u) ∑_{i=0}^{n_u−1} f(t^(i)_{u|v→v}; 0),  (15)

where, loosely speaking, t^(i)_{u|v→v} indicates that we map the quadrature point t^(i) to the variables indexed by u and then restrict to v, which is not the same as mapping t^(i) directly to v. More explicitly, recalling that u has ordered elements, t^(i)_{u|v→v} denotes that we take the first |u| dimensions of t^(i) and apply them to the variables in u, then retain only those components corresponding to v. The function is then evaluated by anchoring all components outside of v to zero. Thus the algorithm (3) in this case is given by

 A^Q(f) = c_∅ f(0) + ∑_{∅≠v∈U_ext} ∑_{u∈U, u⊇v} (−1)^{|u|−|v|} ( (1/n_u) ∑_{i=0}^{n_u−1} f(t^(i)_{u|v→v}; 0) ).  (16)

For example, take u = {1, 3, 5} and v = {3, 5}. We get u|v = {2, 3} since the set v originates from the second and third positions of its superset u. We assign the quadrature point (t_1^(i), t_2^(i), t_3^(i)) to the variables (x_1, x_3, x_5). Then the point (t_2^(i), t_3^(i)) is assigned to the variables (x_3, x_5), and hence we evaluate f(0, 0, t_2^(i), 0, t_3^(i), 0, 0, …).

Note that the same set v can originate from the same positions in different supersets u: for example, v = {3, 5} originates from the positions u|v = {2, 3} in u = {1, 3, 5}, in u = {2, 3, 5}, and many others. We can make use of this repetition to save on computational cost.

Let M(v) denote the set of all different positions that a nonempty set v can originate from, over all its supersets in the active set:

 M(v) := {w ⊆ {1, …, σ∗} : w ≡ u|v for some u ∈ U with u ⊇ v}.

For simplicity and for convenience, we assume further that n_u = 2^{m_u}, with m_u ∈ {0, 1, 2, …}. This allows us to rewrite each QMC approximation as a sum over blocks of points (recall that the QMC points are extensible)

 (1/2^{m_u}) ∑_{i=0}^{2^{m_u}−1} f(t^(i)_{u|v→v}; 0) = (1/2^{m_u}) ∑_{m=0}^{m_u} ∑_{i=⌊2^{m−1}⌋}^{2^m−1} f(t^(i)_{u|v→v}; 0),

where the floor function is used specifically to take care of the m = 0 case. Substituting this into (16) and introducing a sum over the positions w ∈ M(v), we have

 A^Q(f) = c_∅ f(0) + ∑_{∅≠v∈U_ext} ∑_{w∈M(v)} ∑_{u∈U, u⊇v, u|v≡w} (−1)^{|u|−|v|} (1/2^{m_u}) ∑_{m=0}^{m_u} ∑_{i=⌊2^{m−1}⌋}^{2^m−1} f(t^(i)_{w→v}; 0),

where in the third sum we have added the restriction that u|v is equivalent to the position w. Collecting the sums

 S_{v,w,m}(f) := ∑_{i=⌊2^{m−1}⌋}^{2^m−1} f(t^(i)_{w→v}; 0),  (17)

we can then rewrite the QMC MDM (16) as

 A^Q(f) = c_∅ f(0) + ∑_{∅≠v∈U_ext} ∑_{w∈M(v)} ∑_{m=0,…,m_max; c(v,w,m)≠0} c(v,w,m) S_{v,w,m}(f) / 2^{m_max},  (18)

where for a nonempty set v ∈ U_ext, a position w ∈ M(v), and 0 ≤ m ≤ m_max we define

 c(v,w,m) := ∑_{u∈U, u⊇v, u|v≡w, m_u≥m} (−1)^{|u|−|v|} 2^{m_max−m_u}.  (19)

Note that we have chosen to multiply and divide by 2^{m_max} to ensure that each c(v,w,m) is integer valued.

We can compute and store a list of positions w ∈ M(v) and the values c(v,w,m) when we construct the extended active set U_ext by extending the active set U, in a similar way to the Smolyak case in the previous subsection. This is presented in Pseudocode 2B. The new algorithm is more complicated due to the need to store the positions w.

The MDM implementation using the formulation (17)–(19) does not require access to any subsets or supersets. For each nonempty v ∈ U_ext and each position w ∈ M(v), and for the different values of m with c(v,w,m) ≠ 0, the sums S_{v,w,m}(f) are over disjoint sets of QMC points. In this way we will only evaluate each function f(·_v; 0) at each quadrature point once. An implementation of the MDM algorithm with QMC quadratures is given in Pseudocode 3B.
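The coefficient accumulation of (19) can be sketched as follows, in the spirit of Pseudocode 2B; `mlev` maps each nonempty u ∈ U to m_u with n_u = 2^{m_u} (an assumed input), and each u contributes to every block index m ≤ m_u.

```python
from itertools import combinations
from collections import defaultdict

def qmc_coefficients(U, mlev):
    """Accumulate the integer coefficients c(v, w, m) of (19), where
    the position w = u|v records which coordinates of the superset u
    carry the subset v, and mlev maps each nonempty u in U to m_u."""
    m_max = max(mlev.values())
    c = defaultdict(int)                       # keys are triples (v, w, m)
    for u in U:
        if not u:
            continue
        m_u = mlev[u]
        scale = 2 ** (m_max - m_u)             # makes every c(v, w, m) an integer
        for r in range(1, len(u) + 1):
            for w in combinations(range(1, len(u) + 1), r):
                v = tuple(u[p - 1] for p in w)     # elements of u at positions w
                sign = (-1) ** (len(u) - r)
                for m in range(m_u + 1):           # u contributes to all m <= m_u
                    c[(v, w, m)] += sign * scale
    return m_max, {k: val for k, val in c.items() if val != 0}
```

Entries that cancel to zero are dropped, so the corresponding block sums S_{v,w,m}(f) are never evaluated.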

## 4 Two implementations of Smolyak MDM

Here we compare two approaches to implement Smolyak quadrature in the context of MDM: the direct implementation and the combination technique.

### 4.1 Direct Smolyak implementation

From a practical point of view, it is more useful to write Smolyak’s method as an explicit weighted quadrature rule as opposed to the tensor product form (10), see, e.g., [4]. We summarize this formulation below.

For each one-dimensional rule U_i, let n(i) denote the number of quadrature points, w_{i,k} the quadrature weights, and t_{i,k} the quadrature nodes, for k = 0, …, n(i)−1. Here for simplicity of notation we present the formula for a d-dimensional rule, with d = |u|, which would need to be mapped to the set u appropriately. To this end we write Q_{d,m}. The formula depends on whether the quadrature rules are nested, i.e., whether U_{i+1} includes all the quadrature points from U_i.

#### Non-nested case

For non-nested one-dimensional rules, Smolyak’s method can be written explicitly as

 Q_{d,m}(g) = ∑_{i∈N^d, d≤|i|≤d+m−1} ∑_{k_1=0}^{n(i_1)−1} ⋯ ∑_{k_d=0}^{n(i_d)−1} w_{i,k} g(t_{i,k}),  (20)

where the quadrature point t_{i,k} has coordinates t_{i_j,k_j} for j = 1, …, d, and

 w_{i,k} = ∑_{p∈{0,1}^d, d≤|i+p|≤d+m−1} ∏_{j=1}^{d} ((−1)^{p_j} w_{i_j,k_j}).  (21)
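The formulas (20)–(21) can be implemented directly; note that the p-sum in (21) factors out of the product of one-dimensional weights, giving a single combination coefficient per level index i. Below is an illustrative sketch for d explicit dimensions, where `rules` is a hypothetical map from level i to the pair (nodes, weights) of U_i.

```python
from itertools import product

def smolyak_direct(d, m, rules, g):
    """Direct (non-nested) Smolyak rule (20)-(21): enumerate level
    multi-indices i with d <= |i| <= d+m-1, compute the combination
    coefficient from the p-sum in (21), and sum weighted evaluations
    of g over the corresponding tensor-product grid."""
    total = 0.0
    for i in product(range(1, m + 1), repeat=d):
        if sum(i) > d + m - 1:                   # |i| >= d holds automatically
            continue
        # coefficient: sum over p in {0,1}^d with |i| + |p| <= d + m - 1
        coeff = sum((-1) ** sum(p) for p in product((0, 1), repeat=d)
                    if sum(i) + sum(p) <= d + m - 1)
        if coeff == 0:
            continue
        pts = [rules[ij] for ij in i]
        for k in product(*[range(len(nodes)) for nodes, _ in pts]):
            weight = float(coeff)
            x = []
            for j in range(d):
                nodes, weights = pts[j]
                x.append(nodes[k[j]])
                weight *= weights[k[j]]
            total += weight * g(x)
    return total
```

With midpoint-type one-dimensional rules (which integrate linear functions exactly), the telescoping structure reproduces the exact integral of, e.g., g(x) = x_1 + x_2 on [0,1]^2.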

#### Nested case

When the U_i are nested, we assume that the quadrature points and weights are ordered such that at level i the new points occur at the end of the point set, from index n(i−1) onwards. That is, for all i ≥ 2 we have t_{i,k} = t_{i−1,k} for k = 0, …, n(i−1)−1. Then, to ensure that the function is only evaluated at each node once, (10) can be rewritten as

 Q_{d,m}(g) = ∑_{i∈N^d, d≤|i|≤d+m−1} ∑_{k_1=n(i_1−1)}^{n(i_1)−1} ⋯ ∑_{k_d=n(i_d−1)}^{n(i_d)−1} w_{i,k} g(t_{i,k}),  (22)

with weights

 w_{i,k} = ∑_{q∈N^d, q≥i, d≤|q|≤d+m−1} ∏_{j=1}^{d} (w_{q_j,k_j} − w_{q_j−1,k_j}),  (23)

where we set n(0) := 0, w_{0,k} := 0 for all k, and w_{q,k} := 0 when k ≥ n(q). In particular, when q_j = i_j in (23) the weight that is subtracted is 0, that is, w_{i_j−1,k_j} = 0, since in (22) we have k_j ≥ n(i_j−1).

### 4.2 Smolyak quadrature via the combination technique

The combination technique (it combines different straightforward tensor product rules, hence the name) provides an alternative formulation to (10) as follows, see, e.g., [6, 19],

 Q_{d,m}(g) = ∑_{i∈N^d, max(m,d)≤|i|≤d+m−1} (−1)^{d+m−1−|i|} binom(d−1, |i|−m) (⨂_{j=1}^{d} U_{i_j})(g)