1 Introduction
The Multivariate Decomposition Method (MDM) is an algorithm for approximating the integral of an $\infty$-variate function $f$ defined on the domain $D^{\mathbb{N}}$ with $D \subseteq \mathbb{R}$, and this paper presents the first results on the implementation of the MDM. The general idea of the MDM, see [5, 8, 14, 17, 18] (as well as [10, 13] under the name of Changing Dimension Algorithm), goes as follows. Assume that $f$ admits a decomposition
(1)  $f \,=\, \sum_{u \subset \mathbb{N},\; |u| < \infty} f_u,$
where the sum is taken over all finite subsets $u$ of $\mathbb{N} = \{1, 2, 3, \ldots\}$,
and where each function $f_u$ depends only on the variables $x_j$ with $j \in u$. With
$\rho$ a given probability density function on $D$
and $x_u := (x_j)_{j \in u}$, we define the integral of $f_u$ by
(2)  $I_u(f_u) \,:=\, \int_{D^{|u|}} f_u(x_u) \prod_{j \in u} \rho(x_j) \,\mathrm{d}x_u,$
and let $I(f) := \sum_{u} I_u(f_u)$. The MDM algorithm for approximating the integral $I(f)$ is
(3)  $\mathcal{A}(f) \,:=\, \sum_{u \in U(\epsilon)} Q_u(f_u),$
where $U(\epsilon)$ is the “active set”; for $u \ne \emptyset$, each $Q_u$ is an $|u|$-dimensional quadrature rule, and $Q_\emptyset(f_\emptyset) := f_\emptyset$.
The error of the MDM algorithm satisfies the trivial bound
(4)  $|I(f) - \mathcal{A}(f)| \,\le\, \sum_{u \notin U(\epsilon)} |I_u(f_u)| \,+\, \sum_{u \in U(\epsilon)} \big|I_u(f_u) - Q_u(f_u)\big|.$
Given $\epsilon > 0$, the strategy is to first choose an active set $U(\epsilon)$ such that the first sum in (4) is at most $\epsilon/2$, and then specify the quadrature rules $Q_u$ such that the second sum in (4) is also at most $\epsilon/2$, giving a total error of at most $\epsilon$. It is known that the sets $U(\epsilon)$ have cardinalities increasing very slowly with decreasing $\epsilon$.
We would need to impose additional conditions on the class of functions $f$ to ensure that the sum (1) is absolutely convergent, the integral (2) is well defined, and the quadrature rules in (3) converge appropriately to the corresponding integrals. The precise details will depend on the mathematical setting within which we choose to analyze the problem. We will outline some variants below, but this is not the focus of the present paper.
The main focus of this paper is on the implementation of the MDM algorithm, which involves the following two steps. The first step is to construct the active set $U(\epsilon)$ given an abstract definition from the theory. Then, in the second step, supposing we are given an active set $U(\epsilon)$ and the choice of quadrature rules $Q_u$, we develop computationally efficient strategies to evaluate (3) in certain scenarios by exploiting specific structures in the MDM algorithm and the quadrature rules of choice. Specifically,

we assume a product and order dependent (POD) structure in the definition of the active set $U(\epsilon)$;

we utilize the anchored decomposition of functions; and

we consider quasi-Monte Carlo methods and Smolyak’s methods as two alternatives for the quadrature rules $Q_u$.
In Section 2 we explain the structure of our active set and provide an efficient strategy to construct it. Once the active set has been constructed, we need to evaluate the quadrature rules $Q_u(f_u)$ for each $u \in U(\epsilon)$; this is formulated in Section 3. In this paper we use the anchored decomposition [11] of $f$ so that the terms $f_u$ can be computed explicitly via
(5)  $f_u(x_u) \,=\, \sum_{v \subseteq u} (-1)^{|u|-|v|}\, f(x_v; 0),$
where $(x_v; 0)$ indicates that we evaluate the function $f$ at the point with components $x_j$ for $j \in v$ and $0$ for $j \notin v$. Throughout this paper, by a “naive” implementation of the MDM algorithm, we mean an implementation which computes the sum in (3) term by term, with each $f_u$ evaluated using (5).
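As a concrete illustration, the inclusion–exclusion formula (5) can be evaluated directly. The following Python sketch is ours (the helper name and the toy integrand are not from the theory); the integrand takes a dict of nonzero components, with all absent variables anchored at zero:

```python
from itertools import combinations

def f_anchored(f, u, x):
    """Anchored-decomposition term f_u at x via (5):
    f_u(x_u) = sum over v subset of u of (-1)^(|u|-|v|) * f(x_v; 0)."""
    total = 0.0
    for k in range(len(u) + 1):
        for v in combinations(sorted(u), k):
            # (x_v; 0): keep the components x_j for j in v, anchor the rest at 0
            total += (-1) ** (len(u) - k) * f({j: x[j] for j in v})
    return total

# Example integrand: absent indices in the dict are anchored at 0
def f(point):
    x1, x2 = point.get(1, 0.0), point.get(2, 0.0)
    return 1.0 + x1 + x1 * x2

print(f_anchored(f, (1,), {1: 0.5}))            # 0.5  (the x_1 term)
print(f_anchored(f, (1, 2), {1: 0.5, 2: 2.0}))  # 1.0  (the x_1*x_2 term)
```

Note how the decomposition isolates exactly the part of $f$ that genuinely depends on all variables in $u$: the term for $u = \{2\}$ is identically zero here, since $f$ contains no pure $x_2$ contribution.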
We consider only linear algorithms as the quadrature rules $Q_u$, and our MDM algorithm can therefore be expressed as
(6)  $\mathcal{A}(f) \,=\, \sum_{u \in U(\epsilon)} \;\sum_{v \subseteq u} (-1)^{|u|-|v|}\, Q_u\big(f(\cdot_v; 0)\big).$
Notice inside the double sum in (6) that we would be applying an $|u|$-dimensional quadrature rule to a function $f(\cdot_v; 0)$ which depends only on the subset $v$ of the variables. Moreover, the same evaluations of $f$ could be repeated for different combinations of $u$ and $v$, while in practice the cost of evaluating $f$ could be quite expensive. We will exploit structures in the quadrature rules to save on repeated evaluations in (6).
In Section 3.1 we first consider Smolyak quadrature to be used as the quadrature rules $Q_u$ (see, e.g., [4, 16]). Then in Section 3.2 we consider instead an extensible quasi-Monte Carlo (QMC) sequence to be used for the quadrature rules (see, e.g., [2, 3]). In both sections we explain how to regroup the terms by making use of the recursive structure, and how to store some intermediate calculations for the specific quadrature rules, so as to evaluate (6) efficiently.
Section 4 considers two different approaches to implementing the Smolyak quadratures: the direct method and the combination technique. In Section 5 we consider a randomized quasi-Monte Carlo sequence for the quadrature rules. This enables us to obtain an unbiased result and a practical estimate of the quadrature error for the MDM algorithm.
Each variant of our MDM algorithm involves three stages, as outlined in the pseudocodes; a summary is given as follows:

1. Construct the active set $U(\epsilon)$ (Pseudocode 1).
2. Construct the extended active set, storing the required quadrature information along the way (Pseudocodes 2A and 2B).
3. Evaluate the MDM quadrature sums (e.g., Pseudocode 3B for the QMC variant).
In Section 6 we derive a computable expression for estimating an infinite series that may appear in the definition of the active set, which is another novel and significant contribution of this paper. Finally in Section 7 we combine all ingredients and follow the mathematical setting of [8] to construct the active set and choose the quadrature rules. We then apply the MDM algorithm to an example integrand that mimics the characteristics of the integrands arising from some parametrized PDE problems (see, e.g., [9]).
2 Constructing the active set
Letting $\delta_u \ge 0$ be a measure of the “significance” of the subset $u$, we assume that the mathematical analysis yields the definition of an active set of the general form
(7)  $U(\epsilon) \,:=\, \big\{\, u \subset \mathbb{N},\ |u| < \infty \,:\, \delta_u > T(\epsilon) \,\big\},$
where $T(\epsilon)$ is a “threshold” parameter that depends on the overall error demand $\epsilon$ and possibly on all of the $\delta_u$. For example, the $\delta_u$ can be related to the weight parameters from a weighted function space setting (as in [17, 18, 14, 5]), or they can be related to bounds on the norms of the terms $f_u$ (as in [8]).
In this section we will treat the $\delta_u$ and $T(\epsilon)$ as input parameters (ignoring the mathematical details of where they come from), and focus on the efficient construction of the active set given these parameters. Then in Section 6 we will consider a special form of $\delta_u$ (arising from analysis) which requires numerical estimation of an infinite series.
We assume $\delta_\emptyset > T(\epsilon)$, so that we always have $\emptyset \in U(\epsilon)$. Furthermore, we assume specifically for $u \ne \emptyset$ that $\delta_u$ takes the product and order dependent (POD) form (a structure that first appeared in [9]):
(8)  $\delta_u \,=\, \Omega_{|u|} \prod_{j \in u} \omega_j,$
where $(\omega_j)_{j \ge 1}$ is a nonincreasing sequence of nonnegative real numbers controlling the “product aspect”, and $(\Omega_\ell)_{\ell \ge 1}$ is a second sequence of nonnegative real numbers controlling the “order dependent aspect”, with the restriction on $(\Omega_\ell)$ that its growth is controlled by $(1/\omega_\ell)$, i.e., $\Omega_{\ell+1}\,\omega_{\ell+1} \le \Omega_\ell$ for all $\ell \ge 1$. This assumption is satisfied in all practical cases that we are aware of. Further, in the theoretical framework for the MDM (see, e.g., [8]), a sufficient condition for the infinite-dimensional integral to be well defined is for the parameters $\omega_j$ to be summable, $\sum_{j \ge 1} \omega_j < \infty$, which will not hold unless the condition $\omega_j < 1$ holds (at least asymptotically in $j$).
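For concreteness, the POD significance (8) is a one-liner to evaluate; the particular sequences below ($\omega_j = 2^{-j}$ and $\Omega_\ell = \ell!$) are merely illustrative choices of ours that satisfy the growth restriction:

```python
from math import prod, factorial

def delta(u, omega, Omega):
    """POD significance (8): delta_u = Omega_{|u|} * prod_{j in u} omega_j."""
    return Omega(len(u)) * prod(omega(j) for j in u)

omega = lambda j: 2.0 ** (-j)          # nonincreasing "product" sequence
Omega = lambda l: float(factorial(l))  # "order dependent" sequence
print(delta((1, 2), omega, Omega))     # 2! * (1/2) * (1/4) = 0.25
```

With these choices, $\Omega_{\ell+1}\omega_{\ell+1} \le \Omega_\ell$ reduces to $(\ell+1) \le 2^{\ell+1}$, which holds for every $\ell$.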
With the active set defined by (7) and (8), we make a couple of obvious remarks:

1. If $\omega_j = 0$ then $\delta_u = 0$ for all sets $u$ satisfying $j \in u$.

2. If $\Omega_\ell = 0$ then $\delta_u = 0$ for all sets $u$ satisfying $|u| = \ell$.
We identify any finite set $u \subset \mathbb{N}$ with a vector $(u_1, \ldots, u_{|u|})$ containing the elements of $u$ in increasing order, i.e., if $\ell = |u|$ then $u_1 < u_2 < \cdots < u_\ell$. Then, due to our assumed POD structure in (8), we note that

3. $\delta_u \ge \delta_w$ if $|u| = |w|$ and $u_i \le w_i$ for all $i = 1, \ldots, |u|$.

4. $\delta_u \ge \delta_{u \cup \{k\}}$ for all $k > \max(u)$.

5. For any $u \in U(\epsilon)$, a subset of $u$ need not be included in $U(\epsilon)$.
Note that if the opposite of Item 5 were true, i.e., if every subset of every set in $U(\epsilon)$ also belonged to $U(\epsilon)$, then the set $U(\epsilon)$ would be said to be “downward closed” in some papers; we do not impose this condition.
Combining the above, we deduce the following simple lemma.
Lemma 1.
Assume that the active set $U(\epsilon)$ is defined by (7) and (8).

1. (“Superposition dimension”) Let $s$ be the largest possible value of $\ell$ for which $\{1, \ldots, \ell\} \in U(\epsilon)$, i.e., $s := \max\{\ell \ge 1 : \Omega_\ell \prod_{j=1}^{\ell} \omega_j > T(\epsilon)\}$ (with $s := 0$ if no such $\ell$ exists). Then for all $u \in U(\epsilon)$ we have $|u| \le s$.

2. (“Truncation dimension for sets of order $\ell$”) For any $1 \le \ell \le s$, let $d_\ell$ be the largest possible value of $k$ for which $\{1, \ldots, \ell-1, k\} \in U(\epsilon)$. That is, $d_\ell := \max\{k \ge \ell : \Omega_\ell\, \omega_k \prod_{j=1}^{\ell-1} \omega_j > T(\epsilon)\}$. Then for all $u \in U(\epsilon)$ with $|u| = \ell$, we have $\max(u) \le d_\ell$; and consequently, $u \subseteq \{1, \ldots, d_\ell\}$ for all such $u$.

3. (“Truncation dimension”) Let $d$ be the largest possible value of $\max(u)$ for which $u \in U(\epsilon)$, i.e., $d := \max_{\emptyset \ne u \in U(\epsilon)} \max(u)$. Then $d = \max_{1 \le \ell \le s} d_\ell$.
Proof.
For the first point, suppose on the contrary that $u \in U(\epsilon)$ with $\ell := |u| > s$. Then letting $w := \{1, \ldots, \ell\}$ we have $\delta_w \ge \delta_u > T(\epsilon)$ by Property 3, which indicates that $\Omega_\ell \prod_{j=1}^{\ell} \omega_j > T(\epsilon)$ for some $\ell > s$, contradicting the definition of $s$.
To demonstrate the second point, suppose on the contrary that $u \in U(\epsilon)$ with $|u| = \ell$ and $k := \max(u) > d_\ell$. Then we have $\delta_{\{1, \ldots, \ell-1, k\}} \ge \delta_u > T(\epsilon)$ with $k > d_\ell$, but this contradicts the definition of $d_\ell$. The bound $u \subseteq \{1, \ldots, d_\ell\}$ then follows easily.
The third point is straightforward. ∎
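The quantities in Lemma 1 are directly computable from the two sequences. A sketch in Python (function and variable names are ours; `omega` and `Omega` are passed as 1-indexed lists with unused index-0 padding):

```python
from math import prod

def mdm_dimensions(omega, Omega, T):
    """Superposition dimension s and truncation dimensions d_l (Lemma 1)
    for delta_u = Omega[|u|] * prod_{j in u} omega[j] and threshold T.
    omega, Omega are lists; omega[0] and Omega[0] are unused padding."""
    # s = largest l with Omega[l] * omega[1]*...*omega[l] > T
    s, p = 0, 1.0
    while s + 1 < min(len(Omega), len(omega)):
        if Omega[s + 1] * p * omega[s + 1] <= T:
            break
        p *= omega[s + 1]
        s += 1
    # d_l = largest k >= l with Omega[l] * omega[1]*...*omega[l-1] * omega[k] > T
    d = {}
    for l in range(1, s + 1):
        head = Omega[l] * prod(omega[j] for j in range(1, l))
        k = l
        while k + 1 < len(omega) and head * omega[k + 1] > T:
            k += 1
        d[l] = k
    return s, d

omega = [0.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]   # omega_j = 2^-j
Omega = [1.0, 1.0, 1.0, 1.0, 1.0]                  # Omega_l = 1
print(mdm_dimensions(omega, Omega, 0.1))  # (2, {1: 3, 2: 2})
```

For this illustrative input, no set of cardinality $3$ can qualify, singletons reach index $3$, and pairs are confined to $\{1, 2\}$.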
We construct the active set $U(\epsilon)$ as outlined in Pseudocode 1. The algorithm adds the qualifying sets to the collection in order of increasing cardinality $\ell$. For each $\ell$, starting from the set $\{1, \ldots, \ell\}$, the algorithm incrementally generates and checks sets to be added to the collection. The algorithm terminates when it reaches a value of $\ell$ for which $\{1, \ldots, \ell\} \notin U(\epsilon)$, i.e., $\Omega_\ell \prod_{j=1}^{\ell} \omega_j \le T(\epsilon)$.
The assumptions on the structure of $\delta_u$ and Properties 1–5 above ensure that this stopping criterion is valid, and hence that Pseudocode 1 does indeed construct the active set (7). In particular, Property 3 implies $\delta_u \le \delta_{\{1, \ldots, \ell\}}$ for all sets $u$ with $|u| = \ell$, and then Property 4 implies $\delta_{\{1, \ldots, \ell'\}} \le \delta_{\{1, \ldots, \ell\}}$ for all $\ell' \ge \ell$. Thus, if $\delta_{\{1, \ldots, \ell\}} \le T(\epsilon)$ then no set with cardinality $\ell$ or higher is in $U(\epsilon)$.
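A compact version of this increasing-cardinality construction (our own sketch, not Pseudocode 1 verbatim) can be written as follows; the monotonicity properties justify both the pruning and the stopping criterion:

```python
from math import prod

def build_active_set(omega, Omega, T):
    """Construct U(epsilon) for the POD significance
    delta_u = Omega[|u|] * prod_{j in u} omega[j]
    (lists with unused index-0 padding), by increasing cardinality."""
    def delta(u):
        return Omega[len(u)] * prod(omega[j] for j in u)
    active = [frozenset()]  # the empty set always belongs to U(epsilon)
    ell = 1
    while ell < min(len(Omega), len(omega)) and delta(range(1, ell + 1)) > T:
        stack = [()]  # partial sets u_1 < ... < u_r, to be grown to size ell
        while stack:
            u = stack.pop()
            if len(u) == ell:
                active.append(frozenset(u))
                continue
            k = u[-1] + 1 if u else 1
            # the cheapest completion of u + (k,) appends k, k+1, ...
            while k + (ell - len(u) - 1) < len(omega):
                completion = u + tuple(range(k, k + ell - len(u)))
                if delta(completion) <= T:
                    break  # larger k only decreases delta (Property 3)
                stack.append(u + (k,))
                k += 1
        ell += 1
    return active

omega = [0.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]  # omega_j = 2^-j
Omega = [1.0, 1.0, 1.0, 1.0]                      # Omega_l = 1
print(sorted(map(sorted, build_active_set(omega, Omega, 0.1))))
# [[], [1], [1, 2], [2], [3]]
```

The pruning in the inner loop uses Property 3: if even the “cheapest” completion of a partial set fails the threshold, then every completion with a larger starting index fails too.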
We recommend storing the active set $U(\epsilon)$ as an array of hash tables, with one table for each cardinality, since in the next section we will have to iterate over all subsets $v$ of each $u \in U(\epsilon)$ and be able to update a table stored with each such $v$.
Remark 1.
The paper [5] provides an efficient algorithm to construct the optimal active set, i.e., an active set that has the smallest cardinality among all active sets with the same error demand $\epsilon$. The construction principle is based on sorting, so it is quite different from Pseudocode 1, and it works only for parameters of product form. Once the active set is constructed, the remaining steps for implementing the MDM algorithm are the same as we discuss below.
3 Formulating the MDM algorithm
In this section we outline how to formulate the MDM algorithm (6) in a way that is specific to the quadrature rules used, so that the implementation can be as efficient as possible. We do this by exploiting the structure in the anchored decomposition (5), and also in the quadrature rules, which will be Smolyak’s methods (also known as sparse grid methods) and quasi-Monte Carlo rules.
In each case we treat the parameters of the quadrature rules (i.e., the number of quadrature points or levels) as input, and focus on the efficient implementation of the MDM given these parameters. Specific choices of parameters for a test integrand following the theoretical analysis in [8] are given in Section 7.
Recall from (6) that the MDM algorithm using the anchored decomposition is given by
$\mathcal{A}(f) \,=\, \sum_{u \in U(\epsilon)} \;\sum_{v \subseteq u} (-1)^{|u|-|v|}\, Q_u\big(f(\cdot_v; 0)\big).$
Clearly there will be subsets $v$ that occur many times over, once for each superset $u \supseteq v$ in the active set, so implementing the MDM in this way could be severely inefficient, because it would evaluate the same functions at the same quadrature points over and over again. The goal of this section is to detail how to implement the quadrature approximations in such a way that each function is evaluated at each quadrature point once only.
The first step is to introduce the extended active set
$U'(\epsilon) \,:=\, \{\, v \,:\, v \subseteq u \ \text{for some}\ u \in U(\epsilon) \,\};$
that is, it includes all subsets of the sets in the active set. Then we can swap the sums above to give
(9)  $\mathcal{A}(f) \,=\, c_\emptyset\, f(0) \,+\, \sum_{\emptyset \ne v \in U'(\epsilon)} \;\sum_{\substack{u \in U(\epsilon) \\ u \supseteq v}} (-1)^{|u|-|v|}\, Q_u\big(f(\cdot_v; 0)\big),$
where we separated out the $v = \emptyset$ terms, with
$c_\emptyset \,:=\, \sum_{u \in U(\epsilon)} (-1)^{|u|}.$
After constructing the active set $U(\epsilon)$, we go through it again to construct the extended active set $U'(\epsilon)$, and at the same time store information regarding the superset structure of each element in $U'(\epsilon)$. We would like to store just enough details so that for each $v \in U'(\epsilon)$ we can compute the required quadrature approximations without the need to access the supersets of $v$. Specific details on how this is done will depend on the quadrature rule used.
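Generating the extended active set itself is straightforward (a sketch of ours; the quadrature-specific superset bookkeeping that accompanies it is shown in the next subsections):

```python
from itertools import combinations

def extend_active_set(active):
    """Extended active set U'(epsilon): all subsets of sets in U(epsilon)."""
    extended = set(active)
    for u in active:
        for k in range(len(u)):
            extended.update(frozenset(v) for v in combinations(sorted(u), k))
    return extended

active = {frozenset(), frozenset({1}), frozenset({1, 3})}
print(sorted(map(sorted, extend_active_set(active))))
# [[], [1], [1, 3], [3]]
```

Note that $\{3\}$ is added even though it was not in the active set itself, illustrating that $U(\epsilon)$ need not be downward closed.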
3.1 Quadrature rules based on Smolyak’s method
For a nonempty set $u$ and an integer $L \ge |u|$, Smolyak’s method (see, e.g., [4, 16, 19]) applied to a function $g$ of the variables $x_u$ takes the form
(10)  $S_{u,L}(g) \,:=\, \sum_{\substack{\ell \in \mathbb{N}^{|u|} \\ |\ell| \le L}} \;\bigotimes_{j=1}^{|u|} \big(U_{\ell_j} - U_{\ell_j - 1}\big)(g),$
where $|\ell| := \ell_1 + \cdots + \ell_{|u|}$, each $\ell_j \ge 1$, and $(U_i)_{i \ge 1}$ is a sequence of one-dimensional quadrature rules, not necessarily nested, with $U_0 := 0$ denoting the zero algorithm. Furthermore we assume that constant functions are integrated exactly, so that $U_i(1) = 1$ for $i \ge 1$.
For a nonempty subset $v \subseteq u$, suppose now that the function $g$ depends only on the variables $x_v$. Then we have
(11)  $S_{u,L}(g) \,=\, \sum_{\substack{\ell \in \mathbb{N}^{|u|} \\ |\ell| \le L}} \Big(\prod_{j \in u \setminus v} \big(U_{\ell_j} - U_{\ell_j - 1}\big)(1)\Big) \,\bigotimes_{j \in v} \big(U_{\ell_j} - U_{\ell_j - 1}\big)(g) \,=\, S_{v,\, L - (|u| - |v|)}(g).$
In the second equality above we used the assumption that the one-dimensional quadrature rules integrate constant functions exactly, and thus $(U_{\ell_j} - U_{\ell_j - 1})(1)$ is $1$ if $\ell_j = 1$ and is $0$ otherwise; hence only those multi-indices with $\ell_j = 1$ for all $j \in u \setminus v$ contribute, leaving a Smolyak rule over $v$ at level $L - (|u|-|v|)$. The derivation (11) indicates how a Smolyak quadrature rule is projected down when it is applied to a lower-dimensional function. This property is important in our efficient evaluation of (3).
In (3) we take
$Q_u \,:=\, S_{u, L_u},$
where the level $L_u$ determines the number of quadrature points $n_u$ used by $Q_u$. The exact relationship between $L_u$ and $n_u$ will depend on the choice of the one-dimensional quadrature rules.
Here we treat the levels $L_u$, and hence also the numbers of points $n_u$, as input parameters to our MDM algorithm. Then we define the maximum level to occur as
$L_{\max} \,:=\, \max_{u \in U(\epsilon)} L_u.$
For Smolyak grids based on one-dimensional rules whose number of points grows geometrically with the level (e.g., trapezoidal rules) the value of $L_u$ is roughly the logarithm of $n_u$ (see, e.g., [4]). Hence in practice we observe that $L_{\max}$ is relatively small.
Using (11) we can rewrite (3) as follows (note the change from $S_{u, L_u}$ to $S_{v, \ell}$):
(12)  $\mathcal{A}(f) \,=\, c_\emptyset\, f(0) \,+\, \sum_{\emptyset \ne v \in U'(\epsilon)} \;\sum_{\ell = |v|}^{L_{\max}} c_{v,\ell}\; S_{v,\ell}\big(f(\cdot_v; 0)\big),$
where for $\emptyset \ne v \in U'(\epsilon)$ and $|v| \le \ell \le L_{\max}$ we define
(13)  $c_{v,\ell} \,:=\, \sum_{\substack{u \in U(\epsilon),\; u \supseteq v \\ L_u - (|u|-|v|) = \ell}} (-1)^{|u|-|v|}.$
The values of $c_{v,\ell}$ can be computed and stored while we construct the extended active set as follows. We work through the sets in the active set in order of increasing cardinality. For each nonempty set $u \in U(\epsilon)$ with level $L_u$, we generate all nonempty subsets $v \subseteq u$, add the missing subsets to the collection, and update the values $c_{v,\ell}$ as we go. This procedure is given in Pseudocode 2A.
The formulation (12)–(13) allows us to compute each $S_{v,\ell}(f(\cdot_v; 0))$, shared by different supersets $u$ leading to the same value of $\ell$, only once. If the Smolyak MDM algorithm is implemented in this way then there is no need to access the superset structure. Obviously, if $c_{v,\ell} = 0$ then we do not perform the corresponding quadrature approximation.
Note that in practice calculating the Smolyak levels $L_u$ (or the numbers of QMC points in the next subsection) normally requires knowledge of the entire active set $U(\epsilon)$, see, e.g., [8, Section 4.3]; hence we compute them once the active set has been constructed.
Note also that we do not need a separate data structure for $U'(\epsilon)$: we can simply extend $U(\epsilon)$ to $U'(\epsilon)$, since Step 9 in Pseudocode 2A only adds subsets with lower cardinalities and would not interfere with Step 3, since we iterate in order of increasing cardinality. As we explained in the previous section, we store the active set $U(\epsilon)$, and by extension the extended active set $U'(\epsilon)$, as an array of hash tables, so as to easily retrieve the table of values $c_{v,\ell}$ stored with each $v$.
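The accumulation of the coefficients (13) while sweeping over the active set can be sketched as follows (our own condensed version of the idea behind Pseudocode 2A; `active_levels` is an assumed input mapping each active set to its Smolyak level):

```python
from itertools import combinations
from collections import defaultdict

def smolyak_coefficients(active_levels):
    """Coefficients c_{v,l} of (13): each u in the active set with level L_u
    contributes (-1)^(|u|-|v|) to the pair (v, l) with l = L_u - (|u|-|v|),
    so that each projected rule S_{v,l} need be evaluated only once.
    active_levels: dict mapping frozenset u -> level L_u."""
    c = defaultdict(int)
    for u, L in active_levels.items():
        for k in range(1, len(u) + 1):  # nonempty subsets v of u
            for v in combinations(sorted(u), k):
                c[(frozenset(v), L - (len(u) - k))] += (-1) ** (len(u) - k)
    # drop zero coefficients: no quadrature evaluation is needed for them
    return {key: val for key, val in c.items() if val != 0}

levels = {frozenset({1}): 2, frozenset({1, 2}): 3}
c = smolyak_coefficients(levels)
# ({1}, 2) receives +1 from u={1} and -1 from u={1,2}: it cancels to 0
print(c)
```

The tiny example already exhibits the cancellation mentioned above: the projected rule $S_{\{1\},2}$ would have been computed twice by a naive implementation, yet its net coefficient is zero.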
3.2 Quadrature rules based on quasiMonte Carlo methods
Here we assume for simplicity that $D = [0,1]$ and $\rho \equiv 1$. A $d$-dimensional quasi-Monte Carlo (QMC) rule with $n$ points $t_0, t_1, \ldots, t_{n-1} \in [0,1]^d$ approximates the integral of a function $g$ by the equal-weight average
(14)  $\int_{[0,1]^d} g(x)\,\mathrm{d}x \,\approx\, \frac{1}{n} \sum_{i=0}^{n-1} g(t_i).$
For more details on QMC methods we refer to [12, 15] and [3].
In (3) each $Q_u$ could be a different $|u|$-dimensional QMC rule with $n_u$ points, but in that case we would not be able to reuse any function evaluations. Instead, we consider here an “extensible quasi-Monte Carlo sequence”. By “extensible” we mean that we can take just the initial dimensions and the initial points of the sequence. By “quasi-Monte Carlo” we mean that a quadrature rule based on the first $n$ points of this sequence has equal quadrature weights $1/n$. We choose to use a QMC rule with $|u|$ dimensions instead of $\max(u)$ dimensions, since $\max(u)$ can be really large while $|u|$ is rather small, and QMC rules with fewer dimensions are of better quality.
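The specific sequence is left open here; one classical example of an extensible QMC sequence is the Halton sequence, sketched below purely for illustration (any extensible equal-weight sequence would serve the same role):

```python
def halton_point(i, d, primes=(2, 3, 5, 7, 11, 13)):
    """i-th point of a d-dimensional Halton sequence: extensible in both
    the number of points and the number of dimensions (d <= 6 here)."""
    def radical_inverse(i, b):
        # reflect the base-b digits of i about the radix point
        x, f = 0.0, 1.0 / b
        while i > 0:
            x += (i % b) * f
            i //= b
            f /= b
        return x
    return [radical_inverse(i, primes[j]) for j in range(d)]

print(halton_point(1, 2))  # [0.5, 0.333...]
```

Extensibility is exactly what (15)–(16) below exploit: the first $|u|$ coordinates of the first $n_u$ points form a valid QMC rule for every $u$ simultaneously.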
Then for any nonempty set $u \in U(\epsilon)$ and nonempty subset $v \subseteq u$ we have
(15)  $Q_u\big(f(\cdot_v; 0)\big) \,=\, \frac{1}{n_u} \sum_{i=0}^{n_u - 1} f\big((t_i)_{u;v};\, 0\big),$
where, loosely speaking, $(t_i)_{u;v}$ indicates that we map the quadrature point $t_i$ to the variables $x_u$ and then restrict to $x_v$, which is not the same as mapping $t_i$ directly to $x_v$. More explicitly, recalling that $u = (u_1, \ldots, u_{|u|})$ has ordered elements, $(t_i)_{u;v}$ denotes that we take the first $|u|$ dimensions of $t_i$ and apply them to the variables $x_{u_1}, \ldots, x_{u_{|u|}}$, and then retain only those components corresponding to elements of $v$. The function is then evaluated by anchoring all components outside of $v$ at zero. Thus the algorithm (3) in this case is given by
(16)  $\mathcal{A}(f) \,=\, c_\emptyset\, f(0) \,+\, \sum_{\emptyset \ne v \in U'(\epsilon)} \;\sum_{\substack{u \in U(\epsilon) \\ u \supseteq v}} (-1)^{|u|-|v|}\, \frac{1}{n_u} \sum_{i=0}^{n_u - 1} f\big((t_i)_{u;v};\, 0\big).$
For example, take $u = \{1, 3, 5\}$ and $v = \{3, 5\}$. We get the positions $p = (2, 3)$, since the set $v$ originates from the second and third positions of its superset $u$. We assign the quadrature point $(t_{i,1}, t_{i,2}, t_{i,3})$ to the variables $(x_1, x_3, x_5)$. Then the point $(t_{i,2}, t_{i,3})$ is assigned to the variables $(x_3, x_5)$, and hence we evaluate $f(0, 0, t_{i,2}, 0, t_{i,3}, 0, \ldots)$.
Note that the same set $v$ can originate from the same positions $p$ in different supersets $u$: for example, $v = \{3, 5\}$ also originates from positions $(2, 3)$ of $u = \{2, 3, 5\}$, of $u = \{1, 3, 5, 7\}$, and many others. We can make use of this repetition to save on computational cost.
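The position vector of $v$ within a superset $u$ is easy to compute; this tiny sketch (names ours) also shows the repetition across supersets:

```python
def positions(v, u):
    """Positions (1-based) at which the elements of v sit inside its
    superset u; both given as iterables of indices."""
    u_sorted = sorted(u)
    return tuple(u_sorted.index(j) + 1 for j in sorted(v))

print(positions({3, 5}, {1, 3, 5}))  # (2, 3)
print(positions({3, 5}, {2, 3, 5}))  # (2, 3)  -- same positions: reusable
print(positions({3, 5}, {3, 4, 5}))  # (1, 3)  -- a different position vector
```

Whenever two supersets yield the same position vector, the corresponding mapped points coincide, so the function evaluations can be shared between them.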
Let $P_v$ denote the set of all different position vectors that a nonempty set $v$ can originate from, over all its supersets in the active set:
$P_v \,:=\, \{\, \mathrm{pos}_u(v) \,:\, u \in U(\epsilon),\ u \supseteq v \,\},$
where $\mathrm{pos}_u(v)$ denotes the vector of positions of the elements of $v$ among the ordered elements of $u$. For simplicity and for convenience, we assume further that each $n_u$ is a power of $2$, that is, $n_u = 2^{m_u}$ with $m_u \ge 0$. This allows us to rewrite each QMC approximation as a sum over blocks of points (recall that the QMC points are extensible)
$\frac{1}{2^{m_u}} \sum_{i=0}^{2^{m_u}-1} g(t_i) \,=\, \frac{1}{2^{m_u}} \sum_{k=0}^{m_u} \;\sum_{i=\lfloor 2^{k-1} \rfloor}^{2^{k}-1} g(t_i),$
where the floor function is used specifically to take care of the $k = 0$ case. Substituting this into (16) and introducing a sum over the positions $p \in P_v$, we have
$\mathcal{A}(f) \,=\, c_\emptyset\, f(0) \,+\, \sum_{\emptyset \ne v \in U'(\epsilon)} \;\sum_{p \in P_v} \;\sum_{\substack{u \in U(\epsilon),\; u \supseteq v \\ \mathrm{pos}_u(v) = p}} (-1)^{|u|-|v|}\, \frac{1}{2^{m_u}} \sum_{k=0}^{m_u} \;\sum_{i=\lfloor 2^{k-1} \rfloor}^{2^{k}-1} f\big((t_i)_p;\, 0\big),$
where in the third sum we have added the restriction that the position of $v$ within $u$ is equal to $p$. Collecting the sums over $u$ into the coefficients
(17)  $c_{v,p,k} \,:=\, \sum_{\substack{u \in U(\epsilon),\; u \supseteq v \\ \mathrm{pos}_u(v) = p,\; m_u \ge k}} (-1)^{|u|-|v|}\; 2^{m_{\max} - m_u}, \qquad \text{with } m_{\max} := \max_{u \in U(\epsilon)} m_u,$
we can then rewrite the QMC MDM (16) as
(18)  $\mathcal{A}(f) \,=\, c_\emptyset\, f(0) \,+\, \frac{1}{2^{m_{\max}}} \sum_{\emptyset \ne v \in U'(\epsilon)} \;\sum_{p \in P_v} \;\sum_{k=0}^{m_{\max}} c_{v,p,k}\; T_{v,p,k},$
where for a nonempty set $v \in U'(\epsilon)$, a position $p \in P_v$, and $0 \le k \le m_{\max}$ we define
(19)  $T_{v,p,k} \,:=\, \sum_{i=\lfloor 2^{k-1} \rfloor}^{2^{k}-1} f\big((t_i)_p;\, 0\big).$
Note that we have chosen to multiply and divide by $2^{m_{\max}}$ to ensure that each $c_{v,p,k}$ is integer valued.
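The block partition of the point indices used above can be sketched as follows (names ours):

```python
def block_ranges(m):
    """Index blocks of the QMC reformulation: block k covers
    floor(2^(k-1)) <= i <= 2^k - 1, so block 0 is just {0} and the
    blocks partition the indices 0, 1, ..., 2^m - 1 without overlap."""
    return [(int(2 ** (k - 1)), 2 ** k - 1) for k in range(m + 1)]

print(block_ranges(3))  # [(0, 0), (1, 1), (2, 3), (4, 7)]
```

Because the blocks are disjoint, the block sums over different $k$ never touch the same QMC point, which is what makes the bookkeeping of the coefficients worthwhile.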
We can compute and store a list of the positions $p \in P_v$ and the values $c_{v,p,k}$ when we construct the extended active set $U'(\epsilon)$ by extending the active set $U(\epsilon)$, in a similar way to the Smolyak case in the previous subsection. This is presented in Pseudocode 2B. The new algorithm is more complicated due to the need to store the positions $P_v$.
The MDM implementation using the formulation (17)–(19) does not require access to any subsets or supersets. For each nonempty $v \in U'(\epsilon)$ and each position $p \in P_v$, and for the different $k$ with $0 \le k \le m_{\max}$, the sums $T_{v,p,k}$ in (19) are over disjoint sets of QMC points. In this way we will only evaluate each function at each quadrature point once. An implementation of the MDM algorithm with QMC quadratures is given in Pseudocode 3B.
4 Two implementations of Smolyak MDM
Here we compare two approaches to implement Smolyak quadrature in the context of MDM: the direct implementation and the combination technique.
4.1 Direct Smolyak implementation
From a practical point of view, it is more useful to write Smolyak’s method as an explicit weighted quadrature rule as opposed to the tensor product form (10), see, e.g., [4]. We summarize this formulation below. For each one-dimensional rule $U_i$, let $n_i$ denote the number of quadrature points, $w_{i,0}, \ldots, w_{i,n_i-1}$ the quadrature weights, and $x_{i,0}, \ldots, x_{i,n_i-1}$ the quadrature nodes. Here, for simplicity of notation, we present the formula for a $d$-dimensional rule, with $u = \{1, \ldots, d\}$, which would need to be mapped to the set $u$ appropriately. To this end we write $S_{d,L} := S_{\{1,\ldots,d\},L}$. The formula depends on whether the quadrature rules are nested, i.e., whether $U_{i+1}$ includes all the quadrature points from $U_i$.
Nonnested case
For nonnested one-dimensional rules, Smolyak’s method can be written explicitly as
(20)  $S_{d,L}(g) \,=\, \sum_{\substack{\ell \in \mathbb{N}^d \\ L-d+1 \,\le\, |\ell| \,\le\, L}} \;\sum_{i_1=0}^{n_{\ell_1}-1} \cdots \sum_{i_d=0}^{n_{\ell_d}-1} w_{\ell,i}\; g(x_{\ell,i}),$
where the quadrature point $x_{\ell,i}$ has coordinates $x_{\ell_j, i_j}$ for $j = 1, \ldots, d$, and
(21)  $w_{\ell,i} \,:=\, (-1)^{L-|\ell|} \binom{d-1}{L-|\ell|} \prod_{j=1}^{d} w_{\ell_j, i_j}.$
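A direct (inefficient but transparent) evaluation of this explicit form can be sketched as follows. The midpoint rules are an illustrative choice of ours; since they integrate functions that are linear in each variable exactly, every tensor product rule is exact for the test integrand, and the combination of coefficients reproduces the exact integral:

```python
from itertools import product
from math import comb

def smolyak_nonnested(rules, d, L, g):
    """Direct evaluation of the explicit non-nested form: tensor products
    U_{l_1} x ... x U_{l_d} over L-d+1 <= |l| <= L, combined with the
    coefficients (-1)^(L-|l|) * C(d-1, L-|l|).
    rules[i] = (nodes, weights) of the one-dimensional rule U_i, i >= 1."""
    total = 0.0
    for levels in product(range(1, L + 1), repeat=d):
        defect = L - sum(levels)
        if not 0 <= defect <= d - 1:
            continue
        coef = (-1) ** defect * comb(d - 1, defect)
        for idx in product(*(range(len(rules[l][0])) for l in levels)):
            w, x = coef, []
            for l, i in zip(levels, idx):
                nodes, weights = rules[l]
                w *= weights[i]
                x.append(nodes[i])
            total += w * g(x)
    return total

# midpoint rules with 2^(i-1) points: exact for per-variable-linear functions
def midpoint(n):
    return [(k + 0.5) / n for k in range(n)], [1.0 / n] * n

rules = {i: midpoint(2 ** (i - 1)) for i in range(1, 4)}
print(smolyak_nonnested(rules, 2, 3, lambda x: x[0] + x[1]))  # approx. 1.0
```

This direct evaluation revisits shared nodes repeatedly, which is exactly the inefficiency that the nested formulation below is designed to avoid.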
Nested case
When the $U_i$ are nested, we assume that the quadrature points and weights are ordered such that at level $i$ the new points occur at the end of the point set, from index $n_{i-1}$ onwards. That is, for all $i \ge 2$ we have $x_{i,k} = x_{i-1,k}$ for $k = 0, \ldots, n_{i-1}-1$. Then, to ensure that the function is only evaluated at each node once, (10) can be rewritten as
(22)  $S_{d,L}(g) \,=\, \sum_{\substack{\ell \in \mathbb{N}^d \\ |\ell| \le L}} \;\sum_{i_1 = n_{\ell_1 - 1}}^{n_{\ell_1}-1} \cdots \sum_{i_d = n_{\ell_d - 1}}^{n_{\ell_d}-1} w^{(L)}_{\ell, i}\; g(x_{\ell, i}),$
with weights
(23)  $w^{(L)}_{\ell, i} \,:=\, \sum_{\substack{\ell' \in \mathbb{N}^d,\; \ell' \ge \ell \\ |\ell'| \le L}} \;\prod_{j=1}^{d} \big(w_{\ell'_j, i_j} - w_{\ell'_j - 1,\, i_j}\big),$
where we set $n_0 := 0$ and $w_{0,k} := 0$ for all $k$, and $w_{i-1,k} := 0$ when $k \ge n_{i-1}$. In particular, when $\ell'_j = 1$ in (23) the weight that is subtracted is $0$, that is, $w_{0, i_j} = 0$, since in (22) the index $i_j$ starts from $n_{\ell_j - 1}$ with $n_0 = 0$.