PDE/PDF-informed adaptive sampling for efficient non-intrusive surrogate modelling

by Yous van Halder, et al.

A novel refinement measure for non-intrusive surrogate modelling of partial differential equations (PDEs) with uncertain parameters is proposed. Our approach uses an empirical interpolation procedure, where the proposed refinement measure is based on the PDE residual and the probability density function of the uncertain parameters, and excludes parts of the PDE solution that are not used to compute the quantity of interest. The PDE residual used in the refinement measure is computed by using all the partial derivatives that enter the PDE separately. The proposed refinement measure is suited for efficient parametric surrogate construction when the underlying PDE is known, even when the parameter space is non-hypercube, and it imposes no restrictions on the type of discretisation method. Therefore, we are not restricted to conventional discretisation techniques, e.g., finite elements and finite volumes, and the proposed method is shown to be effective when used in combination with recently introduced neural-network PDE solvers. We present several numerical examples of increasing complexity that demonstrate the accuracy, efficiency and generality of the method.






1 Introduction

Uncertainty Quantification (UQ) has become increasingly important for complex engineering applications. Determining and quantifying the influence of parametric and model-form uncertainties is essential for a wide range of applications: from turbulent flow phenomena [1, 2], aerodynamics [3], biology [4, 5] to design optimisation [6, 7, 8].

Numerical methods in UQ are often divided into two groups: intrusive and non-intrusive. We focus on non-intrusive sampling methods, as they do not change the deterministic model and allow for the use of black-box solvers. One commonly used non-intrusive method is stochastic collocation [9], which uses a black-box solver to sample the deterministic model several times in stochastic space, and interpolates these samples to construct a surrogate model. Commonly used sets of interpolation nodes are the Gauss nodes [9] and Clenshaw-Curtis nodes [10]. The Gauss nodes possess high polynomial exactness, but are not nested, which makes them less attractive when the surrogate needs to be refined. Clenshaw-Curtis nodes, on the other hand, are nested and are suited for accurate surrogate modelling. However, the number of samples increases exponentially with the number of dimensions of the stochastic space, i.e., the number of uncertainties in the model. This phenomenon is known as the curse of dimensionality and limits the applicability of stochastic collocation when the black-box is computationally expensive to sample from. As a remedy, alternatives to tensor-based stochastic collocation were introduced, e.g., Leja-node stochastic collocation [11, 12], empirical interpolation [13, 14, 15, 16], ‘best’ interpolation [17], and Smolyak sparse grids [18, 19]. These alternatives sample the model adaptively in order to reduce the number of samples. Furthermore, empirical interpolation enhances adaptive sample placement by incorporating knowledge from the underlying model in the form of the Partial Differential Equation (PDE) residual, which is a measure of how well an approximation satisfies the model. Even though this results in a significant decrease in the number of samples compared to methods that do not take the model into account, the method is still intractable for uncertainty propagation with a large number of uncertainties, as empirical interpolation bases sample placement on the entire solution, rather than focusing on the Quantity of Interest (QoI). Furthermore, when one is interested in the statistical properties of the QoI, e.g., mean and variance, using only the residual as a measure for adaptive sample placement is not efficient, as it does not utilise the Probability Density Function (PDF), which is used in the calculation of these quantities.
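The nestedness of Clenshaw-Curtis nodes and the exponential growth of tensor grids can be illustrated with a short sketch (the node/level convention used here is one common choice, not taken from this paper):

```python
import numpy as np

def clenshaw_curtis_nodes(level):
    """Clenshaw-Curtis nodes on [-1, 1]: Chebyshev extrema,
    n = 2**level + 1 points for level >= 1, a single node at level 0."""
    n = 2**level + 1 if level > 0 else 1
    if n == 1:
        return np.zeros(1)
    return np.cos(np.pi * np.arange(n) / (n - 1))

# Nestedness: every node of the coarse rule reappears in the fine rule,
# so refining the surrogate reuses all previously computed model samples.
coarse = clenshaw_curtis_nodes(2)   # 5 nodes
fine = clenshaw_curtis_nodes(3)     # 9 nodes
nested = all(np.isclose(fine, c).any() for c in coarse)
print("nested:", nested)

# Curse of dimensionality: a tensor grid of the 9-node rule needs 9**d samples.
for d in (1, 2, 5, 10):
    print(f"d = {d:2d}: {9**d:>12d} samples")
```

Gauss nodes lack this reuse property: doubling the accuracy of a Gauss rule discards all previously computed samples, which is exactly why nestedness matters for adaptive refinement.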

In this work, an empirical interpolation procedure related to [14, 16, 17] is proposed, with the main differences that probability information is included in the sampling algorithm, fewer restrictions on the type of the PDE are imposed, and the residual is based solely on the QoI and not on the entire PDE solution. A relation between the error in the surrogate and the PDE residual is given, which justifies the residual as a refinement measure for surrogate construction. Furthermore, an alternative refinement measure, which incorporates the PDF, is proposed for the case where one is interested in the statistical properties of the QoI. Using both the residual and the PDF as a measure for defining new sampling locations leads to accurate statistical properties of the QoI, which converge significantly faster than with the procedures in [14, 16, 17]. Additionally, it is shown that the proposed method is suited for efficient surrogate construction on complex topologies, which is a common problem in the case of dependent input uncertainties. Our method does not require a specific type of PDE discretisation, e.g., finite elements or finite volumes, and can therefore be used in combination with new state-of-the-art neural network solvers [20]. A key part of our approach is the use of the PDE residual, which is discussed later in more detail. In order to compute this residual, the black-box solver needs to return not only the solution values, but also the derivatives with respect to the spatial/temporal coordinates. This introduces a small degree of intrusiveness, although no changes to the model equations are necessary and our approach is therefore still referred to as non-intrusive. Finally, new methods for solving PDEs [20] can be used in combination with our method without altering the black-box solver.
As our approach is still considered non-intrusive and uses a combination of the PDE residual and the PDF of the uncertain parameters as a refinement measure, we refer to the proposed method as Non-Intrusive PDE/PDF-informed Adaptive Sampling (NIPPAS).

This paper is outlined as follows: section 2 introduces the problem; section 3 introduces the new method and shows that the proposed sampling procedure is suitable for accurate surrogate construction; section 4 discusses the individual steps of the method in more detail; section 5 covers implementation details; and section 6 demonstrates the efficiency and accuracy of our method on several test cases and compares the results with sparse-grid interpolation and classical empirical interpolation.

2 Problem Description

Quantifying the effects of parametric uncertainties in computational engineering typically is a three-step process [21]: the input uncertainties are characterised in terms of a Probability Density Function (PDF); the uncertainties are propagated through the model; and the outputs are post-processed, where the Quantity of Interest (QoI) is expressed in terms of its statistical properties. In the present work we focus on the propagation step, and the input distributions are assumed to be given. The underlying model is a PDE, which is assumed to be of the form

$$\sum_{i=1}^{K} g_i(\mathbf{x}, \boldsymbol{\xi})\, \mathcal{D}_i u(\mathbf{x}, \boldsymbol{\xi}) = s(\mathbf{x}, \boldsymbol{\xi}), \qquad (1)$$

which is supplemented with proper initial and boundary conditions. In (1), $K$ denotes the number of differential operators in the PDE, $\boldsymbol{\xi} \in \Gamma \subset \mathbb{R}^d$ a $d$-dimensional vector containing the uncertain parameters with corresponding joint PDF $p(\boldsymbol{\xi})$, $\mathbf{x}$ a vector containing the spatial and/or temporal coordinates, $\mathcal{D}_i$ are differential operators, $g_i$ are known functions, $s$ a source term, and $u$ the exact solution of the PDE. This particular PDE form assumes that the uncertainties enter the equation via the source term $s$ and via the functions $g_i$ as parameters in front of the differential operators, and it allows the definition of a non-zero residual in the random space $\Gamma$. This PDE form does not comprise all possible PDEs, but many PDEs with parametric uncertainties, e.g., isotropic diffusion equations, Newtonian Navier-Stokes equations and advection-diffusion equations, may be rewritten in this form.

Because the exact solution $u$ of (1) is often not available, a discrete solution vector $\mathbf{v}(\boldsymbol{\xi}) \in \mathbb{R}^{N}$ is computed, e.g., via a finite-difference or finite-volume method, which satisfies a discretised form of (1) for each $\boldsymbol{\xi} \in \Gamma$:

$$\sum_{i=1}^{K} G_i(\boldsymbol{\xi})\, D_i \mathbf{v}(\boldsymbol{\xi}) = \mathbf{s}(\boldsymbol{\xi}), \qquad (5)$$

where $D_i$ are the discretised PDE operators, $X$ is the computational grid in space and time consisting of $N$ grid points, $G_i(\boldsymbol{\xi})$ are diagonal matrices with diagonal entries $g_i(\mathbf{x}_j, \boldsymbol{\xi})$, $\mathbf{x}_j \in X$, and $\mathbf{s}(\boldsymbol{\xi})$ is the source term evaluated on the computational grid. The initial and boundary conditions are comprised in the source term $\mathbf{s}$. The uncertainties enter the equations via the source term $\mathbf{s}$ and the matrices $G_i$, which comprise the functions $g_i$ evaluated on the computational grid $X$.
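As an illustration of the discretised form above, the following sketch sets up a hypothetical 1D advection-diffusion problem with an uncertain diffusion coefficient and checks that the assembled operators satisfy the discretised PDE at a sample; the problem, grid size and coefficient are invented for illustration:

```python
import numpy as np

# Hypothetical 1D model problem: -xi * u'' + u' = 1 on (0, 1), u(0) = u(1) = 0,
# discretised with central differences and written in the discretised form
# sum_i G_i(xi) D_i v = s discussed above. All sizes are illustrative.
N = 101
h = 1.0 / (N - 1)
I = np.eye(N)

# Discretised operators (interior rows; boundary rows are zeroed and
# replaced by Dirichlet conditions in the system matrix).
D_xx = (np.diag(np.ones(N - 1), 1) - 2 * I + np.diag(np.ones(N - 1), -1)) / h**2
D_x = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
for D in (D_xx, D_x):
    D[0, :] = 0.0
    D[-1, :] = 0.0

def black_box(xi):
    """Stand-in black-box solve of sum_i G_i(xi) D_i v = s."""
    G1 = -xi * I                      # uncertain coefficient g_1 = -xi
    A = G1 @ D_xx + D_x
    A[0, 0] = A[-1, -1] = 1.0         # Dirichlet boundary rows
    s = np.ones(N)
    s[0] = s[-1] = 0.0
    return np.linalg.solve(A, s), s

xi = 0.3
v, s = black_box(xi)
# The discretised PDE residual vanishes at the sample (up to round-off):
residual = (-xi * I) @ (D_xx @ v) + D_x @ v - s
print("interior residual:", np.abs(residual[1:-1]).max())
```

Note that the individual terms `D_xx @ v` and `D_x @ v` are exactly the per-operator quantities that the proposed method later asks the black box to return alongside the solution.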

We are interested in the dependence of the QoI on the parameters $\boldsymbol{\xi}$. The QoI is assumed to be a set of linear combinations of the solution vector $\mathbf{v}$, i.e., $\mathbf{q}(\boldsymbol{\xi}) = H \mathbf{v}(\boldsymbol{\xi})$, where

$$H \in \mathbb{R}^{N_q \times N}$$

is a matrix that maps the solution vector $\mathbf{v}$ to the QoI $\mathbf{q}$. This assumption allows for a suitable refinement measure for sampling, which is introduced later. By assuming linearity of the map $H$ we limit the space of possible QoIs, but this limitation is not too severe, as many quantities, e.g., integral quantities and mean quantities, can be written in this form.

The goal in this work is twofold: either construct an accurate surrogate for $\mathbf{q}(\boldsymbol{\xi})$, or calculate statistical properties of the QoI with respect to the uncertain parameters $\boldsymbol{\xi}$. Calculating statistical properties is achieved by constructing a surrogate, which is used in combination with Monte-Carlo sampling to extract the statistical quantities. However, if we are only interested in the statistical properties of the QoI, the surrogate does not need to be accurate everywhere, as some areas of the random space contribute little to these statistical properties. Nevertheless, whether we are interested in constructing an accurate surrogate to study the dependency of the QoI on the parameters $\boldsymbol{\xi}$, or in the statistical properties of the QoI, a surrogate needs to be constructed. We construct a surrogate by applying a PDE solver to (5) and by sampling values from the unknown map $\boldsymbol{\xi} \mapsto \mathbf{q}(\boldsymbol{\xi})$. We sample the QoI at locations $\boldsymbol{\xi}_k$, $k = 0, \ldots, n$, in the random space $\Gamma$. The QoI evaluations are calculated as follows:

$$\mathbf{q}(\boldsymbol{\xi}_k) = H \mathbf{v}(\boldsymbol{\xi}_k), \qquad k = 0, \ldots, n.$$
The PDE solver is assumed to be a black box: we supply inputs and receive outputs, without the possibility to observe intermediate steps. After sampling, a surrogate model is constructed by means of polynomial interpolation on the samples $\mathbf{q}(\boldsymbol{\xi}_k)$. The interpolant is constructed individually for each element of $\mathbf{q}$, such that

$$\tilde{q}_j(\boldsymbol{\xi}_k) = q_j(\boldsymbol{\xi}_k), \qquad k = 0, \ldots, n,$$

where $q_j$ corresponds to the $j$-th element of the vector $\mathbf{q}$. The element-wise approximation for the entire QoI vector is denoted as $\tilde{\mathbf{q}}$. Polynomial interpolation is used instead of polynomial regression as it allows for more efficient adaptive refinement and has extensive theoretical grounding. When interpolating, placing a new sample ensures that the new surrogate model is accurate in a neighbourhood of the newly added sample and therefore has immediate impact on the surrogate in the sampled area. This ensures improved accuracy near the new sample location, which is not necessarily the case when using regression. Choosing proper sample locations is crucial for stable and accurate interpolation, and this is the main focus of this paper.

Many UQ methods focus either on the PDF [9] or on the PDE residual [14] for adaptive sample placement. The PDF indicates which values of $\boldsymbol{\xi}$ are likely to occur and are therefore important to sample. The PDE residual, however, gives an indication where surrogate refinement is needed in order for the surrogate to satisfy the underlying PDE. In this paper we propose a novel strategy, in which both the PDE and the PDF are taken into account when determining the sample locations. Finding a set of sample locations that reflects the importance of both the underlying PDE and the PDF is the goal of this paper.

3 Non-Intrusive PDE/PDF-informed Adaptive Sampling

The locations of the interpolation samples determine the accuracy and stability of the surrogate model. We construct a set of interpolation nodes adaptively, using knowledge from the underlying PDE as a refinement measure.

Residual definition

For this purpose we define the PDE residual, which indicates how well an approximate solution satisfies the discretised PDE in the random space $\Gamma$. An intuitive definition of the residual is obtained by first constructing approximations $\widetilde{D_i \mathbf{v}}(\boldsymbol{\xi})$ in the random space based on the evaluations $D_i \mathbf{v}(\boldsymbol{\xi}_k)$, and substituting these approximations into the discretised PDE (5):

$$\mathbf{r}(\boldsymbol{\xi}) = \sum_{i=1}^{K} G_i(\boldsymbol{\xi})\, \widetilde{D_i \mathbf{v}}(\boldsymbol{\xi}) - \mathbf{s}(\boldsymbol{\xi}).$$

The quantity $\mathbf{r}(\boldsymbol{\xi})$ indicates how well the approximations satisfy the discretised PDE in the random space $\Gamma$.

Relation between residual and surrogate error

The residual is a quantity that indicates the quality of a surrogate by substituting the surrogate into the discretised PDE and by calculating the error. The next theorem states a relation between the residual and the error in the surrogate for linear PDEs, which is used later to define a suitable refinement measure for placing new samples in the random space.

Theorem 1

Assume the following:

  • Bounded 1D random space $\Gamma \subset \mathbb{R}$.

  • Well-posed discretised linear PDE of the form $\sum_{i=1}^{K} G_i(\xi)\, D_i \mathbf{v}(\xi) = \mathbf{s}(\xi)$.

  • $D_i$ are discretised linear differential operators, i.e., matrices independent of $\xi$, satisfying $D_i \left( \sum_k \alpha_k \mathbf{v}_k \right) = \sum_k \alpha_k D_i \mathbf{v}_k$.

  • $G_i$ are known matrices as functions of $\xi$ and $X$.

  • The QoI vector can be written as $\mathbf{q}(\xi) = H \mathbf{v}(\xi)$, where $H \in \mathbb{R}^{N_q \times N}$.

Then the following holds:

$$\tilde{\mathbf{q}}(\xi) - \mathbf{q}(\xi) = H \left( \sum_{i=1}^{K} G_i(\xi) D_i \right)^{-1} \mathbf{r}(\xi), \qquad (12)$$

where $\tilde{\mathbf{q}}$ is the interpolant in the random space for each element of the vector $\mathbf{q}$, based on the samples $\mathbf{q}(\xi_k)$, $k = 0, \ldots, n$.


Interpolation in 1D is unique, and for convenience we choose the Lagrange basis $\{\ell_k\}_{k=0}^{n}$, where $\ell_k$ is the $k$-th Lagrange basis polynomial defined as

$$\ell_k(\xi) = \prod_{\substack{m=0 \\ m \neq k}}^{n} \frac{\xi - \xi_m}{\xi_k - \xi_m}.$$

The approximations $\widetilde{D_i \mathbf{v}}$ are then given by

$$\widetilde{D_i \mathbf{v}}(\xi) = \sum_{k=0}^{n} D_i \mathbf{v}(\xi_k)\, \ell_k(\xi),$$

which can be rewritten, due to the linearity of $D_i$, as

$$\widetilde{D_i \mathbf{v}}(\xi) = D_i \sum_{k=0}^{n} \mathbf{v}(\xi_k)\, \ell_k(\xi) = D_i \tilde{\mathbf{v}}(\xi).$$

Substitution into (9) gives

$$\mathbf{r}(\xi) = \sum_{i=1}^{K} G_i(\xi)\, D_i \tilde{\mathbf{v}}(\xi) - \mathbf{s}(\xi).$$

Subtracting equation (10) results in

$$\mathbf{r}(\xi) = \sum_{i=1}^{K} G_i(\xi)\, D_i \tilde{\mathbf{v}}(\xi) - \sum_{i=1}^{K} G_i(\xi)\, D_i \mathbf{v}(\xi),$$

which can be rewritten, by using linearity of $D_i$, as

$$\mathbf{r}(\xi) = \left( \sum_{i=1}^{K} G_i(\xi) D_i \right) \left( \tilde{\mathbf{v}}(\xi) - \mathbf{v}(\xi) \right).$$

Using well-posedness of the underlying discretised PDE, we can write

$$\tilde{\mathbf{v}}(\xi) - \mathbf{v}(\xi) = \left( \sum_{i=1}^{K} G_i(\xi) D_i \right)^{-1} \mathbf{r}(\xi),$$

which can be multiplied by the matrix $H$ to obtain the desired result

$$\tilde{\mathbf{q}}(\xi) - \mathbf{q}(\xi) = H \left( \sum_{i=1}^{K} G_i(\xi) D_i \right)^{-1} \mathbf{r}(\xi).$$
Theorem 1 states a relation between the residual and the error in the QoI surrogate. From this relation we can derive the error bound stated in the following corollary.

Corollary 1

Assume the following:

  • The conditions of theorem 1 are satisfied, such that (12) holds.

Then the error satisfies

$$\left\| \tilde{\mathbf{q}}(\xi) - \mathbf{q}(\xi) \right\|_2 \;\leq\; \left\| H \left( \sum_{i=1}^{K} G_i(\xi) D_i \right)^{-1} \right\|_2 \left\| \mathbf{r}(\xi) \right\|_2, \qquad (21)$$

where $\| \cdot \|_2$ is the vector 2-norm or its induced matrix norm.

Corollary 1 states an upper bound for the error in the surrogate in terms of the residual and the discretised differential operator, which gives an indication of where the surrogate deviates significantly from the exact solution. Equation (21) holds for any induced matrix norm; in this paper we adopt the 2-norm.

Refinement measure for adaptive sampling

The goal is to sample the QoI in the random space $\Gamma$ such that we can construct an accurate surrogate model for the QoI by means of polynomial interpolation. Theorem 1 shows that a suitable refinement measure for a linear underlying PDE is given by

$$\eta(\xi) = \left\| H \left( \sum_{i=1}^{K} G_i(\xi) D_i \right)^{-1} \right\|_2 \left\| \mathbf{r}(\xi) \right\|_2. \qquad (22)$$

If samples are placed such that $\eta$ converges to zero, then the error in the surrogate also converges to zero. Using polynomial interpolation for the approximations in $\mathbf{r}$ ensures that $\mathbf{r}$ is close to zero in the neighbourhood of the interpolation samples. The term $\| H ( \sum_i G_i(\xi) D_i )^{-1} \|_2$ is a function of $\xi$, but does not depend on the choice of interpolation samples and therefore acts as a scaling function for the residual $\mathbf{r}$. This justifies a greedy approach for placing new samples:

$$\xi_{n+1} = \operatorname*{arg\,max}_{\xi \in \Gamma} \eta(\xi).$$

The construction of the approximations $\widetilde{D_i \mathbf{v}}$ in $\mathbf{r}$ is a crucial step in the NIPPAS method and is therefore discussed in detail in section 4. However, when using $\eta$ as a refinement measure, there are two issues:

  • The term $\| H ( \sum_i G_i(\xi) D_i )^{-1} \|_2$ is a-priori unknown and expensive to compute.

  • Equation (22) holds for linear PDEs only.

The inverse of the full discretisation matrix is unknown in general and cannot be used for adaptive sample placement, as it is expensive to compute as a function of $\xi$. However, as this term only acts as a scaling function, omitting it results in a compact refinement measure for the QoI, which is computed as follows:

$$\eta_r(\xi) = \left\| \mathbf{r}(\xi) \right\|_2. \qquad (24)$$

Notice that, in comparison to $\eta$, $\eta_r$ can be computed for both linear and non-linear PDEs. Intuitively, we expect the refinement measure $\eta_r$ to produce accurate and stable interpolants in combination with greedy sample placement, because large errors and/or instabilities in the surrogate lead to large residual values, which trigger new sample placement through the greedy approach. A commonly used quantity to indicate the quality of a set of sample locations is the Lebesgue constant [22]. We note that our greedy approach does not necessarily result in sample locations with an optimal Lebesgue constant, but it effectively uses the model information to sample regions of interest, similar to other approaches [11, 14, 17, 23].

To summarise, $\eta_r$ is our proposed refinement measure, as it is less expensive to compute and more generally applicable than $\eta$. A numerical comparison of both refinement measures is given in section 6 for a case where $\eta$ can still be computed. Furthermore, the effectiveness of $\eta_r$ as a refinement measure for non-linear PDEs is demonstrated numerically in section 6.

Incorporating PDF in refinement measure

Although using the residual as a refinement measure for adaptive sampling may result in stable and accurate surrogate models, the sampling procedure may spend too many resources in areas that contribute little to the statistical quantities. Such areas are of little interest when we compute statistical quantities like the mean

$$\mathbb{E}[\mathbf{q}] = \int_{\Gamma} \mathbf{q}(\xi)\, p(\xi)\, \mathrm{d}\xi. \qquad (25)$$

These statistical quantities are calculated by weighing the QoI with the PDF and integrating the resulting quantity over the parameter space $\Gamma$. Areas of $\Gamma$ where both the PDF and the QoI are small contribute little to the integral in (25), and therefore an alternative refinement measure is proposed, which is especially suited when one is interested in accurate statistical quantities rather than an accurate surrogate model. The proposed refinement measure is based on the following theorem:

Theorem 2

Assume the following:

  • The conditions of theorem 1 are satisfied, such that (12) holds.

  • The domain size $|\Gamma|$ and the PDF $p$ are finite.

Then the following holds:

$$\left\| \mathbb{E}[\tilde{\mathbf{q}}] - \mathbb{E}[\mathbf{q}] \right\|_2 \;\leq\; |\Gamma| \, \max_{\xi \in \Gamma} \left( \left\| H \Big( \sum_{i=1}^{K} G_i(\xi) D_i \Big)^{-1} \right\|_2 \left\| \mathbf{r}(\xi) \right\|_2 \, p(\xi) \right).$$

The proof follows that of theorem 1 and bounds the integral by multiplying the domain length with a bound on the integrand; the full proof is omitted to avoid repetition.

Theorem 2 states a relation between the residual and the error in the mean of the QoI. The theorem shows that a proper refinement measure again includes the inverse of the full discretisation matrix. As stated before, this term is in general unknown as a function of $\xi$ and expensive to compute. Therefore, the proposed refinement measure for calculating statistical quantities is given by

$$\eta_p(\xi) = \left\| \mathbf{r}(\xi) \right\|_2 \, p(\xi), \qquad (28)$$

and new samples are placed using the greedy approach

$$\xi_{n+1} = \operatorname*{arg\,max}_{\xi \in \Gamma} \eta_p(\xi).$$

Refinement measure (28) will not place new samples in parts of the random space where the PDF is zero, but is still able to place samples at “rare events” in the random space, i.e., parts where the PDF is small but the PDE residual is large.

A comparison of the refinement measures (24) and (28), in terms of convergence of both the surrogate and the statistical quantities, is given in section 6.
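The difference between the two refinement measures can be sketched on a toy 1D example, where both the residual indicator and the PDF are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical residual indicator eta_r(xi) = ||r(xi)|| and an assumed
# (unnormalised) Gaussian input PDF; both are stand-ins for illustration.
eta_r = lambda xi: np.abs(np.sin(6 * xi)) * (0.2 + xi**2)
pdf = lambda xi: np.exp(-0.5 * ((xi - 0.2) / 0.15) ** 2)

xi_grid = np.linspace(0.0, 1.0, 2001)

# Residual-only measure (24): refine where the PDE is worst satisfied.
xi_res = xi_grid[np.argmax(eta_r(xi_grid))]
# PDF-weighted measure (28): refine where residual *and* probability are high.
xi_pdf = xi_grid[np.argmax(eta_r(xi_grid) * pdf(xi_grid))]

print(f"residual-only sample : xi = {xi_res:.3f}")
print(f"PDF-weighted sample  : xi = {xi_pdf:.3f}")
```

The residual-only measure refines in the high-residual tail even though the PDF is negligible there, whereas the PDF-weighted measure moves the new sample into the probable region, which is what matters for the mean and variance.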

Overview of method

Refinement measures (24) and (28) are used in the NIPPAS method to adaptively refine a surrogate model $\tilde{\mathbf{q}}$. In detail, the NIPPAS method comprises the following steps:

  1. initial sample placement

  2. refinement loop:

    1. compute refinement measure $\eta_r$ or $\eta_p$

    2. check stopping criterion

    3. find $\xi_{n+1} = \operatorname{arg\,max}_{\xi} \eta_r(\xi)$ or $\operatorname{arg\,max}_{\xi} \eta_p(\xi)$

    4. sample model at $\xi_{n+1}$

  3. interpolate resulting samples

A schematic representation of the methodology is shown in figure 1. The choice between refinement measure (24) and (28) depends on whether the interest is in constructing an accurate surrogate or in calculating accurate statistical quantities. The individual steps of the proposed surrogate construction are discussed in detail in the next section. Before discussing the NIPPAS steps, the connection between the NIPPAS method and the general empirical interpolation method is shown.

Figure 1: Schematic representation of the methodology.

The refinement measures were derived from theory that holds for linear PDEs; we show in section 6 that the proposed refinement measures also work for a non-linear underlying PDE. Stable polynomial interpolation requires samples that cluster at the boundary of the domain [22]. Even though stability is not proven for the proposed method, such clustering follows intuitively from using the greedy approach in combination with the residual: if samples do not cluster at the boundaries, the approximations used to calculate the residual become unstable, and these unstable interpolants produce large errors near the boundary of the domain because of the Runge phenomenon. This results in large residual values there, which in turn leads to sample placement near the boundary of the domain. Lastly, note that the sample locations produced by NIPPAS are in general scattered, which complicates constructing an interpolant on these samples. This aspect is discussed in more detail in section 5.

Connection with empirical interpolation

Our NIPPAS procedure has similarities to the classical empirical interpolation procedure [14]. The main difference between general empirical interpolation and NIPPAS is the way in which the residual is constructed. In empirical interpolation the residual is based on the entire spatial/temporal surrogate $\tilde{\mathbf{v}}$, rather than focusing on the QoI $\mathbf{q}$. Consequently, the residual in empirical interpolation is defined as

$$\mathbf{r}(\xi) = \sum_{i=1}^{K} G_i(\xi)\, D_i \tilde{\mathbf{v}}(\xi) - \mathbf{s}(\xi),$$

which has the advantage that one only needs to approximate a single term, i.e., $\tilde{\mathbf{v}}$, instead of constructing several approximations $\widetilde{D_i \mathbf{v}}$. However, the disadvantage is that this refinement measure adds a degree of intrusiveness, as it assumes that the operators $D_i$ are known completely [15], which is not always the case and restricts the applicability.

An advantage of NIPPAS is that it focuses the sample placement on regions where $\tilde{\mathbf{q}}$ needs to be refined, rather than where $\tilde{\mathbf{v}}$ needs to be refined. NIPPAS therefore concentrates on the areas of the parameter space that are most relevant for constructing an accurate surrogate for $\mathbf{q}$, which results in faster convergence. The disadvantage is that NIPPAS requires a small alteration of the black box, which is discussed in section 4.

4 NIPPAS surrogate construction

This section describes the NIPPAS procedure when using refinement measure (24); the procedure when using (28) is analogous. A schematic overview of the NIPPAS procedure is shown in figure 1.

I. Placement of one random initial sample

The adaptive algorithm starts by performing initial sampling in the random space. Multiple initial sample configurations can be considered; we advise starting the method with a single Monte-Carlo sample in the random space, and this initialisation is used in the remainder of this paper. The initial sample location and corresponding model evaluation are denoted as $\xi_0$ and $\mathbf{q}(\xi_0)$. More generally, a subscript $n$ indicates the number of adaptively placed samples, i.e., $\Xi_n = \{\xi_0, \ldots, \xi_n\}$ and $Q_n = \{\mathbf{q}(\xi_0), \ldots, \mathbf{q}(\xi_n)\}$.

II-a. Non-intrusive computation of residual

The refinement loop starts with computing the residual (24).

The residual requires the approximations $\widetilde{D_i \mathbf{v}}$. These approximations can easily be constructed for a given sample set $\Xi_n$ by applying an interpolation operator (to be discussed in section 5) to the values $D_i \mathbf{v}(\xi_k)$. However, the black-box solver in general only returns the solution values $\mathbf{v}(\xi_k)$, and should be altered such that it also returns the values of the individual discretised differential operator terms $D_i \mathbf{v}(\xi_k)$. These discretised partial derivatives are used to compute the solution values within the black box and may be output alongside the solution when sampling the black box. This requires only a slight alteration of the black box, without changing the underlying PDE, and therefore keeps the methodology non-intrusive. To summarise, the approximations of the differential operator terms are given by

$$\widetilde{D_i \mathbf{v}}(\xi) = \mathcal{I}\left[ \{ D_i \mathbf{v}(\xi_k) \}_{k=0}^{n} \right](\xi), \qquad i = 1, \ldots, K, \qquad (31)$$

where $\mathcal{I}$ denotes the interpolation operator and the values $D_i \mathbf{v}(\xi_k)$ should be returned by the black-box solver alongside the solution values $\mathbf{v}(\xi_k)$. After approximating each differential operator term in the random space through interpolation, the interpolants are substituted into (24), which returns a function of the variable $\xi$. Notice that if the black box solves (5) with negligible round-off and iteration error, then

$$\mathbf{r}(\xi_k) = \mathbf{0}, \qquad k = 0, \ldots, n,$$

so the residual vanishes at the samples and attains local maxima between the samples, as is shown in figure 1.

II-b. Stopping criterion based on the residual

The refinement loop has to be terminated after a number of iterations.

For this purpose we need a stopping criterion that reflects the quality of the surrogate model. Equation (21) could serve as a stopping criterion; however, (21) does not hold for non-linear PDEs, and computing the required norms is in general intractable. Ideally, the residual shows a convergence behaviour similar to that of the error in the surrogate. If this is the case, the magnitude of the residual is not only useful for adaptively sampling the black box, but also serves as a reliable indication of the quality of the surrogate. Therefore, the refinement loop is stopped when the following criterion is met:

$$\max_{\xi \in \Gamma} \eta_r(\xi) \leq \epsilon, \qquad (34)$$

where $\epsilon$ is a specified threshold.

II-c. Find the location where the residual attains maximum

If criterion (34) is not met, the surrogate is refined by placing an extra sample according to (24). In other words, we need to solve the following global optimisation problem in $\Gamma$:

$$\xi_{n+1} = \operatorname*{arg\,max}_{\xi \in \Gamma} \eta_r(\xi). \qquad (35)$$

The complexity of this global optimisation problem depends on the characteristics of the objective function $\eta_r$. The residual is in general not smooth and has multiple local maxima. Therefore, in order to solve (35), a particle swarm method from the Global Optimization Toolbox in Matlab is used, which is able to solve non-smooth global optimisation problems.

The solution of the global optimisation problem (35) is denoted $\xi_{n+1}$ and is used in the next step to refine the surrogate.
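A sketch of this greedy optimisation step; since Matlab's particle swarm is not assumed available here, SciPy's differential evolution is used as a stand-in global optimiser for a hypothetical non-smooth residual measure:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical non-smooth, multi-modal residual measure eta_r on a 2D
# random space; differential evolution acts as a stand-in for the
# particle swarm optimiser used in the paper.
eta_r = lambda xi: np.abs(np.sin(5 * xi[0]) * np.cos(3 * xi[1])) + 0.1 * xi[0]

bounds = [(-1.0, 1.0), (-1.0, 1.0)]             # the random space Gamma
result = differential_evolution(lambda xi: -eta_r(xi), bounds, seed=0, tol=1e-8)
xi_new = result.x                                # next sample location
print("new sample:", xi_new, " residual value:", eta_r(xi_new))
```

Maximisation is phrased as minimisation of the negated objective; any derivative-free global optimiser that tolerates non-smooth objectives could be substituted here.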

II-d. Sample the model at the new sample location

To refine the surrogate model, we add the new sample to the current sample set:

$$\Xi_{n+1} = \Xi_n \cup \{\xi_{n+1}\}, \qquad Q_{n+1} = Q_n \cup \{\mathbf{q}(\xi_{n+1})\}.$$

The new sample set is suited for constructing improved approximations $\widetilde{D_i \mathbf{v}}$, see equation (31), which are used in the next iteration of the refinement loop.

III. Final surrogate is constructed by interpolation

After the stopping criterion (34) is met, the refinement loop terminates and we end up with a sample set $\Xi_n$ and an evaluation set $Q_n$. In order to construct the final surrogate, we apply the interpolation operator to the final sample set, leading to

$$\tilde{\mathbf{q}}(\xi) = \mathcal{I}\left[ Q_n \right](\xi),$$

where $\tilde{\mathbf{q}}$ is expected to be an accurate approximation of $\mathbf{q}$. If desired, statistical quantities like (25) can be computed by using this surrogate to evaluate the required integrals.
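Once the final surrogate is available, statistical quantities such as (25) can be estimated with cheap Monte-Carlo sampling of the surrogate instead of the expensive black box; a sketch with a hypothetical scalar QoI and an assumed uniform PDF on [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical final surrogate: a polynomial interpolating a cheap
# stand-in QoI q(xi) at a handful of nodes in the random space [0, 1].
q = lambda xi: np.exp(-xi) * np.sin(3 * xi)
nodes = np.linspace(0.0, 1.0, 7)
surrogate = np.polynomial.Chebyshev.fit(nodes, q(nodes), deg=6)

# Cheap Monte-Carlo on the surrogate; the PDF is assumed uniform on [0, 1].
xi_mc = rng.uniform(0.0, 1.0, 100_000)
mean = surrogate(xi_mc).mean()
var = surrogate(xi_mc).var()
print(f"mean = {mean:.4f}, variance = {var:.4f}")
```

The Monte-Carlo step is inexpensive because each evaluation is a polynomial evaluation rather than a PDE solve, so large sample sizes are affordable.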

5 Implementation details

In this section some implementation details are discussed.

Surrogate modelling by polynomial interpolation for scalar QoIs

Interpolation for a scalar QoI is discussed here. Interpolation for a vector QoI is achieved by applying the interpolation operator to each element of the QoI vector individually.

Assume we have a sample set $\Xi_n$ and corresponding scalar QoI values $q(\xi_k)$, $k = 0, \ldots, n$. As mentioned in section 4, we denote by $\mathcal{I}$ the interpolation operator, which acts on the set $\{q(\xi_k)\}$. Polynomial interpolation aims to find a polynomial $\tilde{q}$ such that

$$\tilde{q}(\xi_k) = q(\xi_k), \qquad k = 0, \ldots, n.$$

The polynomial $\tilde{q}$ serves as a surrogate for $q$. Finding $\tilde{q}$ is straightforward in a 1D random space, but interpolation in multi-dimensional random spaces is not unique, and the result is influenced by the choice of the interpolation basis. Additionally, the sample locations from our adaptive sampling are scattered, which further complicates the interpolation procedure. In order to make the interpolation basis unique, graded lexicographically ordered Chebyshev polynomials $\{\phi_j\}$ are used, which are known for their stability in interpolation [22]. The interpolant is found by solving a Vandermonde system [9]:

$$V \mathbf{c} = \mathbf{q}, \qquad V_{kj} = \phi_j(\xi_k), \qquad \mathbf{q} = \left( q(\xi_0), \ldots, q(\xi_n) \right)^T, \qquad (41)$$

which results in the interpolant

$$\tilde{q}(\xi) = \sum_{j=0}^{n} c_j\, \phi_j(\xi).$$

The Vandermonde matrix $V$ can become ill-conditioned when the number of samples increases [24], which makes solving (41) difficult. Nevertheless, other approaches that circumvent solving (41) [25, 26, 27] do not have the flexibility to reuse the inverse computations of previous iterations when solving the larger system at the current iteration. This leads to a significant increase in computational expense, as the adaptive sample placement is done iteratively. Therefore, Vandermonde interpolation is used in the NIPPAS method, and a robust procedure for solving the possibly ill-conditioned system (41) is required.
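A 1D sketch of Vandermonde interpolation in a Chebyshev basis, solved with a pseudo-inverse as in the discussion above; the sample locations and QoI are invented for illustration:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Scattered 1D samples (xi_k, q_k) of a hypothetical QoI.
xi = np.array([-0.9, -0.4, 0.05, 0.3, 0.77])
qvals = np.exp(xi) * np.cos(2 * xi)

# Vandermonde system in the Chebyshev basis: V c = q.
V = C.chebvander(xi, deg=len(xi) - 1)
coeffs = np.linalg.pinv(V) @ qvals     # pseudo-inverse solve (cf. section 5)

surrogate = lambda x: C.chebval(x, coeffs)
print("max defect at samples:", np.abs(surrogate(xi) - qvals).max())
```

In the multi-dimensional case the rows of `V` would contain graded lexicographically ordered products of Chebyshev polynomials, but the solve step is identical.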

Several methods exist for solving such ill-conditioned systems of equations: regularisation [28], singular value decomposition [29], pivoted $QR$-factorisation [30], and pseudo-inversion [31]. In practice, computing the pseudo-inverse is often not advised due to its computational cost. However, our adaptive sampling algorithm requires the solution of multiple linear systems with the same Vandermonde matrix at each iteration, which makes the use of a pseudo-inverse beneficial. Nevertheless, computing the full pseudo-inverse at each iteration would be inefficient, and therefore Greville's algorithm [32] is used to update the pseudo-inverse after adding a new row/column to $V$. Consequently, interpolating polynomials can be constructed without solving the full linear system (41) and without computing the full pseudo-inverse each time the interpolation operator is applied. The implementation of Greville's algorithm is discussed in more detail in the next subsection.

Rank-one update of pseudo-inverse

As mentioned before, when adding a new sample to the sample set, an extra row and column are appended to the existing Vandermonde matrix in (41), and a new pseudo-inverse needs to be determined for this new Vandermonde matrix [32]. Instead of recomputing the entire pseudo-inverse, Greville's algorithm is used to update the pseudo-inverse of a given matrix $A$ recursively.

Assume we have a matrix $A$ and its corresponding pseudo-inverse $A^+$. When a column $\mathbf{a}$ is added to $A$, i.e.,

$$A_{\text{new}} = \begin{bmatrix} A & \mathbf{a} \end{bmatrix},$$

then Greville's algorithm computes the pseudo-inverse of the new matrix recursively. In more detail, with $\mathbf{d} = A^+ \mathbf{a}$ and $\mathbf{c} = \mathbf{a} - A \mathbf{d}$, the new pseudo-inverse of $A_{\text{new}}$ is given by [33]

$$A_{\text{new}}^+ = \begin{bmatrix} A^+ - \mathbf{d} \mathbf{b}^* \\ \mathbf{b}^* \end{bmatrix}, \qquad \mathbf{b}^* = \begin{cases} \mathbf{c}^* / (\mathbf{c}^* \mathbf{c}), & \mathbf{c} \neq \mathbf{0}, \\ (1 + \mathbf{d}^* \mathbf{d})^{-1} \mathbf{d}^* A^+, & \mathbf{c} = \mathbf{0}, \end{cases}$$

where $*$ denotes the conjugate transpose. Notice that our refinement loop adds both a row and a column to the Vandermonde matrix in (41) each time a new sample is added. Therefore the procedure described above has to be performed twice: first adding a column $\mathbf{a}$, then adding a column to the conjugate transpose of the resulting matrix, i.e., $A_{\text{new}}^*$.
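The column-update step of Greville's algorithm can be sketched and verified against a direct pseudo-inverse computation (real matrices are used here, so the conjugate transpose reduces to the transpose; the matrix sizes are illustrative):

```python
import numpy as np

def greville_add_column(A, A_pinv, a, tol=1e-12):
    """Update the pseudo-inverse when a column a is appended to A
    (Greville's algorithm): O(mn) work instead of a full recompute."""
    d = A_pinv @ a
    c = a - A @ d
    if np.linalg.norm(c) > tol:       # a is not in the range of A
        b = c / (c @ c)
    else:                             # a lies in the range of A
        b = (d @ A_pinv) / (1.0 + d @ d)
    return np.vstack([A_pinv - np.outer(d, b), b])

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
a = rng.standard_normal(6)

updated = greville_add_column(A, np.linalg.pinv(A), a)
direct = np.linalg.pinv(np.column_stack([A, a]))
print("max deviation from direct pinv:", np.abs(updated - direct).max())
```

A row update is obtained, as described above, by applying the same routine to the transpose of the matrix and transposing the result back.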

A recursive algorithm for calculating the pseudo-inverse is necessary to reduce the algorithmic complexity of the adaptive sampling procedure. For an $n \times n$ matrix, Greville's algorithm performs an inverse update in $O(n^2)$ operations, while computing the full pseudo-inverse directly costs $O(n^3)$ [32], which quickly becomes infeasible in high-dimensional random spaces, where the number of samples required for constructing an accurate surrogate is large.

Algorithmic complexity scales well for high dimensions

The algorithmic complexity gives an estimate of how the computational effort scales with the number of samples and the dimension of the random space.

The computational complexity is determined by the complexity of the individual parts: updating the pseudo-inverse, interpolation, and particle swarm optimisation. First we discuss the algorithmic complexity of a single iteration of the refinement loop.

Updating the pseudo-inverse is achieved by using Greville's algorithm. Assume a row and column are added to an existing pseudo-inverse of dimensions N × N. Using Greville's algorithm, this can be done in O(N²) operations [32] (discussed in section 5).

The interpolation procedure is fairly cheap when the pseudo-inverse is available. In detail, when interpolating on N sample locations, solving (41) only requires a matrix-vector multiplication of O(N²) operations.

The complexity of the particle swarm optimisation is determined by the number of particles used and the maximum number of iterations. In the worst case, the maximum number of iterations is reached and the residual has to be evaluated at every particle location in every iteration, without the guarantee of having found an optimum. Moreover, if we have N samples, then the residual is a polynomial of degree N−1 and can be evaluated with Horner's method [24] in O(N) operations. The total cost of the particle swarm optimisation for a single iteration of the surrogate construction therefore scales linearly with the number of particles, the number of iterations, the number of samples, and the number of terms in the PDE (5).
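Horner's scheme referred to above evaluates a degree-(N−1) polynomial in O(N) operations; a minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients coeffs[k] of x**k
    in O(N) operations using Horner's method."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# 3*x**2 + 2*x + 1 evaluated at x = 2 gives 17
value = horner([1.0, 2.0, 3.0], 2.0)
```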

The total cost of each iteration in the refinement loop is now given by:


Hence, if we run the refinement loop multiple times, a conservative upper bound for the total algorithmic complexity becomes


Notice that the complexity depends quadratically on the dimension of the random space. However, the number of samples necessary for achieving a specified accuracy will generally increase with the number of dimensions, and the computational cost will therefore increase faster than quadratically with the dimension of the random space.

6 Results

In this section we present multiple examples that illustrate the efficiency of our method. In order to study and compare convergence properties, two error measures are defined: one for the error in the statistical quantities, i.e., mean, variance, etc., and one for the error in the surrogate.

The error in the k-th statistical moment is the 2-norm of the difference between the exact and the approximate statistical moment as computed in (50), where the integral is calculated using numerical integration with negligible error, i.e., a tensor-based Gauss quadrature rule with 100 nodes in each direction. The uncertainties are assumed to be uniformly distributed, unless stated otherwise.
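A one-dimensional sketch of such a quadrature-based moment computation, assuming a uniform density on [-1, 1]; the helper name `raw_moment` is ours, not from the paper:

```python
import numpy as np

def raw_moment(surrogate, k, n_nodes=100):
    """k-th raw moment of a 1-D surrogate under a uniform density on
    [-1, 1], via Gauss-Legendre quadrature (tensorise per dimension
    for multi-dimensional random spaces)."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    pdf = 0.5  # uniform density on [-1, 1]
    return float(np.sum(weights * pdf * surrogate(nodes) ** k))
```

For polynomial surrogates, the 100-node rule integrates the moment exactly up to machine precision, which is why the quadrature error is negligible here.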

Secondly, the error in the surrogate is computed as follows:


where the sum is taken over a large number of uniform Monte-Carlo samples (5000) in the random space.
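The surrogate error measure can be sketched as follows; the function handles for the surrogate and the reference model are hypothetical placeholders:

```python
import numpy as np

def surrogate_error(surrogate, reference, samples):
    """Relative 2-norm error between a surrogate and a reference model,
    estimated over Monte-Carlo samples of the random parameters."""
    s = np.asarray([surrogate(x) for x in samples])
    r = np.asarray([reference(x) for x in samples])
    return float(np.linalg.norm(s - r) / np.linalg.norm(r))
```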

The first two test-cases consider a steady and an unsteady advection-diffusion equation, respectively. They demonstrate the difference between our approach and conventional empirical interpolation, the effect of the discretisation on error convergence, the difference between refinement measures (24) and (28), and the applicability to non-hypercube domains. The third and last test-case considers the non-linear shallow water equations and demonstrates how our method can be used in combination with a new state-of-the-art neural network based PDE solver.

Steady-state advection-diffusion equation

In this part we consider a test-case such that the assumptions in theorem 1 hold and we study the following:

  • Comparison of refinement measures (24) and (22).

  • Asymptotic sample distribution.

  • Comparison between NIPPAS and conventional empirical interpolation.

The underlying PDE is the dimensionless steady state advection-diffusion problem given by:


where the Reynolds number is assumed to be uncertain and a function of the random parameter. The equations are discretised using a finite-difference approach on an equidistant grid. The solution vector on the computational grid is obtained by solving the following linear system:




where the boundary conditions enter the discretised equation via the vector . Notice that the discretised PDE satisfies the assumptions of theorem 1. Example solutions for different Reynolds numbers are shown in figure 2.

Figure 2: Example solutions for different Reynolds numbers. The solutions are computed on an equidistant computational grid.

In order for the discretisation to produce stable results, the cell Reynolds number should remain small enough (at most 2 for central differences), which is achieved by picking the grid spacing sufficiently small, as is done in our case.
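Since the exact form of the discretised system is not reproduced here, the following sketch assumes the dimensionless model problem u' = (1/Re) u'' on (0, 1) with u(0) = 0 and u(1) = 1, discretised with central differences on an equidistant grid:

```python
import numpy as np

def solve_advection_diffusion(Re, n):
    """Central finite differences for u' = (1/Re) u'' on (0, 1) with
    u(0) = 0, u(1) = 1, on n interior points of an equidistant grid."""
    h = 1.0 / (n + 1)
    diff = 1.0 / (Re * h**2)     # diffusion stencil weight
    adv = 1.0 / (2.0 * h)        # advection stencil weight
    A = (np.diag(np.full(n, 2.0 * diff))
         + np.diag(np.full(n - 1, -diff - adv), -1)
         + np.diag(np.full(n - 1, -diff + adv), 1))
    f = np.zeros(n)
    f[-1] = diff - adv           # boundary condition u(1) = 1
    return np.linalg.solve(A, f)
```

The exact solution of this model problem, u(x) = (exp(Re x) − 1)/(exp(Re) − 1), develops a boundary layer at x = 1 for increasing Reynolds number.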

There is no significant difference in convergence between refinement measures (24) and (22)

The difference between refinement measures (24) and (22) is the incorporation of a scaling term in (22). This scaling term is in general expensive to compute as a function of the random parameter and is therefore often infeasible to incorporate in the refinement measure. However, for this simple test-case we can compute the scaling term explicitly and are able to study its effect on sample placement. In order to compare the two refinement measures (24) and (22), three different functional forms for the Reynolds number are used, where the random parameter is uniformly distributed. The three different functional forms are given by:


and are shown in figure 3. All three functions span the same range of Reynolds numbers over the random space, but have completely different shapes, which influences the scaling function and therefore the sample locations when using refinement measure (22).

A comparison between the two refinement measures (24) and (22) is made by taking the QoI to be the entire solution as a function of the random parameter. The adaptive sample placement starts by placing a single uniformly distributed sample in the random space. The solution values and the derivative values are sampled at this location and used to construct the interpolants required for computing the residual. The convergence of the error in the surrogate for both refinement measures and the different Reynolds functions is shown in figure 3.

Figure 3: The results for the steady-state advection-diffusion equation for the two refinement measures (24) and (22) and three different Reynolds functions in the random space.

The error in the surrogate converges exponentially fast to machine precision for both refinement measures, without any significant difference. Samples are placed at similar locations for both refinement measures, as shown in the empirical CDF plot in figure 3. Regarding stability of the interpolant, a clustering of samples occurs at the boundaries of the random space, which promotes stable interpolation.

Refinement measure (22) uses all the terms that depend on the random parameter and should be optimal for sample placement; see equations (12) and (22). However, the error in the surrogate is not only determined by sample placement, but also by the polynomial interpolation procedure. Consequently, the best refinement measure does not necessarily lead to the best convergence of the error. Including the scaling term in (22) appears to have little effect on sample placement, at least for this simple test-case. In fact, for this test-case one could refine directly on the error (because the exact solution is explicitly known), and this gives the same results as refining based on the residual. As there is no significant difference between the two refinement measures, we use refinement measure (24) in the remainder of this paper, as it is more generally applicable.

NIPPAS converges faster than conventional empirical interpolation depending on the QoI 

We compare the convergence of the surrogate for NIPPAS based on (24) with conventional empirical interpolation based on (30). The main difference (in the absence of PDF incorporation) is that empirical interpolation places new adaptive samples based on a residual computed from the entire solution, whereas NIPPAS uses a residual based only on the QoI. To compare both methods, the QoI is set to the solution value at the middle of the computational grid, and the Reynolds number is again an uncertain function of the random parameter. The convergence comparison is shown in figure 4.

Figure 4: Convergence comparison for NIPPAS method and conventional empirical interpolation for the steady-state advection diffusion equation.

The NIPPAS method enhances the surrogate by focusing on locations that are relevant for the QoI. This leads to faster convergence for NIPPAS when compared to conventional empirical interpolation. However, if more solution values from the solution vector were incorporated, i.e., more non-zero entries in the columns of the QoI mapping, the convergence speed-up would decrease, and the convergence eventually coincides with that of empirical interpolation when the QoI uses the entire solution vector.
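The role of the QoI mapping can be illustrated with a hypothetical selection matrix C that extracts entries of the solution vector; the names and sizes below are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical selection matrix C mapping the full solution vector u
# to the quantity of interest q = C @ u. A single nonzero entry picks
# out one grid value (as in this test-case); more nonzero columns
# widen the QoI towards the full solution vector.
n = 101                        # number of grid points (assumed)
C = np.zeros((1, n))
C[0, n // 2] = 1.0             # QoI = solution at the middle grid point
u = np.linspace(0.0, 1.0, n)   # placeholder solution vector
q = C @ u
```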

Unsteady advection-diffusion equation

In the previous example we applied NIPPAS to a steady-state PDE for a 1D random space in order to compare refinement measures and to compare with conventional empirical interpolation. In this section we study the following:

  • The effect of the discretisation method applied to the PDE.

  • The accuracy of NIPPAS for approximating statistical moments in a 2D random space.

  • The construction of a surrogate model on non-hypercube domains.

Therefore, we thoroughly study the NIPPAS for an unsteady advection-diffusion equation with two parameter uncertainties. We start with a 2D hypercube random space and then gradually increase the complexity of the test-case.

The underlying PDE is the 1D advection-diffusion equation, given by


where the advection parameter and the diffusion parameter are assumed to be uncertain and uniformly distributed. For problem (59) to be well-posed, initial and boundary conditions are required. A spectral spatial discretisation method [34] is used for solving (59); the boundary conditions are taken periodic and the initial condition is given by


The spectral spatial discretisation is performed on an equidistant grid, resulting in a solution vector:


where each entry is the approximate solution at the corresponding grid point. Additionally, taking derivatives can be written as a matrix-vector multiplication


where the matrix in this expression is the spectral differentiation matrix [34]. As a result, the solution vector can be obtained by solving the semi-discrete problem


which can be rewritten in the form (5) by discretising the time-derivative and formulating the resulting set of equations in matrix form. Notice that this results in a block-diagonal system.
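For a periodic equidistant grid, the action of the spectral differentiation matrix can equivalently be computed with the FFT. A sketch of the semi-discrete right-hand side, assuming a domain of length 2π and writing the equation as u_t = −a u_x + ν u_xx:

```python
import numpy as np

def semi_discrete_rhs(u, a, nu, L=2.0 * np.pi):
    """Right-hand side of the semi-discrete advection-diffusion system
    u_t = -a u_x + nu u_xx, with spectral (Fourier) derivatives on a
    periodic grid of assumed length L."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    u_hat = np.fft.fft(u)
    u_x = np.fft.ifft(1j * k * u_hat).real          # first derivative
    u_xx = np.fft.ifft(-(k**2) * u_hat).real        # second derivative
    return -a * u_x + nu * u_xx
```

For smooth periodic data this derivative is exact up to machine precision, so the spatial discretisation error is indeed negligible compared to the time discretisation error.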

A surrogate is created for the quantity of interest given by the solution at the right boundary of the physical domain at the final time.

Effect of discretisation method is small 

As we use a spectral method with a fine spatial resolution and the problem is linear, the time discretisation error is expected to be dominant; therefore the effect of the time discretisation method is studied. The semi-discrete problem (64) is solved using different time discretisation schemes, i.e., backward Euler, Crank-Nicolson and fourth-order explicit Runge-Kutta (RK4), where the time step is kept fixed.
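As an illustration, one Crank-Nicolson step for a semi-discrete linear system u_t = M u can be sketched as follows (dense linear algebra for brevity):

```python
import numpy as np

def crank_nicolson_step(u, M, dt):
    """One Crank-Nicolson step for the semi-discrete system u_t = M u:
    (I - dt/2 M) u_new = (I + dt/2 M) u_old."""
    I = np.eye(u.size)
    return np.linalg.solve(I - 0.5 * dt * M, (I + 0.5 * dt * M) @ u)
```

Backward Euler and RK4 replace this solve with their own update formulas; the adaptive sampling procedure itself is unaffected by the choice.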

To demonstrate the efficiency of the NIPPAS method, convergence is compared to a nested Smolyak grid [35] on the Clenshaw-Curtis nodes. The errors in the surrogate for both the NIPPAS method and the Smolyak solution are computed by using a Monte-Carlo reference solution based on 5000 samples. Notice that the reference solution changes depending on the time discretisation. The reference solution for a Crank-Nicolson time discretisation is shown in figure 5.

Figure 5: Reference solution for the advection-diffusion equation for the quantity based on 5000 samples.

The choice of the time discretisation method affects the accuracy of the black-box solver and changes the shape of the surrogate slightly, as different time discretisations do not give the same QoI at the same location in the random space. As a result, when changing the discretisation, a new reference solution has to be computed to study convergence. To further clarify, there is a difference between the exact/desired surrogate, which is obtained by solving (1), and the discrete exact surrogate, obtained by solving the discretised equations (5). This difference is shown in the following equation:


where the exact solution solves (1), the discrete solution solves (5), and the surrogate is based on the discrete solutions. For this reason, we plot the error in the surrogate with respect to both the exact solution and the discrete solution. The results are shown in figure 6.

Figure 6: Error (51) comparison between the Smolyak sparse grid surrogate and the NIPPAS surrogate for different time discretisation methods. The errors are plotted with respect to both the discrete solution of (64) and the exact solution of (59).

The error with respect to the discrete solution converges to zero, with a convergence rate that is significantly faster than that of the Smolyak sparse grid. Convergence is non-monotonic, which is common for adaptive sampling methods [11]. The error with respect to the exact solution stalls before machine precision is reached, which is due to the discretisation error. To clarify: since the error with respect to the discrete solution converges to zero, equation (65) implies that the observed error after stalling is precisely the discretisation error.

The convergence behaviour of the error with respect to the discrete solution is not dependent on the time discretisation, which indicates that the performance of our method does not depend on the underlying discretisation.

Faster convergence for statistical quantities with PDF weighting 

Next, to study the effect of the two different refinement measures (24) and (28), we now assume independent Beta-distributed input uncertainties:


where the shape parameters characterise the PDF and the normalisation constant is chosen such that the PDF has total probability 1 on the random space. Convergence behaviour is studied as a function of the shape of the underlying PDF, and therefore we vary the PDF parameters as follows:


which results in a total of 625 PDFs with widely different shapes. The convergence statistics of the mean, the variance and the entire surrogate are shown in figure 7.

Figure 7: Convergence in advection-diffusion surrogate, mean and variance for the two refinement measures (24) and (28). The error in the surrogate is computed with (51) with 5000 Monte-Carlo samples. The errors in the mean and variance are calculated with (50).

Figure 7 shows the average convergence behaviour taken over the 625 different PDFs. From the figure we may conclude that weighting the residual with the PDF results in slower convergence for the surrogate model, see equation (51), but leads to faster convergence in both the mean and the variance, which are computed using (50). Furthermore, the variation in convergence over all 625 PDFs, given by the shaded area around the mean line, has similar width for both refinement measures (24) and (28). The choice of refinement measure thus depends on the type of convergence the user requires, i.e., convergence in the surrogate or convergence in the statistical quantities. Notice that the shaded area for the convergence in the mean initially touches zero, which is due to the fact that for some PDFs the exact mean is zero; since the initial sample is placed at a location in the random space where the response is also zero, we start with an approximate mean that equals the exact mean. The sample placement does not stop immediately, however, as the stopping criterion (34) is not met.

Method shows fast convergence for non-hypercube domains 

The 2D Beta-distribution (66) just discussed comprises two independent uncertainties, and therefore the space in which we construct a surrogate is a hypercube. However, when dealing with dependent uncertainties, the space in which the surrogate is constructed is in general not a hypercube, and many existing UQ methods fail when dealing with a complex random space.

To study the performance of the NIPPAS method on more general geometries, we assume a non-hypercube domain with an associated uniform PDF. In order to sample the model on this domain, we restrict the residual (24) to the domain by multiplying it with an indicator function:




After altering the residual definition slightly and placing an initial sample randomly in the non-hypercube random space, the NIPPAS method can be applied straightforwardly. Note that the basis functions are the same as in the hypercube case, namely the Chebyshev polynomials defined on the smallest hypercube that contains the domain. In order to show the efficiency of our method for non-hypercube domains, we again construct a surrogate for the response shown in figure 5, but now on more complex domain shapes. Three different geometries with different characteristics are chosen:

  • sharp corners:

  • smooth boundary:

  • domain with holes:
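The indicator-function restriction of the residual can be sketched generically; the disc-shaped domain below is a hypothetical example, not one of the three geometries above:

```python
def restrict_residual(residual, indicator):
    """Restrict a refinement residual to a non-hypercube domain by
    multiplying it with the domain's indicator function."""
    return lambda xi: residual(xi) * float(indicator(xi))

# Hypothetical example: a disc-shaped random domain inside [-1, 1]^2.
in_disc = lambda xi: xi[0]**2 + xi[1]**2 <= 0.8**2
r = restrict_residual(lambda xi: 1.0 + xi[0], in_disc)
```

Candidate samples outside the domain then have zero residual and are never selected by the refinement loop.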

The convergence results are shown in figure 8.

Figure 8: Convergence of surrogate for three different non-hypercube domains.

The results show exponential convergence behaviour for all three geometries, but the convergence rates differ slightly, which is likely caused by the choice of basis, which is suboptimal for all domains. Furthermore, weighting the residual with the PDF in the case of non-uniformly distributed uncertainties is again possible, but the results are similar to those shown in figure 7 and are omitted to avoid repetition.

It has to be pointed out that the choice of a lexicographically ordered Chebyshev basis is far from optimal on non-hypercube domains. Nevertheless, good convergence behaviour is still achieved.

Shallow water equations with dependent random inputs

As a last test-case, we study the performance of the NIPPAS method for a hyperbolic non-linear PDE with dependent random inputs in a non-hypercube random space. In this test-case the underlying model is non-linear and comprises three uncertain parameters which lie on a 2D triangular manifold in a 3D space; it therefore combines the difficult aspects of all previous test-cases. The underlying model is a system of conservation laws, namely the 1D shallow water equations (SWEs). The SWEs describe inviscid fluid flow with a free surface, under the action of gravity, with the thickness of the fluid layer small compared to the other length scales [36]:


where h is the free surface height (the thickness of the fluid layer), u the velocity, and g the acceleration of gravity. Reflective boundary conditions are imposed at the domain boundaries, and the initial condition for the system of PDEs is given by a Riemann problem:


leading to a so-called dambreak problem. The three uncertain inputs are assumed to jointly follow a Dirichlet distribution, which is the multivariate generalisation of the 1D Beta-distribution [37] and is often used as a prior for a discrete categorical distribution in Bayesian statistics. The PDF of the Dirichlet distribution with shape parameters