1 Introduction
Uncertainty quantification in engineering and the applied sciences often requires random field descriptions of the spatial variability of uncertain media, loads and other quantities. Essentially, random field representation is a problem of data compression and reconstruction: the information of a random field must be compressed into a sufficiently small number of deterministic functions and random variables that can reconstruct the random field with acceptable accuracy.
A frequently used approach is the Karhunen-Loève (K-L) expansion Karhunen1947 ; Loeve1948 (also known as Principal Component Analysis (PCA) in the statistical community), which was originally proposed for representing stochastic processes: a continuous-time stochastic process is represented with countably many modes and latent factors. To model the spatial variability of a physical quantity in a multidimensional space, the most common approach is to directly extend the K-L expansion by treating the spatial coordinate as a macro parameter (referred to as trivial PCA throughout this paper). The modes are eigenfunctions of an integral operator induced by the covariance function, and the latent factors are uncorrelated in the sense of second-order moments. A comprehensive review of the methods (including Galerkin-type and collocation-type ones) for computing the modes was given in Betz2014 . Other procedures such as spectral element projection Oliveira2014 , the multilevel finite element method Xie2015 , the two-dimensional Haar wavelet Galerkin method Azevedo2018 and the high-order polynomial-based Ritz-Galerkin method Zhang2019 have also been proposed in recent years. To tackle the difficulties of the previous methods in treating complex geometries, modes were projected onto a subspace spanned by isogeometric basis functions in recent papers Rahman2018 ; Li2018 ; Jahanbin2019 . Meanwhile, several recent works have tried to avoid the computationally expensive eigenvalue problem of trivial PCA based on the idea of variable separation. Ghosh et al. Ghosh2014 redefined the covariance function of a spatial-temporal random field and computed the temporal and spatial modes with CP and Tucker decomposition, respectively. Zentner et al. Zentner2016 and Guo et al. Guo2016 independently proposed a hierarchical orthogonal decomposition method to split the temporal and spatial modes. A similar idea appeared in Zheng2017 ; however, that work did not truly achieve variable separation, since the modes still contain multiple coordinates. The stepwise covariance matrix decomposition method Li2019 derives a Tucker-type representation and is only suitable for rank-1 covariance functions.

Latent factors also play an important role in describing the probabilistic structure of a random field. For non-Gaussian random fields, which are widespread in practice, the latent factors are generally non-Gaussian and have higher-order dependencies, making the representation of their probabilistic structure a nontrivial task. To achieve this task, several methods are dedicated to matching the prescribed single-point marginal distribution function and covariance function. Phoon et al. Phoon2005 and Dai et al. Dai2019 proposed similar iterative procedures to update the marginal distribution of each latent factor. Kim et al. Kim2015a
iteratively updated the covariance function of the underlying Gaussian process based on the translation process theory. Other works estimate the probabilistic structure of the latent factors with data-driven procedures Ganapathysubramanian2008 ; Ma2011 ; Poirion2014 ; Olivier2016 . Since this paper focuses on representing a random field with prescribed statistics, we will not comment further on these methods.

For the computation of modes, although complex physical domains can be handled by the aforementioned isogeometric analysis (IGA)-based methods, the underlying architecture is still the trivial PCA in the physical domain, so these methods suffer from the common drawbacks of trivial PCA-based approaches: (1) high computational and memory demands; (2) the natural structure and correlation are broken, leading to the loss of potentially more compact and useful representations Lu2013 . Variable separation is indeed a promising way to reduce the computational scale of the modes and to preserve the natural structure. However, existing methods can only represent random fields defined on domains that admit a Cartesian product decomposition (i.e. cases where the random fields constitute structured data), while in many cases the random fields constitute unstructured data. Moreover, these methods generate a large number of latent factors, which brings a heavy computational burden to subsequent stochastic analysis. In addition, all the existing methods in both categories require manually predefining a tensor product basis (or collocation points) to discretize the modes and selecting quadrature points to compute the stiffness matrices. The lack of automation in these selections limits generality and adaptivity Gorodetsky2019 .
It is well known that a general non-Gaussian random field should be described with multiple statistics rather than with the covariance function only. The most widely used description, which combines the covariance function and the single-point marginal distribution function, cannot reflect the higher-order and nonlinear correlation structure of the random field. It was pointed out in Morton2009 that joint distributions of multiple points can be non-Gaussian even if each marginal distribution is Gaussian. It is also well known that, for a Gaussian random field, the cumulant functions of order three or higher are identically zero, which makes the cumulant functions important measures of non-Gaussianity. Therefore, it is preferable to develop cumulant descriptions of the latent factors based on prescribed cumulant functions.
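For reference, for a zero-mean field $w(\boldsymbol{x})$ the lowest-order cumulant functions take the standard forms (the notation $c_p$ is ours):

$$c_2(\boldsymbol{x}_1, \boldsymbol{x}_2) = \mathbb{E}[w(\boldsymbol{x}_1)\, w(\boldsymbol{x}_2)], \qquad c_3(\boldsymbol{x}_1, \boldsymbol{x}_2, \boldsymbol{x}_3) = \mathbb{E}[w(\boldsymbol{x}_1)\, w(\boldsymbol{x}_2)\, w(\boldsymbol{x}_3)],$$

$$c_4(\boldsymbol{x}_1, \dots, \boldsymbol{x}_4) = \mathbb{E}[w(\boldsymbol{x}_1) \cdots w(\boldsymbol{x}_4)] - c_2(\boldsymbol{x}_1, \boldsymbol{x}_2)\, c_2(\boldsymbol{x}_3, \boldsymbol{x}_4) - c_2(\boldsymbol{x}_1, \boldsymbol{x}_3)\, c_2(\boldsymbol{x}_2, \boldsymbol{x}_4) - c_2(\boldsymbol{x}_1, \boldsymbol{x}_4)\, c_2(\boldsymbol{x}_2, \boldsymbol{x}_3),$$

and a field is Gaussian exactly when $c_p \equiv 0$ for all $p \ge 3$.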
This paper aims at overcoming the challenges in both aspects mentioned above by proposing novel theoretical and algorithmic frameworks. Motivated by the aforementioned IGA-based works, we also use an isogeometric transformation to deal with complex domains. However, rather than following the trivial PCA in the physical domain, we propose a new architecture that represents the random field on the parametric domain in tensor train (TT) format. The core of the architecture is a newly developed rank-revealing algorithm for separating the variables of the cumulant functions. The need for predefining a tensor product basis or collocation points is completely removed, which greatly enhances adaptivity. Moreover, higher-order cumulant tensors of the latent factors can be conveniently computed in a unified framework, which is beneficial for further dimension reduction.
The rest of this paper is organized in five sections. The proposed theoretical framework and the computational procedures are detailed in Sections 2 and 3, respectively. In Section 4, we compare the proposed framework with three related methods. In particular, the relationship with independent component analysis (ICA) is discussed, since that approach also uses higher-order cumulants for dimension reduction. In Section 5, three examples with increasing dimensions are employed to validate the performance of the proposed framework.

2 Theoretical framework of the proposed method
Let $(\Omega, \mathcal{F}, P)$ be a probability space, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-field on $\Omega$ and $P$ is a probability measure. Let $(\mathcal{H}, \mathcal{B}(\mathcal{H}))$ be the measurable space on the admissible set $\mathcal{H}$ of realizations, where each realization is defined on a bounded domain $D$. A random field is defined as a measurable mapping $w: \Omega \to \mathcal{H}$. To explicitly represent this abstract mapping, an auxiliary measurable space $(\mathbb{R}^M, \mathcal{B}(\mathbb{R}^M))$ is needed, and the corresponding two mappings $\boldsymbol{\eta}: \Omega \to \mathbb{R}^M$ (the latent factors) and $g: \mathbb{R}^M \to \mathcal{H}$ (the deterministic reconstruction) are to be found such that $w \approx g \circ \boldsymbol{\eta}$. The task is to capture the major part of the information in $w$ with $M$ as small as possible.

2.1 Space transformation
The key to overcoming the first difficulty is to represent the physical domain with a structured parametric domain. For a wide variety of curve-type, surface-type and solid-type domains, exact coordinate transformations can be constructed by using the NURBS-based isogeometric transformation formulated as Eq.(1),
$$\boldsymbol{x}(\xi_1) = \sum_{i=1}^{n_1} R_i(\xi_1)\, \boldsymbol{P}_i, \tag{1a}$$

$$\boldsymbol{x}(\xi_1, \xi_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} R_{i,j}(\xi_1, \xi_2)\, \boldsymbol{P}_{i,j}, \tag{1b}$$

$$\boldsymbol{x}(\xi_1, \xi_2, \xi_3) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} R_{i,j,k}(\xi_1, \xi_2, \xi_3)\, \boldsymbol{P}_{i,j,k}, \tag{1c}$$
where $R_i$, $R_{i,j}$ and $R_{i,j,k}$ are NURBS basis functions defined as Eq.(2),
$$R_i(\xi_1) = \frac{N_{i,p}(\xi_1)\, w_i}{\sum_{\hat{i}=1}^{n_1} N_{\hat{i},p}(\xi_1)\, w_{\hat{i}}}, \tag{2a}$$

$$R_{i,j}(\xi_1, \xi_2) = \frac{N_{i,p}(\xi_1)\, M_{j,q}(\xi_2)\, w_{i,j}}{\sum_{\hat{i}=1}^{n_1} \sum_{\hat{j}=1}^{n_2} N_{\hat{i},p}(\xi_1)\, M_{\hat{j},q}(\xi_2)\, w_{\hat{i},\hat{j}}}, \tag{2b}$$

$$R_{i,j,k}(\xi_1, \xi_2, \xi_3) = \frac{N_{i,p}(\xi_1)\, M_{j,q}(\xi_2)\, L_{k,r}(\xi_3)\, w_{i,j,k}}{\sum_{\hat{i}=1}^{n_1} \sum_{\hat{j}=1}^{n_2} \sum_{\hat{k}=1}^{n_3} N_{\hat{i},p}(\xi_1)\, M_{\hat{j},q}(\xi_2)\, L_{\hat{k},r}(\xi_3)\, w_{\hat{i},\hat{j},\hat{k}}}, \tag{2c}$$
$\boldsymbol{P}_i$, $\boldsymbol{P}_{i,j}$ and $\boldsymbol{P}_{i,j,k}$ are control points, and $w_i$, $w_{i,j}$ and $w_{i,j,k}$ are weights. Each of $N_{i,p}$, $M_{j,q}$ and $L_{k,r}$ is a $p$-degree (respectively $q$- or $r$-degree, i.e. $(p+1)$-order) B-spline basis function. Given a knot vector $\Xi = \{\hat{\xi}_1, \hat{\xi}_2, \dots, \hat{\xi}_{n+p+1}\}$ (a non-decreasing set of coordinates in the parametric space $[0,1]$), the $N_{i,p}$ are defined recursively by the Cox-de Boor recursion formula in Eq.(3),

$$N_{i,0}(\xi) = \begin{cases} 1, & \hat{\xi}_i \le \xi < \hat{\xi}_{i+1}, \\ 0, & \text{otherwise}, \end{cases} \tag{3a}$$

$$N_{i,p}(\xi) = \frac{\xi - \hat{\xi}_i}{\hat{\xi}_{i+p} - \hat{\xi}_i}\, N_{i,p-1}(\xi) + \frac{\hat{\xi}_{i+p+1} - \xi}{\hat{\xi}_{i+p+1} - \hat{\xi}_{i+1}}\, N_{i+1,p-1}(\xi), \tag{3b}$$
where the convention $0/0 = 0$ is adopted. Thus, given the three ingredients (control points, weights and knot vectors), the isogeometric transformation can be constructed and formulated in a uniform format as Eq.(4),
$$\boldsymbol{x} = \boldsymbol{G}(\boldsymbol{\xi}), \tag{4}$$
where $\boldsymbol{\xi} = (\xi_1, \dots, \xi_d) \in \hat{D} = [0,1]^d$ is the parametric coordinate and $\boldsymbol{x} \in D$ is the physical coordinate.
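To illustrate Eqs.(1) to (3), the following Python sketch (an illustration of these standard formulas, not code from the original work; the function names and the quarter-circle test case are our own choices) evaluates the Cox-de Boor recursion and the rational basis of Eq.(2a):

```python
import numpy as np

def bspline_basis(i, p, knots, xi):
    """B-spline basis N_{i,p}(xi) by the Cox-de Boor recursion, Eq.(3),
    with the convention 0/0 = 0 (half-open intervals, so xi = 1 needs the
    usual closing fix in production code)."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    out, d1 = 0.0, knots[i + p] - knots[i]
    if d1 > 0.0:
        out += (xi - knots[i]) / d1 * bspline_basis(i, p - 1, knots, xi)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        out += (knots[i + p + 1] - xi) / d2 * bspline_basis(i + 1, p - 1, knots, xi)
    return out

def nurbs_curve_point(xi, knots, p, ctrl, wts):
    """Map xi in [0,1] to a physical point via Eqs.(1a) and (2a)."""
    N = np.array([bspline_basis(i, p, knots, xi) for i in range(len(ctrl))])
    R = N * wts / np.sum(N * wts)          # rational basis, Eq.(2a)
    return R @ np.asarray(ctrl, dtype=float)

# A quarter circle is represented exactly by a quadratic NURBS curve:
knots = [0, 0, 0, 1, 1, 1]
ctrl = [[1, 0], [1, 1], [0, 1]]
wts = np.array([1.0, np.sqrt(2) / 2, 1.0])
x = nurbs_curve_point(0.5, knots, 2, ctrl, wts)
print(x, np.linalg.norm(x))                # the point lies on the unit circle
```

Because the quarter circle admits an exact quadratic NURBS representation, the mapped point lies on the unit circle up to round-off.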
2.2 Generalized Karhunen-Loève expansion in the parametric space
After representing the physical coordinates with parametric ones, the original random field is transformed to the parametric space. Denote the transformed (zero-mean, without loss of generality) random field by $w(\boldsymbol{\xi}, \theta)$ and the corresponding covariance function by $C(\boldsymbol{\xi}, \boldsymbol{\xi}') = \mathbb{E}[w(\boldsymbol{\xi}, \theta)\, w(\boldsymbol{\xi}', \theta)]$. We seek to represent $w(\boldsymbol{\xi}, \theta)$ with a generalized K-L expansion by improving the hierarchical SVD method in Zentner2016 . Taking the case of $d = 2$ as an example, the first step is to split the $\xi_1$-modes $\boldsymbol{f}^{(1)} = [f^{(1)}_1, \dots, f^{(1)}_{r_1}]$ from $w$ by computing the eigenfunctions of the kernel in Eq.(5),

$$\int_0^1 K_1(\xi_1, \xi_1')\, f^{(1)}_i(\xi_1')\, \mathrm{d}\xi_1' = \lambda^{(1)}_i f^{(1)}_i(\xi_1), \qquad K_1(\xi_1, \xi_1') = \int_0^1 C\big((\xi_1, \xi_2), (\xi_1', \xi_2)\big)\, \mathrm{d}\xi_2. \tag{5}$$
Thus, a rank-$r_1$ decomposition is derived as Eq.(6),

$$w(\xi_1, \xi_2, \theta) \approx \sum_{i=1}^{r_1} f^{(1)}_i(\xi_1)\, w_i(\xi_2, \theta), \tag{6}$$
where

$$w_i(\xi_2, \theta) = \int_0^1 f^{(1)}_i(\xi_1)\, w(\xi_1, \xi_2, \theta)\, \mathrm{d}\xi_1, \tag{7}$$
and the corresponding covariance function of $\boldsymbol{w}(\xi_2, \theta) = [w_1(\xi_2, \theta), \dots, w_{r_1}(\xi_2, \theta)]^{\mathsf{T}}$ is a matrix-valued function in Eq.(8),

$$C^{(1)}_{ij}(\xi_2, \xi_2') = \mathbb{E}\big[w_i(\xi_2, \theta)\, w_j(\xi_2', \theta)\big], \qquad i, j = 1, \dots, r_1. \tag{8}$$
Then, rather than decomposing each component of $\boldsymbol{w}(\xi_2, \theta)$ separately as in Zentner2016 , $\boldsymbol{w}(\xi_2, \theta)$ is treated as a whole object by regarding the index $i$ as a new coordinate. Next, the $\xi_2$-modes of $\boldsymbol{w}(\xi_2, \theta)$ (also the $\xi_2$-modes of $w$), denoted $f^{(2)}_{i,k}(\xi_2)$, are split by computing the eigenpairs of $C^{(1)}$, see Eq.(9),

$$\sum_{j=1}^{r_1} \int_0^1 C^{(1)}_{ij}(\xi_2, \xi_2')\, f^{(2)}_{j,k}(\xi_2')\, \mathrm{d}\xi_2' = \lambda^{(2)}_k f^{(2)}_{i,k}(\xi_2), \qquad i = 1, \dots, r_1. \tag{9}$$
Finally, we get the tensor train decomposition of $w$ as Eq.(10),

$$w(\xi_1, \xi_2, \theta) \approx \sum_{i=1}^{r_1} \sum_{k=1}^{r_2} f^{(1)}_i(\xi_1)\, f^{(2)}_{i,k}(\xi_2)\, \sqrt{\lambda^{(2)}_k}\, \eta_k(\theta), \tag{10}$$
where

$$\eta_k(\theta) = \frac{1}{\sqrt{\lambda^{(2)}_k}} \sum_{i=1}^{r_1} \int_0^1 f^{(2)}_{i,k}(\xi_2)\, w_i(\xi_2, \theta)\, \mathrm{d}\xi_2. \tag{11}$$
The component functions in Eq.(10) satisfy the orthogonality conditions in Eq.(12),

$$\int_0^1 f^{(1)}_i(\xi_1)\, f^{(1)}_j(\xi_1)\, \mathrm{d}\xi_1 = \delta_{ij}, \tag{12a}$$

$$\sum_{i=1}^{r_1} \int_0^1 f^{(2)}_{i,k}(\xi_2)\, f^{(2)}_{i,l}(\xi_2)\, \mathrm{d}\xi_2 = \delta_{kl}, \tag{12b}$$

$$\mathbb{E}[\eta_k(\theta)\, \eta_l(\theta)] = \delta_{kl}. \tag{12c}$$
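To make the two-step construction of Eqs.(5) to (12) concrete, here is a minimal numerical sketch assuming a uniform grid, rectangle-rule quadrature and a separable exponential covariance; the grid sizes, ranks and all variable names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

n1, n2, r1, r2 = 60, 60, 8, 8
x1, x2 = np.linspace(0, 1, n1), np.linspace(0, 1, n2)
h1, h2 = 1.0 / n1, 1.0 / n2             # rectangle-rule quadrature weights

X1, X2 = np.meshgrid(x1, x2, indexing="ij")
P = np.column_stack([X1.ravel(), X2.ravel()])
C = np.exp(-np.abs(P[:, 0][:, None] - P[:, 0][None, :])
           - np.abs(P[:, 1][:, None] - P[:, 1][None, :]))
C4 = C.reshape(n1, n2, n1, n2)          # C4[i,a,j,b] = C((x1[i],x2[a]),(x1[j],x2[b]))

# Step 1, Eq.(5): contract xi_2 out of C and solve for the xi_1-modes f^(1).
K1 = np.einsum("iaja->ij", C4) * h2
lam1, F1 = np.linalg.eigh(K1 * h1)
order = np.argsort(lam1)[::-1][:r1]
F1 = F1[:, order] / np.sqrt(h1)         # orthonormal in the h1-weighted inner product

# Step 2, Eqs.(8)-(9): matrix-valued covariance C^(1), then one eigenproblem
# over the combined index (i, xi_2).
C1 = np.einsum("ui,uavb,vj->iajb", F1 * h1, C4, F1 * h1)
M = C1.reshape(r1 * n2, r1 * n2)
lam2, F2 = np.linalg.eigh(0.5 * (M + M.T) * h2)
order = np.argsort(lam2)[::-1][:r2]
lam2 = lam2[order]
F2 = F2[:, order].reshape(r1, n2, r2) / np.sqrt(h2)

# 2D modes of Eq.(10): g_k(xi_1, xi_2) = sum_i f^(1)_i(xi_1) f^(2)_{i,k}(xi_2).
G = np.einsum("ui,iak->uak", F1, F2)
print("leading eigenvalues:", lam2[:4])
```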
All the results can be directly extended to the case of $d = 3$, as Eqs.(13) to (18),

$$w(\xi_1, \xi_2, \xi_3, \theta) \approx \sum_{i=1}^{r_1} \sum_{j=1}^{r_2} \sum_{k=1}^{r_3} f^{(1)}_i(\xi_1)\, f^{(2)}_{i,j}(\xi_2)\, f^{(3)}_{j,k}(\xi_3)\, \sqrt{\lambda^{(3)}_k}\, \eta_k(\theta), \tag{13}$$

where each column of $\boldsymbol{f}^{(1)} = [f^{(1)}_1, \dots, f^{(1)}_{r_1}]$ is an eigenfunction of the covariance kernel in Eq.(14),

$$K_1(\xi_1, \xi_1') = \int_0^1 \int_0^1 C\big((\xi_1, \xi_2, \xi_3), (\xi_1', \xi_2, \xi_3)\big)\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3, \tag{14}$$

each column of $\boldsymbol{f}^{(2)}$ is an eigenfunction of the matrix-valued kernel in Eq.(15),

$$C^{(1)}_{i i'}(\xi_2, \xi_2') = \int_0^1 \mathbb{E}\big[w_i(\xi_2, \xi_3, \theta)\, w_{i'}(\xi_2', \xi_3, \theta)\big]\, \mathrm{d}\xi_3, \qquad w_i = \int_0^1 f^{(1)}_i(\xi_1)\, w\, \mathrm{d}\xi_1, \tag{15}$$

and each column of $\boldsymbol{f}^{(3)}$ is an eigenfunction of the matrix-valued kernel in Eq.(16),

$$C^{(2)}_{j j'}(\xi_3, \xi_3') = \mathbb{E}\big[w^{(2)}_j(\xi_3, \theta)\, w^{(2)}_{j'}(\xi_3', \theta)\big], \qquad w^{(2)}_j = \sum_{i=1}^{r_1} \int_0^1 f^{(2)}_{i,j}(\xi_2)\, w_i\, \mathrm{d}\xi_2. \tag{16}$$

The latent factors are

$$\eta_k(\theta) = \frac{1}{\sqrt{\lambda^{(3)}_k}} \sum_{j=1}^{r_2} \int_0^1 f^{(3)}_{j,k}(\xi_3)\, w^{(2)}_j(\xi_3, \theta)\, \mathrm{d}\xi_3, \tag{17}$$

and the component functions satisfy the orthogonality conditions

$$\int_0^1 f^{(1)}_i(\xi_1)\, f^{(1)}_{i'}(\xi_1)\, \mathrm{d}\xi_1 = \delta_{i i'}, \tag{18a}$$

$$\sum_{i=1}^{r_1} \int_0^1 f^{(2)}_{i,j}(\xi_2)\, f^{(2)}_{i,j'}(\xi_2)\, \mathrm{d}\xi_2 = \delta_{j j'}, \tag{18b}$$

$$\sum_{j=1}^{r_2} \int_0^1 f^{(3)}_{j,k}(\xi_3)\, f^{(3)}_{j,k'}(\xi_3)\, \mathrm{d}\xi_3 = \delta_{k k'}, \tag{18c}$$

$$\mathbb{E}[\eta_k(\theta)\, \eta_{k'}(\theta)] = \delta_{k k'}, \tag{18d}$$

where $\lambda^{(3)}_k$ is the $k$th eigenvalue of the covariance kernel defined in Eq.(16).
Taking $d = 2$ as an example, we have the following theorem:
Theorem 1.
The generalized K-L expansion in Eq.(10) is equivalent to the trivial PCA in the parametric space when $r_1 \to \infty$, i.e. when no truncation is performed in the first splitting step.
Proof.
Denote $\{\phi_k\}$ and $\{\lambda_k\}$ as the sets of 2D modes and eigenvalues in the trivial PCA, i.e. $\int_{[0,1]^2} C(\boldsymbol{\xi}, \boldsymbol{\xi}')\, \phi_k(\boldsymbol{\xi}')\, \mathrm{d}\boldsymbol{\xi}' = \lambda_k \phi_k(\boldsymbol{\xi})$, and $\{g_k\} = \{\sum_i f^{(1)}_i(\xi_1)\, f^{(2)}_{i,k}(\xi_2)\}$ and $\{\lambda^{(2)}_k\}$ as the corresponding sets in the generalized K-L expansion. We only need to prove that $\{\phi_k\} = \{g_k\}$ and $\{\lambda_k\} = \{\lambda^{(2)}_k\}$.

First, we prove $\{\phi_k\} \subseteq \{g_k\}$ and $\{\lambda_k\} \subseteq \{\lambda^{(2)}_k\}$. For each $\phi_k$, by projecting $\phi_k$ on the complete orthonormal system $\{f^{(1)}_i\}$ ($r_1 \to \infty$), we derive $\phi_k(\xi_1, \xi_2) = \sum_i f^{(1)}_i(\xi_1)\, h_{i,k}(\xi_2)$ with $h_{i,k}(\xi_2) = \int_0^1 f^{(1)}_i(\xi_1)\, \phi_k(\xi_1, \xi_2)\, \mathrm{d}\xi_1$. Substituting this expansion into the trivial PCA eigenproblem and projecting both sides on $f^{(1)}_i$ again yields $\sum_j \int_0^1 C^{(1)}_{ij}(\xi_2, \xi_2')\, h_{j,k}(\xi_2')\, \mathrm{d}\xi_2' = \lambda_k h_{i,k}(\xi_2)$, i.e. $(h_{1,k}, h_{2,k}, \dots)$ solves the eigenproblem in Eq.(9) with eigenvalue $\lambda_k$, so $\phi_k \in \{g_k\}$ and $\lambda_k \in \{\lambda^{(2)}_k\}$.

Then, we prove $\{g_k\} \subseteq \{\phi_k\}$ and $\{\lambda^{(2)}_k\} \subseteq \{\lambda_k\}$. For each $g_k$, by substituting $g_k$ into the left-hand side of the trivial PCA eigenproblem and using Eq.(9) together with the completeness of $\{f^{(1)}_i\}$, we derive $\int_{[0,1]^2} C(\boldsymbol{\xi}, \boldsymbol{\xi}')\, g_k(\boldsymbol{\xi}')\, \mathrm{d}\boldsymbol{\xi}' = \lambda^{(2)}_k g_k(\boldsymbol{\xi})$, so $g_k \in \{\phi_k\}$ and $\lambda^{(2)}_k \in \{\lambda_k\}$.

Finally, we get $\{\phi_k\} = \{g_k\}$ and $\{\lambda_k\} = \{\lambda^{(2)}_k\}$.
∎
Thus, Eq.(10) is mean-square convergent as $r_1, r_2 \to \infty$. It is easy to verify that this theorem is also valid for other values of $d$.
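Theorem 1 can also be checked numerically. Continuing the sketch given after Eq.(12) with no first-step truncation (setting r1 = n1 there), the hierarchical eigenvalues coincide with those of the trivial PCA on the full grid up to round-off:

```python
# Re-run the previous sketch with r1 = n1 (no first-step truncation); then the
# hierarchical eigenvalues lam2 coincide with the trivial PCA eigenvalues of
# the full 2D covariance operator, as Theorem 1 asserts.
lam_pca = np.sort(np.linalg.eigh(C * h1 * h2)[0])[::-1]
print(np.allclose(lam_pca[:r2], lam2, atol=1e-8))   # True when r1 = n1
```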
2.3 Representing higher-order cumulants of latent factors
It is well known that the probabilistic structure of a random field is uniquely defined by its family of finite-dimensional marginal distributions. For arbitrary points $\boldsymbol{\xi}^{(1)}, \dots, \boldsymbol{\xi}^{(n)}$ in the parametric space and arbitrary $n$, according to the A-type Gram-Charlier series, the corresponding marginal probability density function (PDF) can be represented as Eq.(19),

$$p(w_1, \dots, w_n) = \phi_{\boldsymbol{C}}(w_1, \dots, w_n) \bigg[ 1 + \sum_{s \ge 3} \frac{1}{s!} \sum_{i_1, \dots, i_s = 1}^{n} c_{i_1 \cdots i_s}\, H_{i_1 \cdots i_s}(w_1, \dots, w_n) \bigg], \tag{19}$$

where $w_t = w(\boldsymbol{\xi}^{(t)}, \theta)$, $\phi_{\boldsymbol{C}}$ is the PDF of the Gaussian distribution with mean zero and covariance matrix $\boldsymbol{C} = \big[ C(\boldsymbol{\xi}^{(t)}, \boldsymbol{\xi}^{(t')}) \big]_{t,t'=1}^{n}$, the $H$ with subscripts are multivariate Hermite polynomials, and the expansion coefficients are evaluations of the family of cumulant functions $\mathcal{K}$, which consists of the elements $c_3(\boldsymbol{\xi}, \boldsymbol{\xi}', \boldsymbol{\xi}'')$ (the third-order cumulant function) and so on. Meanwhile, $\mathcal{K}$ is itself defined by the family of marginal PDFs. Thus, together with the covariance function, $\mathcal{K}$ is equivalent to the family of marginal PDFs in terms of describing the probabilistic structure of a random field. In practical problems, only finite orders of cumulant functions are available. To absorb the information of the higher-order ($\ge 3$) cumulant functions into the representation, taking the case of $d = 2$ as an example, since the reconstructed random field has the form in Eq.(10), the relationship between the third-order cumulant function and the third-order cumulants of the latent factors is expressed as Eq.(20),

$$c_3(\boldsymbol{\xi}, \boldsymbol{\xi}', \boldsymbol{\xi}'') = \sum_{k,l,m=1}^{r_2} g_k(\boldsymbol{\xi})\, g_l(\boldsymbol{\xi}')\, g_m(\boldsymbol{\xi}'')\, \operatorname{cum}(\eta_k, \eta_l, \eta_m), \qquad g_k(\boldsymbol{\xi}) = \sqrt{\lambda^{(2)}_k} \sum_{i=1}^{r_1} f^{(1)}_i(\xi_1)\, f^{(2)}_{i,k}(\xi_2). \tag{20}$$
This relationship can be directly extended to other values of $d$ and to other cumulant orders. Unfortunately, the cumulant tensor $\operatorname{cum}(\eta_k, \eta_l, \eta_m)$ is generally not orthogonally decomposable Robeva2016 ; hence, we resort to the higher-order singular value decomposition (HOSVD) to further reduce the dimensionality of the latent factors. Finally, by combining the coordinate transformation in Eq.(4) with Eqs.(10) (or (13)) and (20), we get a parametric representation of the original random field in the sense of the given cumulants.
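The projection that recovers the cumulant tensor of the latent factors from Eq.(20), followed by an HOSVD, can be sketched as follows; this is a self-contained illustration with fabricated data, in which quadrature weights are dropped by assuming the discretized modes form orthonormal columns:

```python
import numpy as np

# Illustration of Eq.(20) and the HOSVD step: the modes g_k are stand-in
# orthonormal columns of Gmat, and the third-order cumulant function kappa3
# is fabricated from a known symmetric core so the round trip is checkable.
rng = np.random.default_rng(0)
N, r = 100, 4
Gmat = np.linalg.qr(rng.standard_normal((N, r)))[0]
core = rng.standard_normal((r, r, r))
core = sum(core.transpose(p) for p in
           [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6
kappa3 = np.einsum("uk,vl,wm,klm->uvw", Gmat, Gmat, Gmat, core)   # Eq.(20)

# Inverting Eq.(20) by projection: third-order cumulant tensor of the eta_k.
C3 = np.einsum("uk,vl,wm,uvw->klm", Gmat, Gmat, Gmat, kappa3)

# HOSVD: left singular vectors of each mode-unfolding, then the core tensor.
U = [np.linalg.svd(np.moveaxis(C3, mode, 0).reshape(r, -1),
                   full_matrices=False)[0] for mode in range(3)]
S = np.einsum("klm,ka,lb,mc->abc", C3, U[0], U[1], U[2])
recon = np.einsum("abc,ka,lb,mc->klm", S, U[0], U[1], U[2])
print(np.allclose(recon, C3), np.allclose(C3, core))   # both True here
```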
3 Computational procedure
In this section, we seek an algorithm framework which is general enough to overcome the difficulties in both aspects mentioned in Section 1.
3.1 Space transformation
3.2 Generalized Karhunen-Loève expansion in the parametric space
After representing the physical domain $D$ with a structured parametric domain $\hat{D}$, the core of the new algorithm framework is representing the modes of $w(\boldsymbol{\xi}, \theta)$. This task is accomplished with the following two steps.
3.2.1 Tensor train decomposition of cumulant functions
According to the theoretical framework in the previous section, it will be very beneficial for fast computations if the cumulant functions have separable forms. Borrowing the idea in Savostyanov2014 , we propose a tensor train decomposition algorithm to obtain a low-rank separable approximation to a $p$th-order ($p \ge 2$) cumulant function $c_p$.
For the sake of simplicity, $c_p$ is redefined as an auxiliary function $\kappa(z_1, \dots, z_n)$, where $(z_1, \dots, z_n)$ is a permutation of the original coordinates. The idea is to reconstruct $\kappa$ only by using some of its fibers. More precisely, we seek a separable representation of $\kappa$ as Eq.(21),

$$\tilde{\kappa}(z_1, \dots, z_n) = \kappa(z_1, \mathcal{I}_1)\, \big[\kappa(\mathcal{J}_1, \mathcal{I}_1)\big]^{-1} \kappa(\mathcal{J}_1, z_2, \mathcal{I}_2)\, \big[\kappa(\mathcal{J}_2, \mathcal{I}_2)\big]^{-1} \cdots \kappa(\mathcal{J}_{n-1}, z_n), \tag{21}$$

where $\mathcal{J}_k$ (left interpolation points, i.e. values of $(z_1, \dots, z_k)$) and $\mathcal{I}_k$ (right interpolation points, i.e. values of $(z_{k+1}, \dots, z_n)$) are the $k$th pair of interpolation sets; both have cardinality $r_k$, which is the $k$th rank. The optimal choice of the $k$th pair of interpolation sets is the solution to the maximum-volume problem in Eq.(22),

$$(\mathcal{J}_k, \mathcal{I}_k) = \arg\max \big| \det \kappa(\mathcal{J}_k, \mathcal{I}_k) \big|. \tag{22}$$
However, the search for a maximum-volume submatrix is an NP-hard (non-deterministic polynomial-time hard) problem. Hence, we propose a heuristic algorithm to find a quasi-optimal choice of the interpolation sets and derive an adaptive tensor train decomposition of a cumulant function, see Algorithm 1.
Algorithm 1: adaptive tensor train decomposition of a cumulant function.
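One standard heuristic for the maximum-volume problem in Eq.(22) is the maxvol swap iteration, sketched below for the row-selection step; this is a generic realization under our own naming, not necessarily the exact procedure of Algorithm 1:

```python
import numpy as np
from scipy.linalg import qr

def maxvol(A, tol=1.05, max_iter=200):
    """Quasi-optimal row selection for a tall N x r matrix A: the standard
    maxvol swap iteration, a common heuristic for the maximum-volume
    problem of Eq.(22)."""
    N, r = A.shape
    _, _, piv = qr(A.T, pivoting=True)     # pivoted QR gives a nonsingular start
    I = np.array(piv[:r])
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[I])        # B[I] is the identity by construction
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if np.abs(B[i, j]) <= tol:         # no swap enlarges |det A[I]| by > tol
            break
        I[j] = i                           # row i replaces the j-th selected row
    return np.sort(I)

# Usage: pick r interpolation rows from a discretized unfolding of kappa.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 6))
rows = maxvol(A)
print(rows, abs(np.linalg.det(A[rows])))
```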
After initialization, the interpolation sets for each two neighboring dimensions are progressively enriched by left-to-right and right-to-left sweeps until the maximum approximation error is smaller than the prescribed threshold. The interpolation sets enriched in this way are nested from both sides, i.e. $\mathcal{J}_k \subseteq \mathcal{J}_{k-1} \times \mathcal{Z}_k$ and $\mathcal{I}_{k-1} \subseteq \mathcal{Z}_k \times \mathcal{I}_k$, where $\mathcal{Z}_k$ denotes the set of candidate points in the $k$th coordinate. Then, each fiber in Eq.(21) is adaptively represented in the chebfun format Driscoll2014 . Next, the fibers of each two neighboring dimensions form a chebmatrix, as in lines 46, 48 and 50 of Algorithm 1.
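The fiber representation step can be mimicked with an adaptive Chebyshev interpolation in the spirit of chebfun; the following is a minimal stand-in for the chebfun machinery of Driscoll2014 , with the tolerance, degrees and sample fiber being arbitrary choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def adaptive_cheb(f, tol=1e-10, max_deg=4096):
    """Chebfun-style adaptive representation of a 1D fiber on [0, 1]:
    interpolate at Chebyshev points, doubling the degree until the trailing
    coefficients are negligible, then truncate."""
    g = lambda t: f(0.5 * (t + 1.0))           # map [-1, 1] to [0, 1]
    deg = 16
    while deg <= max_deg:
        c = cheb.chebinterpolate(g, deg)
        scale = np.max(np.abs(c))
        if np.max(np.abs(c[-2:])) < tol * scale:
            keep = np.nonzero(np.abs(c) >= tol * scale)[0][-1] + 1
            return cheb.Chebyshev(c[:keep], domain=[0.0, 1.0])
        deg *= 2
    raise RuntimeError("fiber not resolved to the requested tolerance")

# Usage: represent a smooth covariance fiber adaptively.
fiber = adaptive_cheb(lambda x: np.exp(-((x - 0.3) ** 2) / 0.02))
print(fiber.degree(), fiber(0.7))
```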