1 Introduction
Coefficient functions are often one of the main difficulties in modeling a real life problem. Due to the nature of the phenomena, these coefficients can be oscillatory and have high contrast features, both of which hinder the development of fast and accurate numerical solutions. Under some conditions, numerical methods like discontinuous Galerkin methods can give robust schemes, as in Chung et al. (2018, 2014). More importantly, however, coefficients are not constant and usually involve uncertainties and randomness. Therefore, a stochastic partial differential equation (SPDE) is considered for the modeling of these stochastic coefficient functions. There are several popular approaches for solving SPDEs numerically, for example, Monte Carlo simulation, the stochastic Galerkin method, and the stochastic collocation method in Babuška et al. (2007); Babuska et al. (2004); Ghanem and Spanos (2003). All these methods require high computational power in realistic simulations and cannot be generalized to similar situations. In the past decades, machine learning techniques have been applied in many areas such as image classification, fluid dynamics and the solution of differential equations in Brunton et al. (2020); Kutz (2017); Lu et al. (2021); Zhao and Du (2016); Heinlein et al. (2021); Vasilyeva et al. (2020); Wang et al. (2020); Chung et al. (2021); Yeung et al. (2020). In this paper, instead of directly solving the SPDE by machine learning, we take an intermediate approach that combines the advantages of machine learning with the accuracy of traditional numerical solvers.
We first apply the Karhunen-Loève expansion to the stochastic coefficient, as in Dostert et al. (2006); Wheeler et al. (2011); Zhang and Lu (2004), to capture its characteristics with finitely many terms. After this expansion, the SPDE is reduced to a deterministic differential equation and can be solved with a balancing domain decomposition by constraints (BDDC) algorithm, which avoids the computational problems caused by the refinement of the spatial mesh resolution and gives an accurate solution, rather than just using a machine learning technique without suitably designed conditions. After obtaining a sufficient number of random solutions from the BDDC method, we can derive the statistics of the original SPDE solution. However, the cost of forming and solving the generalized eigenvalue problems in the BDDC algorithm with good accuracy is considerable, especially for high dimensional problems. We note that such eigenvalue problems are considered to enrich the coarse space adaptively in the BDDC algorithm. Therefore, we introduce a neural network with
a user-defined number of layers and neurons to compensate for the increase in computational cost caused by repeated BDDC solves. The adaptive BDDC algorithm with enriched primal unknowns is considered in this paper, as its ability to handle oscillatory and high contrast coefficients has been shown in Kim et al. (2017); Kim and Chung (2015). Among different domain decomposition methods, the considered adaptive BDDC does not require strong assumptions on the coefficient functions and the subdomain partitions to achieve a good performance, unlike the standard BDDC algorithm Dohrmann (2003); Mandel et al. (2005). This is because the additional coarse space basis functions computed from the dominant eigenfunctions are related to the ill-conditioning of the standard BDDC algorithm with highly varying coefficients. The introduction of these dominant eigenfunctions thus greatly improves the robustness of the numerical scheme for problems with rough and high contrast coefficients. Moreover, the condition number of this adaptive BDDC algorithm can be shown to be controlled solely by a given tolerance, without any extra assumptions on the coefficients or subdomain partitions. For simplicity, in the remainder of the paper, we will call the resulting numerical scheme, which combines the machine learning technique and the adaptive BDDC algorithm, the learning adaptive BDDC algorithm.

Finally, to end this section, we state the main idea of our proposed scheme and the model problem. In this paper, we integrate an artificial neural network with learning abilities into a BDDC algorithm with adaptively enriched coarse spaces for efficient neural network approximation of solutions of stochastic elliptic problems. Let $D$ be a bounded domain in $\mathbb{R}^2$ and $(\Omega, \mathcal{F}, P)$ be a stochastic probability space:

$$-\nabla \cdot \bigl( \rho(x,\omega) \nabla u(x,\omega) \bigr) = f(x) \quad \text{in } D, \qquad u(x,\omega) = 0 \quad \text{on } \partial D, \quad \text{for a.e. } \omega \in \Omega, \tag{1}$$

where $f \in L^2(D)$ and the coefficient $\rho(x,\omega)$ is uniformly positive and highly heterogeneous with very high contrast. Here, we only assume a two-dimensional spatial domain; however, the three-dimensional case is also applicable and is shown to be robust for deterministic elliptic problems in Kim et al. (2017). Moreover, as the model problem (1) can be extended to fluid flow problems, we also call the coefficient function the permeability function.
The rest of the paper is organized as follows. In Section 2, a brief formulation of the BDDC algorithm with adaptively enriched coarse problems is presented for two-dimensional elliptic problems. Then, the Karhunen-Loève expansion is introduced and the details of the artificial neural network used are clarified in Section 3. Two concrete examples with an explicit analytical expression of the Karhunen-Loève expansion, together with some network training and testing parameters, are presented in the first half of Section 4, and results of various numerical experiments are presented subsequently. Finally, a concluding remark is given in Section 5.
2 Adaptive BDDC algorithm
The main feature of the adaptive BDDC is the local generalized eigenvalue problem defined on every subdomain interface, which introduces the adaptively enriched primal unknowns. In this section, we repeat the formulation presented by the first and second authors in Kim et al. (2017) and give a brief overview of the adaptive BDDC algorithm. We refer to Kim et al. (2017); Kim and Chung (2015) for further details, where the analysis of the condition number estimation and the robustness of numerical scheme with oscillatory and high contrast coefficients can be found.
2.1 Local linear system
We first introduce a discrete form of the model problem (1) in a deterministic fashion, writing $\rho(x)$ for a fixed realization of the permeability. Let $V_h$ be the space of conforming linear finite element functions with respect to a given mesh on $D$ with mesh size $h$ and with zero value on $\partial D$. We will then find the approximate solution $u_h \in V_h$ such that

$$a(u_h, v) = (f, v), \quad \forall v \in V_h, \tag{2}$$

where

$$a(u, v) = \int_D \rho(x) \nabla u \cdot \nabla v \, dx, \qquad (f, v) = \int_D f v \, dx. \tag{3}$$
We assume that the spatial domain $D$ is partitioned into a set of $N$ nonoverlapping subdomains $D_i$, $i = 1, \ldots, N$, so that $\overline{D} = \cup_{i=1}^{N} \overline{D}_i$. We note that the subdomain boundaries do not cut the triangles equipped for $V_h$. We allow the coefficient $\rho(x)$ to have high contrast jumps and oscillations across subdomains and on subdomain interfaces. Let $a_i(\cdot,\cdot)$ be the bilinear form of the model elliptic problem (2) restricted to each subdomain $D_i$, defined as

$$a_i(u, v) = \int_{D_i} \rho(x) \nabla u \cdot \nabla v \, dx, \quad \forall u, v \in X_i,$$

where $X_i$ is the restriction of $V_h$ to $D_i$.
In the BDDC algorithm, the original problem (2) is reduced to a subdomain interface problem and solved by an iterative method combined with a preconditioner. The interface problem can be obtained by solving a Dirichlet problem in each subdomain. After choosing dual and primal unknowns among the subdomain interface unknowns, the interface problem is then solved by utilizing local problems and one global coarse problem corresponding to the chosen sets of dual and primal unknowns, respectively. At each iteration, the residuals are multiplied by certain scaling factors to balance the errors across the subdomain interface with respect to the energy of each subdomain problem. The coarse problem aims to correct the global part of the error in each iteration, and thus the choice of primal unknowns is important in obtaining a good performance as the number of subdomains increases. The basis for primal unknowns is obtained by the minimum energy extension for a given constraint at the location of primal unknowns, and such a basis provides a robust coarse problem with a good energy estimate. We refer to Dohrmann (2003); Li and Widlund (2006); Mandel et al. (2005); Toselli and Widlund (2005) for general introductions to the BDDC algorithm.
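To make the elimination of interior unknowns concrete, the following sketch (our own illustration in NumPy, not the authors' code) computes a local Schur complement for a hypothetical splitting of the unknowns into interior and interface sets:

```python
import numpy as np

def schur_complement(A, interior, interface):
    """Eliminate the interior unknowns of a local stiffness matrix A.

    Returns S = A_GG - A_GI * inv(A_II) * A_IG, the interface operator
    that the BDDC algorithm works with instead of A itself.
    """
    A_II = A[np.ix_(interior, interior)]
    A_IG = A[np.ix_(interior, interface)]
    A_GI = A[np.ix_(interface, interior)]
    A_GG = A[np.ix_(interface, interface)]
    # Solve A_II X = A_IG instead of forming the inverse explicitly.
    X = np.linalg.solve(A_II, A_IG)
    return A_GG - A_GI @ X

# Small SPD example: 1D Laplacian on 4 unknowns, interior = {1, 2}.
A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
S = schur_complement(A, interior=[1, 2], interface=[0, 3])
```

The resulting $S$ is again symmetric positive definite on the interface unknowns, which is what makes the interface problem amenable to preconditioned conjugate gradients.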
2.2 Notation and preliminary results
To facilitate our discussion, we first introduce some notation. Let $S^{(i)}$ be the Schur complement matrix obtained from the local stiffness matrix $A^{(i)}$ after eliminating unknowns interior to $D_i$, where $A^{(i)}$ is defined by $\langle A^{(i)} u, v \rangle = a_i(u, v)$, for all $u, v \in X_i$. In the following, we will use the same symbol to represent a finite element function and its corresponding coefficient vector in order to simplify the notation.
Recall that $X_i$ is the restriction of the finite element space $V_h$ to each subdomain $D_i$. Let $W_i$ be the restriction of $X_i$ to $\partial D_i$. We then introduce the product spaces

$$X = \prod_{i=1}^{N} X_i, \qquad W = \prod_{i=1}^{N} W_i,$$

where we remark that the functions in $X$ and $W$ are totally decoupled across the subdomain interfaces. In addition, we introduce partially coupled subspaces $\widetilde{X} \subset X$, $\widetilde{W} \subset W$, and fully coupled subspaces $\widehat{X} \subset \widetilde{X}$, $\widehat{W} \subset \widetilde{W}$, where some primal unknowns are strongly coupled for functions in $\widetilde{X}$ or $\widetilde{W}$, while the functions in $\widehat{X}$, $\widehat{W}$ are fully coupled across the subdomain interfaces.
Next, we present a basic description of the BDDC algorithm; see Dohrmann (2003); Li and Widlund (2006); Mandel et al. (2005); Toselli and Widlund (2005). For simplicity, the two-dimensional case will be considered. After eliminating unknowns interior to each subdomain, the Schur complement matrices $S^{(i)}$ are obtained from $A^{(i)}$, and they form the algebraic problem considered in the BDDC algorithm, which is to find $\widehat{w} \in \widehat{W}$ such that

$$\left( \sum_{i=1}^{N} \widehat{R}_i^T S^{(i)} \widehat{R}_i \right) \widehat{w} = \widehat{g}, \tag{4}$$

where $\widehat{R}_i$ is the restriction operator into $W_i$, and $\widehat{g}$ depends on the source term $f$.
The BDDC preconditioner is built based on the partially coupled space $\widetilde{W}$. Let $\widetilde{R}_i$ be the restriction into $W_i$ and let $\widetilde{S}$ be the partially coupled matrix defined by

$$\widetilde{S} = \sum_{i=1}^{N} \widetilde{R}_i^T S^{(i)} \widetilde{R}_i.$$
For the space $\widetilde{W}$, we can express it as the product of the two spaces

$$\widetilde{W} = \widehat{W}_\Pi \times W_\Delta,$$

where $\widehat{W}_\Pi$ consists of vectors of the primal unknowns and $W_\Delta$ consists of vectors of dual unknowns, which are strongly coupled at the primal unknowns and decoupled at the remaining interface unknowns, respectively. We define $\widetilde{R} : \widehat{W} \to \widetilde{W}$ such that

$$\widetilde{R} = R_\Pi \times R_\Delta,$$

where $R_\Pi$ is the mapping from $\widehat{W}$ to $\widehat{W}_\Pi$ and $R_\Delta$ is the restriction from $\widehat{W}$ to $W_\Delta$. We note that $R_\Delta$ is obtained as

$$R_\Delta = \prod_{i=1}^{N} R_\Delta^{(i)},$$

where $R_\Delta^{(i)}$ is the restriction from $\widehat{W}$ to $W_{\Delta,i}$ and $W_{\Delta,i}$ is the space of dual unknowns of $W_i$.
The BDDC preconditioner is then given by

$$M^{-1} = \widetilde{R}_D^T \widetilde{S}^{-1} \widetilde{R}_D, \tag{5}$$

where $\widetilde{R}_D = \widetilde{D} \widetilde{R}$ and $\widetilde{D}$ is a scaling matrix of the form

$$\widetilde{D} = \operatorname{diag}\bigl( D^{(1)}, D^{(2)}, \ldots, D^{(N)} \bigr).$$
Here the matrices $D^{(i)}$ are defined for unknowns in $W_i$, and they are introduced to make the preconditioner robust to the heterogeneity of $\rho$ across the subdomain interface. In more detail, $D^{(i)}$ consists of blocks $D_F^{(i)}$ and $D_V^{(i)}$, where $F$ denotes an equivalence class shared by two subdomains, i.e., $D_i$ and its neighboring subdomain $D_j$, and $V$ denotes the end points of $F$, respectively. We call such equivalence classes $F$ and $V$ edge and vertex, respectively, in two dimensions. In our BDDC algorithm, unknowns at subdomain vertices are included in the set of primal unknowns, and adaptively selected primal constraints are later included in the set after a change of basis formulation. For a given edge $F$ in two dimensions, the matrices $D_F^{(i)}$ and $D_V^{(i)}$ satisfy a partition of unity property, i.e., $D_F^{(i)} + D_F^{(j)} = I$ and $\sum_{k \in n(V)} D_V^{(k)} = I$, where $n(V)$ denotes the set of subdomain indices sharing the vertex $V$. The matrices $D_F^{(i)}$ and $D_V^{(i)}$ are called scaling matrices. As mentioned earlier, the scaling matrices help to balance the residual error at each iteration with respect to the energy of the subdomain problems sharing the interface. For the case when $\rho$ is identical across the interface $F$, $D_F^{(i)}$ and $D_F^{(j)}$ are chosen simply as multiplicity scalings, i.e., $D_F^{(i)} = D_F^{(j)} = \frac{1}{2} I$, but for a general case when $\rho$ has discontinuities, different choices of scalings, such as $\rho$-scalings or deluxe scalings, can be more effective. The scaling matrices $D_V^{(k)}$ can be chosen using similar ideas. We refer to Klawonn et al. (2016) and references therein for scaling matrices.
2.3 Generalized eigenvalue problems
After the preliminaries, we are ready to state the generalized eigenvalue problems which introduce the adaptive enrichment of coarse components. For an equivalence class $F$ shared by two subdomains $D_i$ and $D_j$, the following generalized eigenvalue problem is proposed in Klawonn et al. (2014):

$$\left( (D_F^{(j)})^T S_F^{(i)} D_F^{(j)} + (D_F^{(i)})^T S_F^{(j)} D_F^{(i)} \right) v = \lambda \left( \widetilde{S}_F^{(i)} : \widetilde{S}_F^{(j)} \right) v, \tag{7}$$

where $S_F^{(i)}$ and $S_F^{(j)}$ are the block matrices of $S^{(i)}$ and $S^{(j)}$ corresponding to unknowns interior to $F$, respectively. The matrix $\widetilde{S}_F^{(i)}$ is the Schur complement of $S^{(i)}$ obtained after eliminating unknowns except those interior to $F$. In addition, for symmetric and positive semidefinite matrices $A$ and $B$, their parallel sum $A : B$ is defined by, see Anderson and Duffin (1969),

$$A : B = A (A + B)^+ B, \tag{8}$$
where $(A + B)^+$ is a pseudo inverse of $A + B$. We note that the problem in (7) is identical to that considered in Dohrmann and Pechstein when the $D_F^{(i)}$ are chosen as the deluxe scalings, i.e.,

$$D_F^{(i)} = \bigl( \widetilde{S}_F^{(i)} + \widetilde{S}_F^{(j)} \bigr)^{-1} \widetilde{S}_F^{(i)}.$$

We note that a similar generalized eigenvalue problem is considered and extended to three-dimensional problems in the first and second authors' work Kim et al. (2017).
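As a small illustration of the parallel sum in (8), the sketch below (our own helper, not the authors' implementation) evaluates it with a pseudo inverse and checks it against the scalar identity $a : b = ab/(a+b)$, familiar from resistors in parallel:

```python
import numpy as np

def parallel_sum(A, B):
    """Parallel sum A : B = A (A + B)^+ B of two symmetric
    positive semidefinite matrices (Anderson and Duffin, 1969)."""
    return A @ np.linalg.pinv(A + B) @ B

# For 1x1 matrices the parallel sum reduces to a*b/(a+b).
a, b = np.array([[3.0]]), np.array([[6.0]])
print(parallel_sum(a, b))  # [[2.]]
```

The parallel sum is again symmetric positive semidefinite and is dominated by each of its arguments, which is what makes it a suitable right-hand side operator in (7).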
3 Learning adaptive BDDC algorithm
In addition to the BDDC algorithm, we perform the Karhunen-Loève (KL) expansion to decompose the stochastic permeability in preparation for the learning adaptive BDDC algorithm. As shown in Zhang and Lu (2004), only a few terms of the KL expansion are already enough to approximate the stochastic permeability with reasonable accuracy, which can efficiently reduce the computational cost. Once we obtain the KL series expansion, we can easily produce a certain amount of sample data as a training set for the artificial neural network. After the training process, the neural network captures the main characteristics of the dominant eigenvectors from the adaptive BDDC algorithm, and a neural network approximation can be obtained. In our proposed algorithm, we still need some samples from the deterministic BDDC algorithm to train the neural network, which is a process of relatively high computational cost. However, after this setup, we can apply this network to obtain an approximate solution directly without the use of other stochastic sampling methods such as Monte Carlo simulation, which involves an even higher computational cost for the generation of a large number of random samples or realizations. In this section, we discuss the proposed algorithm in detail. Moreover, our resulting neural network can also be applied to problems with similar stochastic properties. It thus saves the cost of training again, and we can readily use the trained neural network for prediction. Some testing samples will be generated to test the network performance and to show its generalization capacity in a later section.
3.1 Karhunen-Loève expansion
To start our learning adaptive algorithm, we first apply the KL expansion on the stochastic field $\rho(x,\omega)$. To ensure the positivity of the stochastic permeability field, a logarithmic transformation is considered, $Y(x,\omega) = \log \rho(x,\omega)$. Let $C(x, y)$ be the covariance function of $Y$ at two locations $x$ and $y$. Since the covariance function is symmetric and positive definite, it can be decomposed into:

$$C(x, y) = \sum_{k=1}^{\infty} \lambda_k f_k(x) f_k(y), \tag{9}$$

where $\lambda_k$ and $f_k$ are eigenvalues and eigenfunctions computed from the following Fredholm integral equation:

$$\int_D C(x, y) f_k(y) \, dy = \lambda_k f_k(x), \tag{10}$$

where the $f_k$ are orthogonal and deterministic functions. It is noted that for some specially chosen covariance functions, we can find the eigenpairs analytically. We will list some examples in the section on numerical results. After solving the Fredholm integral equation, we are ready to express the KL expansion of the log permeability:

$$Y(x,\omega) = \mathbb{E}[Y](x) + \sum_{k=1}^{\infty} \sqrt{\lambda_k} f_k(x) \theta_k(\omega). \tag{11}$$
Here $\mathbb{E}[Y](x)$ is the expected value of the log permeability $Y$, and the $\theta_k(\omega)$ are independent and identically distributed Gaussian random variables with mean 0 and variance 1. As mentioned above, a few terms of the KL expansion can give a reasonably accurate approximation; this can now be explained, as we can just keep the dominant eigenfunctions in (11) since the remaining eigenvalues decay rapidly. Therefore, the KL expansion is an efficient method to represent the stochastic coefficient, and it facilitates the subsequent process. Although analytical eigenfunctions and eigenvalues cannot always be found, there are various numerical methods to compute the KL coefficients in Schwab and Todor (2006); Wang (2008). After generating some sets of $\theta_k$, which are the only stochastic variables in the permeability after the KL expansion, as the input of the neural network, we plug the KL expanded permeability coefficients back into the adaptive BDDC algorithm, and the resulting dominant eigenfunctions in the coarse spaces will be the target output of our designated neural network.
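For illustration, a truncated KL sample of the log permeability can be generated as in the sketch below. This is our own minimal one-dimensional example: it uses Brownian-motion-type eigenpairs $\lambda_k = 4/((2k-1)^2\pi^2)$, $f_k(t) = \sqrt{2}\sin((2k-1)\pi t/2)$ on $[0,1]$ as a stand-in, and the zero mean field and truncation level are arbitrary choices (the covariance functions actually used in the experiments appear in Section 4):

```python
import math
import random

def kl_sample(x, mean, m, theta):
    """Evaluate a truncated KL expansion of Y = log(permeability) at x in [0, 1]:
    Y(x) = mean(x) + sum_{k=1}^{m} sqrt(lambda_k) f_k(x) theta_k."""
    y = mean(x)
    for k in range(1, m + 1):
        lam = 4.0 / ((2 * k - 1) ** 2 * math.pi ** 2)                 # eigenvalue
        f = math.sqrt(2.0) * math.sin((2 * k - 1) * math.pi * x / 2)  # eigenfunction
        y += math.sqrt(lam) * f * theta[k - 1]
    return y

random.seed(0)
m = 4                                                  # truncation level (illustrative)
theta = [random.gauss(0.0, 1.0) for _ in range(m)]     # i.i.d. N(0, 1) inputs
Y = [kl_sample(x / 10, lambda t: 0.0, m, theta) for x in range(11)]
rho = [math.exp(v) for v in Y]                         # positive permeability field
```

Each draw of `theta` yields one realization; the same vector `theta` is what the neural network later receives as input.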
3.2 Neural network
There are many types of neural networks for different usages. Since in our learning adaptive BDDC algorithm the desired neural network is to capture numerical features from the data in a supervised learning setting, a fully connected feedforward neural network is chosen. An illustration of the network structure is shown in Figure 1. In a general feedforward neural network, there are hidden layers between the input and output layers, and the arrows from layer to layer indicate that all the neurons in the previous layer are used to produce every neuron in the next layer. The computational cost increases with the number of hidden layers, but no significant improvement in prediction was observed in our tests; a single hidden layer is thus chosen throughout the numerical experiments. We denote the number of neurons in the input layer by $N_{in}$, which is the number of terms in the truncated KL expansion. The inputs are the Gaussian random variables $\theta_k$ in one realization. The output of the network is a column vector that consists of all dominant eigenvectors in the coarse space. The number of neurons in the output layer is denoted by $N_{out}$, which is the sum of the lengths of all resulting eigenvectors obtained from the deterministic BDDC algorithm. This value depends on many factors, such as the geometry of the spatial domain, the structure of the grid, and the parameters of the BDDC algorithm.
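A feedforward map of this shape can be sketched as follows; the layer sizes, the tanh activation, and the initialization scale here are illustrative choices of ours, not the exact configuration of the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_network(n_in, n_hidden, n_out):
    """Randomly initialize a fully connected network with one hidden
    layer (input -> hidden -> output)."""
    return {
        "W1": rng.standard_normal((n_hidden, n_in)) * 0.1, "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_out, n_hidden)) * 0.1, "b2": np.zeros(n_out),
    }

def forward(net, theta):
    """Map the KL random variables theta (length n_in) to a predicted
    stacked eigenvector of length n_out."""
    h = np.tanh(net["W1"] @ theta + net["b1"])   # hidden layer
    return net["W2"] @ h + net["b2"]             # linear output layer

net = init_network(n_in=4, n_hidden=10, n_out=20)
y = forward(net, rng.standard_normal(4))
```

A linear output layer is natural here because the target eigenvector entries are unbounded real numbers rather than class probabilities.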
In the training, we use the scaled conjugate gradient algorithm in Møller (1993) for the minimization task. Therefore, unlike the gradient descent method, we do not need to determine the value of the learning rate at each step. However, the initial choice for the scaled conjugate gradient algorithm may still affect the performance of training. Before training, we have to generate training samples and decide on some stopping criteria. When the minimization process has been completed for all the training samples, one epoch is finished. There are two stopping criteria that are usually considered: a minimum for the cost function gradient and a maximum number of epochs trained. When either of these conditions is satisfied, the training is stopped and the network is tested for performance on a testing set. In both training and testing results, in order to obtain an error measure less affected by the scales of the data set, a normalized root mean squared error (NRMSE) is used as a reference error to assess the quality of the network estimation and prediction:

$$\mathrm{NRMSE} = \frac{\sqrt{\dfrac{1}{N_{out} N_s} \displaystyle\sum_{s=1}^{N_s} \bigl\| y_s - \mathrm{NN}(\theta_s) \bigr\|_2^2}}{\max_{s,j} \bigl| (y_s)_j - \mathrm{NN}(\theta_s)_j \bigr|},$$

where $\theta_s$ is a column vector of random variables that follow the standard normal distribution in the $s$th realization, $y_s$ is the vector of dominant eigenvectors obtained from the adaptive BDDC algorithm when the corresponding KL expanded stochastic permeability function is used, and $\mathrm{NN}$ is the regression function describing the neural network (NN). Here, $N_{out}$ is the dimension of the network output, and $N_s$ is the number of samples. For simplicity, the considered NRMSE can be understood as the root mean squared error divided by the maximum element of the absolute difference over all the eigenvectors. Overall, the proposed scheme can be summarized as follows:

Step 1: Perform KL expansion on the logarithmic stochastic permeability function

Step 2: Generate realizations of $\theta$ and obtain the corresponding BDDC dominant eigenvectors $y$, which are the training data for the neural network

Step 3: Define training conditions and train the neural network

Step 4: Examine whether the network performance in terms of the NRMSE is satisfactory; otherwise, go back to Step 3 and change the training conditions
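The error check in Step 4 can be realized as in the following sketch, which is our reading of the NRMSE definition above: the root mean squared error of the stacked eigenvector predictions, normalized by the largest absolute entrywise difference (the normalization is undefined for a perfect fit, where the error is simply zero):

```python
import math

def nrmse(targets, predictions):
    """Normalized root mean squared error over n_s samples of
    length-n_o output vectors: RMSE / max absolute difference."""
    diffs = [t - p for ts, ps in zip(targets, predictions) for t, p in zip(ts, ps)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return rmse / max(abs(d) for d in diffs)

# Two hypothetical samples with two output entries each.
targets = [[1.0, 2.0], [3.0, 4.0]]
predictions = [[1.1, 2.0], [3.0, 3.8]]
err = nrmse(targets, predictions)
```

In the full scheme, `targets` would hold the BDDC dominant eigenvectors of the testing set and `predictions` the network outputs for the same realizations of $\theta$.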
4 Numerical results
In this section, supporting numerical results are presented to show the performance of our proposed learning adaptive BDDC algorithm. We will consider various choices of permeability coefficients with two major stochastic behaviours. As our numerical tests focus on the learning algorithm, we fix all parameters that are not related to it. In all the experiments, $D$ is chosen to be a unit square spatial domain, and it is partitioned into 16 uniform square subdomains with coarse grid size $H = 1/4$. Each subdomain is then further divided into uniform grids with a fine grid size $h$. In the following, we will first describe the considered stochastic coefficients and training conditions.
4.1 Choices of stochastic coefficients
From the KL expansion expression (11), we know the eigenvalues and eigenfunctions computed from the Fredholm integral equation are related to the covariance function of $Y$, which can be treated as the starting point of the KL expansion. Throughout the experiments, two specially chosen covariance functions, whose eigenvalues and eigenfunctions in the Fredholm integral equation can be computed analytically, are considered with different expected values. For $x = (x_1, x_2)$ and $y = (y_1, y_2)$ in $D$, these two covariance functions are

Brownian sheet covariance function:

$$C(x, y) = \min(x_1, y_1) \, \min(x_2, y_2),$$

where the corresponding eigenvalues and eigenfunctions are

$$\lambda_{ij} = \frac{16}{(2i-1)^2 (2j-1)^2 \pi^4}, \qquad f_{ij}(x) = 2 \sin\left( \frac{(2i-1)\pi x_1}{2} \right) \sin\left( \frac{(2j-1)\pi x_2}{2} \right).$$

Exponential covariance function:

$$C(x, y) = \sigma^2 \exp\left( -\frac{|x_1 - y_1|}{\eta_1} - \frac{|x_2 - y_2|}{\eta_2} \right),$$

where $\sigma^2$ and $\eta_i$ are the variance and the correlation length in the $x_i$ direction of the process, respectively. The corresponding eigenvalues and eigenfunctions are

$$\lambda_{ij} = \lambda_i^{(1)} \lambda_j^{(2)}, \qquad f_{ij}(x) = f_i^{(1)}(x_1) \, f_j^{(2)}(x_2), \qquad \lambda_n^{(i)} = \frac{2 \eta_i \sigma}{\eta_i^2 (w_n^{(i)})^2 + 1}, \qquad f_n^{(i)}(t) = \frac{\eta_i w_n^{(i)} \cos(w_n^{(i)} t) + \sin(w_n^{(i)} t)}{\sqrt{\bigl( \eta_i^2 (w_n^{(i)})^2 + 1 \bigr)/2 + \eta_i}},$$

where $w_n^{(1)}$ is the $n$th positive root of the characteristic equation

$$(\eta_1^2 w^2 - 1) \sin w = 2 \eta_1 w \cos w$$

in the $x_1$ direction, and the same for $w_n^{(2)}$, which is the $n$th positive root of the analogous characteristic equation in the $x_2$ direction. After arranging the roots $w_n^{(1)}$ and $w_n^{(2)}$ in ascending order, we immediately obtain a monotonically decreasing series of one-dimensional eigenvalues, which facilitates our selection of the dominant eigenfunctions in the truncated KL expansion with $m$ terms. For clarification, the positive index pairs $(i, j)$ are mapped to a single index $k$ so that $\lambda_k$ is monotonically decreasing. It is noted that these eigenvalues and eigenfunctions are just products of the solutions of the one-dimensional processes in the Fredholm integral equation, since the covariance functions are separable.
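When an analytical form is not at hand, the roots can be found numerically. The sketch below (our own helper, not the authors' code) bisects for the positive roots of $(\eta^2 w^2 - 1)\sin w = 2\eta w \cos w$, a commonly used form of the characteristic equation for the one-dimensional exponential covariance on $[0,1]$, and forms the corresponding eigenvalues with $\sigma = 1$; the bracketing intervals are an empirical choice that we verified for $\eta$ of order one:

```python
import math

def char_eq(w, eta):
    """Characteristic equation (eta^2 w^2 - 1) sin w - 2 eta w cos w = 0."""
    return (eta**2 * w**2 - 1.0) * math.sin(w) - 2.0 * eta * w * math.cos(w)

def positive_roots(eta, n_roots):
    """Bisect for the first n_roots positive roots; empirically the n-th
    root lies in ((n-1)*pi, (n-1/2)*pi) for eta = O(1)."""
    roots = []
    for n in range(1, n_roots + 1):
        lo, hi = (n - 1) * math.pi + 1e-9, (n - 0.5) * math.pi - 1e-9
        for _ in range(200):  # plain bisection
            mid = 0.5 * (lo + hi)
            if char_eq(lo, eta) * char_eq(mid, eta) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

eta = 1.0
ws = positive_roots(eta, 4)
lams = [2.0 * eta / (eta**2 * w**2 + 1.0) for w in ws]  # eigenvalues, sigma = 1
```

Since the eigenvalue is a decreasing function of the root, sorting the roots in ascending order automatically sorts the eigenvalues in descending order, matching the truncation strategy described above.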
4.2 Training conditions
In the training, there are different sets of parameters that could affect the performance of the neural network. We list them as follows:

Number of truncated terms in KL expansion:

Number of hidden layers:

Number of neurons in the hidden layer: 10

Stopping criteria

Minimum value of the cost function gradient: or

Maximum number of training epochs:


Sample size of training set:

Sample size of testing set:
After we obtain an accurate neural network from the proposed algorithm, besides the NRMSE of the testing set samples, we also present some characteristics of the preconditioner based on the approximate eigenvectors from the learning adaptive BDDC algorithm, which include the number of iterations required and the minimum and maximum eigenvalues of the preconditioned system. To show the performance of preconditioning with our predicted dominant eigenvectors, we will use the infinity norm to measure the largest difference, and also the following symmetric mean absolute percentage error (sMAPE), proposed in Makridakis (1993), to give a relative error type measure with an intuitive range between 0% and 100%:

$$\mathrm{sMAPE} = \frac{100\%}{N_s} \sum_{k=1}^{N_s} \frac{|F_k - A_k|}{|A_k| + |F_k|},$$

where the $A_k$ are target quantities from preconditioning and $F_k$ is the same type of quantity as $A_k$ from preconditioning using our predicted eigenvectors. In the following subsections, all the computations and results were obtained using MATLAB R2019a with an Intel Xeon Gold 6130 CPU and a GeForce GTX 1080 Ti GPU in parallel.
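The sMAPE comparison of preconditioner quantities can be computed as in this short sketch (our implementation of the 0-100% variant described above; the iteration counts are made-up illustrative values):

```python
def smape(targets, estimates):
    """Symmetric mean absolute percentage error in [0%, 100%]:
    mean of |F - A| / (|A| + |F|), scaled by 100."""
    total = 0.0
    for a, f in zip(targets, estimates):
        denom = abs(a) + abs(f)
        total += abs(f - a) / denom if denom > 0.0 else 0.0
    return 100.0 * total / len(targets)

# Compare, e.g., iteration counts of the target and estimated preconditioners.
target_iters = [12, 13, 12, 14]
estimated_iters = [13, 13, 12, 15]
err = smape(target_iters, estimated_iters)
```

Because each term is normalized by $|A_k| + |F_k|$, the measure is insensitive to the absolute scale of the compared quantity, which is why it complements the infinity norm in the tables below.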
4.3 Brownian sheet covariance function
The first set of numerical tests is based on the KL expansion of the Brownian sheet covariance function with the following expected functions:


.
The first expected function is a random coefficient, as its value is randomly chosen for each fine grid element, while the second expected function is a smooth trigonometric function. The common property of these two choices is that the resulting permeability function is highly oscillatory with high contrast in magnitude, which makes them good candidates to test our method on oscillatory and high contrast coefficients. Here, we show the appearance of the two mean functions and the corresponding permeability functions in Figure 2. We can observe, from the first row of Figure 2 to the second row, that is, from the appearance of the expected function to the logarithmic permeability coefficient $Y$, that there are no significant differences except a minor change of the values on each fine grid element. Nevertheless, these small stochastic changes cause a high contrast after taking the exponential, as in the third row of Figure 2.
In Table 1, the training results of these two expected functions under the training conditions mentioned are presented. Although the random initialization of the conjugate gradient algorithm for minimization may have an influence, in general, the neural network for the first expected function usually needs more epochs until the stopping criterion is reached. However, this increase in training epochs does not bring a better training NRMSE when compared with the second expected function. We can see the same phenomenon when the testing set is considered. The main reason is that, owing to the smoothness of the second expected function, the neural network can capture its characteristics more easily than for a random coefficient.
Case | Epochs trained | Training NRMSE
First expected function | 2.29e+05 | 1.97e-02
Second expected function | 5.39e+04 | 3.77e-03

Table 1: Training records for the two expected functions.
In the testing results, we consider another set of data samples as a testing set. After obtaining the predicted eigenvectors from the neural network, we plug them into the BDDC preconditioned solver and obtain an estimated preconditioner. For the estimated preconditioner, we report several properties, such as the number of iterations needed for the iterative solver and the maximum and minimum eigenvalues of the preconditioned system. Therefore, to better show the performance of the network, besides the testing error NRMSE, we also use the sMAPE and the infinity norm of the differences in the iteration numbers and in the maximum and minimum eigenvalues between the estimated preconditioner and the target preconditioner. The comparison results are listed below:
Case | Testing NRMSE | Iteration number | Minimum eigenvalue | Maximum eigenvalue
First expected function | 6.58e-02 | 6.94e-02 (2) | 2.58e-07 (4.02e-06) | 4.45e-03 (3.58e-02)
Second expected function | 6.73e-03 | 6.93e-02 (1) | 3.19e-05 (1.50e-04) | 2.01e-03 (4.94e-02)

(The last three columns report the sMAPE, with the infinity norm error in parentheses.)
It is clear that the testing NRMSE of the second expected function is also much smaller than that of the first. One possible reason is that the function used is smooth, so the neural network can better capture the properties of the resulting stochastic permeability function. We can see that the testing NRMSE values of both expected functions are just a bit larger than the training NRMSE. This suggests that the considered neural network is capable of giving a good prediction of the dominant eigenvectors in the coarse space even when the magnitude of the coefficient function changes dramatically across each fine grid element. Nevertheless, even though a more accurate set of eigenvectors can be obtained for the second expected function, its performance after the BDDC preconditioning is not always better than that of the first. Obviously, the errors in the minimum and maximum eigenvalues are larger for the second expected function.
Detailed comparisons can be found in Figure 3, where the overlapping area represents how close the specified quantity of the estimated preconditioner is to that of the target preconditioner. In the first row of figures, we can see that the number of iterations required for the predicted eigenvectors is usually larger than the target iteration number; however, the difference in the iteration number is not highly related to the differences in the minimum and maximum eigenvalues. We will see more examples verifying this in this and later results. The bin width of the histogram of minimum eigenvalues for the first expected function is very small, which is in coherence with the infinity norm error, but we eliminate this small magnitude effect by considering the sMAPE. The high ratio of overlapping area for the first expected function confirms the usage of the sMAPE as a good measure, as the resulting sMAPE 2.58e-07 of the first expected function is much smaller than the 3.19e-05 of the second.
For the last column of errors, the infinity norm of the maximum eigenvalues is larger for the second expected function. Nevertheless, this does not mean that the neural network for the second expected function performs worse. From its histogram, we can see an extremely concentrated and overlapping area at the bin around 1.04; moreover, there are some outliers in the prediction results, which is one of the sources of a larger infinity norm. Therefore, the sMAPE can well represent the performance of preconditioning with the estimated results, and we can conclude that both neural networks show good results on oscillatory and high contrast coefficients, as represented by the NRMSE and sMAPE. Although the infinity norm does not describe the characteristics of the performance in a full picture, it is still a good and intuitive measure; for example, from the difference in iteration numbers, we can immediately see how large the worst case is. In the next subsection, besides an artificial coefficient, we will consider one that is closer to a real life case.
4.4 Exponential covariance function
To test our method on realistic highly varying coefficients, we consider expected functions that come from the second model of the 10th SPE Comparative Solution Project (SPE10), with the KL expanded exponential covariance function as the stochastic source. For clarification, in this set of experiments, we first use the modified permeability of Layer 35 of the SPE10 data as the expected function in the exponential covariance function. We then train the neural network, and the resulting network is used to test its generalization capacity in the following different situations:

Different stochastic behaviour:

All remain unchanged except


Different mean permeability function:

The expected function is changed to Layer 34 of the SPE10 data, but the stochastic parameters are unchanged

Note that all the permeability fields used are modified to fit our computational domain and are shown in Figure 4 for a clear comparison. In the Layer 35 permeability field, four sharp features can be observed: a blue strip on the very left side, with a reversed but narrower red strip next to it; a low permeability area in the top right corner; and, just below it, a small high permeability area in the bottom right corner. Layer 34, which is similar but has slightly different features, is therefore chosen to test the generalization capacity of our neural network.
Before showing the training and testing records of the neural network, we further present realizations of the logarithmic permeability and the permeability for the two sets of stochastic parameters in Figure 5. The sets of $\theta_k$ used in these two columns of realizations are the same. We can thus see that the change in parameters does change the stochastic behaviour. We present the training and testing records of the neural network below in Table 3 and Table 4, where we specify the expected functions considered in the first column.
Case | Epochs trained | Training NRMSE
Layer 35 | 2.62e+05 | 1.52e-02

Table 3: Training record for the Layer 35 expected function.
Case | Testing NRMSE | Iteration number | Minimum eigenvalue | Maximum eigenvalue
Layer 35 | 2.48e-02 | 3.65e-02 (1) | 7.52e-06 (7.50e-05) | 1.36e-03 (6.66e-02)
Layer 35 (modified parameters) | 2.27e-02 | 3.89e-02 (1) | 7.57e-06 (7.17e-05) | 1.27e-03 (6.59e-02)
Layer 34 | 5.08e-02 | 2.52e-03 (1) | 3.25e-05 (1.74e-04) | 3.06e-02 (8.47e-02)

Table 4: Testing records; the last three columns report the sMAPE, with the infinity norm error in parentheses.
In Table 4, we clarify that the modified Layer 35 case also uses the Layer 35 permeability as the expected function, but with the changed stochastic parameters. Each row corresponds to the results of a different testing set; however, all the results are obtained using the same neural network of Layer 35 from Table 3. Therefore, the testing results of Layer 35 are used as a reference to decide whether the results of the modified Layer 35 and Layer 34 cases are good or not.
We first focus on the testing results of Layer 35. From the corresponding histograms in the left column of Figure 6, besides the fact that the iteration number required for the predicted eigenvectors is still usually larger by 1, we can observe a large area of overlap; in particular, only about 10% of the data are not overlapped in the histogram of maximum eigenvalues. Moreover, the small NRMSE and sMAPE again agree with the graphical results. For the generalization tests, we then consider two other testing sets with a different stochastic behaviour and a different expected function.
When the stochastic behaviour of Layer 35 is changed, we list the corresponding sMAPE and norms of differences in iteration numbers, maximum eigenvalues and minimum eigenvalues of the preconditioners in the second row of Table 4. The error values are very similar to those in the first row, and the NRMSE is even smaller. Therefore, we expect that the Layer 35 neural network can generalize to coefficient functions with similar stochastic properties and give a good approximation of the dominant eigenvectors for the BDDC preconditioner. This is verified by the histograms in the right column of Figure 6. Moreover, a large population of samples is concentrated around 0 in the histograms of the difference between the estimated and target preconditioners in Figure 7. All of this shows that our method performs well with stochastic oscillatory and high contrast coefficients, as well as with coefficient functions of similar stochastic properties.
Finally, we compare the performances when the expected function is changed to Layer 34. Note that the NRMSE, sMAPE and errors of Layer 34 are all much higher than those of Layer 35, which is a crucial clue of a worse performance. Although the permeability fields of Layer 34 and Layer 35 share some common properties and are of similar magnitude, unlike the previous results, we can see in Figure 8 that the predicted and target maximum and minimum eigenvalues of the preconditioners form two populations with different centres and only a small overlapping area. The main reason is that a minor difference in the logarithmic permeability can already cause a huge change after taking the exponential. Therefore, when the expected function is changed to an entirely different mean permeability with only some common properties, the feedforward neural network may not be able to capture the new characteristics; addressing this will be part of our future research.
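This amplification is easy to quantify: a shift of delta in the log-permeability multiplies the permeability by exp(delta), so even modest differences in the mean log field translate into large coefficient contrasts. A small illustration (the numbers here are ours, not the paper's):

```python
import numpy as np

# A shift delta in the log-permeability scales the permeability by exp(delta)
log_k = 2.0  # arbitrary illustrative log-permeability value
for delta in (0.1, 0.5, 1.0):
    ratio = np.exp(log_k + delta) / np.exp(log_k)  # equals exp(delta)
    print(delta, round(float(ratio), 3))  # 0.1 -> 1.105, 0.5 -> 1.649, 1.0 -> 2.718
```

So a log-permeability error that looks small in absolute terms can still nearly triple the coefficient pointwise, which is consistent with the degraded Layer 34 results.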
5 Conclusion
A new learning adaptive BDDC algorithm is introduced, consisting of three main parts: the Gaussian random variables in the Karhunen-Loève expansion, an artificial neural network, and the dominant eigenvectors obtained from the adaptive BDDC algorithm for the coarse spaces. The neural network acts as a fast computational tool that turns the input Gaussian random variables into predicted dominant eigenvectors as output. In addition, the neural network in the proposed algorithm is applicable to other permeability coefficients with similar stochastic properties, which serves the purpose of generalization. Numerical results confirm the efficiency and generalization abilities of the proposed algorithm. On the other hand, we currently use the simplest feedforward neural network structure without retraining
when new characteristics are entered. In future work, we will improve the neural network and include deep learning techniques for higher accuracy in prediction and preconditioning, as well as better generalization to a wider range of applications.
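For concreteness, the learning component summarized above, a feedforward map from the KL Gaussian random variables to dominant eigenvector entries, can be sketched as follows; the layer sizes, tanh activation and random (untrained) weights are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_theta, n_hidden, n_out = 20, 64, 128  # assumed: KL terms, hidden units, eigenvector entries

# Illustrative untrained weights; in practice these would be fitted to
# dominant eigenvectors computed by the adaptive BDDC algorithm.
W1 = 0.1 * rng.standard_normal((n_theta, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_out))
b2 = np.zeros(n_out)

def predict_eigvec(theta):
    """One-hidden-layer feedforward pass: theta -> predicted eigenvector entries."""
    h = np.tanh(theta @ W1 + b1)
    return h @ W2 + b2

theta = rng.standard_normal(n_theta)  # one sample of the KL Gaussian variables
v = predict_eigvec(theta)
print(v.shape)  # (128,)
```

Once trained, such a forward pass replaces the eigenvalue solve for each new realization, which is the source of the speed-up.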
References
 [1] (1969) Series and parallel addition of matrices. J. Math. Anal. Appl. 26, pp. 576–594. Cited by: §2.3.
 [2] (2007) A stochastic collocation method for elliptic partial differential equations with random input data. SIAM Journal on Numerical Analysis 45 (3), pp. 1005–1034. Cited by: §1.
 [3] (2004) Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM Journal on Numerical Analysis 42 (2), pp. 800–825. Cited by: §1.
 [4] (2020) Machine learning for fluid mechanics. Annual Review of Fluid Mechanics 52, pp. 477–508. Cited by: §1.
 [5] (2021) A multistage deep learning based algorithm for multiscale model reduction. Journal of Computational and Applied Mathematics, pp. 113506. Cited by: §1.
 [6] (2018) An adaptive generalized multiscale discontinuous Galerkin method for high-contrast flow problems. Multiscale Modeling & Simulation 16 (3), pp. 1227–1257. Cited by: §1.
 [7] (2014) An adaptive GMsFEM for high-contrast flow problems. Journal of Computational Physics 273, pp. 54–76. Cited by: §1.
 [8] (2013) Modern domain decomposition solvers: BDDC, deluxe scaling, and an algebraic approach. http://people.ricam.oeaw.ac.at/c.pechstein/pechsteinbddc2013.pdf. Cited by: §2.3.
 [9] (2003) A preconditioner for substructuring based on constrained energy minimization. SIAM J. Sci. Comput. 25 (1), pp. 246–258. Cited by: §1, §2.1, §2.2.
 [10] (2006) Coarsegradient langevin algorithms for dynamic data integration and uncertainty quantification. Journal of computational physics 217 (1), pp. 123–142. Cited by: §1.
 [11] (2003) Stochastic finite elements: a spectral approach. Courier Corporation. Cited by: §1.
 [12] (2021) Combining machine learning and domain decomposition methods for the solution of partial differential equations – a review. GAMM-Mitteilungen 44 (1), pp. e202100001. Cited by: §1.
 [13] (2015) A BDDC algorithm with enriched coarse spaces for two-dimensional elliptic problems with oscillatory and high contrast coefficients. Multiscale Modeling & Simulation 13 (2), pp. 571–593. Cited by: §1, §2.
 [14] (2017) BDDC and FETI-DP preconditioners with adaptive coarse spaces for three-dimensional elliptic problems with oscillatory and high contrast coefficients. Journal of Computational Physics 349, pp. 191–214. Cited by: §1, §1, §2.3, §2.
 [15] (2014) FETI-DP with different scalings for adaptive coarse spaces. Proceedings in Applied Mathematics and Mechanics. Cited by: §2.3.
 [16] (2016) A comparison of adaptive coarse spaces for iterative substructuring in two dimensions. Electron. Trans. Numer. Anal. 45, pp. 75–106. Cited by: §2.2.
 [17] (2017) Deep learning in fluid dynamics. Journal of Fluid Mechanics 814, pp. 1–4. Cited by: §1.
 [18] (2006) FETI-DP, BDDC, and block Cholesky methods. Internat. J. Numer. Methods Engrg. 66 (2), pp. 250–271. Cited by: §2.1, §2.2.
 [19] (2021) DeepXDE: a deep learning library for solving differential equations. SIAM Review 63 (1), pp. 208–228. Cited by: §1.
 [20] (1993) Accuracy measures: theoretical and practical concerns. International journal of forecasting 9 (4), pp. 527–529. Cited by: §4.2.
 [21] (2005) An algebraic theory for primal and dual substructuring methods by constraints. Appl. Numer. Math. 54 (2), pp. 167–193. Cited by: §1, §2.1, §2.2.
 [22] (1993) A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6 (4), pp. 525–533. Cited by: §3.2.
 [23] (2006) Karhunen-Loève approximation of random fields by generalized fast multipole methods. Journal of Computational Physics 217 (1), pp. 100–122. Cited by: §3.1.
 [24] (2005) Domain decomposition methods – algorithms and theory. Springer Series in Computational Mathematics, Vol. 34, Springer-Verlag, Berlin. Cited by: §2.1, §2.2.
 [25] (2020) Learning macroscopic parameters in nonlinear multiscale simulations using nonlocal multicontinua upscaling techniques. Journal of Computational Physics 412, pp. 109323. Cited by: §1.
 [26] (2008) Karhunen-Loève expansions and their applications. Ph.D. Thesis, London School of Economics and Political Science (United Kingdom). Cited by: §3.1.
 [27] (2020) Deep multiscale model learning. Journal of Computational Physics 406, pp. 109071. Cited by: §1.
 [28] (2011) A multiscale preconditioner for stochastic mortar mixed finite elements. Computer Methods in Applied Mechanics and Engineering 200 (9–12), pp. 1251–1262. Cited by: §1.
 [29] (2020) A deep learning based nonlinear upscaling method for transport equations. arXiv preprint arXiv:2007.03432. Cited by: §1.
 [30] (2004) An efficient, higher-order perturbation approach for flow in randomly heterogeneous porous media via Karhunen-Loève decomposition. J. Comput. Phys. 194 (2), pp. 773–794. Cited by: §1, §3.

 [31] (2016) Spectral–spatial feature extraction for hyperspectral image classification: a dimension reduction and deep learning approach. IEEE Transactions on Geoscience and Remote Sensing 54 (8), pp. 4544–4554. Cited by: §1.