Modeling Material Stress Using Integrated Gaussian Markov Random Fields

November 6, 2019 · Peter W. Marcy, et al.

The equations of a physical constitutive model for material stress within tantalum grains were solved numerically using a tetrahedrally meshed volume. The resulting output included a scalar von Mises stress for each of the more than 94,000 tetrahedra within the finite element discretization. In this paper, we define an intricate statistical model for the spatial field of von Mises stress that uses the given grain geometry in a fundamental way. Our model relates the three-dimensional field to integrals of latent stochastic processes defined on the vertices of the one- and two-dimensional grain boundaries. An intuitive neighborhood structure of these boundary nodes suggested the use of a latent Gaussian Markov random field (GMRF). However, despite the potential for computational gains afforded by GMRFs, the integral nature of our model and the sheer number of data points pose substantial challenges for a full Bayesian analysis. To overcome these problems and encourage efficient exploration of the posterior distribution, we combine several techniques: parallel computing, sparse matrix methods, and a modification of a block update strategy within the sampling routine. In addition, we use an auxiliary variables approach to accommodate the presence of outliers in the data.


1 Introduction

Scientists who study the mechanics of materials at Los Alamos National Laboratory are interested in the fracture properties of real as well as simulated tantalum. Physical experiments conducted at the macro-scale (i.e., much larger than single-crystal length) provide loading conditions to be used in meso-scale (where single crystals can be resolved) computational dynamic models of this material. The need for the computational models stems from the following reasoning. The outcomes of the physical shock-loading experiments are tantalum plates with nucleated pores; the plates provide information on the end result, but provide no insight into the mechanics of the process leading to the damage. Because the initiation mechanisms are not directly measurable but still of interest, scientists have developed constitutive models of the internal stress-strain relationships to partly bridge this gap. Owing to different phenomenology, models exist at both the meso- and macro-scales. The ultimate goal of the researchers is to link meso- and macro-scale computational models in order to understand pore nucleation in tantalum. A necessary first goal is to understand the spatial distribution of stress within the idealized meso-scale tantalum immediately preceding damage formation.

This paper represents the first step in a recent collaboration of theoretical materials scientists and statisticians to better understand the non-stationary spatial distribution of stress within a simulated collection of tantalum grains that have been subjected to shock-loading conditions. The materials scientists initially sought out statisticians because many material properties and mechanisms are statistical in nature. Further, while the materials modelers are able to write constitutive equations for the stress-strain relationships within tantalum, these equations do not naturally describe the spatial variability throughout a representative volume, and in particular, near grain boundaries. Because the three-dimensional distribution of stress throughout the grain network is important for the understanding of damage initiation, the authors have begun to explore empirical, data-driven (meta-)models. This work represents our initial attempt to parameterize the tantalum stress fields through a nontraditional spatial statistical model. Given the uniqueness of the dataset, the novel model it suggests, and the variety of practical computational techniques necessary to fit said model, we think this paper will be of interest to a wide applied statistical audience.

While there are many potential flavors of spatial model to choose from, there are three important aspects of the simulated tantalum dataset that narrow the search. First is the natural neighborhood structure provided by the finite element techniques used to calculate the stress: two spatial locations can be considered neighbors if their volume elements touch. The second consideration is the sheer amount of data: the volume of output produced by the materials codes is an immediate and sizable hurdle for the fitting of any potential spatial statistical model. The third consideration is the non-stationary nature of the stress fields: variability tends to increase near grain boundaries. All three of these aspects point to the use of Gaussian Markov random fields (GMRFs) for the tantalum stress fields. GMRF models can use a neighborhood structure to specify the entries of the precision (as opposed to the covariance) matrix. Not having to repeatedly invert the covariance matrix then leads to computationally efficient inference for large and possibly non-stationary spatial datasets [17, 15].

On the practical side, it is possible to fit a wide class of (latent) GMRF models using the R-INLA software. This R package facilitates approximate Bayesian inference using an integrated nested Laplace approximation [18]. However, this software cannot be used when there are more than four hyperparameters and/or the likelihood is sufficiently complicated. In addition, the methodology of [15] cannot be used for non-stationary models without knowledge of the corresponding stochastic partial differential equations (SPDEs).

In this paper we define a unique and intricate Bayesian non-stationary GMRF model for simulated tantalum which precludes the use of R-INLA and does not use SPDEs. It is based on an equation relating the 3D stress field to integrals of latent stochastic processes defined on the vertices of 2D and 1D grain boundaries, and its Bayesian inference requires modified sampling strategies within the Markov chain Monte Carlo (MCMC) routine. The modifications we propose are directly generalizable to other high-dimensional GMRFs whose inference necessitates MCMC.

We start by providing more details of the materials science background in Section 2; this will provide much-needed context for the structure and nature of the dataset. Next we specify our model in Sections 3 and 4. The sampling routines for a full Bayesian analysis are presented in Section 5, and results of the analysis are presented in Section 6. We end with conclusions and comment on the need for extensions.

2 Computational Materials Science Background

The data we analyze come from a physical model (to be distinguished from our statistical "meta"-model) of how tantalum crystallites (grains) respond to stress loading conditions. Specifically, it is a thermo-mechanically coupled elasto-viscoplastic single-crystal constitutive model in which the dominant physical feature is the interaction of crystals within the polycrystal volume. A more in-depth description of the mathematical equations can be found in [5, Sec. 3.2]. The collection of coupled differential equations was solved numerically on a representative volume of tantalum grains using Abaqus 6.12 [19].

Figure 1: A cutout of the full representative volume (left) showing the subset of 18 fully internal grains (right). Blue indicates low (535 MPa) and red indicates high (1147 MPa) von Mises stress.

The finite element discretization and boundary conditions used by the numerical solvers were formulated using experimental measurements. A representative three-dimensional polycrystalline cube (side length 165 μm) with 70 tantalum grains was generated with the open-source software utility Dream3D [9], using information derived from the two-dimensional electron-backscatter diffraction data of [5]. In general, experimental orientation image maps from the three primary orthogonal directions of a material are used by Dream3D to train a statistical model of the grain size, shape, and orientation for the microstructure. This model can then be used by Dream3D to generate as many statistically equivalent microstructures as needed. In our case, the resulting volume contained 577,445 conformally meshed tetrahedral computational elements on 106,904 vertices/nodes. The finite elements along grain boundaries are smaller in order to provide higher resolution of the spatio-temporal mechanics in these regions. The meshing algorithms within Dream3D are not perfect and are constantly being refined by the developers: some jagged artifacts of the conformal meshing can be seen where three grains meet (Figures 1, 4, and 10), but this was the state of our spatial data at the time of generation.

The time-dependent boundary conditions for the compressive forces acting perpendicular to the cube-faces of the computational polycrystal were derived from macro-scale simulations of plate impact experiments of [4]. The loading history was imposed up to the point at which the simulations suggested pore nucleation in the tantalum-on-tantalum plate experiments. This is the sole time point for which we consider the output.

The output of Abaqus includes many state variables, including the tensor-valued quantities strain, strain rate, and stress. In this study, we consider only von Mises stress (in units of megapascals, MPa), a scalar reduction of the Cauchy stress tensor, because of its direct relevance to material yield and damage initiation [16].

Figure 2: The variability of von Mises stress decreases as distance from grain boundaries increases.

The spatial domain was subsetted to remove potential boundary effects induced by the forces acting upon the cube's faces. We consider the 18 complete grains which do not intersect any cube face; these internal grains contain a still-formidable 94,274 tetrahedral elements on 60,966 nodes (10,851 of these being boundary nodes). The subsetted data are shown in Figures 1 and 10.

Initial explorations of the stress field revealed a few important features to include in our statistical model. First, grains tend to have their own baseline von Mises stress, suggesting a grain-specific mean. Next, there is clear spatial correlation, with grain boundaries appearing to generate areas of both high and low stress. However, the effect of grain boundaries is not entirely consistent: some boundaries separate areas of relatively similar stress while others separate high- and low-stress regions. A decay in variability as a function of distance to second- and third-order grain boundaries can be seen in Figure 2. (Second-order boundaries are surfaces between two grains, while third-order boundaries are intersection curves between three grains.) Finally, there are outliers of (usually high) stress which are unlike surrounding values; this suggests that any reasonable statistical model will need an error distribution with heavy tails. In the next section we detail a model that can accommodate all the above features.

3 Bayesian Hierarchical Model: Likelihood

We now define a scalar random field model for von Mises stress throughout the 18-grain volume. Let $y(\mathbf{s})$ be the von Mises stress at spatial location $\mathbf{s}$, and let $g(\mathbf{s})$ be the grain at location $\mathbf{s}$. The idealized version of our model relates the expected stress at a location in three-dimensional space to averages of unobserved functions of one and two dimensions:

$$\mathrm{E}[y(\mathbf{s})] = \mu_{g(\mathbf{s})} + \int_{\partial_2 G_{g(\mathbf{s})}} e^{-\phi_Z d(\mathbf{s},\mathbf{u})}\, Z_{g(\mathbf{s})}(\mathbf{u})\, \mathrm{d}\mathbf{u} + \int_{\partial_3 G_{g(\mathbf{s})}} e^{-\phi_W d(\mathbf{s},\mathbf{u})}\, W_{g(\mathbf{s})}(\mathbf{u})\, \mathrm{d}\mathbf{u}, \qquad (1)$$

where $\mu_g$ is the baseline stress in grain $g$, $\phi_Z, \phi_W > 0$, and $d(\mathbf{s},\mathbf{u})$ is the Euclidean distance between spatial locations $\mathbf{s}$ and $\mathbf{u}$ in $\mathbb{R}^3$. $\partial_2 G_g$ is the second-order boundary of grain $g$, i.e., the two-dimensional manifold where grain $g$ contacts neighboring grains, and it is the domain of the unobserved function $Z_g(\cdot)$. Hence, the integral over $\partial_2 G_g$ is a surface integral. Similarly, $\partial_3 G_g$ represents the third-order boundary of grain $g$, i.e., the union of one-dimensional manifolds where grain $g$ simultaneously contacts any two (or more) neighboring grains. It is the domain of the function $W_g(\cdot)$, and the corresponding integral is therefore a sum of line integrals.

Figure 3 displays the domains $\partial_2 G_g$ and $\partial_3 G_g$ for a simple geometry of three grains. Each point on a second-order boundary which is not part of a higher-order boundary (e.g., the smaller dots in the left panel of the figure) is represented exactly twice in the integrals of (1); each point on a third- or higher-order boundary (e.g., the larger dot) is represented six times. The red lines in the center and right panels of the figure connect the multiple representations. Having distinct functions defined on a common boundary allows for different local spatial behavior on either side of the boundary. Furthermore, the exponential kernel in the integrals of (1) permits a distance-weighted averaging that can capture decaying trends such as those in Figure 2.

Figure 3: A simple geometry of three grains (left panel). The second- and third-order boundary domains for the integrals of (1) are displayed in the center and right panels (respectively). The red lines connect the multiple representations of the original four boundary points.

The relation defined in (1) is clearly an integral equation, and for this reason is "idealized" because its parameter set is actually a collection of functions $\{Z_g, W_g\}$. In fact, from a mathematical perspective, it is a Fredholm integral equation of the first kind [21, Ch. 8], and because the kernel is symmetric, this problem of recovering the unknown functions is called deconvolution. Furthermore, when the kernel parameters $\phi_Z$ and $\phi_W$ are not given, the scenario is referred to as blind deconvolution [22]. From a statistical perspective, by assuming the functions are stochastic processes (and being careful with the definition of the integral) one specifies a "kernel mixing" or "process convolution" model [11, 1]. Our model in (1) differs from the typical process convolution because the domain of integration is of lower dimension than the spatial domain; the expected stress throughout the volume will be determined completely by the behavior at the one- and two-dimensional grain boundaries.

Our actual model utilizes the discrete geometry from the finite element approximation. The spatial location $\mathbf{s}_i$ of a particular observed datum is taken to be the centroid of its tetrahedral computational element, i.e., the coordinate average of the four vertices/nodes. Moreover, the entire representative volume (cube of 70 grains) is conformally meshed with tetrahedral elements so that the region where two grains meet is a common collection of triangles. That is, each two-dimensional manifold $\partial_2 G_g$ is approximated by the set of nodes belonging to tetrahedral faces of the boundary. The region where three grains meet is a line segment defined by a collection of common vertices, and all such line segments for a grain form $\partial_3 G_g$. Note that $\partial_3 G_g \subset \partial_2 G_g$. Consequently, the process $Z_g$ (and $W_g$) will be characterized at the discrete node points in $\partial_2 G_g$ ($\partial_3 G_g$) to carry out a quadrature approximation of (1).
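
To make the quadrature version of (1) concrete, the following R sketch builds the exponential kernel matrix that maps second-order boundary-node values to element centroids; all names and values here are illustrative assumptions, not the paper's implementation.

```r
## A toy quadrature version of (1): kernel matrix mapping second-order
## boundary-node values to element centroids.
set.seed(1)
n <- 50                                   # tetrahedral elements (toy)
m <- 12                                   # second-order boundary nodes (toy)
S <- matrix(runif(3 * n), n, 3)           # element centroids s_i
U <- matrix(runif(3 * m), m, 3)           # boundary node coordinates u_j
g_elem <- sample(1:2, n, replace = TRUE)  # grain index of each element
g_node <- sample(1:2, m, replace = TRUE)  # grain whose process lives at u_j
phi_Z <- 0.6                              # kernel decay parameter (toy value)

## K[i, j] = exp(-phi_Z * d(s_i, u_j)) * 1{element i and node j share a grain};
## the indicator makes K block diagonal after sorting by grain, as in (4)
D <- as.matrix(dist(rbind(S, U)))[1:n, n + (1:m)]
K <- exp(-phi_Z * D) * outer(g_elem, g_node, "==")

mu <- c(550, 900)                         # made-up grain baselines (MPa)
Z  <- rnorm(m, 0, 10)                     # toy boundary process values
Ey <- mu[g_elem] + K %*% Z                # discretized mean of (1)
```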

The model for the von Mises stress $y_i$ of element $i$ having centroid $\mathbf{s}_i$ within grain $g(i)$ is given by

$$y_i = \mu_{g(i)} + \sum_{j \in \partial_2 G_{g(i)}} e^{-\phi_Z d(\mathbf{s}_i, \mathbf{u}_j)}\, Z_{g(i)}(\mathbf{u}_j) + \sum_{k \in \partial_3 G_{g(i)}} e^{-\phi_W d(\mathbf{s}_i, \mathbf{u}_k)}\, W_{g(i)}(\mathbf{u}_k) + \varepsilon_i, \qquad (2)$$
$$\varepsilon_i \mid \lambda_i, \sigma^2 \;\stackrel{ind}{\sim}\; N(0, \sigma^2 \lambda_i). \qquad (3)$$

Let $\mathbf{Z}_g$ represent a column vector of the elements in $\{Z_g(\mathbf{u}_j) : j \in \partial_2 G_g\}$, sorted by increasing node index. Similarly, let $\mathbf{W}_g$ represent the sorted vector of elements in $\{W_g(\mathbf{u}_k) : k \in \partial_3 G_g\}$. Let $\mathbf{Z}$ and $\mathbf{W}$ be the combined vectors of second- and third-order grain boundary processes for all grains; $\dim(\mathbf{Z}) = n_Z$ and $\dim(\mathbf{W}) = n_W$. This defines a vectorized form of the model:

$$\mathbf{y} = M\boldsymbol{\mu} + K_Z \mathbf{Z} + K_W \mathbf{W} + \boldsymbol{\varepsilon}, \qquad [K_Z]_{ij} = e^{-\phi_Z d(\mathbf{s}_i, \mathbf{u}_{b(j)})}\, \mathbf{1}\{g(i) = g(j)\}, \qquad (4)$$

where the $g$th column of $M$ is a vector of 1's for the elements within grain $g$ and 0's elsewhere; $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_{18})^\top$; $g(i)$ gives the grain index of element $i$ and $b(j)$ gives the second-order boundary node index corresponding to $\mathbf{Z}$-index $j$; $\mathbf{1}\{\cdot\}$ is an indicator function. The matrix $K_W$ is defined similarly; note that the indicator function makes $K_Z$ and $K_W$ block diagonal. By definition, a node in a third-order boundary is also a node in a second-order boundary, implying that columns of $K_W$ will be columns of $K_Z$ for $\phi_W = \phi_Z$. Multicollinearity is also expected within the columns of $K_Z$ because of the large number of small tetrahedra on the boundaries. In the language of applied mathematics, the inverse problem is then said to be ill-conditioned (on top of being ill-posed). Hence, we will allow for correlated entries through their prior. More on this in Section 4.1.

Before we explicitly write the likelihood we need one more result.

3.1 Auxiliary Variables for Heavy-Tailed Errors

The Student-$t$ distribution can be interpreted as a scale mixture of normal distributions due to the following result.

Fact 1. If $u \sim N(0, \sigma^2)$ and $\lambda \sim \text{Inv-Gamma}(\nu/2, \nu/2)$ are independent, then the quantity $\varepsilon = \sqrt{\lambda}\, u$ follows a $t_\nu(0, \sigma^2)$ distribution.

This implies that $\varepsilon_i \mid \lambda_i \sim N(0, \sigma^2 \lambda_i)$ and $\lambda_i \sim \text{Inv-Gamma}(\nu/2, \nu/2)$ provide marginal Student-$t$ distributions: $\varepsilon_i \sim t_\nu(0, \sigma^2)$. Hence we obtain a regression model with heavy-tailed errors by introducing an auxiliary/latent-data vector $\boldsymbol{\lambda}$ (having the same length as the original data $\mathbf{y}$) which can easily be updated with a Gibbs step [e.g., 8; 7, Sections 12.1 and 17.2].
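
A quick R sketch verifying Fact 1 by simulation (purely illustrative values):

```r
## Simulation check of Fact 1: the scale mixture sqrt(lambda) * u matches
## a direct Student-t draw in distribution.
set.seed(2)
nu <- 4; sigma <- 2; N <- 1e5
lambda <- 1 / rgamma(N, shape = nu / 2, rate = nu / 2)  # Inv-Gamma(nu/2, nu/2)
eps    <- sqrt(lambda) * rnorm(N, 0, sigma)             # scale mixture of normals
t_dir  <- sigma * rt(N, df = nu)                        # direct t_nu(0, sigma^2)
qqplot(eps, t_dir); abline(0, 1)                        # points fall on the line
```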

Finally, the likelihood for the parameters of our model is

$$\mathbf{y} \mid \boldsymbol{\mu}, \mathbf{Z}, \mathbf{W}, \phi_Z, \phi_W, \sigma^2, \boldsymbol{\lambda} \;\sim\; N\!\big(M\boldsymbol{\mu} + K_Z\mathbf{Z} + K_W\mathbf{W},\; \sigma^2 \mathrm{diag}(\boldsymbol{\lambda})\big). \qquad (5)$$

The matrices $K_Z$ and $K_W$ depend on $\phi_Z$ and $\phi_W$, respectively. The priors for the parameters above will be given next.

4 Bayesian Hierarchical Model: Priors

4.1 Priors on $\mathbf{Z}$ and $\mathbf{W}$

Most kernel convolution models feature a Gaussian mixing distribution, but extensions are possible (see, e.g., the overview and references in [1]). In particular, instead of assuming the elements of $\mathbf{Z}$ are independent normals, we will allow for spatial correlation (to counter the ill-conditioned nature of the convolution) by assuming the entire vector follows a multivariate normal distribution corresponding to a Gaussian Markov random field (GMRF). Recent examples of GMRFs as priors in deconvolution-style inverse problems include [11, 13, 14, 2, 23, 24]. The random field models for $\mathbf{Z}$ and $\mathbf{W}$ will determine a random field for von Mises stress conditional upon $\phi_Z$ and $\phi_W$. In fact, the latent fields can be quite rough, and the resulting "integrated" field for von Mises stress will be continuous, perhaps even differentiable.

A GMRF facilitates computational efficiency for large spatial models by being defined directly through a sparse precision (inverse covariance) matrix [17, 15]. GMRF models are naturally applicable when the data possess a neighborhood (vertex and edge) structure: the $(i,j)$th precision entry will be non-zero if and only if the neighborhood structure includes edge $(i,j)$. In essence, the computationally efficient inference comes at the cost of having to be defined through conditional moments. This stands in contrast to traditional Gaussian process models, which are easily specified through unconditional means and covariances, but whose associated parameter estimation is computationally infeasible for large datasets.
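
To make the computational point concrete, the following R sketch (using the Matrix package named in Section 5.3) simulates a draw from a GMRF through a sparse Cholesky factor of its precision matrix; the tridiagonal precision here is a toy stand-in for illustration, not the precision of our model.

```r
## Simulating from N(0, Q^{-1}) via a sparse Cholesky factor of the precision Q.
library(Matrix)
set.seed(6)
n <- 5000
Q <- bandSparse(n, k = 0:1,
                diagonals = list(rep(2.01, n), rep(-1, n - 1)),
                symmetric = TRUE)                  # sparse precision matrix
L <- Cholesky(Q, perm = FALSE, LDL = FALSE)        # Q = L L' (sparse factor)
x <- as.vector(solve(L, rnorm(n), system = "Lt"))  # x = L^{-T} z ~ N(0, Q^{-1})
```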

Figure 4: The 18-grain volume with boundary triangulations visible.

The geometry of the tetrahedrally meshed 18-grain volume lends itself naturally to a GMRF on the second- and third-order boundaries. We describe the model for $\mathbf{Z}$; the extension to $\mathbf{W}$ will be abundantly clear. There can be many thousands of second-order nodes in each $\partial_2 G_g$, and this means a large $n_Z$. However, there is a very intuitive neighborhood structure: two nodes are neighbors if and only if they are vertices of a common tetrahedral element. For this reason $\mathbf{Z}_g$ is assumed to be a GMRF on $\partial_2 G_g$, and further, the entire collection of grain boundary processes is assumed to be statistically dependent through partial correlations at shared boundaries. This is to say, the entire vector $\mathbf{Z}$ is a GMRF whose definition necessitates further notation. Let $\mathrm{ne}_g(j)$ denote the neighbors of node $j$ on the boundary of grain $g$. Let $\mathrm{ne}(g)$ denote the neighboring grains of grain $g$, and $\mathrm{ne}_j(g)$ denote only those neighboring grains that also share node $j$, i.e., $\mathrm{ne}_j(g) = \{g' \in \mathrm{ne}(g) : \mathbf{u}_j \in \partial_2 G_{g'}\}$.

It is assumed that $Z_g(\mathbf{u}_j)$, conditional on all other elements of $\mathbf{Z}$, depends only on $\mathrm{wgn}(j) = \{Z_g(\mathbf{u}_k) : k \in \mathrm{ne}_g(j)\}$ and $\mathrm{bgn}(j) = \{Z_{g'}(\mathbf{u}_j) : g' \in \mathrm{ne}_j(g)\}$. The first set represents within-grain neighbors ("wgn"): the same boundary process at physically adjacent nodes; the second set is the between-grain neighbors ("bgn"): different processes at the same node. Again, refer to Figure 3, in which the bgn's are connected with red lines.

The model for the random field $\mathbf{Z}$ ($\mathbf{W}$ and its parameters defined analogously) is

$$\mathbf{Z} \mid \eta_Z, \tau_Z, \gamma_Z, \beta_Z \;\sim\; N\!\big(\eta_Z \mathbf{1},\; (\tau_Z Q_Z)^{-1}\big), \qquad (6)$$
$$[Q_Z]_{jk} = \begin{cases} n^w_j + \beta_Z n^b_j, & k = j,\\ -\gamma_Z, & k \in \mathrm{wgn}(j),\\ -\gamma_Z \beta_Z, & k \in \mathrm{bgn}(j),\\ 0, & \text{otherwise,} \end{cases} \qquad (7)$$

where $\eta_Z$ is the mean of the process, constant over all grain boundaries. The cardinalities $n^w_j = |\mathrm{wgn}(j)|$ and $n^b_j = |\mathrm{bgn}(j)|$ are precisely the number of wgns and bgns of node $j$, and so it can be observed that the matrix $Q_Z$ is diagonally dominant when $0 < \gamma_Z < 1$ and

$$\beta_Z > \beta_Z^{\min}, \qquad (8)$$

where the lower bound $\beta_Z^{\min}$ is determined by the neighborhood counts of the mesh. Diagonal dominance is a sufficient condition for the positive definiteness of $Q_Z$. A similar GMRF parameterization was advocated by [20] for use with multiple, distinct neighbor types.

Properties of GMRFs (e.g., [17, Thm. 2.3]) can be used to gain insight into the parameterization above:

$$\mathrm{E}\big[Z_j \mid \mathbf{Z}_{-j}\big] = \eta_Z + \gamma_Z \left( \frac{\sum_{k \in \mathrm{wgn}(j)} (Z_k - \eta_Z) + \beta_Z \sum_{k \in \mathrm{bgn}(j)} (Z_k - \eta_Z)}{n^w_j + \beta_Z n^b_j} \right), \qquad \mathrm{Var}\big[Z_j \mid \mathbf{Z}_{-j}\big] = \frac{1}{\tau_Z \big(n^w_j + \beta_Z n^b_j\big)}. \qquad (9)$$

The parenthesized factor in (9) is a weighted average of the neighbors, where $\beta_Z$ controls the relative weight between the wgns and bgns. The $\gamma_Z$ then sets the proportion of the weighted average to use for the conditional auto-regression, i.e., it specifies how smooth the $Z$ process is across neighbors. The conditional variance is a function of the number of neighbors, and intuitively these quantities are inversely proportional. Though the $Z$ process is not actually stationary, $\tau_Z$ controls the conditional precision: smaller $\tau_Z$ leads to a larger variance.
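
The following R sketch assembles a small precision matrix of the form (7) for a toy wgn/bgn graph and checks positive definiteness; the edge lists and parameter values are illustrative assumptions only.

```r
## Toy precision of the form (6)-(7) built from within-grain ("wgn") and
## between-grain ("bgn") edge lists.
library(Matrix)
nZ  <- 6
wgn <- cbind(c(1, 2, 4, 5), c(2, 3, 5, 6))  # within-grain neighbor pairs
bgn <- cbind(c(1, 2, 3), c(4, 5, 6))        # between-grain pairs (shared nodes)
tau <- 2; gam <- 0.6; beta <- -0.2          # tau_Z, gamma_Z, beta_Z (toy values)

edges <- rbind(wgn, bgn)
wts   <- c(rep(-gam, nrow(wgn)), rep(-gam * beta, nrow(bgn)))  # off-diagonals of (7)
Q <- sparseMatrix(i = c(edges[, 1], edges[, 2]),
                  j = c(edges[, 2], edges[, 1]),
                  x = rep(wts, 2), dims = c(nZ, nZ))
nw <- tabulate(c(wgn), nZ)                  # n^w_j: wgn count per node
nb <- tabulate(c(bgn), nZ)                  # n^b_j: bgn count per node
diag(Q) <- nw + beta * nb                   # diagonal of (7)
Q <- tau * Q                                # full precision tau_Z * Q_Z
all(eigen(as.matrix(Q), only.values = TRUE)$values > 0)  # TRUE: positive definite
```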

4.2 Remaining Priors and Hyperpriors

Again, in order to get a Student-$t$ likelihood under the hierarchical specification, we need the latent variables $\lambda_i$ to be distributed inverse-gamma; we also let the degrees of freedom $\nu$ vary:

$$\lambda_i \mid \nu \;\stackrel{iid}{\sim}\; \text{Inv-Gamma}(\nu/2, \nu/2), \qquad i = 1, \ldots, n.$$

The other priors and hyperpriors are as follows:

$$\phi_Z \sim \text{log-normal}\big(\log 0.6,\; 2\log\tfrac{4}{3}\big), \quad \phi_W \sim \text{log-normal}\big(\log 0.8,\; 2\log\tfrac{5}{4}\big), \quad \gamma_Z, \gamma_W \sim \text{Beta}(6.4, 1.6), \quad \beta_Z, \beta_W \sim \text{Unif}(-0.4, 1).$$

The parameters of the log-normal distributions were chosen so that the prior median and mean of $\phi_Z$ would be 0.6 and 0.8, and similarly, such that these quantities for $\phi_W$ would be 0.8 and 1.0. The parameters of the beta distribution were chosen so that the prior mean and mode of the $\gamma$ variables would be 0.8 and 0.9, implying somewhat smooth latent processes. Using (8), $\beta_Z$ and $\beta_W$ need to be greater than $-0.75$ and $-0.5$ (respectively), but using these as lower bounds in the uniform priors led to poor conditioning of the precision matrices, especially for moderately large $\gamma_Z$ or $\gamma_W$. Relatively diffuse priors were used for the remaining (hyper)parameters.

5 Computational Details

We again point out that the impressive computational efficiency afforded by the use of R-INLA for GMRF models was not available to us for a few reasons. First, the fact that $\phi_Z$ and $\phi_W$ are unknown means that the design matrices $K_Z$ and $K_W$ are not fixed (which is not currently supported by R-INLA). Second, the complexity of the prior precision for $\mathbf{Z}$ and $\mathbf{W}$ in (7) cannot easily be accommodated. Last, the INLA approximation accuracy is expected to suffer due to the number of hyperparameters associated with both of these random fields. Hence, we use MCMC sampling to explore the posterior distribution of the full set of parameters. In the next two subsections we give the updates within one iteration of the MCMC for the model defined by the likelihood (5) and the priors in Section 4. We use a Metropolis-within-Gibbs approach whereby groups of parameters are updated using their full conditional distributions; a Metropolis step is used for those sets of parameters whose full conditional is not a known distribution. Specific details about implementation are mentioned last.

5.1 Update of the Latent Fields and Hyperparameters

As pointed out in [13], "Finding an efficient method for updating the [Markov random] field turns out to be an interesting problem." [12] and [17, Sec. 4.1.2] address this problem through the use of block updates; the authors report mixing within the MCMC that is superior to that produced by hybrid Gibbs updates, and for no additional computational cost. Employing such a strategy in our case would entail the joint update of each random field and its hyperparameters, i.e., in two blocks: first $(\boldsymbol{\theta}_Z, \mathbf{Z})$ and then $(\boldsymbol{\theta}_W, \mathbf{W})$. The reasoning will be detailed after introducing some more notation; the discussion focuses on the $\mathbf{Z}$ field but applies directly to the $\mathbf{W}$ field as well.

Suppose that all hyperparameters of the unobserved random field $\mathbf{Z}$ are collected in the vector $\boldsymbol{\theta}_Z$; also let $\pi(\mathbf{Z} \mid \boldsymbol{\theta}_Z, \cdot)$ denote the density of the full conditional posterior evaluated at $\mathbf{Z}$ given $\boldsymbol{\theta}_Z$ and all other quantities. Within a Metropolis step, the jumping rule defined by

$$J\big(\boldsymbol{\theta}_Z^*, \mathbf{Z}^* \mid \boldsymbol{\theta}_Z, \mathbf{Z}\big) = J\big(\boldsymbol{\theta}_Z^* \mid \boldsymbol{\theta}_Z\big)\, \pi\big(\mathbf{Z}^* \mid \boldsymbol{\theta}_Z^*, \cdot\big) \qquad (10)$$

has the same acceptance probability as the $\boldsymbol{\theta}_Z$ update alone because the density values associated with $\mathbf{Z}$'s full conditional cancel in the acceptance ratio.

The preceding algorithm assumes that the entire field can be sampled at once, which is not the case for the current analysis. This is because the full conditional of $\mathbf{Z}$ is normal with mean and covariance that depend on the inverse of $\tau_Z Q_Z + K_Z^\top \Sigma^{-1} K_Z$, with $\Sigma = \sigma^2 \mathrm{diag}(\boldsymbol{\lambda})$, which is a large dense matrix [8, Equation 13]. It is also worth noting that even the formation of the full cross-product $K_Z^\top \Sigma^{-1} K_Z$ is not feasible because the design matrices are relatively dense. We thus propose an extension of the block update strategy of [17, pg. 143].

Instead of proposing a candidate from the density $\pi(\,\cdot \mid \boldsymbol{\theta}_Z^*, \cdot)$ (i.e., the entire vector all at once), our modification uses subblock proposals with a joint acceptance of the whole collection; the resulting Metropolis step then has jumping rule:

$$J\big(\boldsymbol{\theta}_Z^*, \mathbf{Z}^* \mid \boldsymbol{\theta}_Z, \mathbf{Z}\big) = J\big(\boldsymbol{\theta}_Z^* \mid \boldsymbol{\theta}_Z\big) \prod_{b=1}^{B} \pi\big(\mathbf{Z}_b^* \mid \boldsymbol{\theta}_Z^*, \mathbf{Z}_1^*, \ldots, \mathbf{Z}_{b-1}^*, \mathbf{Z}_{b+1}, \ldots, \mathbf{Z}_B, \cdot\big). \qquad (11)$$

The full conditional densities above are exactly those associated with sequential Gibbs updates for the corresponding blocks (the dependence upon $\mathbf{y}$ and the remaining parameters was again suppressed). It can easily be seen that under this proposal rule, the conditional posterior terms will not cancel in the acceptance ratio, meaning that $2B$ density evaluations are necessary for a complete update of $\mathbf{Z}$: $B$ evaluations as prescribed by (11), and $B$ more from the detailed balance computation, i.e., swapping the roles of the current and proposed states. We have observed that this added computational cost is justified by superior exploration of the posterior. (Note: what we call "blocks" Rue and Held call "subblocks", implying that our "subblocks" would be something like "sub-subblocks" in their terminology.) Let $\mathbf{Z}_b$ and $\mathbf{Z}_{-b}$ denote, respectively, the $b$th subblock and its complement in $\mathbf{Z}$. Let other vectors/matrices be defined in a similar manner: for example, $K_{Z,b}$ is the submatrix of $K_Z$ with columns corresponding to $\mathbf{Z}_b$; $Q_{b,-b}$ is the submatrix of the prior precision obtained by keeping rows and removing columns corresponding to the $b$th block of $\mathbf{Z}$.

To compute the jumping rule in (11), a closed form for $\pi(\mathbf{Z}_b \mid \boldsymbol{\theta}_Z, \mathbf{Z}_{-b}, \cdot)$ is necessary. Using properties of multivariate normal distributions [17, Thm. 2.5] and some tedious but standard Bayesian calculations, it follows that

$$\mathbf{Z}_b \mid \boldsymbol{\theta}_Z, \mathbf{Z}_{-b}, \cdot \;\sim\; N\big(\mathbf{m}_b, P_b^{-1}\big), \quad P_b = \tau_Z Q_{b,b} + K_{Z,b}^\top \Sigma^{-1} K_{Z,b}, \quad \mathbf{m}_b = P_b^{-1}\big( \tau_Z Q_{b,b}\, \boldsymbol{\mu}_{b|-b} + K_{Z,b}^\top \Sigma^{-1} \tilde{\mathbf{r}}_b \big), \qquad (12)$$

with $\boldsymbol{\mu}_{b|-b} = \eta_Z \mathbf{1} - Q_{b,b}^{-1} Q_{b,-b} (\mathbf{Z}_{-b} - \eta_Z \mathbf{1})$ the conditional prior mean and $\tilde{\mathbf{r}}_b$ the partial residual defined below.

A few remarks are in order. Despite $Q_Z$ being sparse, $P_b$ is dense because of the cross-product term. However, the inverse of this posterior precision is manageable because $\mathbf{Z}_b$ is specified to give a reasonable number of columns in $K_{Z,b}$. This implies that $Q_{b,-b}$ is quite large, but the sparsity of the prior precision makes $Q_{b,-b}(\mathbf{Z}_{-b} - \eta_Z \mathbf{1})$ easy to compute. The partial residual $\tilde{\mathbf{r}}_b = \mathbf{y} - M\boldsymbol{\mu} - K_W\mathbf{W} - K_{Z,-b}\mathbf{Z}_{-b}$ depends on a potentially unwieldy $K_{Z,-b}\mathbf{Z}_{-b}$, but this can be efficiently managed by updating the full residual

$$\mathbf{r} = \mathbf{y} - M\boldsymbol{\mu} - K_Z\mathbf{Z} - K_W\mathbf{W} \qquad (13)$$

for each of the subblocks. To update the $b$th subblock of $\mathbf{Z}$, set $\tilde{\mathbf{r}}_b = \mathbf{r} + K_{Z,b}\mathbf{Z}_b$, draw the new subblock $\mathbf{Z}_b^*$, and then update $\mathbf{Z}$ and $\mathbf{r} \leftarrow \tilde{\mathbf{r}}_b - K_{Z,b}\mathbf{Z}_b^*$. Note that to evaluate the density and obtain a new draw from the full conditional only one Cholesky decomposition of $P_b$ is needed.
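
A minimal R sketch of one such subblock update follows; $K$, $Q$, the subblock $b$, and all parameter values are toy stand-ins, not the paper's actual quantities.

```r
## One subblock update following (12)-(13) on a toy problem.
library(Matrix)
set.seed(3)
n <- 40; nZ <- 10; b <- 1:4                       # toy sizes and subblock indices
K <- Matrix(matrix(rexp(n * nZ), n, nZ))          # stand-in for K_Z
Q <- Diagonal(nZ, 4) +
  sparseMatrix(i = 1:(nZ - 1), j = 2:nZ, x = -1,
               dims = c(nZ, nZ), symmetric = TRUE)  # stand-in prior precision Q_Z
tau <- 2; sig2 <- 1; lam <- rexp(n) + 0.5         # tau_Z, sigma^2, lambda_i (toy)
Z <- rnorm(nZ)                                    # current state of the field
y <- rnorm(n, as.vector(K %*% Z), 1)              # toy data
r <- y - as.vector(K %*% Z)                       # full residual, as in (13)

Siginv <- Diagonal(n, 1 / (sig2 * lam))           # Sigma^{-1}
r_b <- r + as.vector(K[, b] %*% Z[b])             # partial residual: add block back
P_b <- tau * Q[b, b] + crossprod(K[, b], Siginv %*% K[, b])  # posterior precision
mu_cond <- -solve(Q[b, b], Q[b, -b] %*% Z[-b])    # conditional prior mean (eta = 0)
m_b <- solve(P_b, tau * (Q[b, b] %*% mu_cond) +
               crossprod(K[, b], Siginv %*% r_b)) # posterior mean, as in (12)

R <- chol(as.matrix(P_b))                 # one Cholesky: density evaluation + draw
Z[b] <- as.vector(m_b) + backsolve(R, rnorm(length(b)))  # draw from N(m_b, P_b^{-1})
r <- r_b - as.vector(K[, b] %*% Z[b])     # restore the full residual
```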

The other term necessary to compute the jumping rule in (11) is $J(\boldsymbol{\theta}_Z^* \mid \boldsymbol{\theta}_Z)$. For this function we use a multivariate normal density (centered on the previous value) in the transformed space

$$\tilde{\boldsymbol{\theta}}_Z = \Big( \log \phi_Z,\; \log \tau_Z,\; \Phi^{-1}(\gamma_Z),\; \Phi^{-1}\big\{ (\beta_Z + 0.4)/1.4 \big\} \Big).$$

Above, $\Phi^{-1}$ is the quantile function of a standard normal variate and the "0.4" and "1" terms come from the uniform prior on $\beta_Z$; by design, all transformed variables have unbounded support. The resulting $J$ is symmetric and will thus cancel in the joint Metropolis acceptance ratio, but the prior densities must be adjusted according to a change-of-variables. Transforming the hyperparameters and the inclusion of $\phi_Z$ into the joint update were essential adjustments to allow for adequate mixing of the MCMC algorithm.
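
As an illustration, the transform and a symmetric random-walk proposal might be coded as follows; the layout of the hyperparameter vector and the proposal covariance are assumptions for this sketch (the bounds follow Section 4.2: $\phi, \tau > 0$; $\gamma \in (0,1)$; $\beta \in (-0.4, 1)$).

```r
## Transformed-space random-walk proposal for the hyperparameters.
to_unbounded <- function(th) c(log(th["phi"]), log(th["tau"]),
                               qnorm(th["gamma"]),
                               qnorm((th["beta"] + 0.4) / 1.4))
to_natural <- function(t) c(phi = exp(t[[1]]), tau = exp(t[[2]]),
                            gamma = pnorm(t[[3]]),
                            beta = 1.4 * pnorm(t[[4]]) - 0.4)

set.seed(4)
theta <- c(phi = 0.6, tau = 2, gamma = 0.8, beta = -0.1)  # current values (toy)
Sigma_prop <- diag(0.05, 4)                # proposal covariance (tuned in burn-in)
t_star <- MASS::mvrnorm(1, to_unbounded(theta), Sigma_prop)
theta_star <- to_natural(t_star)           # candidate back in natural space
## The proposal is symmetric, so J cancels in the acceptance ratio; the
## prior densities must then include the log-Jacobian of this transform.
```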

5.2 Remaining Updates

The last parameter of the $Z$ process which is not updated jointly with $(\boldsymbol{\theta}_Z, \mathbf{Z})$ is the process mean $\eta_Z$; it has full conditional distribution

$$\eta_Z \mid \cdot \;\sim\; N\!\left( \frac{\tau_Z \mathbf{1}^\top Q_Z \mathbf{Z}}{1/v_\eta + \tau_Z \mathbf{1}^\top Q_Z \mathbf{1}},\; \frac{1}{1/v_\eta + \tau_Z \mathbf{1}^\top Q_Z \mathbf{1}} \right),$$

with $0$ and $v_\eta$ being the prior mean and variance. In what follows, $\mathbf{r}$ again denotes the full residual (13), with $i$th entry $r_i$. The parameters associated with the grain means have full conditionals

$$\mu_g \mid \cdot \;\sim\; N\!\left( \frac{m_\mu/v_\mu + \sum_{i: g(i)=g} (r_i + \mu_g)/(\sigma^2\lambda_i)}{1/v_\mu + \sum_{i: g(i)=g} 1/(\sigma^2\lambda_i)},\; \frac{1}{1/v_\mu + \sum_{i: g(i)=g} 1/(\sigma^2\lambda_i)} \right),$$

where $m_\mu$ and $v_\mu$ are the prior mean and variance. The parameters of the error distribution of $\boldsymbol{\varepsilon}$ have full conditionals

$$\sigma^2 \mid \cdot \;\sim\; \text{Inv-Gamma}\!\left( a_\sigma + \frac{n}{2},\; b_\sigma + \frac{1}{2}\sum_{i=1}^{n} \frac{r_i^2}{\lambda_i} \right), \qquad \lambda_i \mid \cdot \;\sim\; \text{Inv-Gamma}\!\left( \frac{\nu+1}{2},\; \frac{\nu}{2} + \frac{r_i^2}{2\sigma^2} \right).$$

The parameter $\nu$ can easily be updated using a univariate normal density as a jumping rule. The updates provided in this section are computationally trivial and represent a tiny percentage of the total time associated with one iteration of the MCMC.
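
For concreteness, the two conjugate error-distribution draws could be coded as below; the residuals and the IG($a_0$, $b_0$) prior values are toy assumptions, and the inverse-gamma forms follow the scale-mixture setup of Section 3.1.

```r
## Conjugate draws for sigma^2 and lambda_i in the Student-t scale mixture.
set.seed(5)
n <- 1000; nu <- 4; sig2 <- 1.5
r <- rnorm(n, 0, sqrt(sig2))        # stand-in for the full residual (13)
a0 <- 2; b0 <- 1                    # assumed inverse-gamma prior for sigma^2

## lambda_i | . ~ Inv-Gamma((nu + 1)/2, (nu + r_i^2/sigma^2)/2)
lam  <- 1 / rgamma(n, shape = (nu + 1) / 2, rate = (nu + r^2 / sig2) / 2)
## sigma^2 | . ~ Inv-Gamma(a0 + n/2, b0 + sum(r_i^2/lambda_i)/2)
sig2 <- 1 / rgamma(1, shape = a0 + n / 2, rate = b0 + sum(r^2 / lam) / 2)
```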

Figure 5: Trace plots (after discarding the burn-in iterations and then back-transforming) for one parameter and three hyperparameters associated with the $\mathbf{Z}$ process, using grains as subblocks.

5.3 Remaining Details of Implementation

The choice of subblocks affects the performance of the MCMC routine. Heuristically, one should use subblocks which are expected to be correlated within, minimally correlated between, and as large as possible to gain the sampling efficiency of the joint update strategy (10). On the other hand, using smaller parameter patches can be faster computationally. For each of the $\mathbf{Z}$ and $\mathbf{W}$ vectors we were able to use one subblock per grain, the largest grain boundary determining the maximum subblock dimension for $\mathbf{Z}$. We found that subblocks of roughly 200 coefficients were faster but produced chains which wandered a bit more.

The choice of hardware and software also affects computational performance. In this study we used a Mac Pro desktop with a 3.5 GHz 6-core Intel Xeon E5 processor and 32 GB of memory. To fully harness the multicore capabilities, R was compiled against OpenBLAS 0.2.18 to provide default parallelization. This was necessary for two reasons. First, the parameters $\phi_Z$ and $\phi_W$ were not fixed, which implies that each iteration involves multiplying and exponentiating every entry of the $K_Z$ and $K_W$ matrices; even automated parallel matrix algebra can buckle under such a burden. Second, to use one subblock per grain, a matrix cross-product and Cholesky decomposition involving the large $P_b$ is necessary. Parallelized matrix linear algebra is indispensable in such a scenario. Sparse matrices were handled using the Matrix package within R 3.2; it was found that this particular software outperformed the functionality provided by both spam and SparseM. Even with the considerable gains derived from the careful implementation within R, a full round of updates still takes about 16 seconds. Using smaller blocks, one iteration could be sped up to 6 seconds, but we opted for bigger blocks to aid sampling efficiency.

The parameters requiring a multivariate normal Metropolis jumping rule, i.e., $\boldsymbol{\theta}_Z$ and $\boldsymbol{\theta}_W$, had their proposal covariances tuned during the burn-in period according to the method of [10]. After a block of 500 iterations, the covariance was adjusted by the appropriate multiplier necessary to reach the acceptance rate suggested by [6]; this was done for 20 such blocks. An additional 5,000 samples from fixed proposal covariances were used for the burn-in period, implying a total of 15,000 iterations not used in the final analysis. The MCMC routine was run for 15,000 more iterations, and every fifth sample was recorded.
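
A sketch of one step of this burn-in tuning is below; the target acceptance rate and the proportional scaling rule are assumptions in the spirit of [10] and [6], not their exact prescriptions.

```r
## Rescale the proposal covariance after each block of 500 iterations,
## pushing the observed acceptance rate toward an assumed target.
tune_cov <- function(Sigma_prop, acc_rate, target = 0.23) {
  Sigma_prop * (max(acc_rate, 0.02) / target)^2   # shrink if acceptance too low
}
Sigma_prop <- diag(0.05, 4)
acc <- 0.10                           # acceptance rate over the last 500 draws
Sigma_prop <- tune_cov(Sigma_prop, acc)
```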

6 Results

Figure 6: Hexbin plot of predicted versus observed for the last iteration of the MCMC; the red dashed one-to-one line is displayed for visual aid.
Figure 7: Cutouts of the 18-grain domain with observed von Mises stress (left) and predicted values (right) for the last iteration of the MCMC.
Figure 8: Raw and standardized residuals for the last iteration of the MCMC.
Figure 9: One dimensional marginal posterior distributions for various parameters.

Aside from the 94,274 auxiliary variables of $\boldsymbol{\lambda}$ necessary for the Student-$t$ likelihood, there are many thousands of parameters/hyperparameters in total, the vast majority being entries of $\mathbf{Z}$ and $\mathbf{W}$. Of these, the convergence of the random fields $\mathbf{Z}$ and $\mathbf{W}$ is somewhat hard to assess, but we did not observe anything conspicuously egregious in the trace plots of randomly selected entries of the vectors. Trace plots for the hyperparameters of the $Z$ process are shown in Figure 5. The (qualitative) adequacy of the chains' character is a direct result of our sampling routine.

Every fifth iteration of the MCMC we monitored some quantities to assess the model's goodness-of-fit. For example, adjusted coefficients of determination were calculated; after the burn-in period, the $R^2_{\mathrm{adj}}$ relative to a constant-mean model was above 0.98 and the $R^2_{\mathrm{adj}}$ relative to a grain-specific-mean model was above 0.95.

For one such iteration (the last of the MCMC), additional diagnostic plots are given in Figures 6, 7, and 8. (The reason for using only one iteration, as opposed to, say, the posterior mean field, is the presence of $\boldsymbol{\lambda}$: the mean field will not correspond to the mean of the auxiliary variables $\boldsymbol{\lambda}$, and this will invalidate residual-based diagnostics.) The plots comparing observed and predicted values corroborate the high $R^2_{\mathrm{adj}}$ values to suggest that the model fits the data well, even adjusting for the huge number of parameters. We did not use any cross-validation-based diagnostics for two reasons. First, the additional computations for a $K$-fold cross-validation were deemed undesirable, given the already taxing MCMC. Second, and more importantly, using a hold-out subset of the data for validation purposes will almost certainly not capture the relevant generalization error: that which is associated with data on new/different grain geometries. This type of model assessment can only be derived from verification with further simulated datasets.

Figure 8 shows the residuals for the last iteration of the MCMC. The raw residuals indicate some extreme absolute errors relative to the range of the data, but these are indeed a small minority. They also display inappropriate tail behavior, implying that any constant-variance normal error model will not be suitable. The left panel stands in contrast to the right panel of Figure 8, which favors the use of a Student-$t$ likelihood (the $i$th residual is standardized by $\sigma\sqrt{\lambda_i}$). We note, however, that a further examination of the residual field reveals some potential spatial correlation, especially near grain boundaries; this will be an avenue for future investigation and improvement.

Marginal posterior distributions for some of the parameters are displayed in Figure 9. A few noteworthy features are present. First, the small values of $\nu$ are further evidence for a heavy-tailed error distribution. Next, the quantity $\phi_Z$ is related to a correlation length, and in this light the reciprocal of the posterior mode is very large: the effect of the $Z$ process "reaches across" the entirety of each grain. The process mean $\eta_Z$ is centered on zero, meaning that a second-order boundary can either elevate or lower expected von Mises stress; $\eta_W > 0$ suggests that third-order boundaries are more often tied to the elevation of stress. Both $\gamma$ parameters are small, implying that the processes are actually quite rough, and plots of $\mathbf{Z}$ and $\mathbf{W}$ (not presented here) were visually indistinguishable from noise. Both $\beta$ parameters are negative, implying an inhibitory effect within the collection of $\mathbf{Z}$ and $\mathbf{W}$: the values of two processes on the same grain boundary are negatively correlated (in a conditional sense); they are not in fact independent. The posteriors of both $\beta$ parameters push up against the lower bounds of the prior, which is undesirable as well as unexpected; it provides some evidence of model inadequacy despite the favorable diagnostics presented earlier.

7 Conclusions

This paper represents a first step towards understanding von Mises stress fields, and hence damage initiation, within polycrystalline tantalum, a process of great interest to scientists at Los Alamos National Laboratory. Because the constitutive mechanical equations dictating the material response did not satisfactorily describe the visible spatial variability to the materials scientists, we proposed an empirical statistical model for the complicated, high-dimensional, and rich simulated tantalum dataset. The data's size and tetrahedrally meshed geometry (hence a pre-existing neighborhood structure) strongly motivated the use of Gaussian Markov random fields. However, within our unique model, the variability throughout the entire 3D volume was dictated by latent and interacting GMRFs defined on lower-dimensional 2D and 1D grain boundaries. As such, the scientists' intuition of grain boundary importance was built directly into the model. We also allowed for a heavy-tailed error distribution so that outliers did not have undue influence.

Our novel GMRF model required a careful Bayesian implementation and we proposed the use of a modified block updating scheme for the latent fields. Sparse matrix functionality and parallel computing methods were vital to the performance of the MCMC routine.

After fitting the model we encountered some surprising results. We had expected the latent processes at the second- and third-order boundaries (the $\mathbf{Z}$ and $\mathbf{W}$ fields) to be smooth (large $\gamma$ hyperparameters), and their influence to have a moderate to quick decay away from the boundaries (indicated by the $\phi$ hyperparameters). However, the data unequivocally encouraged extremely rough latent fields that, through small decay parameters, averaged out to a smooth 3D mean field that was visually quite similar to the observed one. Also, despite good visual agreement between observed and predicted values, the $\beta$ parameters had modes at the bounds required to guarantee precision matrix invertibility. This means that, loosely speaking, the model had to "stretch" to accommodate the type of variation present in the data. On the other hand, the roughness of the latent fields implies a large effective number of parameters (in $\mathbf{Z}$ and $\mathbf{W}$), which is a sign of overfitting. This means that more thought needs to be given to how the latent fields can be smoothed ("regularized") in a principled way beyond our current hyperparameter specification.

Beyond exploring the issues mentioned in the previous paragraph, there is considerable room for improving this (or any) statistical model of stress within simulated shock-loaded tantalum. For instance, future work will have to accommodate substantially larger datasets (on the order of many millions of spatial locations), tensor-valued output, and grain orientation as a covariate. Another major issue to consider is the dependence of hyperparameters upon the given tetrahedral meshing. Mesh-invariant model specification is a difficult problem from the GMRF perspective (see, e.g., [3] and [15]), but one whose solution would allow for better generalizability and predictions upon new grain geometries with arbitrary mesh structure. A possibility is to ignore the mesh geometry and instead favor notions of distance and correlation length, but one is then immediately back to the problem of specifying a non-stationary model for big spatial data. The difficulties within both mesh-based and distance-based approaches make this an important and challenging area of research and application.

Figure 10: The 18 grains partially separated and rotated about the vertical spatial axis to show variability of von Mises stress on grain boundaries; the progression goes from the top-left to the bottom-right panel. (The spatial axes are defined by the cube faces of Figure 1.)

References

  • [1] S. Banerjee, B. P. Carlin, and A. E. Gelfand (2015) Hierarchical Modeling and Analysis for Spatial Data. Second edition, Chapman & Hall/CRC, Boca Raton.
  • [2] J. M. Bardsley (2013) Gaussian Markov random field priors for inverse problems. Inverse Problems and Imaging 7 (2), pp. 397–416.
  • [3] J. Besag and D. Mondal (2005) First-order intrinsic autoregressions and the de Wijs process. Biometrika 92 (4), pp. 909–920.
  • [4] C. A. Bronkhorst, G. T. Gray III, F. L. Addessio, V. Livescu, N. K. Bourne, S. A. McDonald, and P. J. Withers (2016) Response and representation of ductile damage under varying shock loading conditions in tantalum. Journal of Applied Physics 119, pp. 085103.
  • [5] C. A. Bronkhorst, B. L. Hansen, E. K. Cerreta, and J. F. Bingert (2007) Modeling the microstructural evolution of metallic polycrystalline materials under localization conditions. Journal of the Mechanics and Physics of Solids 55, pp. 2351–2383.
  • [6] A. Gelman, G. O. Roberts, and W. R. Gilks (1996) Efficient Metropolis jumping rules. In Bayesian Statistics 5, J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith (Eds.), pp. 599–607.
  • [7] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin (2014) Bayesian Data Analysis. Third edition, Chapman & Hall/CRC, Boca Raton.
  • [8] J. Geweke (1993) Bayesian treatment of the independent Student-t linear model. Journal of Applied Econometrics 8 (Supplement), pp. S19–S40.
  • [9] M. A. Groeber and M. A. Jackson (2014) DREAM.3D: a digital representation environment for the analysis of microstructure in 3D. Integrating Materials and Manufacturing Innovation 3 (5).
  • [10] H. Haario, E. Saksman, and J. Tamminen (1999) Adaptive proposal distribution for random walk Metropolis algorithm. Computational Statistics 14, pp. 375–395.
  • [11] D. M. Higdon (2002) Space and space-time modeling using process convolutions. In Quantitative Methods for Current Environmental Issues, C. W. Anderson, V. Barnett, P. C. Chatwin, and A. H. El-Shaarawi (Eds.), pp. 37–56.
  • [12] L. Knorr-Held and H. Rue (2002) On block updating in Markov random field models for disease mapping. Scandinavian Journal of Statistics 29 (4), pp. 597–614.
  • [13] H. K. H. Lee, D. M. Higdon, Z. Bi, M. A. R. Ferreira, and M. West (2002) Markov random field models for high-dimensional parameters in simulations of fluid flow in porous media. Technometrics 44 (3), pp. 230–241.
  • [14] H. K. H. Lee, D. M. Higdon, C. A. Calder, and C. H. Holloman (2005) Efficient models for correlated data via convolutions of intrinsic processes. Statistical Modelling 5, pp. 53–74.
  • [15] F. Lindgren, H. Rue, and J. Lindström (2011) An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (4), pp. 423–498.
  • [16] N. S. Ottosen and M. Ristinmaa (2005) The Mechanics of Constitutive Modeling. Elsevier Science, Amsterdam.
  • [17] H. Rue and L. Held (2005) Gaussian Markov Random Fields: Theory and Applications. Chapman & Hall/CRC, Boca Raton.
  • [18] H. Rue, S. Martino, and N. Chopin (2009) Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 71 (2), pp. 319–392.
  • [19] Simulia (2012) Abaqus 6.12/CAE User's Manual. Dassault Systèmes, Providence, RI.
  • [20] C. B. Storlie, B. J. Reich, W. N. Rust, L. O. Ticknor, A. M. Bonnie, A. J. Montoya, and S. E. Michalak (2017) Spatiotemporal modeling of node temperatures in supercomputers. Journal of the American Statistical Association 112 (517), pp. 92–108.
  • [21] G. Wahba (1990) Spline Models for Observational Data. SIAM Press, Philadelphia.
  • [22] D. Wipf and H. Zhang (2014) Revisiting Bayesian blind deconvolution. Journal of Machine Learning Research 15, pp. 3775–3814.
  • [23] R. Zhang, C. Czado, and K. Sigloch (2013) A Bayesian linear model for the high-dimensional inverse problem of seismic tomography. Annals of Applied Statistics 7 (2), pp. 1111–1138.
  • [24] R. Zhang, C. Czado, and K. Sigloch (2016) Bayesian spatial modelling for high dimensional seismic inverse problems. Journal of the Royal Statistical Society: Series C (Applied Statistics) 65 (2), pp. 187–213.