Inertial confinement fusion (ICF), a technique for generating nuclear fusion reactions by heating and compressing a capsule filled with deuterium-tritium (DT) fuel, has been a focus of nuclear fusion research for decades. Many modern ICF experiments are designed using computer simulations that approximate the real physical processes that occur during capsule implosion. However, when attempting to model applications where the underlying physics is not well understood (as is the case in ICF, where temperatures and pressures reach extreme values), simulations often perform poorly and are not always validated by experimental data. Moreover, ICF experiments are expensive to run, meaning that generating large sets of experimental data to validate simulation results is not always feasible.
Machine learning (ML) offers a novel framework for analyzing data from ICF experiments and simulations. Although the use of ML algorithms in the realm of ICF is relatively new, it has demonstrated some early successes. Using supervised ML techniques trained on a multi-petabyte dataset of ICF simulations, Peterson et al. identify a new class of ovoid-shaped implosions that consistently achieve high yield in simulations despite the presence of hydrodynamic instabilities. Humbird et al. train a deep neural network (DNN) surrogate model for low-fidelity ICF simulations and apply transfer learning, a technique in which models already trained on one dataset are partially re-trained to solve different but related tasks, to obtain a surrogate model for high-fidelity models and experiments. Hsu et al. apply ML regression methods to experimental ICF data (the same dataset analyzed in Section III) to analyze relationships between experimental outputs of interest.
In this work, we utilize a random forest (RF) predictor to identify relationships between controllable design inputs and experimental outputs from ICF experiments performed at the National Ignition Facility (NIF). The prediction model is then used to assess the sensitivity of predicted outputs to the design inputs in order to identify the design features most strongly related to changes in output. This importance analysis can be used to augment the understanding of expert designers and provide insight to improve future designs. Sections II and III introduce the ML methods and data, respectively, used in this work. Section IV examines prediction on the full set of outputs and assesses the importance of design features to prediction across output metrics. Section V presents individual analyses of low- and high-density hohlraum gas fill shots. We finish by summarizing and discussing future work in Sections VI and VII.
II Machine Learning Background
RF regression is an ensemble ML method that employs multiple decision trees to produce highly accurate predictions on medium-to-large data sets. Decision trees are popular due to their efficiency and adaptability, but perform poorly on unseen data sets. RFs reduce over-fitting by averaging over multiple decision trees: each tree is fit to a random sample of the full training data, and for each split of the tree, a random subset of the full feature set is considered. RFs exhibit low generalization error on large data sets, and perform better than individual decision trees on both seen and unseen data [5, 8]. We use RFs instead of DNNs for this work because RFs typically outperform DNNs on small datasets such as ours, and because they are computationally cheaper to train than other models such as DNNs and Gaussian processes (GPs).
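As a rough illustration of this setup, the following sketch fits a random forest regressor to synthetic stand-in data (not the NIF dataset) with a shot count and feature count matching those described later in the paper, and compares train and test performance:

```python
# Sketch: random forest regression on a small synthetic dataset.
# The data here is illustrative only; shapes mimic ~140 shots x 21 inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(140, 21))                     # 140 "shots", 21 "design inputs"
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.05 * rng.normal(size=140)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Each tree sees a bootstrap sample; each split considers a feature subset.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)

print(rf.score(X_tr, y_tr))  # training R^2, close to 1
print(rf.score(X_te, y_te))  # test R^2, lower but still informative
```

As the paper notes, training scores are typically near 1 while held-out scores are lower, reflecting the ensemble's reduced but nonzero generalization error.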
To analyze the input-output relationships encoded in the RF model, we use Accumulated Local Effects (ALE). ALE is an ML metric for interpretability, which refers to the ease with which a human can understand why an ML model makes the decisions that it does and, consequently, the extent to which humans can predict the model’s results [10, 11]. Interpretability is essential to the safety of many systems (such as driverless cars) and, for scientific applications, is necessary in order to extract meaningful scientific knowledge from the model’s behavior. For ICF analysis, feature importance rankings are a crucial component of model interpretability because they reflect input-output relationships that can augment subject matter expert understanding of the physical processes.
ALE is a model-agnostic measure that describes the extent to which each feature influences the model’s predictions. The variance of this function averaged over the other features, the main effect of a feature, can be used to compare the relative importance of the features. This parallels variance-based sensitivity analysis such as Sobol indices, but ALE estimates feature importance by analyzing how much the model’s predictions change over a small range of each feature, then averaging and accumulating these differences over the prediction space. ALE provides consistent estimates of the main effect of the features even when features are correlated, as they often are in the case of ICF data.
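The accumulate-and-average idea can be sketched directly: bin one feature by quantiles, evaluate the model at each bin's edges for the samples falling in that bin, then accumulate and center the averaged differences. This is an illustrative first-order implementation, not the exact estimator used in the paper:

```python
# Sketch of first-order ALE for a single feature (illustrative, not the
# paper's implementation).
import numpy as np

def ale_1d(predict, X, j, n_bins=10):
    """First-order ALE curve for feature j of a fitted model."""
    edges = np.quantile(X[:, j], np.linspace(0.0, 1.0, n_bins + 1))
    effects, counts = [], []
    for k in range(n_bins):
        lo, hi = edges[k], edges[k + 1]
        upper = (X[:, j] <= hi) if k == n_bins - 1 else (X[:, j] < hi)
        in_bin = (X[:, j] >= lo) & upper
        if not in_bin.any():
            effects.append(0.0)
            counts.append(0)
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, j], X_hi[:, j] = lo, hi
        # Local effect: average prediction change over this small range,
        # using only samples that actually lie in the bin (this is what
        # keeps ALE consistent under correlated features).
        effects.append(float(np.mean(predict(X_hi) - predict(X_lo))))
        counts.append(int(in_bin.sum()))
    ale = np.cumsum(effects)                                # accumulate
    ale -= np.average(ale, weights=np.maximum(counts, 1))   # center
    return edges, ale

# For a purely linear model, the ALE curve recovers the linear trend.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))
predict = lambda A: 3.0 * A[:, 0] + A[:, 1]
edges, ale = ale_1d(predict, X, 0)
```

The variance of the resulting curve (weighted by the bin counts) can then serve as the feature's main effect for importance ranking.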
We train our regression model on data from 140 experiments conducted at the NIF beginning in 2011. Our work utilizes 21 design parameters simultaneously in order to predict each of four experimental outputs: total yield, velocity, areal density ρR, the product of fuel density ρ and fuel radius R (we refer to this parameter as simply ρR), and gated X-ray bang time (referred to here as BT). Total yield represents the actual measured yield of fusion neutrons, corrected for the small portion of neutrons that lose energy due to scattering as they pass through the ice layer. Velocity (in μm/ns, equivalently km/s) refers to the implosion velocity. ρR (in g/cm²) is a measure of fuel thickness, calculated as the product of average fuel mass density and fuel radius (assuming a spherical shape for the fuel within the capsule). BT (in ns) is the time at which the fusion neutrons were produced in the experiment, measured according to the time at which X-rays come out of the capsule.
The recorded experiments were performed with a variety of ignition capsule designs and ablator materials. Over this time period, experimental design systematically evolved, resulting in improved performance (see Fig. 1). In particular, hohlraum design was improved by switching from high density gas fills (group I) to low density gas fills (group II). We define high gas fill density as any value greater than 0.6 mg/cm³. Expert opinion and previous work indicate that, due to the significant physical differences between group I and II shots, separate analysis of each group may improve model prediction and provide insight into the effects of the switch from high to low gas fills. In Section IV, we analyze RF performance on both groups together, while Section V includes a separate analysis of model prediction quality and feature importance rankings for each group.
As with any statistical analysis, our results are only as good as the available data. Uncharacterized inputs, such as surface roughness, will not be considered by the ML algorithm. Because the data contains missing values for some of the recorded experimental variables, we pre-process the dataset using iterative imputation from the scikit-learn package in Python, which employs Bayesian ridge regression to estimate (i.e., impute) missing values using the remaining, observed features. The choice of imputation method is important, as missing data that is replaced with arbitrary values can falsely skew model results. Iterative imputation can provide a more informed estimate of missing feature values than methods such as zero-imputation or mean-imputation.
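The pre-processing step can be sketched with scikit-learn's IterativeImputer, which models each column with missing values as a function of the other columns (BayesianRidge is its default estimator). The toy matrix below is illustrative, not experimental data:

```python
# Sketch: iterative imputation of missing feature values with scikit-learn.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data with strong column relationships (col1 ~ 2*col0, col2 ~ 3*col0)
X = np.array([[1.0, 2.0, 3.0],
              [2.0, np.nan, 6.0],
              [3.0, 6.0, 9.0],
              [4.0, 8.0, np.nan]])

imputer = IterativeImputer(random_state=0)  # BayesianRidge by default
X_filled = imputer.fit_transform(X)
print(X_filled)  # missing entries replaced with regression-based estimates
```

Unlike zero- or mean-imputation, the filled values here are informed by the observed relationships between columns.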
The data includes experimental uncertainties for three of the four output quantities studied in this work: total yield, ρR, and BT. The physical origin of the reported uncertainties is not noted in the data; however, to be conservative, we treat each uncertainty measurement as one standard deviation (σ). The data contains no reported errors for velocity because implosion velocity is not measured directly, but rather inferred via surrogate experiments. Following expert recommendation, we use a fixed value in the 10-15 μm/ns range as the uncertainty for all reported values of velocity, based on analysis of the method for inferring implosion velocity. (Velocity error is derived from error in the original surrogate convergent ablator experiments and from error in the gated X-ray bang time measurements. The velocity of a DT layered implosion is inferred via a surrogate convergent ablator experiment that uses X-ray radiography to observe capsule radius as a function of time. Using this information along with measured values of X-ray bang time, the velocity of the DT layered implosion can be estimated using a combination of simulations and physics arguments. With this method, errors in the velocity measurements are principally composed of error in the convergent ablator and X-ray bang time measurements, and are typically between 10 and 15 μm/ns. Improved quantification of measurement uncertainty would improve the assessment of quality of fit.) For all four outputs, we incorporate these uncertainty values into our analysis to provide an indication of whether our model’s predictions fall within experimental uncertainty bounds.
Finally, the data convolves sensitivities to physics mechanisms, such as laser-plasma instabilities, asymmetry, and hydrodynamic instability growth, with high-impact systematic design changes, such as hohlraum design, capsule fill, and laser wavelength tuning. As will be discussed later, ML and data analysis methods will identify the most dominant or important features, whether physics-driven or design-driven.
III-A Correlated Variables
The matrix in Fig. 2 quantifies correlation between different design parameters, confirming that there are several highly-correlated input variables. Chief among these are the three time-based parameters (start final rise, start peak power, and end pulse) as well as parameters describing hohlraum dimensions (Dante 1 diameter, hohlraum length, and hohlraum diameter). Experts informed us that the time-based parameters are likely to be correlated with ablator thickness as a design choice, since thicker ablators generally require a longer push at peak power; as a result, experimenters typically select later times for start final rise and end pulse for shots with thicker ablators. Furthermore, hohlraum dimension parameters are highly mutually correlated with one another due to the limited number of hohlraum designs used at NIF over the time period in which the experiments were performed.
Correlated parameters can “share” importance in a way that falsely skews importance rankings. For example, if two variables A and B both have a strong effect on the model’s decisions but are highly correlated, any importance metric (including ALE) will divide the importance between them, resulting in relatively low importance rankings for both A and B even though the true experimental impact of each parameter may be much higher. Following expert recommendation to account for such correlations, the following five input variables were removed from our dataset: start final rise, start peak power, end pulse, Dante 1 diameter, and hohlraum diameter. Ablator thickness and hohlraum length were retained. A more rigorous assessment of correlated parameters and ML analysis to inform physical relationships is reserved for future work (see Section VI).
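A screening pass of this kind can be sketched as follows. The column names are hypothetical stand-ins for the design parameters, and the 0.9 correlation threshold is an illustrative choice, not the criterion used in the paper (where the selection followed expert recommendation):

```python
# Sketch: flag and drop one member of each highly correlated input pair.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = rng.uniform(size=100)
df = pd.DataFrame({
    "ablator_thickness": t,
    "start_final_rise":  t + 0.01 * rng.normal(size=100),  # tracks thickness
    "hohlraum_length":   rng.uniform(size=100),            # independent
})

corr = df.corr().abs()

# Scan the upper triangle so each pair is considered once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
df_reduced = df.drop(columns=to_drop)
print(to_drop)  # the redundant partner of each correlated pair
```

Here the time-based parameter tracking ablator thickness is flagged, mirroring the expert-guided removal described above.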
IV-A Prediction Quality
Like Hsu et al., we use Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Explained Variance (EVS) to evaluate model performance. We substitute RMSE for Mean Square Error (MSE) since RMSE has the same units as MAE. Model performance results on all four output variables are summarized in Table I. Fig. 3 shows aggregated train-test results for total yield, velocity, ρR, and BT. (Note that for total yield, the model was trained and predicted on a log scale, but points are plotted here at their original scale. The log scale is used because, unlike the other outputs, yield varies across multiple orders of magnitude; since ALE is a variance-based estimate of feature importance, it would otherwise overestimate the importance of features that distinguish between orders of magnitude in output.) Prediction quality is high across the board, achieving EVS values close to 1 on training data and in the 0.7-0.9 range on test data. Interestingly, the model’s predictive quality is particularly high when predicting BT. As an output, BT closely reflects a series of key design changes at the NIF (see Fig. 1). The original low-foot designs had bang times in the range of 20 ns, while the newer high-foot and high-density carbon (HDC) designs have bang times of approximately 12-14 ns and 8 ns, respectively. With each key design change, yield and implosion velocity have increased while BT has decreased. However, this correlation does not fully explain why the model is able to make such accurate predictions on BT in particular.
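The metrics above can be computed with scikit-learn, with yield handled on a log scale as described. The yield values below are illustrative placeholders, not experimental data:

```python
# Sketch: MAE, RMSE, and explained variance on log-scaled yield values.
import numpy as np
from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                             mean_squared_error)

y_true = np.array([1e13, 5e13, 2e14, 9e14, 3e15])  # illustrative yields
y_pred = np.array([2e13, 4e13, 3e14, 8e14, 2e15])

# Yield spans orders of magnitude, so score on a log scale
log_true, log_pred = np.log10(y_true), np.log10(y_pred)

mae = mean_absolute_error(log_true, log_pred)
rmse = np.sqrt(mean_squared_error(log_true, log_pred))  # same units as MAE
evs = explained_variance_score(log_true, log_pred)
print(mae, rmse, evs)
```

RMSE is always at least as large as MAE, which is part of why reporting them in the same units is convenient.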
The model systematically under-predicts at high experimental values and over-predicts at low ones. This effect may be due to a relative lack of such low and high points in the dataset, as RFs are poor at extrapolating trends to data they have not seen during training. However, the bias is visible in the training data as well as the test data, suggesting that the given feature space may lack key design features (such as capsule surface quality or mixing between the pusher and the hot and cold fuels) needed to distinguish medium values of yield, velocity, etc. from very high or low ones.
The ratio of model error to experimental uncertainty, calculated as r = |ŷ − y|/σ, where ŷ are model predictions, y are observed experimental values, and σ are the reported experimental uncertainties, is shown in Fig. 4. The low percentage of points with r < 1 for total yield and BT, despite high predictive performance on these outputs (BT in particular), suggests that the experimental errors reported in the data for total yield and for BT may be overly conservative. For ρR and velocity, the number of points that fall below the r = 1 line is very high because the reported uncertainties for velocity and ρR are larger than those reported for total yield: the average reported experimental percent errors for velocity and ρR are substantially larger than those for total yield and BT.
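The error-to-uncertainty ratio is a simple per-point calculation; a prediction with r < 1 falls within the reported experimental uncertainty. All values below are illustrative:

```python
# Sketch: ratio of model error to reported experimental uncertainty.
import numpy as np

y_obs  = np.array([10.0, 12.0, 15.0, 20.0])   # observed experimental values
y_pred = np.array([10.5, 11.0, 15.2, 22.0])   # model predictions
sigma  = np.array([1.0,  0.5,  1.0,  1.0])    # reported 1-sigma uncertainties

r = np.abs(y_pred - y_obs) / sigma
frac_within = np.mean(r < 1.0)   # fraction of points inside the error bars
print(r)             # [0.5, 2.0, 0.2, 2.0]
print(frac_within)   # 0.5
```

Comparing this fraction across outputs is what suggests whether the reported uncertainties are too large or too small relative to model error.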
IV-B Importance Results
We aggregate ALE importance rankings for all outputs by input variable (Fig. 5) and by output variable (Fig. 6). Fig. 6 shows that importance rankings between total yield and velocity are highly correlated. High velocity is typically the product of greater kinetic energy in the implosion piston. As the capsule implodes, this energy is deposited into the fuel, creating higher fuel temperature and greater overall total yield (yield scales strongly with fuel temperature). In Fig. 6, we see this correlation in the importance rankings for total yield and velocity, both of which show significant effects from Δλ, LEH laser energy, and hohlraum length, among other variables.
Likewise, ρR and BT show correlated importance rankings: both are strongly influenced by cryo layer thickness, trough power, and trough cone fraction. The correlation between ρR and BT is likely due to the fact that the original low-foot ICF designs, which used high gas fill hohlraums and long laser pulses (see Fig. 1), had the largest values of both ρR and BT. As ICF design shifted toward shorter laser pulse shapes, values of both ρR and BT decreased. We hypothesize that the high importance of trough cone fraction in predicting both ρR and BT is due to the fact that trough cone fraction is highly correlated with hohlraum gas fill density (high gas fill shots generally have a longer trough), and gas fill density is an important predictor of implosion performance.
As shown in Fig. 5, the input variables with the greatest total importance across all outputs are trough cone fraction, picket power, trough power, Δλ, LEH laser energy, and cryo layer thickness. The high importance of picket power reflects the fact that picket power is crucial for controlling capsule stability during high-speed implosions. Increasing the velocity of implosions allows for greater energy concentration in the hot spot, thus improving performance and yield. However, high-speed implosions typically experience greater instabilities at the ablator surface. Such instabilities, when large enough, reach the hot spot and interfere with neutron reactivity. Increasing the picket power helps reduce such instabilities and prevent them from reaching the hot spot, allowing implosions to be driven stably at higher velocities and thus increasing yield. It is therefore unsurprising that picket power has such high overall importance, particularly in predicting total yield.
The model does not assign high importance to picket power when predicting velocity. This finding is consistent with the fact that implosion velocity does not directly depend on picket power. Implosion velocity is primarily determined by the rocket equation, $v \propto (P_a/\dot{m}_a)\ln(m_0/m_f)$, where $v$ is implosion velocity, $P_a$ is the ablation pressure, $\dot{m}_a$ is the mass ablation rate, $m_0$ is the initial mass of the ablator, and $m_f$ is the final mass at peak velocity. However, although velocity does not depend directly on picket power, it is indirectly correlated with picket power because of the picket’s role in controlling implosion stability. Picket power is used to set the fuel adiabat by sending small shocks into the ablator, which makes the fuel less compressible but also reduces the effects of hydrodynamic instabilities that can interfere with implosion performance. The reduced fuel compressibility in high picket power/high adiabat implosions may have some impact on implosion velocity (capsules with less compressible fuel will generally implode more slowly); however, this effect can, in principle, be mitigated by increasing the laser power. Conversely, implosions that have lower values of picket power are more likely to be unstable, which can also affect the resulting implosion velocity. As such, picket power does have some physical effect on implosion velocity; however, velocity depends principally on other factors such as ablator mass.
The high overall importance of Δλ is likely due to the implosion shape of the first 70 shots (group I). The high density hohlraum gas fill present in these shots causes laser plasma instabilities that make the implosion shape hard to control, causing some of the laser light to scatter back out of the hohlraum and thus reducing implosion yield. The wavelength difference Δλ is used to control the symmetry of high gas fill implosions by controlling the transfer of energy from the outer cone beams to the inner cone beams; however, it can also drive greater backscatter from the inner beams, leading to less overall coupled energy to the target and worse implosion performance overall. The variation in implosion stability, symmetry, and laser backscatter from shot to shot may therefore increase the importance of Δλ when predicting on high gas fill shots, a phenomenon that is not present for the low gas fill shots. Similarly, we hypothesize that the high overall importance of trough cone fraction may be due to the correlation between trough cone fraction and hohlraum gas fill, as high gas fill shots generally have a longer trough. We further analyze the discrepancy in feature importance results between high and low gas fill shots in Section V-B.
V Individual Analysis of Group I and II Shots
V-a Prediction Quality
Fig. 7 displays aggregated train-test results for high (group I) and low (group II) density shots across all four output variables. Model performance results on all four output variables are summarized in Table II. Again, model performance is generally high, with EVS values close to 1 on training data and in the range of 0.7 to 0.9 on test data. Prediction quality on total yield and ρR is slightly higher for the low gas fill shots, while prediction quality on velocity is slightly higher for the high gas fill shots; however, these differences are very small, and model performance is near-equal on both groups overall. For BT, model performance is worse when predicting on groups I and II individually than when predicting on the dataset as a whole, although prediction quality is still high across the board. As when both groups are analyzed together, the model tends to over-predict low values and under-predict high values for all four outputs studied. This pattern is present across training and test data for both low and high density shot groups.
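The group-wise analysis amounts to splitting the shots at the 0.6 mg/cm³ gas fill threshold and fitting a separate forest to each group. A minimal sketch with synthetic stand-in data (shot counts and column meanings are illustrative):

```python
# Sketch: split shots into high (group I) / low (group II) gas fill density
# groups at 0.6 mg/cm^3 and train one random forest per group.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
gas_fill = np.concatenate([rng.uniform(0.9, 1.6, 70),    # group I densities
                           rng.uniform(0.03, 0.6, 70)])  # group II densities
X = rng.uniform(size=(140, 16))                 # remaining design inputs
y = X[:, 0] + 0.1 * rng.normal(size=140)        # stand-in output

high = gas_fill > 0.6                           # group I mask
models = {}
for name, mask in [("group_I", high), ("group_II", ~high)]:
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X[mask], y[mask])
    models[name] = rf
    print(name, round(rf.score(X[mask], y[mask]), 3))  # per-group training fit
```

In practice each group's model would be evaluated on held-out shots from the same group, as in Fig. 7.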
Fig. 8 displays the model error-uncertainty ratio for total yield, velocity, ρR, and BT on training and test data for low and high density shots. Again, the majority of data points for total yield have r > 1 and the majority of data points for ρR have r < 1. For BT, training data points are split relatively evenly above and below the r = 1 line, while the majority of test data points have r > 1. For total yield and ρR, the low gas fill data tends to have slightly more points with r < 1 than does the high gas fill data, while the opposite is true of velocity. This is consistent with the fact that the model performs better on low gas fill data for total yield and ρR and better on high gas fill data for velocity, although the difference in performance between the two groups is very small.
V-B Importance Results
Fig. 9 shows importance results for high and low density shots, aggregated by input variable, while Fig. 10 shows the same results aggregated by output variable. Both figures show significant differences in variable importance rankings between the two shot groups. From Fig. 9, we see that LEH laser energy and trough power are of high importance to both groups, but that each group is otherwise principally affected by a very different set of inputs.
Apart from LEH laser energy and trough power, the most important inputs overall for the high gas fill shots are Δλ, picket cone fraction, number of pulse steps, picket power, and toe length. In contrast, the low gas fill shots are principally affected by LEH peak power, hohlraum length, trough cone fraction, ablator thickness, and hohlraum gas fill. Notably, the parameter Δλ drops from being the second-most important predictor for the high gas fill shots to zero importance for the low gas fill shots. This result is consistent with the fact that for high gas fill shots, the wavelength difference Δλ varies greatly between shots due to laser plasma instabilities caused by the gas fill. When a low density hohlraum gas fill is used, these instabilities are reduced, making Δλ more consistent between shots and thus reducing its predictive importance.
Trough cone fraction, the most important variable when predicting on the dataset as a whole, drops in importance for group I, but remains a significant predictor of group II shot performance. This is likely because trough cone fraction has a much stronger effect on pulse shape for low gas fill shots, as the trough cone fraction determines how much power can pass by the waist of the capsule before the hohlraum expands to block the lasers from reaching that region (the principal function of the gas fill in early hohlraum designs was to prevent the hohlraum from expanding and blocking the lasers in this manner).
Picket power, the second-most important variable when making predictions on the dataset as a whole, also drops in importance for both groups individually, particularly for group II. This may be due to the fact that the shift toward low gas fill hohlraums was accompanied by a shift toward more stable big-foot implosions, potentially reducing the importance of picket power in setting the fuel adiabat. The largest change in picket power occurred between the low-foot and high-foot campaigns, which both used high gas fills; for this reason, the importance of picket power is higher for group I than for group II. Although there was also a shift in picket power between the HDC and big-foot campaigns (both of which used low gas fills), it was not as significant, resulting in a lower importance ranking for picket power among the low gas fill shots.
We note in Fig. 9 that LEH peak power is ranked as notably more important for the low gas fill experiments than for the high gas fill experiments. Potentially related, LEH laser energy is of notably higher importance for the high gas fill shots than for the low gas fill shots. One possible explanation is that LEH peak power and LEH laser energy are strongly correlated experimentally; as such, the RF predictor can use either one to capture the same relationship with the yield. Extending the approach in this paper to account for these design-related correlations will be investigated in future work.
From Fig. 10, we see that for both high and low density shots, the importance rankings for total yield and velocity are still correlated to some extent, particularly for the low density shots. For the high gas fill shots, importance rankings between the yield outputs and velocity are similar, except that the yield outputs are affected by a greater number of inputs, while velocity is dominated by LEH laser energy and Δλ. The strong effect of Δλ on yield outputs and velocity disappears for the low gas fill shots. For low gas fill shots, yield and velocity are mainly affected by LEH peak power, LEH laser energy, hohlraum length, and trough cone fraction. Total yield also shows a lesser, but still significant, effect from picket power, while velocity is strongly affected by ablator thickness and capsule or cryo layer thickness.
Although the importance rankings for ρR and BT are highly correlated when high and low density shots are analyzed together, this correlation disappears when the two shot groups are analyzed separately. For the low density shots in particular, the importance rankings for ρR are more heavily correlated with total yield and velocity than with BT. As discussed in Section IV-B, the correlation of ρR and BT when the dataset is analyzed as a whole is likely due to the correlation of both variables with hohlraum gas fill density. When the data is pre-split by gas fill density, the model may no longer be able to detect this relationship in the data.
VI Discussion and Future Work
At the outset, we intended to identify the physics mechanisms driving ICF implosion dynamics using RFs. However, the sensitivity of the data to physics mechanisms is strongly confounded with the impact of design changes. Due to the nature of experimental design at the NIF, design evolution followed the design changes with the most significant impact, and data outputs followed the same design change evolution. We determined that a purely data-driven assessment of the importance of design quantities does not allow discrimination between important physical mechanisms confounded within design changes. As shown in this paper, importance rankings over design changes are still useful; however, if one wishes to better understand the dominant physics mechanisms in an experiment, the confounding due to the impact of design changes must be considered. This can be done through analysis of the statistical design of experiments and by incorporating physical knowledge into causal inference. Other directions for future work include using ML to perform a deeper analysis of relationships between correlated design inputs, to analyze discrepancies between simulations and experiments, and to predict optimal design configurations for ICF simulations.
Although its use in the field is relatively new, ML provides a promising method of ICF analysis. In this work, we show that RFs are able to learn and predict on data from ICF experiments with high accuracy, achieving EVS scores above 0.9 on training data and above 0.7 on unseen test data. The model’s performance on data from high and low gas fill density shots does not differ significantly from its performance on the dataset analyzed as a whole. The model’s predictions show some bias toward the mean across all outputs and shot groups studied, suggesting that there may be factors missing from the input feature space that affect experimental results.
Many of the feature importance results detected by our model are consistent with known physics and are reflective of key design changes that took place between experiments. The model’s ability to detect shifts in hohlraum gas fill density, capsule design, and other significant changes that took place between the high-foot, low-foot, HDC, and big-foot campaigns indicates that it is able to accurately identify which design inputs exert the greatest influence on experimental outputs, providing importance results that are consistent with the physics of ICF implosions. Such ML-based importance results may provide greater insight into input-output relationships as well as the effects of key ICF experimental design changes on outputs of interest, potentially informing the design of future ICF experiments and simulations.
This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). We would like to thank Dr. Otto Landen for the use of his NIF experimental database as the source for this work. Approved for public release under LA-UR-20-27991.
- R. Betti and O. A. Hurricane, “Inertial-confinement fusion with lasers,” Nature Physics, vol. 12, no. 5, pp. 435–448, May 2016. [Online]. Available: https://www.nature.com/articles/nphys3736
- K. D. Humbird, J. L. Peterson, B. K. Spears, and R. G. McClarren, “Transfer Learning to Model Inertial Confinement Fusion Experiments,” IEEE Transactions on Plasma Science, vol. 48, no. 1, pp. 61–70, Jan. 2020.
- J. L. Peterson, K. D. Humbird, J. E. Field, S. T. Brandon, S. H. Langer, R. C. Nora, B. K. Spears, and P. T. Springer, “Zonal flow generation in inertial confinement fusion implosions,” Physics of Plasmas, vol. 24, no. 3, p. 032702, Mar. 2017. [Online]. Available: https://aip.scitation.org/doi/full/10.1063/1.4977912
- A. Hsu, B. Cheng, and P. A. Bradley, “Analysis of NIF scaling using physics informed machine learning,” Physics of Plasmas, vol. 27, no. 1, p. 012703, Jan. 2020. [Online]. Available: https://aip.scitation.org/doi/10.1063/1.5130585
- L. Breiman, “Random Forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, Oct. 2001. [Online]. Available: https://doi.org/10.1023/A:1010933404324
- B. de Ville, “Decision trees,” WIREs Computational Statistics, vol. 5, no. 6, pp. 448–455, 2013. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/wics.1278
- T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer, 2009.
- T. K. Ho, “Random decision forests,” in Proceedings of the 3rd International Conference on Document Analysis and Recognition, vol. 1, Aug. 1995, pp. 278–282.
- D. W. Apley and J. Zhu, “Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models,” arXiv:1612.08468 [stat], 2019. [Online]. Available: http://arxiv.org/abs/1612.08468
- T. Miller, “Explanation in Artificial Intelligence: Insights from the Social Sciences,” arXiv:1706.07269 [cs], 2018. [Online]. Available: http://arxiv.org/abs/1706.07269
- B. Kim, R. Khanna, and O. O. Koyejo, “Examples are not enough, learn to criticize! Criticism for Interpretability,” in Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, Eds. Curran Associates, Inc., 2016, pp. 2280–2288. [Online]. Available: http://papers.nips.cc/paper/6300-examples-are-not-enough-learn-to-criticize-criticism-for-interpretability.pdf
- F. Doshi-Velez and B. Kim, “Towards A Rigorous Science of Interpretable Machine Learning,” arXiv:1702.08608 [cs, stat], 2017. [Online]. Available: http://arxiv.org/abs/1702.08608
- A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, and S. Tarantola, Global Sensitivity Analysis: The Primer. John Wiley & Sons, 2008.
- F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, no. 85, pp. 2825–2830, 2011. [Online]. Available: http://jmlr.org/papers/v12/pedregosa11a.html
- O. A. Hurricane, P. T. Springer, P. K. Patel, D. A. Callahan, K. Baker, D. T. Casey, L. Divol, T. Döppner, D. E. Hinkel, M. Hohenberger, L. F. Berzak Hopkins, C. Jarrott, A. Kritcher, S. Le Pape, S. Maclaren, L. Masse, A. Pak, J. Ralph, C. Thomas, P. Volegov, and A. Zylstra, “Approaching a burning plasma on the NIF,” Physics of Plasmas, vol. 26, no. 5, p. 052704, May 2019. [Online]. Available: https://aip.scitation.org/doi/10.1063/1.5087256