Studies in computational plasma physics aim to explain experimental results and confirm theory. Plasma theory generally takes either the kinetic or fluid approach to modeling plasma particles.
The first-principles description of a plasma is kinetic. Kinetic theory describes the plasma by a six-dimensional phase-space probability distribution for each particle species. It makes no assumptions of thermal equilibrium and can therefore represent arbitrary, multi-modal distributions.
In the fluid approach, it is assumed that the details of the distribution functions can be neglected and a given fluid parcel can be described by just its density, momentum and temperature. Fluid models are generally derived by marginalizing out the velocity dependence of the fully kinetic description.
Certain regimes of space and laboratory plasmas must be simulated using kinetic models, which capture all the relevant physics but are computationally more expensive. Two numerical approaches are typically used: continuum (Vlasov) solvers and particle-in-cell (PIC) methods. PIC codes are an example of a fully kinetic (six-dimensional) solver. PIC codes discretize the distribution function according to the Vlasov equation and then sub-sample to represent regions of plasma as macroparticles. These macroparticles are advanced by fields defined by the electromagnetic Maxwell equations. This method is computationally more tractable than a direct continuum solver; however, it introduces two sources of intrinsic noise.
The first is systematic noise inherent to the discrete plasma representation and to the mapping between a discrete mesh and continuous particle positions. While a particle's position is represented in continuous 3D space, it must be mapped onto the 3D discrete mesh where the fields live in order to interpolate the field values and advance the particle's momentum and position.
The second source of noise is introduced when recovering the particle distribution functions from the output of the simulation. This is known as likelihood-free inference or simulation-based inference. The samples, or particles in this case, are data generated by advancing the simulation through some number of time-steps. The simulation, with parameters θ, represents some implicit and unknown likelihood function p(x|θ). Traditionally this likelihood was recovered by binning the particles into histograms. Because the simulation must, for tractability reasons, compromise on the number of particles and the other numerical constants encapsulated by θ, the resulting likelihoods tend to contain noise.
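As a small illustration of this binning noise (not the paper's data), the sketch below recovers a known 1-D Maxwellian from a finite set of "macroparticles" by histogramming; the particle counts, bin count, and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Macroparticle velocities drawn from a known 1-D Maxwellian (unit thermal speed).
samples = rng.normal(loc=0.0, scale=1.0, size=2000)

# Traditional approach: bin the particles into a histogram to recover the likelihood.
density, edges = np.histogram(samples, bins=50, range=(-5, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# The analytic likelihood the histogram is trying to approximate.
true_density = np.exp(-0.5 * centers**2) / np.sqrt(2 * np.pi)

# With a finite particle count the binned estimate carries sampling noise.
mean_abs_err = np.abs(density - true_density).mean()

# With 100x more particles the same estimator is visibly less noisy.
more = rng.normal(loc=0.0, scale=1.0, size=200_000)
density2, _ = np.histogram(more, bins=50, range=(-5, 5), density=True)
mean_abs_err2 = np.abs(density2 - true_density).mean()
```

The noise falls only as the particle count grows, which is exactly the quantity the simulation must limit for tractability.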
For the remainder of this work we will use the terms particle distribution function and likelihood interchangeably.
A possible method of de-noising the likelihood lies in generative modeling. Generative modeling has shown great success in de-noising and super-resolution tasks [21, 1, 2, 3, 4]. Generating an accurate, de-noised distribution function from PIC codes which encapsulates the underlying physics and matches the results predicted by continuum codes would provide a reliable method for cross-code validation, as well as cut costs by allowing inference on commodity hardware.
In this work we aim to motivate the use of robust generative modeling techniques as a novel solution to the noise inherent to the distribution functions produced by PIC methods. We will apply techniques from generative modeling to de-noise our non-Gaussian data, performing likelihood-free inference without violating the physical constraints of the fully kinetic model. We will then demonstrate that this technique may be expanded to encapsulate temporal dynamics. These experiments will serve as motivation for future core-edge coupling studies mapping distributions generated from PIC codes to distributions solved by continuum codes.
II-A Particle Distribution Function
The baseline particle distribution function (PDF) is seven dimensional, three spatial and three velocity components plus time per ion species,

f_s = f_s(x, v, t),   x, v ∈ R³.
Normally, for analysis we look at a sub-domain region of the simulation to study plasma evolution. This amounts to marginalizing the distribution function over space and taking specific time slices, resulting in a multivariate Gaussian in which the plasma bulk flow u parameterizes the mean and the temperature T parameterizes the covariance. This is known as a Maxwellian distribution,

f(v) = n (m / (2π k T))^{3/2} exp( −m |v − u|² / (2 k T) ).
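As a concrete check of this parameterization (illustrative, not the paper's data), one can draw velocities from a drifting Maxwellian and recover the bulk flow and temperature from the sample moments; the density, mass, temperature, and flow values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
m, kT = 1.0, 2.0                  # ion mass and temperature (arbitrary units)
u = np.array([0.5, 0.0, 0.0])     # bulk flow velocity

# Draw 3-D velocities from the Maxwellian: mean = bulk flow, covariance = (kT/m) I.
v = rng.normal(loc=u, scale=np.sqrt(kT / m), size=(100_000, 3))

bulk = v.mean(axis=0)             # recovers the bulk flow u
temp = m * v.var(axis=0)          # recovers kT in each velocity component
```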
It is important to note that this only holds true for an idealized plasma in thermal equilibrium. As the domain evolves through the course of a simulation, various processes will cause a departure from the Maxwellian form. The resulting PDF will be of arbitrary form and temporally dynamic, making density estimation and data-driven likelihood modeling particularly difficult.
II-B Generative Modeling
A generative model's aim is to represent a probability distribution in a tractable fashion such that it is capable of generating new samples. Concretely, given datapoints x ~ p(x), can we learn an approximation q(x) to the true distribution p(x) from which we may generate new samples? The likelihood of the generated samples should closely match the likelihood of the data used to train the model. We refer to likelihoods learned from data as data-driven likelihoods (DDL). Our data was produced by a simulation with predefined parameters θ, which represents the implicit likelihood, so we can say we want to find the DDL q(x) which approximates p(x|θ).
Recent advances in machine learning have produced a wide variety of generative techniques. Chief among these are variational auto-encoders (VAE) [14], generative adversarial networks (GAN) [13], and expectation-maximization (EM) algorithms.
The VAE is a maximum likelihood estimator that approximates the evidence by maximizing the evidence lower bound. The core problem with this approach lies in the approximation of the posterior: to achieve a closed-form solution, one must know the posterior's functional form a priori. The standard approach assumes a Gaussian, and as such it performs poorly on multimodal or non-Gaussian data. Alternative posteriors have been proposed in the literature [6], but these methods still require a priori knowledge of the posterior's functional form.
The GAN, on the other hand, does not actually model the likelihood of the data. Its goal is to trick a discriminator into believing the generated samples have been drawn from the true distribution p(x). So while samples generated by a GAN may appear to be reflective of the simulation data, the possibility exists that we are not modeling the true likelihood. Relying on believable but arbitrary samples leaves no guarantee that our inference would respect the physical constraints of the domain in question.
EM algorithms performed on Gaussian mixture models do well at modeling multimodal distributions; however, they require prior knowledge of the modality of the data. As we are looking to model our particle distribution functions at an arbitrary time during the evolution of the simulation, the modality must be assumed to be dynamic.
II-C Normalizing Flows
A normalizing flow describes the transformation of a probability density through a sequence of invertible mappings. Given data x, a tractable prior p_Z(z), and a learnable bijective transformation f : Z → X, we can apply the following change-of-variables formula to define a distribution on X:

p_X(x) = p_Z(f⁻¹(x)) |det(∂f⁻¹/∂x)|.
Furthermore, defining f to be a composite of a sequence of N bijective mappings, f = f_N ∘ ⋯ ∘ f_1, allows us to say

log p_X(x) = log p_Z(z_0) − Σ_{i=1}^{N} log |det(∂f_i/∂z_{i−1})|,

where z_i = f_i(z_{i−1}), z_0 = f⁻¹(x), and z_N = x. Optimizing on the negative log likelihood gives us a maximum likelihood model that allows for efficient sampling and density estimation. What remains to be specified is the class of bijective transformation being used. To make this tractable, we would ideally pick a class which is easily invertible, flexible, and results in a Jacobian with a tractable determinant. For this work we use the Masked Autoregressive Flow (MAF) [18].
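The change-of-variables bookkeeping can be made concrete with two hand-picked 1-D affine bijections (a toy example, not the MAF itself); the specific coefficients are arbitrary:

```python
import numpy as np

# Two simple bijections f1, f2 with known log|det Jacobian|s (1-D affine maps).
def f1(z): return 2.0 * z + 1.0          # log|det J| = log 2
def f2(z): return 0.5 * z - 3.0          # log|det J| = log 0.5

def log_prior(z):                         # standard normal prior p_Z
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

def log_px(x):
    # Invert the composite x = f2(f1(z)), then subtract each layer's
    # log|det Jacobian| per the change-of-variables formula.
    z1 = (x + 3.0) / 0.5                  # f2^{-1}
    z0 = (z1 - 1.0) / 2.0                 # f1^{-1}
    return log_prior(z0) - np.log(2.0) - np.log(0.5)
```

Here the two Jacobian terms cancel (the composite map is z − 2.5 with unit slope), so log_px(x) reduces to the prior evaluated at x + 2.5, which is a useful sanity check.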
The MAF offers a robust procedure for modeling our DDL. As an autoregressive model it aims to construct a conditional probability distribution for each feature, where the distribution is conditioned on all previous features. Assuming normal priors allows us to concisely say

p(x) = Π_i p(x_i | x_{1:i−1}),   p(x_i | x_{1:i−1}) = N(x_i | μ_i, (exp α_i)²),

where μ_i = f_{μ_i}(x_{1:i−1}) and α_i = f_{α_i}(x_{1:i−1}) are arbitrary functions parameterized by neural networks. We may generate new data as follows:

x_i = u_i exp(α_i) + μ_i,   u_i ∼ N(0, 1).
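A toy sketch of this autoregressive sampling and density evaluation, with simple hypothetical conditioners standing in for the masked neural networks:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical conditioners: in a real MAF these are masked neural networks.
def mu(x_prev):    return 0.1 * np.sum(x_prev)
def alpha(x_prev): return 0.05 * np.sum(x_prev)

def sample(dim=3):
    # Generation is sequential: each feature depends on the ones before it.
    x = np.zeros(dim)
    for i in range(dim):
        u = rng.normal()
        x[i] = u * np.exp(alpha(x[:i])) + mu(x[:i])
    return x

def log_prob(x):
    # Density evaluation inverts each feature: u_i = (x_i - mu_i) * exp(-alpha_i),
    # accumulating the standard-normal log density minus the log scale.
    lp = 0.0
    for i in range(len(x)):
        a = alpha(x[:i])
        u = (x[i] - mu(x[:i])) * np.exp(-a)
        lp += -0.5 * u**2 - 0.5 * np.log(2 * np.pi) - a
    return lp
```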
To ensure robust predictions we include a permutation of the features before each layer of the flow. This class of transformation, being autoregressive, results in a lower-triangular Jacobian. It also extends easily to conditional probabilities. For further details on the MAF please see [18].
We can see that the normalizing flow is convenient not only because it allows us to generate samples in an interpretable manner, but because it gives direct access to the density, allowing us to solve the likelihood-free inference problem for the particle distribution function. For further details on normalizing flows we refer the reader to [8, 15, 19].
The following experiments were performed with data produced by the Particle Simulation Code (PSC) [12]. Multi-modal and non-Gaussian behavior manifests itself in our data due to excitation processes. Particle excitation occurs through the acquisition of energy from an outside source, usually due to magnetic reconnection or collisionless shocks. In this case, our simulation parameters are very nearly those of [16].
Shown in Fig. 1 is the temporal evolution of the data's marginalized distribution function (not normalized). We see that from T-4 to T-15 an energization process occurs which drives the multi-modal behavior. Overlaid with the PDF is the normal distribution parameterized by our data's mean and variance.
To motivate our use of the MAF we first demonstrate that our data is non-Gaussian. There are several methods which may be used to demonstrate this; [20] gives us an established holistic evaluation procedure.
For brevity we focus only on the Kullback-Leibler (KL) divergence test. Taking the null hypothesis to be that our data is Gaussian, we generate a normal distribution parameterized by the mean and variance of the data. We draw two separate sample batches from this normal distribution and calculate the KL divergence between the two to establish the null-hypothesis baseline. It is well established that the KL divergence between two sample sets drawn from the same distribution varies with both the number of samples drawn and the number of bins. Taking both numbers to be very large, we minimize this variability and achieve the expected minimal distance for the baseline. We then calculate the KL divergence between the data and a sample batch drawn from the normal distribution for comparison. Results in Fig. 2 show that from T-4 to T-15 the KL divergence of the data is an order of magnitude greater than it would be if the data were normally distributed, allowing us to reject the null hypothesis. This tells us that the data is non-Gaussian (non-Maxwellian) and that there are excitation processes occurring.
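The baseline construction can be sketched as follows, with illustrative sample and bin counts and a synthetic bimodal stand-in for the energized data (none of these are the paper's actual values):

```python
import numpy as np

def kl_from_samples(a, b, bins=200, lo=-6.0, hi=6.0, eps=1e-12):
    # Bin both sample sets on a shared grid and compute D_KL(P_a || P_b).
    p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    w = (hi - lo) / bins
    p, q = p * w + eps, q * w + eps      # bin probabilities, padded to avoid log(0)
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(3)
N = 1_000_000

# Baseline: two large batches from the same normal distribution.
baseline = kl_from_samples(rng.normal(size=N), rng.normal(size=N))

# A bimodal stand-in for energized, non-Maxwellian data,
# standardized to match the normal's mean and variance.
data = np.concatenate([rng.normal(-2, 0.5, N // 2), rng.normal(2, 0.5, N // 2)])
data = (data - data.mean()) / data.std()
divergence = kl_from_samples(data, rng.normal(size=N))
```

With large N the baseline collapses toward zero while the bimodal data remains far from it, which is the signature used to reject the null hypothesis.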
III-B Data-Driven Likelihood
Having shown the non-Gaussianity of the data, we can confidently state that the VAE, GAN, and EM algorithms would yield poor DDLs. With this in mind we select the MAF as our generative model. nflows [11] is a standardized Python library built on PyTorch which provides a probabilistic machine learning framework. We constructed the MAF using nflows and trained it on the negative log likelihood for 1000 epochs. The specific architecture consisted of an 8-layer flow, each layer of which contained a reverse permutation transformation and a masked affine autoregressive transformation. The affine transformations themselves consist of a scale and a shift parameter, each of which is represented by a single-hidden-layer neural network containing 32 nodes. We take our base distribution to be a multivariate normal.
Training the flow on the negative log likelihood allows us to use the Adam optimizer to iteratively update the parameters of our model in an unsupervised manner. The flow is fed simulation data, which is transformed and mapped to the base distribution. Each iterative update modifies the flow's parameters so that the likelihood of the transformed simulation data under the base distribution is maximized.
Layers  Permutation  Transformation                Hidden nodes  Base Distribution
8       Reverse      Masked Affine Autoregressive  32            Multivariate Normal
Results may be seen in Fig. 3, which shows the binned distribution function of both the true data and samples generated by the model, along with the smooth learned likelihood. Here we see the power of using a normalizing flow as a generative model: by gaining direct access to the density function we are able to work with a smooth approximation to what would otherwise be a noisy distribution. If we were to use this data to analyze kinetic processes we would traditionally use the PDF represented in frame A of Fig. 3, which clearly contains noise at a level that could skew interpretation of the underlying physics. In frame C we see the DDL learned by the model, demonstrating a dramatic noise reduction in comparison to frame A.
III-C Temporal Evolution
We can leverage the versatility of the normalizing flow by taking our base distribution to be a conditional normal, conditioned on simulation time. This allows us to capture the underlying particle information at different times throughout the simulation and encapsulate it in our model. This is powerful in that we no longer need to store terabytes of particle data: we can compress that information into the parameters of our model and perform inference on commodity hardware.
Here we use a single-layer neural network with 8 nodes and a ReLU activation to map the simulation time to the conditional parameters of our base distribution. We repeat the same training procedure as in the previous section, with the exception that we now use the data produced by the simulation at each interval of 1000 time-steps.
In the framework of kinetic theory, we can use the distribution function to directly calculate the conserved physical quantities of our system. The zeroth-order moment is the number density, which may be scaled to the mass or charge density. The first-order moment gives us the momentum and the second-order moment the kinetic energy.
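These moment calculations can be sketched directly from a set of particle velocities, raw or model-generated; the mass, equal weights, and stand-in Gaussian samples below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 1.0                                        # ion mass (arbitrary units)
v = rng.normal(0.5, 1.0, size=(100_000, 3))    # stand-in for sampled velocities
w = np.full(len(v), 1.0 / len(v))              # equal particle weights

density  = w.sum()                                      # zeroth moment (normalized)
momentum = m * (w[:, None] * v).sum(axis=0)             # first moment per component
energy   = 0.5 * m * (w * (v**2).sum(axis=1)).sum()     # second moment
```

Comparing these moments between raw simulation particles and samples drawn from the flow is precisely the check reported in Fig. 4.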
In Fig. 4 we present the absolute percentage error of the zeroth-, first-, and second-moment calculations between the raw data and the predictions of our model. As shown, the maximum error over the first 21,000 time-steps is always well below 1%, demonstrating that we have compressed the temporal evolution of our simulation into our generative model without violating the physical constraints of the system.
IV Discussion and Conclusion
We have shown our data to be non-Gaussian, and shown that we must therefore be selective in which techniques we use to model it. We have shown that generative modeling with normalizing flows is flexible enough to learn our PDF. By applying the MAF to high-dimensional particle data produced in PIC simulations, we have successfully learned the DDL of the particles, resulting in a smooth, tractable estimate of p(x|θ). The MAF is easily extendable to conditional distributions, which allowed us to encapsulate temporal dynamics in our model and which opens up room for further studies on adaptable sub-domains. Most importantly, in modeling our data we made no assumptions as to the physical processes taking place within the simulation. Our predictions align with the simulation's results, implying that we have not violated physical constraints in generating new samples.
This presents exciting opportunities for the eXascale Computing Project Whole Device Modeling Application (WDMApp). WDMApp aims to model plasma within the interior of a magnetic confinement fusion device known as a tokamak. Due to the high computational cost, simulations of this nature have historically been restricted to limited volumes of the domain. WDMApp will use the continuum code GENE to model the dense core plasma and a separate, possibly PIC, code to model the less dense edge regions. Domain coherence requires frequent communication of the electromagnetic fields and the particle distribution function between the two codes. The efficient transfer of this information is known as core-edge coupling and involves mapping information between the two codes' disparate representations. Coupling the codes to allow information exchange in a meaningful way is an active area of research [9, 10, 17]. We propose using these results as motivation for further studies incorporating generative modeling into core-edge coupling schema.
-  (2013) Adaptive multi-column deep neural networks with application to robust image denoising. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, NIPS’13, Red Hook, NY, USA, pp. 1493–1501. Cited by: §I.
-  (2020) Learning generative models using denoising density estimators. Cited by: §I.
-  (2020) Generative modeling with denoising auto-encoders and Langevin sampling. Cited by: §I.
-  (2013) Simple sparsification improves sparse denoising autoencoders in denoising highly noisy images. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, pp. 1469–1477. Cited by: §I.
-  (2018) Coupling exascale multiphysics applications: methods and lessons learned. pp. 442–452. Cited by: §IV.
-  Deep unsupervised clustering with Gaussian mixture variational autoencoders. Cited by: §II-B.
-  (2015) NICE: non-linear independent components estimation. Cited by: §II-C.
-  (2017) Density estimation using Real NVP. Cited by: §II-C.
-  (2021-02) Spatial coupling of gyrokinetic simulations, a generalized scheme based on first-principles. Physics of Plasmas 28 (2). Cited by: §IV.
-  (2018-07) A tight-coupling scheme sharing minimum information across a spatial interface between gyrokinetic turbulence codes. Physics of Plasmas 25 (7), pp. 072308. Cited by: §IV.
-  nflows: normalizing flows in PyTorch. Cited by: §III-B.
-  (2016) The plasma simulation code: a modern particle-in-cell code with patch-based load balancing. Journal of Computational Physics 318, pp. 305–326. Cited by: §I, §III.
-  (2014) Generative adversarial networks. Cited by: §II-B.
-  (2014) Auto-encoding variational Bayes. Cited by: §II-B.
-  (2018) Glow: generative flow with invertible 1x1 convolutions. Cited by: §II-C.
-  (2021-02) Kinetic simulations of electron pre-energization by magnetized collisionless shocks in expanding laboratory plasmas. The Astrophysical Journal 908 (2), pp. L52. Cited by: §III.
-  (2021) First coupled GENE–XGC microturbulence simulations. Physics of Plasmas 28 (1), pp. 012303. Cited by: §IV.
-  (2018) Masked autoregressive flow for density estimation. Cited by: §II-C.
-  (2016) Variational inference with normalizing flows. Cited by: §II-C.
-  (2020-11) Flow-based likelihoods for non-Gaussian inference. Physical Review D 102 (10). Cited by: §III-A.
-  (2012) Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems, Vol. 25. Cited by: §I.