Generative Adversarial Networks for High Energy Physics extended to a multi-layer calorimeter simulation
Simulation is a key component of physics analysis in particle physics and nuclear physics. The most computationally expensive simulation step is the detailed modeling of particle showers inside calorimeters. Full detector simulations are too slow to meet the growing demands resulting from large quantities of data; current fast simulations are not precise enough to serve the entire physics program. Therefore, we introduce CaloGAN, a new fast simulation based on generative adversarial networks (GANs). We apply the CaloGAN to model electromagnetic showers in a longitudinally segmented calorimeter. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future. In particular, the CaloGAN achieves speedup factors comparable to or better than existing fast simulation techniques on CPU (100×-1000×) and even faster on GPU (up to ∼10^5×), and has the capability of faithfully reproducing many aspects of key shower shape variables for a variety of particle types.
High-precision modeling of the interactions of particles with media is important across many physical sciences, enabling and accelerating new findings. Similar to complex weather or cosmological modeling, the detailed simulation of subatomic particle collisions and interactions, as captured by detectors at the LHC, is a computationally demanding task, which annually requires billions of CPU hours, constituting more than half of the LHC experiments’ computing resources Flynn (2015); Karavakis et al. (2014); Bozzi (2015).
The Nobel-prize-winning Higgs boson discovery Aad et al. (2012); Chatrchyan et al. (2012) would not have been possible without extensive simulation. Before its experimental observation, its fundamental properties, such as its mass, were unknown, but synthetic particle collisions could be generated to simulate the outcome of various measurements under different model assumptions.
Today, as several questions remain unanswered about the nature of known particles (such as neutrinos) and hypothetical ones (such as the supersymmetric partners of the Standard Model particles), modern nuclear and particle physics research continues to strongly depend on detailed simulations for developing analysis techniques, interpreting results, and designing new experiments.
Cutting-edge software libraries such as Geant4 GEANT4 Collaboration (2003) provide the backbone to construct complex detector geometries and accurately model the physical processes and interactions that occur at extremely small distance scales.
The shortcoming of this method is its computational footprint. The high-precision description of the electromagnetic and nuclear processes that govern the evolution of particle showers in calorimeters can require minutes per event on modern computing platforms Aad et al. (2010); Rahmat et al. (2012), making this the most computationally expensive step in the simulation pipeline. Because simulation is so expensive, significant resources are also invested in storing the generated data sets, which can occupy petabytes of disk space.
This bottleneck becomes apparent at the scale at which events need to be simulated to enable physics analyses at the high luminosity phase of the LHC (HL-LHC). The ATLAS and CMS experiments are expected to observe an enormous number of Higgs boson events de Florian et al. (2016), buried in far more numerous background events Aaboud et al. (2016); CMS (2016). Hundreds of billions of simulated collisions will be required to reduce the Monte Carlo uncertainty and measure some of the Higgs boson’s as yet unprobed properties.
Approximate calorimeter simulation techniques exist Grindhammer and Peters (1993); Beckingham et al. (2010); Grindhammer et al. (1990); Barberio et al. (2009), but they provide compromises that lie on different, yet similarly sub-optimal, parts of the accuracy-speedup trade-off curve.
Full detector simulations are too slow to meet the growing analysis demands; current fast simulations are not precise enough to serve the entire physics program. We therefore introduce a Deep Learning model, named CaloGAN, for high-fidelity fast simulation of particle showers in electromagnetic calorimeters. Its goal is to be both quick and precise, significantly reducing the accuracy cost usually incurred with increased speed-up. A fast simulation technique of this kind also addresses the issue of data storage and transfer, as the gained generation simplicity and speedup make real-time, on-demand simulation a possibility.
Similar techniques have been tested in Cosmology Ravanbakhsh et al. (2016); Schawinski et al. (2017), Condensed Matter Physics Mosser et al. (2017), and Oncology Kadurin et al. (2016). However, the sparsity, high dynamic range, and highly location-dependent features present in this application make it uniquely challenging. In addition to enabling physics analysis at the LHC, an approach similar to the CaloGAN may be useful for other applications in particle and nuclear physics, nuclear medicine, and space science that require detailed modeling of particle interactions with matter.
To alleviate the computational burden of simulating electromagnetic showers, we introduce a method based on Generative Adversarial Networks (GANs) Goodfellow et al. (2014) to directly simulate component read-outs in electromagnetic calorimeters. GANs are an increasingly popular approach to learning a generative model with deep neural networks, and have shown great promise in generating realistic samples of natural images Radford et al. (2015).
Though the GAN formulation, by design, does not admit an explicit probability density or explicit likelihood, we gain the ability to sample from the learned generative model in an efficient manner. The GAN training uses a minimax game-theoretic framework, and yields as an artifact a function G that maps a d-dimensional latent vector z to a point in the space of realistic samples. We would like the implicit density learned by G to be close to the distribution that governs the simulated data. Since G is a neural network, a forward pass to generate new samples is highly efficient on modern computing platforms Chetlur et al. (2014).
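To make concrete why sampling is cheap once G is trained, the sketch below implements a toy generator as a pure-Python two-layer perceptron. The layer sizes, weights, and activation are arbitrary placeholders, not the CaloGAN architecture; the point is only that generation reduces to a few matrix products per sample.

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, W, b):
    # one fully-connected layer: W has shape (out, in), b has shape (out,)
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi for row, bi in zip(W, b)]

def generator(z, params):
    # map a latent vector z to a synthetic "shower" via two dense layers
    h = relu(dense(z, *params[0]))
    return dense(h, *params[1])

random.seed(0)
d, hidden, out = 8, 16, 24  # toy dimensions, not the real read-out sizes
params = [
    ([[random.gauss(0, 0.1) for _ in range(d)] for _ in range(hidden)], [0.0] * hidden),
    ([[random.gauss(0, 0.1) for _ in range(hidden)] for _ in range(out)], [0.0] * out),
]
z = [random.gauss(0, 1) for _ in range(d)]  # latent sample z ~ N(0, I)
shower = generator(z, params)
print(len(shower))  # 24: one synthetic sample per forward pass
```

In a real deployment the same forward pass runs as batched dense-algebra kernels on a GPU, which is the source of the large speedups quoted below.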
Previous work de Oliveira et al. (2017) investigated GAN-based methods for jet images Cogan et al. (2015), which are similar to one-layer calorimeters with square pixels (except that jet generators such as Pythia Sjostrand et al. (2006) are much faster than Geant4). This work addresses the complexity introduced by modeling a realistic sampling detector with heterogeneous longitudinal and transverse segmentation. We exploit the location specificity of the calorimeter, and utilize weight locality at the model level. We also follow the guidelines outlined in de Oliveira et al. (2017) in order to deal with both high dynamic range and sparsity levels. Our neural network architecture per calorimeter layer is a function of the read-out grid dimensionality, and is augmented with an attentional component Xu et al. (2015) that provides a mechanism to carry information from layer to layer Zhang et al. (2016). This allows the CaloGAN to model the physical sequential dependence among the calorimeter layers.
To ensure the realism of the CaloGAN setup, we impose an additional constraint to encourage the generator to produce a shower of a given energy. That is, the learned, implicit PDF p_G(x|E) needs to converge to the hypothetical data-generating function p(x|E) for any initial nominal energy E, i.e., p_G(x|E) = p(x|E) for all E.
To encourage this to be well modeled, a physics-specific loss component is introduced to penalize the absolute deviation between the nominal energy E and the reconstructed energy Ê. A noteworthy subtlety is that this penalization scheme, coupled with minibatch discrimination Salimans et al. (2016), invites the network to learn the distribution of reconstructed energies for a given nominal energy, a desirable characteristic for a readily applicable practical system to augment fast simulation. Such a formulation also encourages conservation of energy through the generation process. The simulation only includes models of energy deposition, not digitization (a non-linear effect that can violate reconstructed energy conservation). The energy per layer includes the contribution from inactive material (see below). Therefore, aside from leakage beyond the calorimeter (relevant mostly for charged pions), energy must be conserved and provides a useful constraint on the generation.
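A minimal sketch of this physics-specific loss term follows. The function names, the relative weight `lam`, and the additive combination with the adversarial loss are illustrative assumptions, not the paper's exact formulation; the idea shown is only the |E_nominal − Ê| penalty, with Ê taken as the sum of generated pixel energies.

```python
def energy_penalty(nominal_energy, generated_pixels, lam=1.0):
    """Physics-motivated term: lam * |E_nominal - E_reconstructed|,
    where the reconstructed energy is the sum of all generated pixel energies."""
    reconstructed = sum(sum(layer) for layer in generated_pixels)
    return lam * abs(nominal_energy - reconstructed)

def generator_loss(adversarial_loss, nominal_energy, generated_pixels, lam=0.1):
    # hypothetical total objective: adversarial term plus energy-conservation term
    return adversarial_loss + energy_penalty(nominal_energy, generated_pixels, lam)

# toy shower that deposits 9.5 GeV when 10 GeV was requested
pixels = [[3.0, 2.0], [2.5], [2.0]]
print(energy_penalty(10.0, pixels))        # 0.5
print(generator_loss(1.2, 10.0, pixels))   # 1.2 + 0.1 * 0.5 = 1.25
```

A perfectly energy-conserving shower makes the penalty vanish, so gradient descent on this combined objective pushes the generator toward showers whose summed deposits match the requested energy.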
From a series of simulated showers, the CaloGAN is tasked with learning the simulated data distributions of e+, γ, and π+ showers generated by Geant4 with a uniform energy spectrum, incident perpendicular to the center of a three-layer, heterogeneously segmented, liquid argon (LAr) calorimeter cube. The training dataset Nachman et al. (2017a) is represented in image format by three figures of dimensions 3×96, 12×12, and 12×6, each representing the shower energy depositions per pixel in one calorimeter layer. The energy per layer includes the active and inactive contributions. For applications such as calorimeter calibrations Aad et al. (2017), it is important to have the inactive component; in the future one could add separate layers for the inactive component or add a second step for dividing the energy per layer into the two components. The flexible CaloGAN architecture allows for a straightforward extension to related detector geometries that have more sampling layers or different cell sizes per layer Nachman et al. (2017b).
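A shower in this representation is simply three 2D grids of per-cell energies. The sketch below builds such a container using per-layer grid shapes of 3×96, 12×12, and 12×6, which together give the 504-dimensional pixel space mentioned later in the classification study; the helper names are hypothetical.

```python
def make_empty_shower():
    # per-layer (rows, cols) grids for the three calorimeter layers
    shapes = [(3, 96), (12, 12), (12, 6)]
    return [[[0.0] * cols for _ in range(rows)] for rows, cols in shapes]

def n_pixels(shower):
    # total read-out dimensionality across all layers
    return sum(len(row) for layer in shower for row in layer)

def total_energy(shower):
    # energy summed over every cell of every layer (active + inactive contributions)
    return sum(cell for layer in shower for row in layer for cell in row)

shower = make_empty_shower()
shower[0][1][48] = 5.0   # deposit 5 (arbitrary units) in layer 0
shower[1][6][6] = 2.5    # and 2.5 in layer 1
print(n_pixels(shower))      # 3*96 + 12*12 + 12*6 = 504
print(total_energy(shower))  # 7.5
```

The uneven shapes are the key modeling difficulty: each layer needs its own generator stream, with an attention-like mechanism passing information between them.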
Our analysis establishes that it is possible to generate three-dimensional electromagnetic showers in a multi-layer sampling LAr calorimeter with uneven spatial segmentation, while attempting to preserve spatio-temporal relationships among layers.
For performance evaluation, we choose application-driven methods focused on sample quality. A first qualitative assessment is accompanied by a quantitative evaluation based on physics-driven similarity metrics. This choice reflects the domain-specific procedure for Monte Carlo-data comparisons. However, it is also important to examine high-dimensional behavior, because the CaloGAN is not anchored by parameterized models the way traditional fast simulators are. While the adversarial classifier provides some high-dimensional validation, we also use particle classification performance. Visualization and validation remain key challenges for multi-dimensional generators parameterized by a neural network.
The average calorimeter deposition per voxel (Fig. 1) suggests that the learned generative models of e+, γ, and π+ showers capture aspects of the underlying physical processes. For photon showers, for instance, the mean per-layer cell variations show only small discrepancies in the first two layers, where most of the energy is deposited. This level of agreement is promising, but it is important to analyze more than the mean energy pattern to fully study the strengths and weaknesses of the proposed approach.
The CaloGAN-generated samples are checked for adequate diversity and lack of direct memorization of the Geant4 samples used for training. The nearest (by Euclidean distance) Geant4 image is found for each of a random selection of CaloGAN images in order to verify the desired characteristics (Fig. 2). The samples show strong inter- and intra-class diversity and no evidence of memorization, since the closest training images are not identical to the generated ones.
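The nearest-neighbor memorization check described above can be sketched in a few lines; images are treated as flattened vectors, and a distance of (near) zero to the training set would flag memorization. The function name is illustrative.

```python
import math

def nearest_training_distance(generated, training_set):
    """Euclidean distance from one generated image (flattened to a vector)
    to its closest training image; near-zero values suggest memorization."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dist(generated, t) for t in training_set)

# toy flattened images: the generated sample is close to, but not a copy of,
# the first training image, which is the desired behavior
training = [[0.0, 1.0, 2.0], [5.0, 5.0, 5.0]]
print(nearest_training_distance([0.1, 1.0, 2.0], training))  # ~0.1: similar, not memorized
```

Repeating this over a random selection of generated samples, and inspecting the matched pairs by eye as in Fig. 2, gives a simple qualitative diversity check.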
Geometrically and physically motivated shower shape variables Olive et al. (2014) are used as further validation and introspection into the capabilities of the CaloGAN to adequately model and capture non-linear functional representations of the simulated data distribution (Fig. 3). In fact, it is desirable for the CaloGAN to recover the target distribution of these 1D statistics.
The network is not shown any shower shape variable (only pixel values) at training time; it is therefore encouraging that the CaloGAN recovers the simulated data distribution for a variety of shower shapes across the three particle types. However, certain features of some distributions are not well described. This is a challenge for the future and will likely require improvements to the architecture and training procedure. Longer trainings of higher-capacity architectures have shown promise in rectifying some of these issues.
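For concreteness, one representative 1D statistic of this kind, an energy-weighted longitudinal depth, can be derived purely from pixel values. This particular definition is an illustrative stand-in, not necessarily one of the exact variables from Olive et al. (2014) used in Fig. 3.

```python
def layer_energies(shower):
    # total deposited energy per calorimeter layer
    return [sum(sum(row) for row in layer) for layer in shower]

def shower_depth(shower):
    """Energy-weighted mean layer index: sum_l l*E_l / sum_l E_l.
    Computed from pixels alone, like the validation variables in the text."""
    e = layer_energies(shower)
    return sum(l * el for l, el in enumerate(e)) / sum(e)

# toy 3-layer shower with most of its energy in the middle layer
toy = [[[1.0]], [[6.0]], [[3.0]]]
print(layer_energies(toy))  # [1.0, 6.0, 3.0]
print(shower_depth(toy))    # (0*1 + 1*6 + 2*3) / 10 = 1.2
```

Comparing the histogram of such a statistic over Geant4 showers against the same histogram over CaloGAN showers is exactly the kind of 1D validation shown in Fig. 3.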
Examining 1D statistics does not probe correlations between shower shapes or higher dimensional aspects of the probability distribution. One way to examine the full shower phase space is to study classification performance, as described in the next section.
When training a six-layer, fully-connected classification model on the 504-dimensional pixel space of the concatenated shower energy depositions across all calorimeter layers, no major classification degradation is observed for out-of-domain evaluation on the full simulation, i.e., when the network is trained on Geant4 samples but evaluated on CaloGAN samples. Specifically, although the classification accuracy reaches 99% when training and evaluating on CaloGAN showers – which points to an over-differentiation among particle types in the CaloGAN dataset – in both discrimination tasks the network trained on Geant4 images shows no accuracy decrease in the former task, and only a 2% decrease in the latter, when compared to the classifier tested on CaloGAN samples. The stability of the accuracy metric implies that the CaloGAN succeeds at representing at least as much variation among showers initiated by different particles as is necessary to classify them using the same features in Geant4. Training on CaloGAN and testing on Geant4 does show significant degradation, indicating that the GAN is inventing new class-dependent features or underrepresenting class-independent features. While percent-level variations may be important for some applications, using classification as a generator diagnostic is an important tool for exposing the modeling of interclass shower variations.
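The train-on-A / test-on-B protocol above can be sketched with a deliberately simple stand-in classifier. The paper uses a six-layer fully-connected network; here a nearest-centroid classifier and tiny made-up feature vectors illustrate only the evaluation logic, not the actual model or data.

```python
def centroids(samples, labels):
    # mean feature vector per class
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(x, cents):
    def d2(a, b):
        return sum((u - w) ** 2 for u, w in zip(a, b))
    return min(cents, key=lambda y: d2(x, cents[y]))

def accuracy(train_x, train_y, test_x, test_y):
    # train on one sample source, evaluate on another
    cents = centroids(train_x, train_y)
    return sum(predict(x, cents) == y for x, y in zip(test_x, test_y)) / len(test_x)

# "Geant4-like" training data and a slightly shifted "GAN-like" test set (toy numbers)
g4_x = [[0.0, 1.0], [0.2, 0.9], [1.0, 0.0], [0.9, 0.1]]
g4_y = ["gamma", "gamma", "eplus", "eplus"]
gan_x = [[0.1, 1.1], [1.1, 0.1]]
gan_y = ["gamma", "eplus"]
print(accuracy(g4_x, g4_y, gan_x, gan_y))  # 1.0: no degradation in this toy case
```

Running the comparison in both directions, Geant4→CaloGAN and CaloGAN→Geant4, is what exposes the asymmetric degradation reported above.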
Directly generating the deposited energy per calorimeter cell, rather than simulating the particle dynamics, renders the model’s time complexity invariant to nominal energy, whereas Geant4 shower simulation runtime increases significantly with energy. The CaloGAN therefore affords sizable simulation-time speedups compared to Geant4. All benchmarks are performed on Intel Xeon® 2.6 GHz processors for CPU time and a single NVIDIA® K80 for GPU time. When simulating a single shower in a uniform energy range between 1 GeV and 100 GeV, the CaloGAN is already orders of magnitude faster than Geant4 on both CPU and GPU. However, when batching is utilized, the CaloGAN throughput improves significantly further – with a batch size of 1024 (not unrealistic given the embarrassingly parallel nature of EM showering), per-shower generation is faster by two to three orders of magnitude on CPU and up to about five orders of magnitude on GPU.
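The effect of batching on throughput is simple amortization: one forward pass serves a whole batch, so the per-shower cost is the batch latency divided by the batch size. The latencies below are made-up round numbers for illustration only, not the measured benchmarks.

```python
def per_sample_time_ms(batch_latency_ms, batch_size):
    # amortized generation time per shower when a batch shares one forward pass
    return batch_latency_ms / batch_size

def speedup(baseline_ms_per_shower, batch_latency_ms, batch_size):
    # effective speedup over a per-shower baseline such as full simulation
    return baseline_ms_per_shower / per_sample_time_ms(batch_latency_ms, batch_size)

# illustrative numbers: a 1000 ms/shower baseline vs a 50 ms latency for a 1024-batch
print(per_sample_time_ms(50.0, 1024))  # ~0.0488 ms per shower
print(speedup(1000.0, 50.0, 1024))     # 20480.0
```

This is why the quoted speedup grows with batch size until the hardware is saturated: the fixed per-batch overhead is shared across more and more showers.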
This letter demonstrates that Generative Adversarial Networks represent a powerful new tool for efficient simulation. Our ability to infuse physics domain knowledge into the neural network demonstrates the flexibility and extensibility of the method for field-specific applications and explicit mismodeling mitigation.
Prior to this work, the prospect of a GAN-based calorimeter simulation had generated considerable excitement within the high energy physics community. The availability and performance of the CaloGAN has attracted further interest as a concrete and publicly available demonstration of the power and drawbacks of a GAN-based calorimeter simulation. In addition to the applicability within individual experiments, variations of the CaloGAN are also being studied as a generic tool for future Geant software versions. While the CaloGAN is currently structured as a fast simulation tool, in the future it could also be trained on testbeam data to replace or augment a full simulation tool.
Future work will focus on incorporating the most recent cutting-edge innovations from the GAN literature to stabilize the training procedure and improve convergence to optimal solutions Gulrajani et al. (2017); Nowozin et al. (2016); Heusel et al. (2017); Berthelot et al. While our primary effort will be to improve and maintain this technique for event simulation at the LHC, this neural-network approach retains generalization power to other fields in which computationally expensive simulation inhibits result productivity.
This work was supported in part by the Office of High Energy Physics of the U.S. Department of Energy under contracts DE-AC02-05CH11231 and DE-FG02-92ER40704. The authors would like to thank Wahid Bhimji, Zach Marshall, Mustafa Mustafa, and Prabhat, for helpful conversations.
K. Xu et al., “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention,” Proceedings of Machine Learning Research, Vol. 37 (PMLR, Lille, France, 2015) pp. 2048–2057.