MINERvA (Main Injector Experiment for ν-A) is a leading-edge program at Fermi National Accelerator Laboratory. The primary focus of the MINERvA experiment is to understand neutrino properties and reactions. Neutrinos are subatomic particles that rarely interact with normal matter: they interact only via the weak force and gravity, and they have extremely small mass. The study of neutrinos may help physicists understand the matter-antimatter imbalance in the universe. However, understanding their interactions with nuclear matter poses significant challenges, as they probe aspects of nuclear structure that are not accessible with electrons, photons, or protons.
In the MINERvA experiment, the detector is exposed to the Neutrinos at the Main Injector (NuMI) neutrino beam. The detector records both energy and timing information that can be used to determine where neutrino-nucleus interaction events occur. Precise determination of the interaction vertex, also known as vertex reconstruction, is required to identify the target nucleus in MINERvA.
Fig.1 illustrates a simple detector layout. We omit details of the detector and the physics measurements obtained; readers can refer to [1, 5] for details. The core of the detector consists of a series of alternating active and passive target regions along the beam direction. The passive targets are solid layers of different materials or their combinations, e.g., carbon, iron, lead, and tanks of liquid helium and water. Note that in the datasets considered in this work, the liquid/water target is empty. The active targets are plastic scintillator (a hydrocarbon) modules. Each active module contains a pair of planes with scintillator strips aligned in one of three orientations: X, U, or V. Strips in X planes are oriented vertically, and U and V strips are oriented at angles relative to X. Each module contains either a U or V plane, followed by an X, such that the pattern is interleaved "UXVXUXVX," etc. Energy and timing values collected from the detector are mapped to pixel values in an image, which can be used for subsequent vertex reconstruction.
A key issue associated with data from this experiment and scientific data in general is that it is often extremely difficult to obtain labels. For example, there may be only a handful of experts in the world capable of labeling experimental data effectively, and even then, it may be impossible to establish ground-truth labels for the data that multiple experts will agree is correct. As such, much of the scientific data that can be used for training is generated using simulations. For this study, millions of simulated neutrino-nucleus scattering events were created and represented as images. In this case, deep learning approaches to analyze the data can help physicists in quickly interpreting results from the experiments.
The contributions of this work are: (1) the incorporation of both energy and time lattices in one network to boost classification accuracy, (2) the utilization of transfer learning to improve performance on regression of absolute position, and (3) a new network topology to combine three views (X, U, V) and reduce model size.
2 Data Description
The dataset used for training, validation, and testing consisted of simulated events. Neutrino-nucleus interactions were simulated using the GENIE Neutrino Monte Carlo Generator, and the propagation of the resulting radiation through the bulk detector was simulated using the Geant4 toolkit. For each event, there is both an energy lattice and a time lattice, each of which consists of three views: an X-view, a U-view, and a V-view. The images from the X-view are 127×94 pixels, while the others (U-view and V-view) are 127×47 pixels. Each pixel in the energy lattice gives information about the average energy value over the detection event at that point, while each pixel in the time lattice records the timing information in nanoseconds relative to when the interaction is predicted to occur.
There are three scales at which we can attempt to predict the vertex location. The largest scale is a segment. The detector can be split into 11 segments, each of which consists of multiple planes within the detector. At a smaller scale, the detector can be split into individual planes. Planes are thin, horizontally stacked bundles of active sensors. They are oriented roughly perpendicular to the neutrino beam. Finally, the vertex location can be defined as the absolute measured position (Z) inside the detector.
3 Previous Works
The initial approach to vertex reconstruction for this dataset was to identify linear tracks and calculate the intersection points of multiple tracks as the vertex. This method fails for certain types of events; in particular, it is difficult to identify the vertex when tracks are non-linear, or to differentiate individual tracks when the number of tracks is large. A previous work applied deep learning (specifically convolutional neural networks) to the energy lattice of the data (as images) to improve classification accuracy. Another previous work applied spiking neural networks to the vertex reconstruction problem using the time lattice only, achieving comparable results to the convolutional approach for a single view of the data, which indicated that the timing data also includes information relevant to the vertex reconstruction problem. It is worth noting that both of these approaches utilized an older version of the dataset with a reduced input size as compared with the dataset used here. Another work explored the use of Domain Adversarial Neural Networks for controlling physics modeling bias. In this work, we seek to combine both the energy lattice and time lattice in a convolutional neural network implementation. The neural network model is designed to predict the segment and absolute position (Z) of the neutrino events.
4.1 Model for Segment Classification
To obtain a smaller network and to alleviate the vanishing-gradient problem, we designed our network for segment classification, inspired by ResNet and DenseNet, as shown in Fig.3(a).
A rectangle represents a series of operations. For example, a rectangle labeled B,C,P means that batch normalization (B), convolution (C), and max pooling (P) are successively applied to the input tensors. All convolutions (C) in the network are configured with kernel_size=3, padding=1, and stride=1, and a ReLU activation function is applied. The kernel size and stride for the max pooling (P) are 2. The octagon below a rectangle indicates the output tensor size, while an octagon on the flow indicates the concatenated or reformed tensor size. The tensor size format is (C, H, W), where C is the number of channels, H is the height, and W is the width. For example, the input tensor e,t(u,v,x) has 2 groups (energy and timing) of 3 views (U, V, X), thus 6 channels of 127-by-94 matrices.
A black dot indicates the concatenation of two tensors by channel. There are three blocks (B1, B2, and B3) in the network. Within each block, the input of each rectangle is a concatenation of two tensors: the output of a previous rectangle and a shortcut identical tensor. For each block, we also apply a direct-connect from previous blocks. For block B2, the direct-connect is a convolution (C2) with a kernel size and stride of 2. So the input for B2 is the concatenation of two (24,31,23)-tensors, i.e., a tensor of size (48,31,23). For block B3, two direct-connects (C2 and C4) are used. C4 is a convolution with a kernel size and stride of 4. Thus, the input tensor size for block B3 is (72,15,11). Note that there is no activation function for the convolutions of the three direct-connects. For the final classification layer, we reshape the output tensor (24,7,5) into a vector of 840 elements, and a fully-connected layer is employed.
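As a concrete illustration, the following is a minimal PyTorch sketch of one dense-style block with a channel-wise shortcut concatenation and an activation-free stride-2 direct-connect; the channel counts and number of units per block here are simplified assumptions, not the exact published configuration.

```python
import torch
import torch.nn as nn

class BCUnit(nn.Module):
    """Batch normalization (B) -> 3x3 convolution (C, padding 1) -> ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class DenseBlock(nn.Module):
    """Each unit's input is the concatenation (by channel) of the block
    input shortcut and the previous unit's output."""
    def __init__(self, in_ch, growth):
        super().__init__()
        self.u1 = BCUnit(in_ch, growth)
        self.u2 = BCUnit(in_ch + growth, growth)

    def forward(self, x):
        y1 = self.u1(x)
        y2 = self.u2(torch.cat([x, y1], dim=1))  # shortcut concatenation
        return y2

x = torch.randn(2, 6, 127, 94)        # e,t(u,v,x): 6 channels of 127x94
block = DenseBlock(6, growth=24)
# Direct-connect: kernel size and stride of 2, no activation function.
direct = nn.Conv2d(24, 24, kernel_size=2, stride=2)
out = block(x)
print(out.shape)                       # torch.Size([2, 24, 127, 94])
```

Downsampling the block output through `direct` halves each spatial dimension, which is what lets later blocks concatenate features computed at coarser scales.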
4.2 Model for Z Regression
Because we use the same data for Z regression as we do for segment classification, it is quite natural to employ a transfer learning approach. Fig.3(b) shows the model used for Z regression. We take a well-trained segment classification network, freeze all of its convolutional layers, and add two fully-connected layers (840-512-1) for regression.
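A minimal PyTorch sketch of this setup follows. The `trunk` below is only a stand-in for the trained convolutional layers of Fig.3(a) (its pooling to a (24,7,5) feature map is an assumption for demonstration); what it shows is the freezing of the trunk and the 840-512-1 regression head.

```python
import torch
import torch.nn as nn

# Stand-in for a trained segment-classification feature extractor.
trunk = nn.Sequential(
    nn.Conv2d(6, 24, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d((7, 5)),   # assumed reduction to (24, 7, 5)
    nn.Flatten(),                   # -> 24 * 7 * 5 = 840 features
)
for p in trunk.parameters():
    p.requires_grad = False         # freeze all convolutional layers

# Two fully-connected layers (840-512-1) for Z regression.
head = nn.Sequential(nn.Linear(840, 512), nn.ReLU(), nn.Linear(512, 1))
model = nn.Sequential(trunk, head)

x = torch.randn(4, 6, 127, 94)
z_pred = model(x)
print(z_pred.shape)                 # torch.Size([4, 1])
```

Only `head`'s parameters receive gradients during regression training, so the optimizer would be given `head.parameters()` rather than `model.parameters()`.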
5.1 Experimentation Details
We separated the full set of simulated events into three parts for testing, validation, and training. Each data sample contains three views (X, U, V) for timing and energy, i.e., a total of six views. The size of the data for the X view is 127×94, while the size for the U and V views is 127×47. We repeated the U and V views on the second axis to obtain a size of 127×94. Then we concatenated the six views to obtain a tensor (6,127,94) as input. The original data was in float32 format. We first normalized the data by view and converted it to uint8 format for fast training access. We also calculated the mean and standard deviation on the training set and applied whitening to the input data; thus, the mean and standard deviation for all views are 0 and 1, respectively.
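The preprocessing pipeline above can be sketched in NumPy as follows. The half-width U/V shape (127×47) is inferred from the repetition to the X-view width, and the min-max scaling used for the uint8 conversion is an assumption; the whitening step matches the text.

```python
import numpy as np

rng = np.random.default_rng(0)
x_view = rng.random((127, 94)).astype(np.float32)
u_view = rng.random((127, 47)).astype(np.float32)   # assumed raw U/V shape

# Repeat U (and V) along the second axis to match the X-view width.
u_wide = np.repeat(u_view, 2, axis=1)               # -> (127, 94)

def to_uint8(v):
    """Per-view normalization to uint8 for fast training access
    (min-max scaling is an assumption for illustration)."""
    lo, hi = v.min(), v.max()
    return ((v - lo) / (hi - lo) * 255).astype(np.uint8)

views = np.stack([to_uint8(x_view), to_uint8(u_wide)]).astype(np.float32)

# Whitening: subtract the (training-set) mean, divide by the std,
# so the whitened data has mean 0 and standard deviation 1.
mean, std = views.mean(), views.std()
white = (views - mean) / std
print(white.mean(), white.std())   # approximately 0 and 1
```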
In training the classification network, we use an SGD optimizer. Training takes 20 epochs: the learning rate is 0.1 for the first 10 epochs, 0.01 for the following 5 epochs, and 0.001 for the last 5 epochs. SGD is configured with a momentum of 0.8 and a weight decay of 5e-4, and the batch size is 256. We then trained the regression network for 8 epochs with a learning rate of 0.001. Two NVIDIA TITAN X (Pascal) GPUs were used with data parallelism for training. The toolkit used is PyTorch.
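The optimizer and step-wise learning-rate schedule can be expressed compactly in PyTorch; the model below is a stand-in, and the use of `MultiStepLR` is an assumption that reproduces the stated schedule.

```python
import torch
import torch.nn as nn

model = nn.Linear(840, 11)   # stand-in model (11 segment classes)

# SGD with momentum 0.8 and weight decay 5e-4, as in the text.
opt = torch.optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.8, weight_decay=5e-4)
# lr = 0.1 for epochs 0-9, 0.01 for epochs 10-14, 0.001 for 15-19.
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[10, 15],
                                             gamma=0.1)
lrs = []
for epoch in range(20):
    # ... one epoch of training with batch size 256 would run here ...
    lrs.append(opt.param_groups[0]["lr"])
    sched.step()
print(lrs[0], lrs[10], lrs[15])
```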
5.2 Overall Results with Both Energy and Timing Data
|Model Size|14.5 MB|-|0.488 MB|
|Training Time|10 hrs|-|2.5 hrs|
*These results were created by re-implementing the network from a previous work and evaluating it against the updated dataset.
We compare our work with a previous work in Table 1. For segment classification, our model (shown in Fig.3(a)) achieves an accuracy of 98.09% on the testing dataset, 4.00% higher than that of the previous work. For the Z regression, the coefficient of determination (R²) of our model (as shown in Fig.3(b)) is 0.9919, higher than that of the previous work (0.96). Additionally, the model size (the size of the trained model file) and the training time of our model are smaller than those of the previous work.
Fig.4(a) shows a heatmap of the confusion matrix of the segment classification with both energy and timing data. We can see that our model performs well for almost all segments except Segment 9. We attribute this to the imbalance of the training data, in which only 0.47% of the data is labeled as Segment 9. We show a scatter plot of the Z predicted by our regression model (with both timing and energy data) against the true Z in Fig.4(b). The standard deviation of the difference between the predicted Z and the true Z is 115.61 mm.
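The two diagnostics in Fig.4 can be computed with a short NumPy sketch. The labels and residuals below are synthetic stand-ins rather than the paper's test-set outputs; only the bookkeeping (confusion-matrix accumulation and residual standard deviation) mirrors what is reported.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 11                                        # 11 segments

# Synthetic predictions: mostly correct, with some injected errors.
true_seg = rng.integers(0, n_classes, size=1000)
pred_seg = true_seg.copy()
pred_seg[:50] = rng.integers(0, n_classes, size=50)

# Confusion matrix: rows are true segments, columns are predictions.
conf = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(conf, (true_seg, pred_seg), 1)
accuracy = conf.trace() / conf.sum()

# Z residuals: standard deviation of (predicted Z - true Z).
true_z = rng.uniform(0, 9000, size=1000)              # mm, illustrative
pred_z = true_z + rng.normal(0, 115.0, size=1000)     # ~115 mm spread
resid_std = (pred_z - true_z).std()
print(conf.sum(), accuracy, resid_std)
```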
5.3 Classification and Regression with Only Timing Data and Only Energy Data
While we have both timing and energy data, we are also interested in classification and regression with only timing or only energy data. In previous work, only energy data was used.
For segment classification with only timing or only energy data, we use the same network as shown in Fig.3(a), where the input is a tensor of the X, U, V views of the timing or energy data, but not both simultaneously as in the previous section. Thus, the input tensor size is (3, 127, 94). Table 2 shows the segment classification accuracy of our model when both timing and energy data were used (combined) and when only timing or only energy data was used. As shown in the table, when only timing or only energy data was used, the accuracy is slightly degraded. However, the energy data clearly contributes more: the accuracy when only energy data was used is only 0.13% less than when both types of data are used.
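Forming the single-lattice inputs amounts to slicing channels out of the combined tensor, as in the sketch below; the channel ordering (energy views first, then timing views) is an assumption for illustration.

```python
import numpy as np

# Combined input: 2 groups (energy, timing) x 3 views (X, U, V).
event = np.zeros((6, 127, 94), dtype=np.float32)

energy_only = event[:3]   # assumed: first three channels are energy
timing_only = event[3:]   # assumed: last three channels are timing
print(energy_only.shape, timing_only.shape)   # (3, 127, 94) (3, 127, 94)
```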
|Combined||Only Timing||Only Energy|
To examine the model's performance in more detail when only timing or only energy data is used, Fig.5 shows heat maps of the difference between the confusion matrix obtained with only timing or only energy data and that obtained with both. A warmer-colored square on the diagonal is better, while a warmer-colored square off the diagonal indicates misclassification. Again, we find that the energy data contributes more to segment classification.
Finally, we also compare the Z regression when only timing or only energy data was used, as shown in Table 3. While the R² values for the three scenarios are almost the same, both timing and energy data should be used for the most accurate Z regression.
In this work we present a deep learning approach for vertex reconstruction in neutrino interaction data. We demonstrate state-of-the-art results on this task, presenting a model that achieves higher accuracy on the dataset while also reducing both the required training time and the model size. For future work, we plan to explore the use of recurrent neural networks for capturing spatio-temporal features from within the event timing.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Robinson Pino, program manager, under contract number DE-AC05-00OR22725.
We would like to thank the MINERvA collaboration for the use of their simulated data and for many useful and stimulating conversations. MINERvA is supported by the Fermi National Accelerator Laboratory under US Department of Energy contract No. DE-AC02-07CH11359 which included the MINERvA construction project. MINERvA construction support was also granted by the United States National Science Foundation under Award PHY-0619727 and by the University of Rochester. Support for participating MINERvA physicists was provided by NSF and DOE (USA), by CAPES and CNPq (Brazil), by CoNaCyT (Mexico), by CONICYT (Chile), by CONCYTEC, DGI-PUCP and IDI/IGIUNI (Peru), and by Latin American Center for Physics (CLAF).
This research was supported in part by an appointment to the Oak Ridge National Laboratory ASTRO Program, sponsored by the U.S. Department of Energy and administered by the Oak Ridge Institute for Science and Education.
-  L. Aliaga, L. Bagby, B. Baldin, A. Baumbaugh, A. Bodek, R. Bradford, W.K. Brooks, D. Boehnlein, S. Boyd, H. Budd, et al., “Design, calibration, and performance of the MINERvA detector,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 743, pp. 130–159, 2014.
-  R Acciarri, MA Acero, M Adamowski, C Adams, P Adamson, S Adhikari, Z Ahmad, CH Albright, T Alion, E Amador, et al., “Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE) Conceptual Design Report, Volume 4 The DUNE Detectors at LBNF,” ArXiv e-prints, p. arXiv:1601.02984, Jan. 2016.
-  Ulrich Mosel, “Neutrino Interactions with Nucleons and Nuclei: Importance for Long-Baseline Experiments,” Ann. Rev. Nucl. Part. Sci., vol. 66, pp. 171–195, 2016.
-  P Adamson, K Anderson, M Andrews, R Andrews, I Anghel, D Augustine, A Aurisano, S Avvakumov, DS Ayres, B Baller, et al., “The NuMI neutrino beam,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 806, pp. 279–306, 2016.
-  Adam M Terwilliger, Gabriel N Perdue, David Isele, Robert M Patton, and Steven R Young, “Vertex reconstruction of neutrino interactions using deep learning,” in Neural Networks (IJCNN), 2017 International Joint Conference on. IEEE, 2017, pp. 2275–2281.
-  Costas Andreopoulos, Christopher Barry, Steve Dytman, Hugh Gallagher, Tomasz Golan, Robert Hatcher, Gabriel Perdue, and Julia Yarba, “The GENIE neutrino Monte Carlo generator: Physics and user manual,” arXiv preprint arXiv:1510.05494, 2015.
-  Sea Agostinelli, John Allison, K Amako, John Apostolakis, H Araujo, P Arce, M Asai, D Axen, S Banerjee, G Barrand, et al., “Geant4: a simulation toolkit,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 506, no. 3, pp. 250–303, 2003.
-  Catherine D Schuman, Thomas E Potok, Steven Young, Robert Patton, Gabriel Perdue, Gangotree Chakma, Austin Wyer, and Garrett S Rose, “Neuromorphic computing for temporal scientific data classification,” in Proceedings of the Neuromorphic Computing Symposium. ACM, 2017, p. 2.
-  Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky, “Domain-adversarial training of neural networks,” J. Mach. Learn. Res., vol. 17, no. 1, pp. 2096–2030, Jan. 2016.
-  GN Perdue, A Ghosh, M Wospakrik, F Akbar, DA Andrade, M Ascencio, L Bellantoni, A Bercellie, M Betancourt, GFR Caceres Vera, et al., “Reducing model bias in a deep learning classifier using domain adversarial neural networks in the MINERvA experiment,” Journal of Instrumentation, vol. 13, no. 11, p. P11020, 2018.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
-  Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger, “Densely connected convolutional networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017, pp. 2261–2269.