Application of a Convolutional Neural Network for image classification to the analysis of collisions in High Energy Physics

The application of deep learning techniques using convolutional neural networks to the classification of particle collisions in High Energy Physics is explored. An intuitive approach to transform physical variables, like momenta of particles and jets, into a single image that captures the relevant information, is proposed. The idea is tested using a well known deep learning framework on a simulation dataset, including leptonic ttbar events and the corresponding background at 7 TeV from the CMS experiment at LHC, available as Open Data. This initial test shows competitive results when compared to more classical approaches, like those using feedforward neural networks.




1 Introduction

Deep learning with convolutional neural networks (CNNs) has revolutionized the world of computer vision and speech recognition over the last few years, yielding unprecedented performance in many machine learning tasks and opening a wide range of possibilities [1].

In this paper, we explore a particular application of CNNs, image classification, in the context of analysis in experimental High Energy Physics (HEP). Many studies in this field, including the search for new particles, require solving difficult signal-versus-background classification problems, so machine learning approaches are often adopted. For example, Boosted Decision Trees [2] and Feedforward Neural Networks [3] are widely used in this context, but the latest state-of-the-art methods have not yet been fully explored and can shed new light on the torrent of data being generated by experiments like those at the Large Hadron Collider (LHC) at CERN.

In a first approach we have tested the use of convolutional networks for the classification of collisions at the LHC using Open Data Monte Carlo samples. The Compact Muon Solenoid (CMS) experiment [4] has been a pioneer within the LHC in making public the collision data collected by the detector, opening them to the international community in order to carry out new analyses or to use them for training activities. CMS Open Data is available from the CERN Open Data portal, and we also have a dedicated portal developed in our center.

In order to apply deep learning techniques conceived for image classification to the analysis of these collisions, we propose an innovative visual representation of the different physics observables. We train a convolutional neural network on these images, which represent simulated proton-proton collisions, to distinguish a particular physics process of interest. In our example, we try to discriminate the production of a top anti-top quark pair (ttbar) from other processes (background).

2 Deep Learning Techniques for Image Classification

2.1 Deep Learning Architecture

The technique of image classification using CNNs falls within the scope of deep learning. Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. The performance of these methods depends heavily on the representation of the data and on the algorithm used [5].

Following previous successful work in other fields within our group (like plant identification [6]), we have selected as CNN architecture the Residual Network (ResNet) model [7], which won the ImageNet Large Scale Visual Recognition Challenge in 2015 [8].

The architecture of the ResNet model used consists of a stack of similar (so-called residual) blocks, each block being in turn a stack of convolutional layers. The innovation of this architecture is that the output of a block is also connected to its own input through an identity mapping path. This alleviates the vanishing gradient problem, improving the backward flow of gradients in the network and allowing much deeper networks to be trained. We choose a model with 50 convolutional layers (a.k.a. ResNet50).

As deep learning framework we use the Lasagne module [9] built on top of Theano [10, 11]. We initialize the weights of the model with the pretrained weights on the ImageNet dataset provided in the Lasagne Model Zoo. We train the model for 40 epochs on different top performing GPUs using Adam [12] as learning rule. During training we apply standard data augmentation (such as shear, translation, mirroring, etc.), and after applying the transformations we downscale the image to the ResNet standard input size (224×224 pixels); the code is available online.
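A toy sketch of such an augmentation step (not the paper's actual Lasagne pipeline: the shear transform is omitted, and the translation is implemented with wrap-around for simplicity):

```python
import numpy as np

def augment(image, rng):
    """Toy data augmentation: random horizontal mirror plus a random
    translation (implemented here with wrap-around via np.roll)."""
    if rng.random() < 0.5:
        image = image[:, ::-1]                 # mirror along the width axis
    shift = rng.integers(-10, 11, size=2)      # random pixel shift in (y, x)
    return np.roll(image, tuple(shift), axis=(0, 1))
```

Both operations are label-preserving for this representation, since mirroring and shifting an event image correspond to symmetries of the detector coordinates.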

The preprocessing of the samples and the image generation have been done in Python (code available online). The images have been generated by extracting the simulated collision data from a dedicated JSON file containing the main information on the physics observables at play. The JSON has been produced using a C++ framework (code available online) based on a template provided by the Open Data group, to which the JSON generation part has been added. An example of the JSON file format used (short.json), together with the instructions to run the code, can also be found in the repository.

For didactic purposes, we describe in what follows some of the details of the image classification process.

2.2 Image classification process

The overall pipeline in CNNs is similar to that of standard NNs, except that in this case we feed in an image represented by a 3-dimensional tensor of shape 224×224×3 (image height × image width × RGB value). As in most machine learning workflows, we divide the image data into three splits (train|val|test) with roughly (80|10|10)% of the images. Their respective roles are:

  • Training set
    The data in this set is used to tweak the parameters of the net through an optimization process described below. We define the duration of the training process by the number of times (epochs) we visit the whole training set. It is important to have a balanced dataset (i.e. a comparable number of images in each class) so that the abundance of a class is not a determining factor when classifying new images (i.e. always predicting the most abundant class).

  • Validation set
    These data are used only as a check of how the trained net would perform on unseen data; they are never used to actually compute the net weights. They are useful for checking that the optimization process is generalizing correctly rather than overfitting, and for eventually trying different hyperparameters on another iteration of the training process (for example, varying the number of layers or the number of training epochs).

  • Test set
    These are holdout data that one should not use until the very end of the workflow, to finally assess the performance of the net. Keeping this as a set separate from the val set makes sense so as not to overfit the val set while tweaking the hyperparameters.

Each optimization iteration performed during the training/learning phase consists of two steps:

  • The forward pass, where we feed an image x to the net and compute the upper layer (a vector of length N, where N is the number of classes) using a score function f(x; W), where W are the weights of the function. If we apply the softmax operation to this upper layer vector, each element can be seen as the probability of the image belonging to that particular class. Using this vector and our knowledge of the correct class y, we can compute the loss L, a scalar value that measures how far the actual prediction is from the true label. The higher the loss, the worse the prediction, so the whole optimization process aims at finding the set of optimal weights that leads to a minimum of the loss function. As in most classification problems, here we use the softmax cross-entropy as loss function.

  • The backward pass, where we compute the gradient of the loss with respect to the net’s weights using the so-called backpropagation algorithm [13]. We then use the computed gradient to update the value of the weights according to a learning rule (in our case Adam [12]). Because the loss is averaged over a batch of images, typically composed of tens of images (depending on the GPU memory), and not over the full dataset, this algorithm is called stochastic gradient descent.

The whole pipeline is shown in Figure 1. Once the optimization process is done for a given number of epochs, we freeze the values of the weights and are ready to perform inference with the network. Since during training we have to perform both of these passes, most of the computational effort is spent there, while at test time we only perform the forward (inference) pass, which is therefore much quicker.

Figure 1: Diagram of the optimization pipeline in the neural network. A given image (described as a tensor of shape 224×224×3) of a known predefined class is given as input to the net. Through a function f, an N-dimensional vector is computed, indicating the probability of the image belonging to each of the predefined classes. Using this vector and the true label y, a scalar loss value L is generated. From there a backward signal is sent, gradually computing the gradients of the weights so as to minimize the value of the loss L.
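The forward/backward iteration just described can be sketched with a toy linear score function f(x; W) = xW (standing in for the deep network) and a plain gradient-descent update in place of Adam:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract the max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_step(W, x, y, lr=0.1):
    """One optimization iteration: forward pass (scores -> probabilities ->
    softmax cross-entropy loss) and backward pass (gradient of the loss
    w.r.t. W), followed by a plain gradient-descent update."""
    batch = x.shape[0]
    probs = softmax(x @ W)                             # forward pass
    loss = -np.log(probs[np.arange(batch), y]).mean()  # cross-entropy loss
    dscores = probs.copy()                             # backward pass:
    dscores[np.arange(batch), y] -= 1.0                # d(loss)/d(scores)
    dW = x.T @ dscores / batch
    return W - lr * dW, loss
```

Iterating `train_step` over batches of images drives the loss down; a real framework applies the same recipe through all the convolutional layers via backpropagation.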

3 Representing Particle Collisions as Images

The main innovation of this work is the way in which the collisions are represented as images. Collisions, also known as events, recorded in a HEP experiment by a detector like CMS [14], are described by a set of measured variables corresponding to the particles detected: the momenta of the muons, electrons, photons and hadrons produced in the collision of the two accelerated protons, determined by the different subdetectors (tracking system, calorimeters, muon system, etc.). During the global reconstruction of the event, new variables, like the definition and momentum of jets, are also introduced. The analysis of events uses these sets of variables to discriminate the events corresponding to the physics analysis channel of interest from the background. So the most relevant observables in a collision correspond to the momenta (energy and direction) of the reconstructed particles, and also of jets or other global variables in the event, like the missing energy.

As already stated, when generating the images for classification, the design of the event representation is crucial. All the observables are to be represented using a canvas of dimension 224×224 pixels.

As explained below, in our approach each particle or physics object is represented as a circumference with a radius determined by its energy, centered in the canvas at a position corresponding to its momentum direction. The momentum direction uses as coordinates the pseudorapidity η, related to the polar angle, and the azimuthal angle φ, which are standard choices in experiments with cylindrical symmetry. Additionally, we associate the color of the circumference with the type of particle or physics object represented.

There are several considerations that have been taken into account when proposing this representation; they are briefly discussed in what follows.

3.1 Implementing the representation of physics objects

We have considered the following points to define the transformation of the physics objects into their representation as an image:

  • Resolution
    Each physics object will be represented by a circumference with a radius defined as a function of its energy. As it is drawn using a discrete number of pixels, the scale must be chosen to accommodate the different ranges of energies while preserving the energy resolution as much as possible.

  • Out of range representation
    When increasing the scale, the low energy objects can be better differentiated, but circumferences corresponding to high energy objects could exceed the canvas size, causing a misinterpretation. This is the main reason to discard a linear dependence on the energy.

  • Overlapping
    If particles have relatively close η and φ values for their momentum directions, the corresponding representations may overlap. This is the main reason to choose circumferences instead of full circles for their representation. One future direction could be to look at full circles with some transparency and see how this compares with the current approach.

The use of a logarithmic scale to transform the energy E of the physics object into a radius r for the circumference representing it allows us to reach a balance between the previous factors:

    r = K log(E)

where K is an effective scale factor that allows us to reconcile the previous points for the collisions being studied, providing the conversion into pixel units.

The center of the circumference, also in pixel units, is obtained using conversion factors along the x axis for η and along the y axis for φ, corresponding to the chosen range in η and to the range [-π, π] for φ.
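This mapping can be sketched as follows; the values of the scale factor K and of the η range are illustrative assumptions, since the paper does not quote them:

```python
import math

# Hypothetical parameters: the actual scale factor K and eta range used
# in the paper are not quoted, so these are illustrative values only.
CANVAS = 224                     # canvas size in pixels
ETA_RANGE = (-4.0, 4.0)          # assumed pseudorapidity range
PHI_RANGE = (-math.pi, math.pi)  # azimuthal angle range
K = 20.0                         # effective scale factor (pixels)

def object_to_circle(energy, eta, phi):
    """Map a physics object to (x, y, radius), all in pixel units."""
    radius = K * math.log(energy)   # logarithmic energy-to-radius scale
    x = CANVAS * (eta - ETA_RANGE[0]) / (ETA_RANGE[1] - ETA_RANGE[0])
    y = CANVAS * (phi - PHI_RANGE[0]) / (PHI_RANGE[1] - PHI_RANGE[0])
    return x, y, radius
```

An object at η = 0, φ = 0 lands at the center of the canvas, and doubling its energy adds a fixed number of pixels (K log 2) to its radius.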

Figure 2 presents a diagram of this representation for a single particle (a muon). More complex examples will be shown later.

Figure 2: Muon image diagram. The muons are represented as circumferences with radius proportional to the logarithm of the energy. The horizontal position of the particle corresponds to the pseudorapidity η within its chosen range. The vertical position shows the azimuthal angle φ within the range [-π, π].

4 Physical variables: test on dimuon objects

After choosing the previous representation, a first basic test was done using dimuon objects, to check whether the neural network could separate different invariant mass patterns, a key feature in the reconstruction of collisions.

Anticipating the positive outcome of the test, we recall the direct relationship expected between the invariant mass of the dimuon object and the position and size of the circumferences used to represent the muons. The invariant mass M of a system of two particles 1 and 2 is

    M^2 = (E_1 + E_2)^2 - |p_1 + p_2|^2

(with p_i the momentum vectors), or directly in terms of the variables used to define the representation of the particles as circumferences (the pseudorapidity η, the azimuthal angle φ and the transverse momentum p_T), in the massless approximation,

    M^2 = 2 p_T1 p_T2 [cosh(η_1 - η_2) - cos(φ_1 - φ_2)]
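The second expression translates directly into code (a sketch using the massless approximation stated above):

```python
import math

def dimuon_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of a two-particle system in the massless
    approximation (an excellent one for muons at these energies)."""
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(m2)
```

For example, two back-to-back muons (Δφ = π) at η = 0 with equal transverse momentum p_T give M = 2 p_T, so a Z-like mass of about 91 GeV corresponds to two muons of roughly 45.6 GeV each.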
To test the performance of the CNN in discriminating dimuon objects with different invariant mass, we have selected a sample of such objects from real events, also extracted from CMS Open Data. The criteria used to define these samples are shown in Table 1. Figure 3 shows examples of images corresponding to the different dimuon classes.

Class  Dimuon object  Invariant mass range (GeV/c²)
0      None           All other mass ranges
Table 1: Criteria used to define the different dimuon samples. Each of the ranges 1-4 is centered on the mass of a dimuon resonance.

Figure 3: Dimuon case. Examples of images with different invariant masses M, belonging to (a) the None class and (b)-(e) decays of the four dimuon resonances. The x-axis depicts the pseudorapidity η while the y-axis depicts the azimuthal angle φ.

4.1 Results on the classification of dimuon objects

The number of images used for training, validation and test for each class is indicated in Table 2. Some of the training images have been cloned to ensure a fair balance among the five categories, although the classes were already roughly balanced. Note that we solve the class imbalance problem by simply replicating images instead of using a modification of the loss function, like the weighted cross-entropy, where classes with lower abundance could be given higher weights in the loss computation instead of being revisited, leading to a faster training. This simple approach was taken so as to be able to reuse the code from [6] with the least amount of overhead.

Class train set val set test set
0 46584 4250 4307
1 44016 2175 2107
2 46284 127 131
3 45514 2993 3016
4 38109 129 114
Total 220507 9674 9675

Table 2: Information about the different datasets used in the dimuon case. For the train set, the values shown are those obtained after some minor image cloning to equally populate the different classes.

We have set the training duration to 40 epochs and, as expected, the network accuracy increases as the loss decreases, eventually reaching a plateau, as can be seen in Figure 4.

(a) Training accuracy
(b) Training loss
Figure 4: Accuracy and loss training values for the dimuon case as a function of the number of epochs of the training.

Once the NN weights have been determined, the performance of the network has been evaluated on the test set. Figure 5 shows the corresponding confusion matrix. As can be seen, the network is capable of distinguishing quite well the images corresponding to dimuon objects with an invariant mass close to the Z boson mass, while the performance in discriminating dimuon objects with similar and lower invariant masses decreases significantly.

(a) Non-normalized confusion matrix
(b) Normalized confusion matrix
Figure 5: Confusion matrices for the test set in the dimuon case using convolutional neural networks.

5 Application to Complex Events

After the encouraging initial test on dimuon objects, we have addressed a second exercise, using events corresponding to simulated collisions at 7 TeV at the LHC recorded by the CMS detector [14], which have been released as Open Data by the CMS collaboration.

We have chosen as physics channel the production of top quark pair (ttbar) events, where each top quark decays into a W boson and a bottom quark. We want to select collisions where one of the W bosons decays leptonically into a charged lepton, electron or muon, with an associated neutrino. Although complex, these events provide a clear experimental signature: an isolated lepton with high transverse momentum, hadronic jets and a large missing transverse energy. We have considered as background processes the production of events where a W boson is produced in association with additional jets (W+jets events) and events corresponding to the so-called Drell-Yan processes. The CMS publication webpage on top physics results at 7 TeV provides a description of the interest of this physics analysis channel and detailed presentations of the involved processes, methods and results. All three samples [15, 16, 17] are obtained from the CMS Open Data portal.

5.1 Event selection

Before starting the learning, we need to make a preselection of the events according to the physics channel of interest.

We focus on events having one lepton with a transverse momentum greater than 20 GeV fulfilling all the standard quality criteria for isolation and identification. We select jets with a transverse momentum, p_T, greater than 30 GeV and within the angular range defined by |η| < 2.4. We apply a b quark tagging discriminant (b-tagging), allowing us to identify (or ”tag”) jets originating from bottom quarks, using the Combined Secondary Vertex (CSV) algorithm, which is based on several topological and kinematical secondary-vertex-related variables as well as information from track impact parameters. We also use, and represent in the event images, the Missing Transverse Energy (MET).
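These cuts can be sketched as a simple filter; the event layout (dictionaries with 'pt', 'eta', 'isolated' and 'identified' keys) is a hypothetical stand-in for the JSON content described earlier:

```python
def preselect(event):
    """Apply the preselection cuts quoted in the text. The event layout
    (dicts with 'pt'/'eta'/'isolated'/'identified' keys) is hypothetical."""
    leptons = [l for l in event["leptons"]
               if l["pt"] > 20.0 and l["isolated"] and l["identified"]]
    jets = [j for j in event["jets"]
            if j["pt"] > 30.0 and abs(j["eta"]) < 2.4]
    # Keep the event if at least one good lepton survives the cuts.
    return len(leptons) >= 1, leptons, jets
```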

5.2 Image representation

Leptons and jets are represented as circumferences centered according to their values of η and φ and with a radius proportional to the logarithm of their transverse momentum, scaled according to the expression

    r = K log(p_T)

where again K is the scale factor allowing all the elements to be represented within the 224×224 pixel canvas.
Each type of particle and jet is drawn with a different color: blue for electrons, green for muons, light red for non-b-tagged jets and dark red for b-tagged jets.

Additionally, the missing transverse energy is drawn as a black circumference in each collision, positioned vertically according to its φ and horizontally centered at η = 0. As before, its radius scales logarithmically with the absolute value of the MET.
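Putting the pieces together, the event rendering can be sketched as follows; the exact RGB values and scale parameters are assumptions, and the circumferences are rasterized directly rather than drawn with an imaging library:

```python
import math
import numpy as np

# Approximate colors matching the text; exact RGB values are assumptions.
COLORS = {"electron": (0, 0, 255), "muon": (0, 255, 0),
          "jet": (255, 128, 128), "bjet": (139, 0, 0), "met": (0, 0, 0)}

def draw_event(objects, size=224, k=15.0, eta_max=4.0):
    """Render an event as an RGB array. Each object is a tuple
    (kind, pt, eta, phi) drawn as a circumference (outline only)."""
    img = np.full((size, size, 3), 255, dtype=np.uint8)  # white canvas
    yy, xx = np.mgrid[0:size, 0:size]
    for kind, pt, eta, phi in objects:
        r = k * math.log(max(pt, 1.001))                 # logarithmic radius
        cx = size * (eta + eta_max) / (2 * eta_max)      # eta -> x pixels
        cy = size * (phi + math.pi) / (2 * math.pi)      # phi -> y pixels
        # Mark pixels within ~1 px of the circle boundary (the outline).
        ring = np.abs(np.hypot(xx - cx, yy - cy) - r) < 1.0
        img[ring] = COLORS[kind]
    return img
```

Adding one more particle to the event just means one more tuple in `objects`, which is the flexibility argument made later in the comparison with feedforward networks.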

Figure 6 shows sample images corresponding to the different classes of events under study.

Figure 6: Examples of images corresponding to the three different classes of collisions being classified: Drell-Yan (panel (a)), W+jets and ttbar. The x-axis depicts the pseudorapidity η while the y-axis depicts the azimuthal angle φ.

5.3 Results using CNN for classification of complex events

The objective is to be able to differentiate between ttbar events and those corresponding to Drell-Yan and W+jets processes.

The CNN is trained using Monte Carlo samples from CMS Open Data, with the statistics indicated in Table 3. As done previously, we clone some training images to enforce class balance.

Class      train set before  train set after  val set  test set
ttbar      30809             30809            5000     5000
Drell-Yan  21709             30000            5000     5000
W+jets     20950             30000            5000     5000
Total      73468             90809            15000    15000
Table 3: Number of images in the train, val and test sets. For the train set, the values before and after the image cloning used to enforce class balance are shown.

The confusion matrix for the test set is shown in Figure 7. Approximately 94% of the preselected ttbar events are correctly classified, while around 5% of the W+jets and 4% of the Drell-Yan events are incorrectly tagged as ttbar. In a signal (ttbar) versus background (Drell-Yan and W+jets) context, with 50/50 splits, the signal vs background discrimination efficiency would be 95.4%.
We have also tried training the network with only those two categories, signal (ttbar) and background (Drell-Yan and W+jets). However, this results in a slightly worse classification performance, with a signal vs background efficiency of 93.6%.

(a) Non-normalized confusion matrix
(b) Normalized confusion matrix
Figure 7: Confusion matrices for the test set in the signal case using convolutional neural networks.

If we want to use the CNN outputs as relevant variables in a physics analysis, the separation of the different background sources will likely result in a better control of systematic uncertainties.

6 Comparison with a Feedforward neural network

The results presented above have been compared with those obtained using a simpler, more direct approach: deep feedforward neural networks (FFNs). Recent work has already successfully applied many ideas from the deep learning community to the HEP field [18].

Here we use a net of 5 hidden layers with 500 units per layer and standard 50% dropout [19] between layers.

For the classification of dimuon events according to their invariant mass, results are shown in Figure 8, where we can see, comparing to Figure 5, that FFNs are much more efficient at classifying all types of events, except for the None class.

(a) Non-normalized confusion matrix
(b) Normalized confusion matrix
Figure 8: Confusion matrices for the test set in the dimuon case using feedforward neural networks.

For the ttbar vs background classification, results are shown in Figure 9. In this case we can see, comparing to Figure 7, that FFNs are better at classifying ttbar and W+jets (but not Drell-Yan). More importantly, however, CNNs outperform FFNs in the signal vs background metric, with a 94.6% efficiency for FFNs versus 95.4% for CNNs.

(a) Non-normalized confusion matrix
(b) Normalized-confusion matrix
Figure 9: Confusion matrices for the test set in the signal case using feedforward neural networks.

The advantages of FFNs compared to CNNs are that the preprocessing time is much shorter (one only has to prepare a vector of scalar variables instead of a full 224×224×3 tensor image) and that the training time is much faster (as they are shallower and the computation between layers is usually much lighter).

The downside of FFNs is their vector representation of variables, which makes handling heterogeneous (non fixed-size) data not very intuitive. In this case we handled variable-length events by filling the empty parameters with default values. In contrast, in the CNN case adding one more particle to the event just means drawing one more circumference in the image.

7 Conclusions

The preliminary results presented in this study show that the use of Convolutional Neural Networks could be a promising tool to classify collisions in particle physics analysis.

An intuitive visual representation of the events has been proposed that enables the inclusion of the main observables used in high energy physics analysis into an image.

An initial test has shown that, using this representation for dimuon objects, a CNN is able to classify them according to their invariant mass.
A second test has been applied to the classification of more complex events, using Open Data describing simulated collisions at the LHC at 7 TeV in the CMS detector, corresponding to three different physics processes: Drell-Yan, W+jets and ttbar. The test has returned promising initial results, correctly tagging signal and background events with an efficiency around 95%, and comparing slightly favourably with more direct methods, like standard feedforward NNs.

We plan to extend this work in the future to analyse, among other possibilities, its applicability to the classification of real data, keeping in mind the problems related to the incomplete description usually provided by the simulation.

8 Acknowledgements

We would like to show our gratitude to the Data Preservation and Open Access group in CMS for all the valuable support and for providing insight and expertise that greatly assisted the realisation of this work. Ignacio Heredia is funded by the EU Youth Guarantee Initiative (Ministerio de Economia, Industria y Competitividad, Secretaria de Estado de Investigacion, Desarrollo e Innovacion, through the Universidad de Cantabria). Celia Fernández is funded by a collaboration fellowship (Ministerio de Educación, Cultura y Deporte, through the Universidad de Cantabria).


  • (1) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, may 2015.
  • (2) Byron P. Roe, Hai-Jun Yang, Ji Zhu, Yong Liu, Ion Stancu, and Gordon McGregor. Boosted decision trees as an alternative to artificial neural networks for particle identification. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 543(2-3):577–584, may 2005.
  • (3) Hermann Kolanoski. Application of Artificial Neural Networks in Particle Physics, pages 1–14. Springer Berlin Heidelberg, Berlin, Heidelberg, 1996.
  • (4) CMS collaboration. CMS technical design report, volume II: Physics performance. Journal of Physics G: Nuclear and Particle Physics, 34(6), apr 2007.
  • (5) Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives, 2012.
  • (6) Ignacio Heredia. Large-scale plant classification with deep neural networks. In Proceedings of the Computing Frontiers Conference, CF’17, pages 259–262, New York, NY, USA, 2017. ACM.
  • (7) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
  • (8) Olga Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
  • (9) Sander Dieleman et al. Lasagne: First release, August 2015.
  • (10) James Bergstra et al. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
  • (11) Frédéric Bastien et al. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
  • (12) Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014.
  • (13) David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, oct 1986.
  • (14) CMS Collaboration. The CMS experiment at the CERN LHC. Journal of Instrumentation, 3(08):S08004, 2008.
  • (15) CMS Collaboration. Simulated dataset DYJetsToLL_TuneZ2_M-50_7TeV-madgraph-tauola in AODSIM format for 2011 collision data (SM inclusive), 2016.
    DOI: 10.7483/opendata.cms.txt4.4rrp.
  • (16) CMS Collaboration. Simulated dataset WJetsToLNu_TuneZ2_7TeV-madgraph-tauola in AODSIM format for 2011 collision data (SM inclusive), 2016.
    DOI: 10.7483/opendata.cms.u7p6.ckvb.
  • (17) CMS Collaboration. Simulated dataset TTJets_TuneZ2_7TeV-madgraph-tauola in AODSIM format for 2011 collision data (SM inclusive), 2016.
    DOI: 10.7483/opendata.cms.zbgf.h543.
  • (18) P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5, jul 2014.
  • (19) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.