Novelty Detection Meets Collider Physics

07/26/2018 ∙ by Jan Hajer, et al.

Novelty detection is the machine-learning task of recognizing data belonging to previously unknown patterns. Complementary to supervised learning, it allows data to be analyzed in a model-independent way. We demonstrate the potential role of novelty detection in collider analyses using an artificial neural network. In particular, we introduce a set of density-based novelty evaluators which measure the clustering effect of new-physics events in the feature space; this distinguishes them from the traditional density-based evaluators, which measure isolation. This design enables recognizing new-physics events, if any, at a reasonably efficient level. To illustrate the sensitivity performance, we apply novelty detection to the searches for fermionic di-top-partner and resonant di-top production at the LHC, and for two specific modes of exotic Higgs decay at a future e^+e^- collider.


I Introduction

Since the early developments in the 1950s Samuel (1959), Machine Learning (ML) has evolved into a science addressing various big-data problems. The techniques developed for ML, such as decision tree learning Quinlan (1986) and artificial neural networks (ANN) Peterson et al. (1994), allow computers to be trained to perform specific tasks usually deemed too complex for handcrafted algorithms. In supervised learning, the algorithm is first trained on labeled data and then classifies testing data into the categories defined during training. In contrast, in semi-supervised and unsupervised learning, where partially labeled or unlabeled data is provided, the algorithm is expected to find the relevant patterns unassisted.

The last decade has seen rapid progress in ML techniques, in particular the development of deep ANN. A deep ANN is a multi-layered network of threshold units LeCun et al. (2015). Each unit computes only a simple nonlinear function of its inputs, which allows each layer to represent a certain level of relevant features. Unlike traditional ML techniques (e.g. boosted decision trees), which rely heavily on expert-designed features to reduce the dimensionality of the problem, deep ANN automatically extract pertinent features from data, enabling data mining without prior assumptions. Fueled by vast amounts of big data and the fast development of training techniques and parallel computing architectures, modern deep learning systems have achieved major successes in computer vision Krizhevsky et al. (2012), speech recognition Hinton et al. (2012), and natural language processing Mikolov et al. (2013), and have recently emerged as a promising tool for scientific research Zdeborová (2017); Carleo and Troyer (2017); Carrasquilla and Melko (2017); Zhang et al. (2018), where the plethora of experimental data presents a challenge for insightful analysis.

High Energy Physics (HEP) is a big-data science and has a long history of using supervised ML for data analysis. Recently, pioneering works have demonstrated the capability of deep ANN in understanding jet substructure Baldi et al. (2016); Butter et al. (2017); Larkoski et al. (2017); Macaluso and Shih (2018) and in identifying particles Baldi et al. (2014) or even whole signal signatures (see e.g. Cohen et al. (2018), where weakly supervised learning is applied). However, the primary goal of the HEP experiments is to detect predicted or unpredicted physics Beyond the Standard Model (BSM) in order to establish the underlying fundamental laws of nature. Despite their significant role in current data analysis, supervised ML techniques suffer from the model dependence introduced during training. This problem can potentially be addressed by the semi-supervised/unsupervised techniques developed for novelty detection (for a review see, e.g., Pimentel et al. (2014)). Novelty detection is the ML task of recognizing data belonging to an unknown pattern. If interpreted as a novel signal, BSM physics could be detected without specifying an underlying theory during data analysis. Hence, a combination of novelty detection and supervised ML may lay out a framework for future HEP data analysis.

Some preliminary and at least partially related efforts have been made at the jet Metodiev et al. (2017); Andreassen et al. (2018) and event Aaltonen et al. (2008); CMS (2008); collaboration (2017); Kuusela et al. (2012); Collins et al. (2018); D’Agnolo and Wulzer (2018) level. For novelty detection with a given feature representation, the sensitivity depends crucially on the performance of the novelty evaluators: well-designed evaluators allow the novelty of the data to be evaluated efficiently and precisely. In fact, the design of novelty evaluators, or of the relevant test statistics, defines the frontier of novelty detection Pimentel et al. (2014). In this letter, we propose a set of density-based novelty evaluators. In contrast to traditional density-based evaluators, which only quantify the isolation of testing data from the known patterns, the new evaluators are sensitive to the clustering of testing data. On this basis, we design algorithms for novelty detection using an autoencoder, which we subsequently apply to several BSM benchmarks at the LHC and future colliders.

II Algorithms

Figure 1: Novelty detection algorithm. The training and testing phases are marked in blue and red, respectively. Datasets, algorithms and probabilities are indicated by rectangular, elliptic and plain nodes, respectively. The information gathered during training and used for testing is marked by dashed red arrows. For clarity, the number of labeled known patterns is limited to two. The testing data contains both known and unknown patterns.

Novelty detection using a deep ANN can be separated into three steps: 1) feature learning, 2) dimensional reduction, and 3) novelty evaluation. During the first step, the ANN is trained under supervision, using labeled known patterns. The nodes of the trained ANN contain the information gathered for classification and constitute the feature space, which typically has a large dimension. In order to reduce the sparse error and to improve the efficiency of the analysis, the irrelevant features are removed by dimensional reduction, which can be implemented using an autoencoder Vincent et al. (2008). An autoencoder is an ANN with an identical number of nodes in the input and output layers and fewer nodes in the hidden layers. Its loss function measures the difference between input and output, $\|\vec{x}-\vec{y}\|^2$, defined as the reconstruction error, where $\vec{x}$ and $\vec{y}$ are the vectors of input and output nodes, respectively. Hence the autoencoder learns unsupervised how to reconstruct its input, which allows it to form a submanifold in the full feature space. Afterwards, the novelty of the testing data is evaluated for the final significance analysis. The algorithm is shown in FIG. 1. For HEP data analysis, the data with known and unknown patterns can be interpreted as SM background and BSM signal, respectively.

We generate Monte Carlo data using MadGraph5_aMC@NLO Alwall et al. (2014) and rely on Keras Chollet et al. (2015) (TensorFlow Abadi et al. (2015)-based) for the ANN construction. For the supervised classification of events, characterized by their visible-particle four-momenta (internally normalized) and labeled patterns, we use an ANN whose input nodes encode the four-momentum components, whose output nodes correspond to the labeled patterns, and which contains three hidden layers with 30, 30 and 10 nodes, respectively. We use Nesterov’s accelerated gradient descent optimizer Nesterov (1983) with a learning rate of 0.3, a learning momentum of 0.99 and a small decay rate. The batch size is fixed to 30 and the loss function is the categorical cross entropy Rubinstein (1999, 2001). The collection of all nodes constitutes the feature space, which thus contains the nonlinear information learned from classification. We normalize the axes of the feature space and use tanh as the activation function for the autoencoder. Finally, an autoencoder consisting of five hidden layers with 40, 20, 8, 20 and 40 nodes, respectively, and a learning rate of 2.0 projects this feature space onto an eight-dimensional subspace. We have checked that the results of all ANNs are stable against variations in the numbers of hidden layers and nodes.
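To make this setup concrete, the following is a minimal Keras sketch of the two networks described above, not the authors' actual code: the input dimension, the number of labeled patterns, the classifier's hidden-layer activation and the (elided) learning-rate decay value are our assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_inputs = 16     # hypothetical: four visible particles x four momentum components
n_patterns = 2    # two labeled known patterns, as in FIG. 1

# Supervised classifier: three hidden layers with 30, 30 and 10 nodes.
# (The hidden-layer activation is our assumption; it is not specified above.)
classifier = keras.Sequential([
    keras.Input(shape=(n_inputs,)),
    layers.Dense(30, activation="relu"),
    layers.Dense(30, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(n_patterns, activation="softmax"),
])
classifier.compile(
    # Nesterov-accelerated gradient descent with learning rate 0.3 and
    # momentum 0.99; the decay-rate value is not reproduced here.
    optimizer=keras.optimizers.SGD(learning_rate=0.3, momentum=0.99,
                                   nesterov=True),
    loss="categorical_crossentropy",  # trained with batch_size=30
)

# Autoencoder with the 40-20-8-20-40 hidden-layer profile and tanh
# activations, projecting the normalized feature space (all classifier
# nodes) onto an eight-dimensional bottleneck.
feature_dim = n_inputs + 30 + 30 + 10 + n_patterns
autoencoder = keras.Sequential(
    [keras.Input(shape=(feature_dim,))]
    + [layers.Dense(n, activation="tanh") for n in (40, 20, 8, 20, 40)]
    + [layers.Dense(feature_dim, activation="tanh")]
)
autoencoder.compile(optimizer=keras.optimizers.SGD(learning_rate=2.0),
                    loss="mse")  # mean-squared reconstruction error
```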

III Novelty Evaluation

(a) Training data.
(b) Testing data.
(c) $P_{\text{trad}}$ performance.
(d) $P_{\text{new}}$ performance.
Figure 2: Comparison between traditional and new novelty evaluators. The toy data is shown in panels (a) and (b), while the novelty responses are given in (c) and (d).

Novelty evaluation of the testing data is a crucial step in novelty detection, and various approaches have been developed over the past decades Pimentel et al. (2014). For non-time-series data, one of the most popular approaches is density-based Breunig et al. (2000), in which a Local Outlier Factor (LOF), i.e., the ratio of the local density of a given testing data point to the local densities of its neighbors, is proposed as a novelty measure. Explicitly, this traditional measure is Kriegel et al. (2009); Socher et al. (2013)

$$O_{\text{trad}} = \frac{\bar d_{\text{train}} - \langle \bar d_{\text{train}} \rangle}{\sigma(\bar d_{\text{train}})}\,, \qquad (1)$$

where $\bar d_{\text{train}}$ is the mean distance of a testing data point to its $k$ nearest neighbors, $\langle \bar d_{\text{train}} \rangle$ is the average of the mean distances defined for its $k$ nearest neighbors, and $\sigma(\bar d_{\text{train}})$ is the standard deviation of the latter. The subscript “train” indicates that all quantities are defined w.r.t. the training dataset. We calculate $O_{\text{trad}}$ using the method suggested in Kriegel et al. (2009); Socher et al. (2013). The probabilistic novelty evaluator can then be defined via a cumulative distribution function, $P_{\text{trad}} = \Phi(O_{\text{trad}}/c)$, where the normalization factor $c$ is the root mean square of the measure values for all testing data. This evaluator measures the isolation of the testing data from the training data: a testing data point located away from, or at the tail of, the training-data distribution thus tends to be scored high by $P_{\text{trad}}$ Breunig et al. (2000); Kriegel et al. (2009).
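As an illustration, the isolation measure of Eq. (1) and its probabilistic evaluator can be sketched with k-nearest-neighbor distances as follows; Euclidean distances, a Gaussian form for the cumulative distribution function, and all function names are our assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neighbors import NearestNeighbors

def o_trad(x_test, x_train, k=20):
    """LOF-style isolation measure of Eq. (1)."""
    nn = NearestNeighbors(n_neighbors=k).fit(x_train)
    # Mean distance of each testing point to its k nearest training points.
    dist, idx = nn.kneighbors(x_test)
    d_bar = dist.mean(axis=1)
    # Mean k-NN distance of every training point within the training set
    # (dropping the zero-distance self match).
    nn_self = NearestNeighbors(n_neighbors=k + 1).fit(x_train)
    d_bar_train = nn_self.kneighbors(x_train)[0][:, 1:].mean(axis=1)
    # Average and spread of those values over the k nearest training
    # neighbors of each testing point.
    mean_nbr = d_bar_train[idx].mean(axis=1)
    std_nbr = d_bar_train[idx].std(axis=1)
    return (d_bar - mean_nbr) / std_nbr

def p_trad(o):
    """Probabilistic evaluator: a CDF of the measure, normalized by the
    root mean square of the measure values over the testing data."""
    c = np.sqrt(np.mean(o ** 2))
    return norm.cdf(o / c)
```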

However, $P_{\text{trad}}$ is blind to the clustering of testing data, which generically exists in BSM datasets and may result in non-trivial structures such as resonances. In order to utilize this feature, we introduce a new measure:

$$O_{\text{new}} = \frac{\bar d_{\text{test}}^{\,-m} - \bar d_{\text{train}}^{\,-m}}{\sqrt{\bar d_{\text{train}}^{\,-m}}}\,, \qquad (2)$$

with $m$ being the dimension of the feature space. Here $\bar d_{\text{test}}$ is the mean distance of a testing data point to its $k$ nearest neighbors in the testing dataset, whereas $\bar d_{\text{train}}$ is the SM prediction of the same quantity, which can be approximately calculated using the training dataset. This measure is reminiscent of the test statistic introduced in Wang et al. (2006); Dasu et al. (2006), where a similar idea is employed for estimating the divergence of data distributions. As $\bar d_{\text{test}}^{\,-m}$ is approximately proportional to $S+B$ and $\bar d_{\text{train}}^{\,-m}$ to $B$, with $S$ and $B$ being the numbers of signal and background events in a local bin of unit volume, this measure can be interpreted as the significance of discovery, $S/\sqrt{B}$ (up to a calibration constant), for this local bin.
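Continuing the sketch above, and under the same assumptions, the clustering measure of Eq. (2) can be estimated as follows; the rescaling of the neighbor number on the training side anticipates the discussion below.

```python
def o_new(x_test, x_train, k=20, m=8):
    """Clustering measure of Eq. (2) in an m-dimensional feature space."""
    # Mean distance of each testing point to its k nearest neighbors
    # within the testing dataset (excluding itself).
    nn_test = NearestNeighbors(n_neighbors=k + 1).fit(x_test)
    d_test = nn_test.kneighbors(x_test)[0][:, 1:].mean(axis=1)
    # SM prediction of the same quantity, estimated from the (larger)
    # training dataset with the neighbor number scaled by the size ratio.
    k_train = max(1, round(k * len(x_train) / len(x_test)))
    nn_train = NearestNeighbors(n_neighbors=k_train).fit(x_train)
    d_train = nn_train.kneighbors(x_test)[0].mean(axis=1)
    # Local densities scale as d^(-m): the numerator is then proportional
    # to S and the denominator to sqrt(B) in a local bin.
    return (d_test ** -m - d_train ** -m) / np.sqrt(d_train ** -m)
```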

(a) $O_{\text{new}}$ response.
(b) CLT scaling.
Figure 3: Dependence of the $O_{\text{new}}$ response on $k$, for testing data with known patterns only. While the training dataset is composed of a fixed number of points, the testing dataset consists of 10000, 5000, 2500, 1250 and 625 points, with $k$ scaling linearly as 100, 50, 25, 12 and 6, respectively. Both datasets are Gaussian. Panel (a) shows the $O_{\text{new}}$ response in all cases. Panel (b) shows that its standard deviation scales linearly with $1/\sqrt{k}$, as predicted by the CLT.

$O_{\text{new}}$ is defined w.r.t. the testing dataset in a similar way as $O_{\text{trad}}$ is w.r.t. the training dataset. To compare the performance of $P_{\text{trad}}$ and $P_{\text{new}}$ in probing the clustering, we introduce a toy model in which the data resides in a two-dimensional space. The known pattern is a Gaussian distribution centered around the origin, while the unknown pattern is an overlapping narrow Gaussian distribution shifted away from the origin. The training dataset consists of known-pattern events only (cf. FIG. 2(a)), while the testing dataset contains an equal number of events from each of the known and unknown patterns (cf. FIG. 2(b)). As shown in FIG. 2(c) and FIG. 2(d), the clustering of the unknown-pattern data, although hidden from $P_{\text{trad}}$, is picked up by $P_{\text{new}}$.

The detection based on $O_{\text{new}}$ (or $P_{\text{new}}$) may, however, suffer from fluctuations of the known-pattern testing data in the non-signal regions, via the $\bar d_{\text{test}}^{\,-m}$ term in Eq. (2). While $O_{\text{new}}$ is expected to vanish if the data consists of events with known patterns only, the fluctuations result in non-zero values, since the measure picks up any local data excess. This is, in essence, a kind of Look Elsewhere Effect (LEE). The fluctuations in $\bar d_{\text{train}}$, on the other hand, can be neglected, as long as the training dataset used for calculating it is much larger than the testing one, with the neighbor number $k$ properly scaled.

The influence of these fluctuations on the detection sensitivity can be compensated as the luminosity increases, if $k$ scales with the size of the testing dataset. In this case, more and more data are used to calculate $\bar d_{\text{test}}$ in a local bin that is barely changed. This compensation is approximately predicted by the Central Limit Theorem (CLT), which states in this context that the standard deviation of the $O_{\text{new}}$ response, for testing data with known patterns only, scales with $1/\sqrt{k}$ or, equivalently, with the inverse square root of the testing-sample size. We show this in FIG. 3, using the known-pattern Gaussian datasets defined before. Indeed, as the number of testing data points increases, $O_{\text{new}}$ becomes less and less sensitive to the fluctuations (see FIG. 3(a)).
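This scaling can be checked numerically with the sketches above; the testing sample sizes follow the toy setup of FIG. 3, while the training-set size is our assumption.

```python
import numpy as np  # o_new as sketched above

rng = np.random.default_rng(seed=1)
x_train = rng.normal(size=(100_000, 2))  # known-pattern training data (size assumed)
for n_test, k in [(625, 6), (1250, 12), (2500, 25), (5000, 50), (10000, 100)]:
    x_test = rng.normal(size=(n_test, 2))  # known patterns only
    o = o_new(x_test, x_train, k=k, m=2)
    # The spread should shrink roughly like 1/sqrt(k) as k and the
    # testing-sample size grow together.
    print(f"N_test={n_test:6d}  k={k:3d}  std(O_new)={o.std():.3f}")
```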

(a) New evaluator.
(b) Traditional evaluator.
(c) Combined evaluator.
(d) Significance.
Figure 4: Normalized data responses to the novelty evaluators $P_{\text{new}}$ (a), $P_{\text{trad}}$ (b) and $P_{\text{comb}}$ (c), and the significance performance of these evaluators (d).

If the fluctuations are not fully compensated by luminosity, the known-pattern testing data can still be scored high by $P_{\text{new}}$, and hence diminish the detection sensitivity. This is often the case if the testing dataset is small, as typically occurs in analyses at the LHC. To address this potential problem, we propose one more evaluator,

$$P_{\text{comb}} = P_{\text{trad}}\, P_{\text{new}}\,. \qquad (3)$$

This evaluator utilizes the fact that the known-pattern testing data with high $P_{\text{new}}$ scores often come from the high-density regions of the feature space, where such data are typically scored low by $P_{\text{trad}}$. As indicated in FIG. 4, $P_{\text{comb}}$ performs very well in a typical case where the known- and unknown-pattern data distributions partially overlap and many of the known-pattern data, especially those in the central region, are scored high by $P_{\text{new}}$ due to the fluctuations. The known-pattern datasets used here are the same as before, and the unknown pattern is again an overlapping narrow Gaussian. Indeed, many high-scoring data points of known pattern in FIG. 4(a) are pushed to the low-scoring end in FIG. 4(c), due to the compensation by $P_{\text{trad}}$. This effect results in an improvement in sensitivity, compared to using $P_{\text{trad}}$ or $P_{\text{new}}$ only. Here (and similarly below) the significance is calculated for the testing data against the known-pattern-only hypothesis, using the Poisson-probability-based test statistic Cowan et al. (2011).
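For completeness, a sketch of the combined evaluator and of the significance calculation: the product form follows our reconstruction of Eq. (3), while the significance formula is the Poisson-based median discovery significance of Cowan et al. (2011).

```python
def p_comb(p_trad_vals, p_new_vals):
    # Combined evaluator: a high score requires a data point to be both
    # clustered (high P_new) and isolated from training data (high P_trad).
    return p_trad_vals * p_new_vals

def discovery_significance(s, b):
    # Median discovery significance for s signal events on top of b expected
    # background events (Cowan et al. 2011); approaches s/sqrt(b) for s << b.
    return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))
```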

IV Study on Benchmark Scenarios

Benchmark                          Parameter values   Cross section
Top-partner pair production        1.2 TeV            0.152
Resonant di-top production         …, …               1.55
Exotic Higgs decay (NMSSM)         …                  0.108
Exotic Higgs decay (2HDM/NMSSM)    …                  0.053
Table 1: Parameter values and cross sections (after preselection) in the benchmark scenarios of BSM physics.

In order to illustrate their performance, we apply the algorithms designed above to two parton-level analyses, with two BSM benchmarks defined for each. Though unrealistic, this setup is sufficient as a proof of concept.

In the first analysis, we simulate a final state containing two bottom quarks and two charged leptons at the 14 TeV LHC. We require exactly two bottom quarks and two charged leptons ($e$ and $\mu$), each within kinematic acceptance. The SM background stems mainly from:

  •  …

  •  …

  •  …

Here the physical cross sections have been universally suppressed by a common factor for simplification. The signal could arise from multiple BSM scenarios in this analysis. Here we consider:

  •  pair production of fermionic top partners, and

  •  resonant di-top production, mediated by a new gauge boson.

In the second analysis, we simulate unpolarized production at a future $e^+e^-$ collider, again with a final state containing two bottom quarks and two charged leptons. We require exactly two bottom quarks and two charged leptons ($e$ and $\mu$), each within kinematic acceptance. The SM background arises mainly from:

  •  …

  •  …

For the BSM scenarios, we consider two specific modes of exotic Higgs decay Curtin et al. (2014):

  •  a decay topology involving bino- and singlino-like neutralinos and a light CP-odd scalar, which can arise from the nearly Peccei-Quinn symmetric limit in the NMSSM Draper et al. (2011); Huang et al. (2014), and

  •  a second decay mode, which can arise in the 2HDM and the NMSSM Curtin et al. (2014).

The parameter values and cross sections for the four benchmark scenarios are summarized in TAB. 1.

Figure 5: Significance performance of the novelty-detection algorithms, with one panel, (a)–(d), per benchmark scenario of TAB. 1.

The sensitivity performance of the algorithms is presented in FIG. 5. In each panel, we show one curve for the “Ideal” case (assuming 100% signal efficiency and background rejection) and one curve for supervised learning, as references for the performance evaluation. In the first analysis, the toy model discussed above precisely mimics what happens in the top-partner benchmark: the BSM signal and the SM data partially overlap in the feature space, and many of the SM data points in the non-signal regions have a strong $P_{\text{new}}$ response due to fluctuations, which diminishes the detection sensitivity. However, with the $P_{\text{trad}}$ compensation, a sizable improvement in sensitivity is achieved: as shown in FIG. 5(a), the sensitivity is approximately doubled using $P_{\text{comb}}$, compared to using $P_{\text{trad}}$ or $P_{\text{new}}$ only. For the resonance benchmark, the signal cross section is about one order of magnitude larger than in the top-partner benchmark, as indicated in TAB. 1. This tends to enhance the response of the signal, compared to the SM data, and hence results in comparable sensitivities for the analyses based on $P_{\text{trad}}$, $P_{\text{new}}$ and $P_{\text{comb}}$, respectively. For the two benchmarks of the second analysis, the fluctuation effect on $P_{\text{new}}$ is negligibly small, owing to the large event samples typical for analyses at an $e^+e^-$ collider, while the known- and unknown-pattern data distributions are not fully separated, which limits the efficiency of $P_{\text{trad}}$. This results in a sensitivity performance for $P_{\text{new}}$ which is universally better than the others.

V Summary and Discussion

In this letter, we proposed a set of density-based novelty evaluators, $P_{\text{new}}$ and $P_{\text{comb}}$, which are sensitive to the clustering of the unknown-pattern testing data, for novelty detection in HEP data analysis. These evaluators make it possible to design algorithms with broad applications in detecting BSM physics. They can also be applied to measuring SM processes yet to be discovered, if such events are interpreted as “novel”. As these algorithms are designed using only general assumptions, their application could be extended to other big-data domains as well.

This study could be generalized in multiple directions. We have focused on developing the algorithms for novelty detection in HEP, using parton-level analyses to demonstrate their sensitivity performance. To bridge the gap between this concept and its application to real data, hadron-level analyses are definitely needed. In addition, the algorithms could be improved in several respects. First, the feature selection in the ANN training process might not yet be fully optimized. The features learned from the classification of data with labeled known patterns are likely to be sub-optimal for enhancing the isolation or clustering of the unknown-pattern data. We may therefore introduce dynamical ML or feedback mechanisms using the testing dataset, to reinforce the learning of the unknown-pattern features. Second, the distance between data points depends on the geometry of the feature space. We adopted Euclidean geometry for simplicity, but it is worthwhile to explore other possibilities. Third, the amount of memory and time needed for the nearest-neighbor computations increases rapidly with the data size and dimension, which makes the evaluation inefficient for large datasets; ways of accelerating the calculation may be needed. Beyond these points, we plan to extend the performance analysis of the algorithms to other BSM scenarios, e.g., ones with interference between the known and unknown patterns, or with non-trivial data clusters such as a dip Dicus et al. (1994). Finally, although beyond the scope of this study, a full analysis of the systematic and theoretical uncertainties is still absent (for a recent effort partially addressing this, see Englert et al. (2018)). We leave these topics to future study.

Note added

While this letter was being finalized, De Simone and Jacques (2018) appeared. Both the novelty evaluators proposed here and the test statistic defined in De Simone and Jacques (2018) (as well as the one developed recently in D’Agnolo and Wulzer (2018)) are able to measure the clustering of testing data with an unknown pattern. We stress that we developed this project and the relevant ideas independently. In particular, two significant differences exist between the approaches. First, unlike the test statistics in D’Agnolo and Wulzer (2018); De Simone and Jacques (2018), which measure the divergence of the testing dataset from the training dataset, the evaluators proposed here quantify the novelty of individual testing data points. This design difference enables the evaluators to probe the fine/differential structure of the clustering, such as a peak-dip structure (for a well-known BSM example see Dicus et al. (1994)), more efficiently. Second, as the LEE could be a severe problem for novelty detection at hadron colliders, we explored how to diminish its influence on the detection sensitivity (it is in this context that $P_{\text{comb}}$ was designed). This was not developed in D’Agnolo and Wulzer (2018); De Simone and Jacques (2018).

Acknowledgments

We greatly thank Prof. Michael Wong, our colleague at the HKUST, for highly valuable discussions on the novelty evaluators and the ANN algorithms proposed in this letter. T. Liu thanks Huai-Ke Guo for discussions on the CLT in this context during the MITP (Mainz Institute for Theoretical Physics) workshop “Probing Baryogenesis via LHC and Gravitational Wave Signatures”, June 2018. We thank our experimental colleagues Aurelio Juste, Kirill Prokofiev and Junjie Zhu for reading the manuscript and raising valuable comments. We also thank Lian-Tao Wang and Zhen Liu for general discussions on this idea at an early stage. J. Hajer is partly supported by the General Research Fund (GRF) under Grant № 16304315. Y. Y. Li thanks the Kavli Institute for Theoretical Physics, where most of her work was done, for the award of a graduate fellowship, provided by the Simons Foundation under Grant № 216179 and the Gordon and Betty Moore Foundation under Grant № 4310. This research was also supported in part by the National Science Foundation under Grant № PHY-1748958. T. Liu is jointly supported by the GRF under Grants № 16312716 and 16302117. The GRF is issued by the Research Grants Council of Hong Kong S.A.R. He also thanks the MITP for its hospitality, where part of his work was done.
