A Guide to Constraining Effective Field Theories with Machine Learning

by Johann Brehmer et al.

We develop, discuss, and compare several inference techniques to constrain theory parameters in collider experiments. By harnessing the latent-space structure of particle physics processes, we extract extra information from the simulator. This augmented data can be used to train neural networks that precisely estimate the likelihood ratio. The new methods scale well to many observables and high-dimensional parameter spaces, do not require any approximations of the parton shower and detector response, and can be evaluated in microseconds. Using weak-boson-fusion Higgs production as an example process, we compare the performance of several techniques. The best results are found for likelihood ratio estimators trained with extra information about the score, the gradient of the log likelihood function with respect to the theory parameters. The score also provides sufficient statistics that contain all the information needed for inference in the neighborhood of the Standard Model. These methods enable us to put significantly stronger bounds on effective dimension-six operators than the traditional approach based on histograms. They also outperform generic machine learning methods that do not make use of the particle physics structure, demonstrating their potential to substantially improve the new physics reach of the LHC legacy results.





I Introduction

An important aspect of the legacy of the Large Hadron Collider (LHC) experiments will be precise constraints on indirect signatures of physics beyond the Standard Model (SM), parameterized for instance by the dimension-six operators of the Standard Model effective field theory (SMEFT). The relevant measurements can easily involve tens of different parameters that predict subtle kinematic signatures in the high-dimensional space of the data. Traditional analysis techniques do not scale well to this complex problem, motivating the development of more powerful techniques.

The analysis of high-energy-physics data is based on an impressive suite of simulation tools that model the hard interaction, parton shower, hadronization, and detector response. The community has invested a tremendous amount of effort into developing these tools, yielding the high-fidelity modeling of LHC data needed for precision measurements. Simulators such as Pythia Sjostrand et al. (2008) and Geant4 Agostinelli et al. (2003) use Monte-Carlo techniques to sample the multitudinous paths through which a particular hard scattering might develop. A single event’s path through the simulation can easily involve many millions of random variables. While Monte-Carlo techniques can efficiently sample from the distributions implicitly defined by the simulators, it is not feasible to calculate the likelihood for a particular observation because doing so would require integrating over all the possible histories leading to that observation. Clearly it is infeasible to explicitly calculate a numerical integral over this enormous latent space. While this problem is ubiquitous in high energy physics, it is rarely acknowledged explicitly.
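This situation can be made concrete with a hypothetical toy simulator (not Pythia or Geant4, and far simpler than any physics tool): sampling forward is easy and the joint density of observation and latent variables is known in closed form, but the likelihood of an observation alone requires integrating out the latents. Here the latent space is one-dimensional, so the integral can still be brute-forced by Monte Carlo; with millions of latent variables it cannot.

```python
# Hypothetical toy simulator: forward sampling is easy, the joint density
# p(x, z | theta) is tractable, but the marginal likelihood p(x | theta)
# requires integrating out the latent variable z.
import numpy as np

rng = np.random.default_rng(0)
NOISE = 0.3  # width of the Gaussian "detector" smearing

def simulate(theta, n_events):
    """Forward mode: latent z ~ Exponential(rate = 1 + theta),
    observation x = z + detector noise."""
    z = rng.exponential(scale=1.0 / (1.0 + theta), size=n_events)
    x = z + rng.normal(scale=NOISE, size=n_events)
    return x, z

def joint_log_density(x, z, theta):
    """log p(x, z | theta): tractable, both factors are known in closed form."""
    rate = 1.0 + theta
    log_p_z = np.log(rate) - rate * z
    log_p_x_given_z = (-0.5 * ((x - z) / NOISE) ** 2
                       - np.log(NOISE * np.sqrt(2.0 * np.pi)))
    return log_p_z + log_p_x_given_z

def marginal_log_likelihood(x, theta, n_mc=100_000):
    """log p(x | theta) = log E_z[ p(x | z) ]: only feasible here because
    the latent space is one-dimensional."""
    z = rng.exponential(scale=1.0 / (1.0 + theta), size=n_mc)
    p_x_given_z = (np.exp(-0.5 * ((x - z[:, None]) / NOISE) ** 2)
                   / (NOISE * np.sqrt(2.0 * np.pi)))
    return np.log(p_x_given_z.mean(axis=0))

x_obs, z_true = simulate(theta=0.5, n_events=3)
print(joint_log_density(x_obs, z_true, 0.5))   # tractable per event
print(marginal_log_likelihood(x_obs, 0.5))     # needs the latent integral
```

In a full shower-plus-detector simulation the analogue of `marginal_log_likelihood` would integrate over the entire simulation history, which is the intractability discussed above.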

Traditionally, particle physicists have approached this problem by restricting the analysis to one or two well-motivated discriminating variables, discarding the information contained in the remaining observables. The probability density for the restricted set of discriminating variables is then estimated with explicit functions or non-parametric approaches such as template histograms, kernel density estimates, or Gaussian processes Cranmer (2001); Cranmer et al. (2012); Frate et al. (2017). These low-dimensional density estimates are constructed and validated using Monte-Carlo samples from the simulation. While well-chosen variables may yield precise bounds along individual directions of the parameter space, they often lead to weak constraints in other directions in the parameter space Brehmer et al. (2017a). The sensitivity to multiple parameters can be substantially improved by using the fully differential cross section. This is the forte of the Matrix Element Method Kondo (1988); Abazov et al. (2004); Artoisenet and Mattelaer (2008); Gao et al. (2010); Alwall et al. (2011); Bolognesi et al. (2012); Avery et al. (2013); Andersen et al. (2013); Campbell et al. (2013); Artoisenet et al. (2013); Gainer et al. (2013); Schouten et al. (2015); Martini and Uwer (2015); Gritsan et al. (2016); Martini and Uwer (2017) and Optimal Observables Atwood and Soni (1992); Davier et al. (1993); Diehl and Nachtmann (1994) techniques, which are based on the parton-level structure of a given process. Shower and event deconstruction Soper and Spannowsky (2011, 2013, 2014); Englert et al. (2016) extend this approach to the parton shower. But all these methods still require some level of approximation of the parton shower and either neglect or crudely approximate the detector response. Moreover, even a simplified description of the detector effects requires the numerically expensive evaluation of complicated integrals for each observed event. None of these established approaches scales well to high-dimensional problems with many parameters and observables, such as the SMEFT measurements.

In recent years there has been increased appreciation that several real-world phenomena are better described by simulators that do not admit a tractable likelihood. This appears in fields as diverse as ecology, phylogenetics, epidemiology, cardiac simulators, quantum chemistry, and particle physics. Inference in this setting is often referred to as likelihood-free inference, where the inference strategy is restricted to samples generated from the simulator. Implicitly, these techniques aim to estimate the likelihood. A particularly ubiquitous technique is Approximate Bayesian Computation (ABC) Rubin (1984); Beaumont et al. (2002); Marjoram et al. (2003); Sisson et al. (2007); Sisson and Fan (2011); Marin et al. (2012); Charnock et al. (2018). ABC is closely related to the traditional template histogram and kernel density estimation approach used by physicists. More recently, approximate inference techniques based on machine learning and neural networks have been proposed Cranmer et al. (2015); Cranmer and Louppe (2016); Fan et al. (2012); Papamakarios and Murray (2016); Paige and Wood (2016); Dutta et al. (2016); Gutmann et al. (2017); Tran et al. (2017); Louppe and Cranmer (2017); Dinh et al. (2014); Jimenez Rezende and Mohamed (2015); Dinh et al. (2016); Papamakarios et al. (2017); Uria et al. (2016); van den Oord et al. (2016a, b, c); Papamakarios et al. (2018). All these techniques have in common that they only take into account simulated samples similar to the actual observables: they do not exploit the structure of the process that generates them.

We develop new simulation-based inference techniques that are tailored to the structure of particle physics processes. The key insight behind these methods is that we can extract more information than just samples from the simulations, and that this additional information can be used to efficiently train neural networks that precisely estimate likelihood ratios, the preferred test statistics for LHC measurements. These methods are designed to scale to both high-dimensional parameter spaces and many observables. They do not require any simplifying assumptions about the underlying physics: they support state-of-the-art event generators with parton shower, reducible and irreducible backgrounds, and full detector simulations. After an upfront training phase, they are very efficient to evaluate. Our tools directly provide an estimator for the likelihood ratio, an intuitive and easily interpretable quantity. Finally, limits derived from these tools with toy experiments have the reassuring property that even if they might not be optimal, they are never wrong, i.e. no points are said to be excluded that should not be excluded at a given confidence level.
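The "extra information" can be illustrated in a toy model. Because the shower and detector do not depend on the theory parameters, the gradient of the joint log likelihood with respect to the parameters reduces to the gradient of the tractable parton-level density, which the simulator can evaluate for every generated event. The sketch below is entirely hypothetical (the density and its normalization are invented for illustration; the real analysis uses the squared matrix element from the event generator):

```python
# Toy sketch of augmenting simulated events with the joint score:
#   grad_theta log p(x, z | theta) = grad_theta log p(z_p | theta),
# since shower and detector do not depend on theta.
import numpy as np

def parton_log_density(z_p, theta):
    """Hypothetical log p(z_p | theta) with an EFT-like quadratic structure:
    dsigma/dz = f0(z) + theta*f1(z) + theta^2*f2(z) on z in [0, 1]."""
    dsigma = 1.0 + theta * np.cos(np.pi * z_p) + theta ** 2 * z_p ** 2
    sigma = 1.0 + theta ** 2 / 3.0  # analytic integrals over [0,1]: 1, 0, 1/3
    return np.log(dsigma) - np.log(sigma)

def joint_score(z_p, theta, eps=1e-5):
    """Central-difference gradient of the parton-level log density with
    respect to theta: the joint score attached to each simulated event."""
    return (parton_log_density(z_p, theta + eps)
            - parton_log_density(z_p, theta - eps)) / (2.0 * eps)

# At theta = 0 the score reduces to f1(z)/f0(z) = cos(pi * z_p):
print(joint_score(np.array([0.1, 0.5, 0.9]), theta=0.0))
```

In the methods developed below, this per-event quantity is what supervises the neural likelihood-ratio estimators beyond plain samples.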

In Ref. Brehmer et al. (2018a), the companion paper of this publication, we focus on the key ideas and sensitivity enabled by these techniques. Reference Brehmer et al. (2018b) presents the methods in a more abstract setting. Here we describe the actual algorithms in detail, developing several different methods side by side. Given the number of discussed variations, this publication might have the look and feel of a review article and we present it as a guide to the interested practitioner. We focus on the main ideas and differences between the approaches and postpone many technical details until the appendices.

We evaluate the performance of these different methods on a specific example problem, the measurement of two dimension-six operators in Higgs production in weak boson fusion (WBF) in the four-lepton mode at the LHC. For part of this analysis, we work in an idealized setting in which we can access the true likelihood function, providing us with a ground truth for the comparison of the different analysis methods. After establishing the precision of the likelihood ratio estimation, we turn towards the more physical question of how strongly the two operators can be constrained with the different techniques. We repeat the analysis with a simplified detector response where the ground-truth likelihood is no longer tractable.

We begin by laying out the problem in Sec. II: we summarize the effective field theory idea, list the challenges posed by EFT measurements, translate the problem from a physics perspective into the language of statistics, and discuss its important structural properties. We also set up the example process used throughout the rest of the paper. The description of the analysis methods is split into two parts: in Sec. III we define the different techniques to estimate the likelihood ratio, which includes most of the conceptual work presented here. Section IV then explains how to set limits on the EFT parameters based on these tools. In Sec. V we evaluate the performance of the different tools in our example process. Finally, in Sec. VI we summarize our findings and give recommendations for practitioners. The appendices describe the different algorithms in more detail and provide additional results. The code and data used for this paper are available online at Ref. Brehmer et al. (2018c).

II The EFT measurement problem

II.1 Effective field theory

Effective field theories (EFTs) Coleman et al. (1969); Callan et al. (1969); Weinberg (1980) parameterize the effects of physics at an energy scale $\Lambda$ on observables at smaller energies $E \ll \Lambda$ as a set of local operators. The form of these operators is fixed by the light particles and the symmetry structure of the theory and is entirely independent of the high-energy model. Systematically expanding the Lagrangian in $1/\Lambda$, equivalent to ordering the operators by their canonical dimension, leaves us with a finite set of operators weighted by Wilson coefficients that describe all possible new physics effects up to some order in $1/\Lambda$.

In the absence of new particles at the TeV scale, and assuming the symmetry structure of the SM, we can thus describe any new physics signature in LHC processes in terms of a set of higher-dimensional operators Burges and Schnitzer (1983); Leung et al. (1986); Buchmuller and Wyler (1986); Arzt et al. (1995); Hagiwara et al. (1993); Grzadkowski et al. (2010). In this SM Effective Field Theory (SMEFT), the leading effects beyond the SM come from 59 independent dimension-six operators $\mathcal{O}_o$ with Wilson coefficients $f_o$,

$$\mathcal{L} = \mathcal{L}_{\text{SM}} + \sum_o \frac{f_o}{\Lambda^2} \, \mathcal{O}_o \,, \tag{1}$$

where the SM corresponds to all $f_o = 0$ and any measurement of a deviation hints at new physics.

The dimension-six Wilson coefficients are perfectly suited as an interface between experimental measurements and theory interpretations. They are largely model-independent, can parameterize a wide range of observables, including novel kinematic features, and are theoretically consistent beyond tree level. On the technical side, dimension-six operators are implemented in standard Monte-Carlo event generators Alwall et al. (2014), allowing us to generate predictions for rates and kinematic observables for any combination of Wilson coefficients. Measured values of the Wilson coefficients $f_o/\Lambda^2$ can easily be translated to the parameters of specific models through well-established matching procedures Henning et al. (2016). All in all, SMEFT measurements will likely be a key part of the legacy of the LHC experiments de Florian et al. (2016).

Let us briefly comment on the question of EFT validity. A hierarchy of energy scales is the key assumption behind the EFT construction, but in a bottom-up approach the cutoff scale $\Lambda$ cannot be known without additional model assumptions. From a measurement we can estimate the new physics scale $\Lambda$ only by assuming a characteristic size of the new physics couplings, and comparing it to the energy scale of the experiment. It has been found that dimension-six operators often capture the dominant effects of new physics even when there is only a moderate scale separation Brehmer et al. (2016). All these concerns are not primarily of interest for the measurement of Wilson coefficients, but rather important for the interpretation of the results in specific UV theories.

II.2 Physics challenges and traditional methods

EFT measurements at the LHC face three fundamental challenges:

  1. Individual scattering processes at the LHC are sensitive to several operators and require simultaneous inference over a multi-dimensional parameter space. While a naive parameter scan works well for one or two dimensions, it becomes prohibitively expensive for more than a few parameters.

  2. Most operators introduce new coupling structures and predict non-trivial kinematic features. These do not translate one-to-one to traditional kinematic observables such as transverse momenta, invariant masses or angular correlations. An analysis based on only one kinematic variable typically cannot constrain the full parameter space efficiently. Instead, most of the operator effects only become fully apparent when multiple such variables including their correlations are analysed Brehmer et al. (2017a, b).

  3. The likelihood function of the observables is intractable, making this the setting of “likelihood-free inference” or “simulator-based inference”. There are simulators for the high-energy interactions, the parton shower, and detector effects that can generate event samples for any theory parameter values, but they can only be run in the forward mode. Given a set of reconstruction-level observables, it is not possible to evaluate the likelihood of this observation for different theory parameters. The reason is that this likelihood includes the integral over all possible parton shower histories and particle trajectories through the detector, which is infeasible to calculate in realistic situations. We will discuss this property in more detail in the following section.

The last two issues are typically addressed in one of three ways. Most commonly, a small set of discriminating variables (also referred to as summary statistics or engineered features) is handpicked for a given problem. The likelihood in this low-dimensional space is then estimated, for instance, by filling histograms from simulations. While well-chosen variables may lead to good constraints along individual directions of the parameter space, there are typically directions in the parameter space with limited sensitivity Brehmer et al. (2017a, b).

The Matrix Element Method Kondo (1988); Abazov et al. (2004); Gao et al. (2010); Alwall et al. (2011); Bolognesi et al. (2012); Avery et al. (2013); Andersen et al. (2013); Campbell et al. (2013); Artoisenet et al. (2013); Gainer et al. (2013); Martini and Uwer (2015); Gritsan et al. (2016); Martini and Uwer (2017) and Optimal Observables Atwood and Soni (1992); Davier et al. (1993); Diehl and Nachtmann (1994) go beyond a few specific discriminating variables and use the matrix element for a particular process to estimate the likelihood ratio. While these techniques can be very powerful, they suffer from two serious limitations. The parton shower and detector response are either entirely neglected or approximated through ad-hoc transfer functions. Shower and event deconstruction Soper and Spannowsky (2011, 2013, 2014); Englert et al. (2016) allow for the calculation of likelihood ratios at the level of the parton shower, but still rely on transfer functions to describe the detector response. Finally, even with such a simple description of the shower and detector, the evaluation of the likelihood ratio estimator requires the numerically expensive computation of large integrals for each observed event.

Finally, there is a class of generic methods for likelihood-free inference. For Bayesian inference, the best-known approach is Approximate Bayesian Computation (ABC) Rubin (1984); Beaumont et al. (2002); Marjoram et al. (2003); Sisson et al. (2007); Sisson and Fan (2011); Marin et al. (2012). Similar to the histogram approach, it relies on the choice of appropriate low-dimensional summary statistics, which can severely limit the sensitivity of the analysis. Different techniques based on machine learning have been developed recently. In particle physics, the most common examples are discriminative classifiers between two discrete hypotheses, such as a signal and a background process. This approach has recently been extended to parameter measurements Cranmer et al. (2015); Cranmer and Louppe (2016). More generally, many techniques based on the idea of using a classification model, such as neural networks, for inference in the absence of a tractable likelihood function have been introduced in the machine learning community Fan et al. (2012); Papamakarios and Murray (2016); Paige and Wood (2016); Dutta et al. (2016); Gutmann et al. (2017); Tran et al. (2017); Louppe and Cranmer (2017); Dinh et al. (2014); Jimenez Rezende and Mohamed (2015); Dinh et al. (2016); Papamakarios et al. (2017); Uria et al. (2016); van den Oord et al. (2016a, b, c); Papamakarios et al. (2018). All of these methods only require samples of events generated according to different parameter points. They do not make use of the structure of the particle physics processes, and thus do not use all available information.
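The classifier-based approach rests on the likelihood-ratio trick: a well-calibrated classifier $s(x)$ trained to separate samples from two hypotheses converges to $p_1/(p_0 + p_1)$, so the likelihood ratio can be recovered as $(1-s)/s$. A self-contained numpy-only sketch on two Gaussian "hypotheses" (toy data, not an LHC process) illustrates the idea:

```python
# Likelihood-ratio trick in a minimal sketch: a probabilistic classifier
# trained on x ~ p(x|theta0) vs x ~ p(x|theta1) approaches p1/(p0 + p1),
# so the ratio r(x) = p0/p1 is approximately (1 - s)/s.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x0 = rng.normal(0.0, 1.0, n)   # "events" from hypothesis theta0
x1 = rng.normal(1.0, 1.0, n)   # "events" from hypothesis theta1

# For two Gaussians of equal width the optimal decision function is linear,
# so logistic regression on an intercept and x is sufficient.
X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])
feats = np.stack([np.ones_like(X), X], axis=1)

w = np.zeros(2)
for _ in range(2000):                      # plain batch gradient descent
    s = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.1 * feats.T @ (s - y) / len(y)

def ratio_hat(x):
    """Estimated likelihood ratio p(x|theta0) / p(x|theta1)."""
    s = 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x)))
    return (1.0 - s) / s

def true_ratio(x):
    """Exact ratio of the two Gaussians, for comparison."""
    return np.exp(-0.5 * x ** 2 + 0.5 * (x - 1.0) ** 2)

print(ratio_hat(0.0), true_ratio(0.0))     # both close to exp(0.5) ~ 1.65
```

As the text notes, such a classifier only ever sees samples; the techniques developed below additionally exploit the tractable parton-level structure.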

All of these methods come with a price. We develop new techniques that

  • are tailored to particle physics measurements and leverage their structural properties,

  • scale well to high-dimensional parameter spaces,

  • can accommodate many observables,

  • capture the information in the fully differential cross sections, including all correlations between observables,

  • fully support state-of-the-art simulators with parton showers and full detector simulations, and

  • are very efficient to evaluate after an upfront training phase.

II.3 Structural properties of EFT measurements

II.3.1 Particle-physics structure

One essential step to finding the optimal measurement strategy is identifying the structures and symmetries of the problem. Particle physics processes, in particular those described by effective field theories, typically have two key properties that we can exploit.

First, any high-energy particle physics process factorizes into the parton-level process, which contains the matrix element and with it the entire dependence on the EFT coefficients, and a residual part describing the parton shower and detector effects. In many plausible scenarios of new physics neither the strong interactions in the parton shower nor the electromagnetic and strong interactions in the detector are affected by the parameters of interest. The likelihood function can then be written as

$$p(x \,|\, \theta) = \int \mathrm{d}z_p \; p(x \,|\, z_p) \; p(z_p \,|\, \theta) \,. \tag{2}$$

Here and in the following $x$ are the actual observables after the shower, detector, and reconstruction; $\theta$ are the theory parameters of interest; and $z_p$ are the parton-level momenta (a subset of the latent variables). Table 1 provides a dictionary of these and other important symbols that we use.

The first ingredient to this likelihood function is the distribution of parton-level four-momenta,

$$p(z_p \,|\, \theta) = \frac{1}{\sigma(\theta)} \, \frac{\mathrm{d}\sigma(\theta)}{\mathrm{d}z_p} \,, \tag{3}$$

where $\sigma(\theta)$ and $\mathrm{d}\sigma(\theta)/\mathrm{d}z_p$ are the total and differential cross sections, respectively. Crucially, this function is tractable: the matrix element and the parton density functions can be evaluated for arbitrary four-momenta $z_p$ and parameter values $\theta$. In practice this means that matrix-element codes such as MadGraph Alwall et al. (2014) can not only be run in a forward, generative mode, but also define functions that return the squared matrix element for a given phase-space point $z_p$. Unfortunately, there is typically no user-friendly interface to these functions, so evaluating them requires some work.

Second, the conditional density $p(x \,|\, z_p)$ describes the probabilistic evolution from the parton-level four-momenta to observable particle properties. While this symbol looks innocuous, it represents the full parton shower, the interaction of particles with the detector material, the sensor response and readout, and the reconstruction of observables. Different simulators such as Pythia Sjostrand et al. (2008), Geant4 Agostinelli et al. (2003), or Delphes de Favereau et al. (2014) are often used to generate samples $x$ for given parton-level momenta $z_p$. This sampling involves the Monte-Carlo integration over the possible shower histories $z_s$ and detector interactions $z_d$,

$$p(x \,|\, z_p) = \int \mathrm{d}z_d \int \mathrm{d}z_s \; p(x \,|\, z_d) \, p(z_d \,|\, z_s) \, p(z_s \,|\, z_p) \,. \tag{4}$$

This enormous latent space can easily involve many millions of random numbers, and these integrals are clearly intractable, which we denote with the red symbols in Table 1. In other words, given a set of reconstruction-level observables $x$, we cannot calculate the likelihood function $p(x \,|\, z_p)$ that describes the compatibility of parton-level momenta $z_p$ with the observation. By extension, we also cannot evaluate $p(x \,|\, \theta)$, the likelihood function of the theory parameters given the observation. The intractable integrals in Eq. (4) are the crux of the EFT measurement problem.

The factorization of Eq. (2) together with the tractability of the parton-level likelihood is immensely important. We will refer to the combination of these two properties as particle-physics structure. The far-reaching consequences of this structure for EFT measurements will be the topic of Sec. III.2. Many (but not all) of the inference strategies we discuss will rely on this condition.

Note that this Markov property holds even with reducible and irreducible backgrounds and when a matching scheme is used to combine different parton-level multiplicities. In these situations there may be different disjoint parts of $z_p$ space, even with different dimensionalities, for instance when events with $n$ and $n+1$ partons in the final state can lead to the same configuration of observed jets. The integral over $z_p$ then has to be replaced with a sum over these disjoint spaces and an integral over each of them, but the logic remains unchanged.

Symbol | Physics meaning | Machine learning abstraction
$x$ | Set of all observables | Features
$v$ | One or two kinematic variables | Low-dimensional summary statistics / engineered features
$z_p$ | Parton-level four-momenta | Latent variables
$z_s$ | Parton shower trajectories | Latent variables
$z_d$ | Detector interactions | Latent variables
$z$ | Full simulation history of event | All latent variables
$\theta$ | Theory parameters (Wilson coefficients) | Parameters of interest
$\hat{\theta}$ | Best fit for theory parameters | Estimator for parameters of interest
$p(x|\theta)$ | Distribution of observables given theory parameters | Intractable likelihood
$p(z_p|\theta)$ | Parton-level distributions from matrix element | Tractable likelihood of latent variables
$p(x|z_p)$ | Effect of shower, detector, reconstruction | Intractable density defined through stochastic generative process
$r(x|\theta_0, \theta_1)$ | Likelihood ratio between hypotheses $\theta_0$, $\theta_1$, see Eq. (11) |
$\hat{r}(x|\theta_0, \theta_1)$ | Estimator for likelihood ratio |
$t(x|\theta_0)$ | Score, see Eq. (14) |
$\hat{t}(x|\theta_0)$ | Estimator for score |
$x_e$, $z_e$ | Event | Data point
$\theta_i$ | Wilson coefficient for one operator | Individual parameter of interest
$\theta_c$, $w_c$, $p_c$ | Morphing basis points, coefficients, densities, see Eq. (6) |
Table 1: Dictionary defining many symbols that appear in this paper. Red symbols denote intractable likelihood functions. The last three rows explain our conventions for indices.

II.3.2 Operator morphing

Effective field theories (and other parameterisations of indirect signatures of new physics) typically contribute a finite number of amplitudes to a given process, each of which is multiplied by a function of the Wilson coefficients. (Exceptions can arise for instance when particle masses or widths depend on the parameters of interest. But in an EFT setting one can expand these quantities in $1/\Lambda$, restoring the factorization.) In this case the likelihood can be written as

$$p(x \,|\, \theta) = \sum_c w_c(\theta) \, f_c(x) \,, \tag{5}$$

where $c$ labels the different amplitude components, and the functions $f_c(x)$ are not necessarily positive definite or normalized.

The simplest example is a process in which one SM amplitude $\mathcal{M}_{\text{SM}}$ interferes with one new physics amplitude $\theta \, \mathcal{M}_{\text{BSM}}$, which scales linearly with a new physics parameter $\theta$. The differential cross section, proportional to the squared matrix element, is then $|\mathcal{M}|^2 = |\mathcal{M}_{\text{SM}}|^2 + 2 \theta \operatorname{Re} \mathcal{M}_{\text{SM}}^* \mathcal{M}_{\text{BSM}} + \theta^2 |\mathcal{M}_{\text{BSM}}|^2$. There are three components, representing the SM, interference, and pure BSM terms, each with their own parameter dependence $w_c(\theta)$ and momentum dependence $f_c(x)$.

We can then pick a number of basis parameter points $\theta_c$ equal to the number of components in Eq. (5). (Note that the morphing basis points are unrelated to the choice of an operator basis for the effective field theory.) They can always be chosen such that the matrix of component weights evaluated at the basis points is invertible, which allows us to rewrite Eq. (5) as a mixture model

$$p(x \,|\, \theta) = \sum_c w_c(\theta) \, p(x \,|\, \theta_c) \tag{6}$$

with weights $w_c(\theta)$ and (now properly normalized) basis densities $p(x \,|\, \theta_c)$. The weights depend on the choice of basis points and are analytically known. This “morphing” procedure therefore allows us to extract the full likelihood function from a finite set of evaluations of basis densities $p(x \,|\, \theta_c)$.
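Concretely, the weights follow from a small linear solve. A numpy sketch of the morphing procedure for the quadratic one-parameter example above (toy component functions, not the actual event-generator workflow):

```python
# Morphing sketch: three components (1, theta, theta^2) mean three basis
# points theta_c suffice, and the weights w_c(theta) follow from a 3x3
# linear solve.
import numpy as np

def components(theta):
    """Parameter dependence of the SM, interference, and pure-BSM terms."""
    return np.array([1.0, theta, theta ** 2])

basis = np.array([-1.0, 0.0, 1.0])            # choice of basis points theta_c
M = np.stack([components(t) for t in basis])  # rows must be independent

def morphing_weights(theta):
    """Solve sum_c w_c * components(theta_c) = components(theta)."""
    return np.linalg.solve(M.T, components(theta))

# Closure test on a toy (unnormalized) density built from arbitrary f_c(x):
def density(x, theta):
    f = np.array([1.0, np.sin(x), x ** 2])    # stand-ins for f_c(x)
    return components(theta) @ f

theta, x = 0.3, 0.7
morphed = sum(w * density(x, t) for w, t in zip(morphing_weights(theta), basis))
assert np.isclose(morphed, density(x, theta))
print(morphing_weights(theta))
```

Note that the weights sum to one by construction here, since the first component is constant, so the morphed density stays correctly normalized when the basis densities are.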

Calculating the full statistical model through morphing requires the likelihood to be tractable, which is true for parton-level momenta as argued above. However, the same trick can be applied even when the exact likelihood is intractable, but we can estimate it. For instance, the marginal distribution of any individual kinematic variable can be reliably estimated through histograms or other density estimation techniques, even when shower and detector effects are taken into account. The morphing procedure then lets us evaluate the full conditional distribution based on a finite number of Monte-Carlo simulations Aad et al. (2015).

Finally, note that Eq. (6) together with Eq. (2) imply

$$p(x \,|\, \theta) = \sum_c w_c(\theta) \, p(x \,|\, \theta_c) \,, \tag{7}$$

even if the likelihood function $p(x \,|\, \theta)$ and the components $p(x \,|\, \theta_c)$ are intractable: since the shower and detector evolution does not depend on $\theta$, the mixture structure at the parton level survives to the level of reconstructed observables. This will later allow us to impose the morphing structure on likelihood ratio estimators.

Not all EFT amplitudes satisfy the morphing structure in Eq. (5), so we discuss both measurement strategies that rely on this property and more general ones that do not require it to hold.

II.4 Explicit example

II.4.1 Weak-boson-fusion Higgs to four leptons


Figure 1: Feynman diagram for Higgs production in weak boson fusion in the four-lepton mode. The red dots show the Higgs-gauge interactions affected by the dimension-six operators of our analysis.

As an explicit example LHC process we consider Higgs production in weak boson fusion (WBF) with a decay of the Higgs into four leptons,

$$p \, p \;\to\; h \, j \, j \;\to\; \ell^+ \ell^- \, \ell^+ \ell^- \; j \, j \tag{8}$$

with $\ell = e, \mu$, as shown in Fig. 1.

While this process is rare and is likely to only be observed during the high-luminosity run of the LHC, it has a few compelling features that make it a prime candidate to study the efficient extraction of information. First, the two jets from the quarks and in particular the four leptons can be reconstructed quite precisely in the LHC detectors. Even when assuming on-shell conditions and energy-momentum conservation, the final-state momenta span a 16-dimensional phase space, giving rise to many potentially informative observables.

Second, both the production of the Higgs boson in weak boson fusion as well as its decay into four leptons are highly sensitive to the effects of new physics in the Higgs-gauge sector. We parameterize these with dimension-six operators in the SMEFT, following the conventions of the Hagiwara-Ishihara-Szalapski-Zeppenfeld basis Hagiwara et al. (1993). For simplicity, we limit our analysis to the two particularly relevant operators


For convenience, we rescale the Wilson coefficients to the dimensionless parameters of interest


where $v$ is the electroweak vacuum expectation value. As alluded to above, the validity range of the EFT cannot be determined in a model-independent way. For moderately weakly to moderately strongly coupled underlying new physics models, one would naively expect new physics couplings of order one and the EFT description to be useful for rescaled Wilson coefficients up to order one. This is the parameter range we analyse in this paper.

The interference between the Standard Model amplitudes and the dimension-six operators leads to an intricate relation between the observables and parameters in this process, which has been studied extensively. The precise measurement of the momenta of the four leptons provides access to a range of angular correlations that fully characterize the decay Dell’Aquila and Nelson (1986); Bolognesi et al. (2012). These variables are sensitive to the effects of dimension-six operators. But the momentum flow through the decay vertex is limited by the Higgs mass, and the relative effects of these dimension-six operators are correspondingly suppressed. On the other hand, the Higgs production through two off-shell gauge bosons with potentially high virtuality does not suffer from this suppression. The properties of the two jets recoiling against them are highly sensitive to operator effects in this vertex Plehn et al. (2002); Hankele et al. (2006); Hagiwara et al. (2009); Englert et al. (2013).

Figure 2: Kinematic distributions in our example process for three example parameter points. We assume an idealized detector response to be discussed in Sec. II.4.2. Left: transverse momentum of the leading (higher-$p_T$) jet, a variable strongly correlated with the momentum transfer in the process. The dip around 350 GeV is a consequence of the amplitude being driven through zero, as discussed in the text. Right: separation in azimuthal angle between the two jets.

In Fig. 2 we show example distributions of two particularly informative observables: the transverse momentum of the leading (higher-$p_T$) jet, $p_{T,j_1}$, and the azimuthal angle between the two jets, $\Delta\phi_{jj}$. The two quantities are sensitive to different directions in parameter space. Note also that the interference between the different amplitudes can give rise to non-trivial effects. The size of the dimension-six amplitudes grows with momentum transfer, which is strongly correlated with the transverse momentum of the leading jet. If the interference of new-physics amplitudes with the SM diagrams is destructive, this can drive the total amplitude through zero Brehmer et al. (2016). The jet momentum distribution then dips and rises again at higher energies, as seen in the red curve in the left panel of Fig. 2. Such depleted regions of low probability can lead to very small or large likelihood ratios and potentially pose a challenge to inference methods.

By analysing the Fisher information in these distributions, it is possible to compare the discrimination power in these two observables to the information contained in the full multivariate distribution or to the information in the total rate. It turns out that the full multivariate distribution contains significantly more information than the one-dimensional and two-dimensional marginal distributions of any standard kinematic variables Brehmer et al. (2017a). The total rate is found to carry much less information on the two operators, in particular when systematic uncertainties on the cross sections are taken into account. In this study we therefore only analyse the kinematic distributions for a fixed number of observed events.

II.4.2 Sample generation

Already in the sample generation we can make use of the structural properties of the process discussed in Sec. II.3. The amplitude of this process factorizes into a sum of parameter-dependent factors times phase-space-dependent amplitudes, as given in Eq. (5). The effect of the dimension-six operators on the total Higgs width breaks this decomposition, but it is tiny and in practice irrelevant compared to the experimental resolution. The likelihood function of this process therefore follows the mixture model in Eq. (6) to a good approximation, and the morphing weights can be calculated explicitly. Since the parton-level likelihood function is tractable, we can reconstruct the entire likelihood function from a finite number of simulator runs, as described in Sec. II.3.2.

To this end, we first generate a parton-level sample of events with MadGraph 5 Alwall et al. (2014) and its add-on MadMax Cranmer and Plehn (2007); Plehn et al. (2014); Kling et al. (2017), using the setup described in Ref. Brehmer et al. (2017a). With MadMax we can evaluate the likelihood for all events and for 15 different basis parameter points θ_c. Calculating the morphing weights finally gives us the true parton-level likelihood function for each generated phase-space point z.

Figure 3: Basis points and some of the morphing weights for our example process. Each panel shows the morphing weight w_c(θ) of one of the components as a function of the parameter space. The weights of the remaining 13 components (not shown) follow qualitatively similar patterns. The dots show the positions of the basis points θ_c; the large black dot denotes the basis point corresponding to the morphing weight shown in that panel. Away from the morphing basis points, the morphing weights can grow very large, with large cancellations between different components.

In Fig. 3 we show the basis points and two of the morphing weights as functions of the parameters θ. In some corners of parameter space the weights become very large, and there are large cancellations between positive and negative weights. This will pose a challenge for the numerical stability of every inference algorithm that directly uses the morphing structure of the process, as we will discuss later. Other basis choices we tried led to comparable or larger morphing weights.
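The weight calculation itself is a small linear algebra problem: the squared amplitude is a polynomial in the theory parameters, so the weights at any θ follow from solving a linear system built from that polynomial basis evaluated at the basis points. A minimal numpy sketch for a single parameter with quadratic dependence (three components instead of the 15 used in our process; all names and numbers are illustrative):

```python
import numpy as np

def monomials(theta):
    # Squared amplitude assumed quadratic in a single parameter theta:
    # |M|^2(theta) = sum_k theta^k f_k(phase space), with k = 0, 1, 2.
    return np.array([1.0, theta, theta**2])

# One basis parameter point per polynomial component (illustrative choice).
basis = [0.0, 0.5, 1.0]
M = np.stack([monomials(tc) for tc in basis])  # (3, 3) Vandermonde-like matrix

def morphing_weights(theta):
    # Solve for w(theta) such that p(x|theta) = sum_c w_c(theta) p(x|theta_c).
    return np.linalg.solve(M.T, monomials(theta))
```

At a basis point the solution reduces to picking out the corresponding component; away from the basis points the weights can grow large and partially cancel, which is exactly the numerical issue discussed above.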

Parton shower and detector effects smear the observed particle properties with respect to the parton-level momenta and make the likelihood function in Eq. (2) intractable. We develop inference methods that can be applied exactly in this case and that do not require any simplifying assumptions about the shower and detector response. However, in this realistic scenario we cannot evaluate their performance by comparing them to the true likelihood ratio. We therefore first test them in an idealized scenario in which the four-momenta, flavors, and charges of the leptons, and the momenta of the partons, can be measured exactly. In this approximation we can evaluate the likelihood exactly.

After establishing the performance of the various algorithms in this idealized setup, we will analyse the effect of parton shower and detector simulation on the results. We generate an approximate detector-level sample by drawing events from a smearing distribution conditional on the parton-level momenta z. This smearing function is loosely motivated by the performance of the LHC experiments and is defined in Appendix A.1.
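As an illustration of such a conditional smearing distribution, here is a toy Gaussian-resolution model; the resolution value is invented for the sketch and is not the smearing function of Appendix A.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def smear_jet_pt(pt_parton, rel_resolution=0.1):
    # Toy detector response: Gaussian relative smearing of the jet pT,
    # p(x | z) = N(x; z, (rel_resolution * z)^2). The 10% resolution is
    # illustrative, not the parameterization used in the paper.
    return rng.normal(loc=pt_parton, scale=rel_resolution * pt_parton)

parton_pts = np.array([50.0, 150.0, 350.0])   # hypothetical parton-level pT values
smeared_pts = smear_jet_pt(parton_pts)
```

Drawing many smeared events per parton-level configuration in this way produces the approximate detector-level sample used for the comparison.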

III Likelihood ratio estimation

According to the Neyman-Pearson lemma, the likelihood ratio

r(x|θ0, θ1) = p(x|θ0) / p(x|θ1)

is the most powerful test statistic to discriminate between two hypotheses θ0 and θ1. Unfortunately, the integral over the latent space makes the likelihood function as well as the likelihood ratio intractable. The first and crucial stage of all our EFT measurement strategies is therefore the construction of a likelihood ratio estimator r̂(x|θ0, θ1) that is as close to the true r(x|θ0, θ1) as possible and thus maximizes the discrimination power between θ0 and θ1.
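For a toy example in which both hypotheses are tractable, the likelihood ratio can be written down directly. The following sketch uses two unit Gaussians with different means as a stand-in (not our physics process); the log ratio is then linear in x, so thresholding the ratio is equivalent to a simple cut:

```python
import numpy as np

def gaussian_log_pdf(x, mu, sigma=1.0):
    # Log density of a normal distribution N(mu, sigma^2).
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def log_likelihood_ratio(x, mu0, mu1):
    # log r(x | theta0, theta1) for unit Gaussians with means mu0, mu1.
    # Here it simplifies to (mu0 - mu1) * x + (mu1**2 - mu0**2) / 2.
    return gaussian_log_pdf(x, mu0) - gaussian_log_pdf(x, mu1)

x = np.array([-1.0, 0.0, 1.0])
llr = log_likelihood_ratio(x, mu0=0.0, mu1=1.0)
```

In a collider process the densities in this ratio are intractable, which is exactly why the estimation strategies below are needed.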

This estimation problem has several different aspects that we try to disentangle as much as possible. The first choice is the overall structure of the likelihood ratio estimator and its dependence on the theory parameters θ. We discuss this in Sec. III.1. Section III.2 analyses what information is available and useful to construct (train) the estimators for a given process. Here we will introduce the main ideas that harness the structure of the EFT to increase the information used in the training process.

These basic concepts are combined into concrete strategies for the estimation of the likelihood ratio in Sec. III.3. After training the estimators, there is an optional additional calibration stage, which we introduce in Sec. III.4. Section III.5 describes the technical implementation of these strategies in terms of neural networks. Finally, we discuss the challenges that the different algorithms face in Sec. III.6 and introduce diagnostic tools for the uncertainties.

III.1 Modeling likelihood ratios

III.1.1 Likelihood ratios

There are different approaches to the structure of this estimator, in particular to its dependence on the theory parameters θ:

Point by point (PbP)

A common strategy is to scan the parameter space, randomly or in a grid. To reduce the complexity of the scan one can keep the denominator fixed at a reference value θ_ref while scanning only θ0. Likelihood ratios with other denominators can then be extracted trivially as r(x|θ0, θ1) = r(x|θ0, θ_ref) / r(x|θ1, θ_ref). Instead of a single reference value θ_ref, we can also use a composite reference hypothesis with some prior π(θ). This can reduce the regions in feature space with small reference likelihood and improve the numerical stability.

For each pair (θ0, θ1) separately, the likelihood ratio r̂(x|θ0, θ1) is estimated as a function of the observables x. Only the final results are interpolated between the scanned values of θ.
This approach is particularly simple, but discards all information about the structure and smoothness of the parameter space. For high-dimensional parameter spaces, the parameter scan can become prohibitively expensive. The final interpolation may introduce additional uncertainties.
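The final interpolation step of the PbP approach can be sketched in one parameter dimension; the dependence of the estimated log likelihood ratio on θ below is a hypothetical quadratic, and the mismatch between the linear interpolation and the underlying curve illustrates the additional uncertainty mentioned above:

```python
import numpy as np

# Hypothetical PbP scan: expected log likelihood ratio estimated at a
# grid of theta values (stand-in quadratic dependence, not real results).
theta_scan = np.linspace(-1.0, 1.0, 9)
logr_scan = 3.0 * theta_scan ** 2

# PbP interpolates only these final per-point results, not the estimator.
theta_query = 0.1
logr_interp = np.interp(theta_query, theta_scan, logr_scan)
```

The interpolated value at θ = 0.1 (0.075) differs from the underlying curve (0.03), a discrepancy that shrinks with a denser scan but never fully disappears.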

Agnostic parameterized estimators

Alternatively we can train one estimator as the full model, a function of both the observables x and the parameter combination (θ0, θ1) Cranmer et al. (2015); Baldi et al. (2016). A modification is again to leave the denominator θ1 at a fixed reference value θ_ref (or a fixed composite reference hypothesis with a prior π(θ)) and only learn the dependence on x and θ0.

This parameterized approach leaves it to the estimator to learn the typically smooth dependence of the likelihood ratio on the physics parameters and does not require any interpolation in the end. No assumptions are made about the functional form of the dependence of the likelihood ratio on the parameters.
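In practice, a parameterized estimator simply receives the parameters as additional input features of the network. A schematic numpy sketch of the input layout (array shapes, parameter ranges, and the single-event scan are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: event observables and the parameter points
# each event was generated at (5 observables, 2 Wilson coefficients).
x = rng.normal(size=(1000, 5))
theta0 = rng.uniform(-1.0, 1.0, size=(1000, 2))

# A parameterized estimator r_hat(x | theta0, theta_ref) sees both as input:
inputs = np.concatenate([x, theta0], axis=1)   # shape (1000, 7)

# At inference time the same network can be evaluated for one observed
# event across a grid of theta values, with no retraining or interpolation:
grid_1d = np.linspace(-1.0, 1.0, 5)
theta_grid = np.stack(np.meshgrid(grid_1d, grid_1d), axis=-1).reshape(-1, 2)
one_event = np.repeat(x[:1], len(theta_grid), axis=0)
scan_inputs = np.concatenate([one_event, theta_grid], axis=1)  # shape (25, 7)
```

The per-event parameter scan in the last step is what replaces the PbP interpolation: the smoothness in θ is learned by the estimator itself.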

Morphing-aware estimators

For problems that satisfy the morphing condition of Eq. (6) and thus also Eq. (7), we can impose this structure and the explicit knowledge of the weights w_c(θ) onto the estimator. Again, one option is to keep the denominator fixed at a reference value θ_ref (or a composite reference hypothesis), leading to

r̂(x|θ0, θ_ref) = Σ_c w_c(θ0) r̂_c(x) ,

where the basis estimators r̂_c(x) ≈ p_c(x) / p(x|θ_ref) only depend on x.

Alternatively, we can decompose both the numerator and denominator distributions to find Cranmer et al. (2015, 2016)

r̂(x|θ0, θ1) = Σ_c w_c(θ0) [ Σ_c' w_c'(θ1) r̂_{c'c}(x) ]⁻¹ ,

with pairwise estimators r̂_{c'c}(x) ≈ p_c'(x) / p_c(x).

III.1.2 Score and local model

One remarkably powerful quantity is the score, defined as the relative tangent vector

t(x|θ0) = ∇_θ log p(x|θ) |_{θ=θ0} .

It quantifies the relative change of the likelihood under infinitesimal changes in parameter space and can be seen as a local equivalent of the likelihood ratio.

In a small patch around θ0 in which we can approximate the score as independent of θ, Eq. (14) is solved by the local model

p_local(x|θ) = (1/Z(θ)) p(x|θ0) exp[ t(x|θ0) · (θ − θ0) ] ,

with a normalisation factor Z(θ). The local model is in the exponential family. Note that the score components t(x|θ0) are the sufficient statistics for θ. This is significant: if we can estimate the vector-valued function t(x|θ0) (with one component per parameter of interest) of the high-dimensional x, we can reduce the dimensionality of our feature space dramatically without losing any information, at least in the local model approximation Alsing et al. (2018).

In fact, ignoring the normalization factors, in the local model the likelihood ratio between θ and θ0 depends on x only through the scalar product t(x|θ0) · (θ − θ0), which will allow us to take this dimensionality reduction one step further and compress the high-dimensional data x into a scalar without loss of power.
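The sufficiency of the score in the local model can be checked explicitly in a one-dimensional toy example with a tractable likelihood, a unit Gaussian with mean θ (a stand-in, not our physics process); there the local model is exact and the log likelihood ratio depends on x only through t(x|θ0):

```python
import numpy as np

def log_p(x, theta):
    # Toy tractable likelihood: unit Gaussian with mean theta.
    return -0.5 * (x - theta) ** 2 - 0.5 * np.log(2.0 * np.pi)

def score(x, theta0, eps=1e-5):
    # t(x|theta0) = d/dtheta log p(x|theta) at theta0, by central differences.
    return (log_p(x, theta0 + eps) - log_p(x, theta0 - eps)) / (2.0 * eps)

x, theta0, theta = 1.3, 0.0, 0.4
t = score(x, theta0)  # analytic value for this toy model: x - theta0

# For the Gaussian the local model is exact: the log likelihood ratio
# equals t * (theta - theta0) minus the log normalization factor.
lr_exact = log_p(x, theta) - log_p(x, theta0)
lr_local = t * (theta - theta0) - 0.5 * (theta - theta0) ** 2  # -log Z(theta)
```

The scalar t * (theta - theta0) carries all the x dependence of the ratio, which is the compression exploited in the following sections.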

Figure 4: Score vector as a function of kinematic observables in our example process. Left: first component of the score vector, representing the relative change of the likelihood under small parameter changes along the first direction in parameter space. Right: second component of the score vector, representing the relative change of the likelihood under small parameter changes along the second direction. In both panels, the axes show two important kinematic variables. We find that the score vector is clearly correlated with these two variables.

In our example process, we are interested in the Wilson coefficients of two dimension-six operators, so the score vector has two components. In Fig. 4 we show the relation between these two score components and two informative kinematic variables, the leading jet p_T and the azimuthal angle Δφ_jj between the two jets. We find that the score vector is very closely related to these two kinematic quantities, but the relation is not quite one-to-one. Larger energy transfer, measured as larger jet p_T, increases the typical size of the score vector. One component of the score is particularly sensitive to the angular correlation variable Δφ_jj, in agreement with detailed studies of this process Brehmer et al. (2017a).

III.2 Available information and its usefulness

III.2.1 General likelihood-free case

All measurement strategies have in common that the estimator is learned from data provided by Monte-Carlo simulations (the stochastic generative process). In the most general likelihood-free scenario, we can only generate samples of events x ~ p(x|θ) through the simulator, and must base an estimator on these generated samples.

One strategy Cranmer et al. (2015) is based on training a classifier with decision function s(x) between two equal-sized samples {x} ~ p(x|θ0), labelled y = 0, and {x} ~ p(x|θ1), labelled y = 1. The cross-entropy loss functional

L[s] = − ∫ dx [ p(x|θ0) log(1 − s(x)) + p(x|θ1) log s(x) ]

is minimized by the optimal decision function

s*(x) = p(x|θ1) / ( p(x|θ0) + p(x|θ1) ) .

From the decision function of a classifier we can therefore extract an estimator for the likelihood ratio as

r̂(x|θ0, θ1) = (1 − s(x)) / s(x) .
This idea, sometimes called the likelihood ratio trick, is visualized in the left panel of Fig. 5.
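The likelihood ratio trick can be demonstrated end-to-end in a toy setup: a logistic classifier trained by gradient descent on samples from two unit Gaussians recovers an estimate of their density ratio. The sample sizes, learning rate, and iteration count below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Equal-sized samples from the two hypotheses (toy unit Gaussians).
x0 = rng.normal(0.0, 1.0, 20000)   # theta0, labelled y = 0
x1 = rng.normal(1.0, 1.0, 20000)   # theta1, labelled y = 1
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])

# Logistic regression s(x) = sigmoid(a*x + b), trained by gradient descent
# on the cross-entropy loss. For Gaussian class conditionals this family
# contains the optimal decision function s*(x) = p1 / (p0 + p1).
a, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(a * x + b)))
    grad = s - y                      # d(cross-entropy)/d(logit)
    a -= 0.1 * np.mean(grad * x)
    b -= 0.1 * np.mean(grad)

def r_hat(x_eval):
    # Likelihood ratio trick: (1 - s) / s estimates p(x|theta0) / p(x|theta1).
    s_eval = 1.0 / (1.0 + np.exp(-(a * x_eval + b)))
    return (1.0 - s_eval) / s_eval
```

For these two Gaussians the true log ratio is 0.5 − x, so the trained classifier should give r_hat close to 1 at x = 0.5 and a monotonically falling ratio in x, which it does up to statistical fluctuations.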

Figure 5: Illustration of some key concepts with a one-dimensional Gaussian toy example. Left: classifiers trained to distinguish two sets of events generated from different hypotheses (green dots) converge to an optimal decision function (in red) given in Eq. (17). This lets us extract the likelihood ratio. Right: regression on the joint likelihood ratios of the simulated events (green dots) converges to the likelihood ratio (red line).

As pointed out in Ref. Cranmer et al. (2015), a weaker assumption suffices: any loss functional that is minimized by a decision function which is a strictly monotonic function of the likelihood ratio can be used. The underlying reason is that the likelihood ratio is invariant under any transformation with this property. In practice, the output of any such classifier can be brought closer to the form of Eq. (17) through a calibration procedure, which we will discuss in Sec. III.4.

III.2.2 Particle-physics structure

As we have argued in Sec. II.3, particle physics processes have a specific structure that allow us to extract additional information. Most processes satisfy the factorization of Eq. (2) with a tractable parton-level likelihood . The generators do not only provide samples , but also the corresponding parton-level momenta (latent variables) with . By evaluating the matrix elements at the generated momenta for different hypotheses and , we can extract the parton-level likelihood ratio . Since the distribution of is conditionally independent of the theory parameters, this is the same as the joint likelihood ratio