I Introduction
An important aspect of the legacy of the Large Hadron Collider (LHC) experiments will be precise constraints on indirect signatures of physics beyond the Standard Model (SM), parameterized for instance by the dimension-six operators of the Standard Model effective field theory (SMEFT). The relevant measurements can easily involve tens of different parameters that predict subtle kinematic signatures in the high-dimensional space of the data. Traditional analysis techniques do not scale well to this complex problem, motivating the development of more powerful techniques.
The analysis of high-energy physics data is based on an impressive suite of simulation tools that model the hard interaction, parton shower, hadronization, and detector response. The community has invested a tremendous amount of effort into developing these tools, yielding the high-fidelity modeling of LHC data needed for precision measurements. Simulators such as Pythia Sjostrand et al. (2008) and Geant4 Agostinelli et al. (2003)
use Monte Carlo techniques to sample the multitudinous paths through which a particular hard scattering might develop. A single event's path through the simulation can easily involve many millions of random variables. While Monte Carlo techniques can efficiently sample from the distributions implicitly defined by the simulators, it is not feasible to calculate the likelihood for a particular observation, because doing so would require explicitly integrating over all the possible histories leading to that observation, an enormous latent space. While this problem is ubiquitous in high-energy physics, it is rarely acknowledged explicitly.
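The mismatch between easy forward sampling and intractable likelihood evaluation can be illustrated with a deliberately simple toy simulator (a sketch with made-up dynamics, not a physics generator): each event is produced by a long chain of latent random steps, so sampling is trivial, while evaluating the density of an observed value would require integrating over the entire latent path.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n_steps=1000):
    """Toy forward-mode simulator: the observable is the endpoint of a
    nonlinear latent evolution over n_steps random variables. Sampling is
    cheap, but evaluating p(x|theta) would require an integral over all
    1000 latent steps, for which there is no closed form."""
    z = 0.0
    for _ in range(n_steps):
        z = np.tanh(z + rng.normal(theta, 1.0))  # latent "history" of the event
    return z + rng.normal(0.0, 0.1)              # crude stand-in for smearing

# Forward mode works fine: generate a sample of events for one parameter value.
events = np.array([simulate(theta=0.5) for _ in range(100)])
```

Histograms of `events` approximate the implicitly defined density, but an explicit evaluation of that density for a single observed event is out of reach, which is exactly the situation for full LHC simulation chains.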
Traditionally, particle physicists have approached this problem by restricting the analysis to one or two well-motivated discriminating variables, discarding the information contained in the remaining observables. The probability density for the restricted set of discriminating variables is then estimated with explicit functions or non-parametric approaches such as template histograms, kernel density estimates, or Gaussian processes
Cranmer (2001); Cranmer et al. (2012); Frate et al. (2017). These low-dimensional density estimates are constructed and validated using Monte Carlo samples from the simulation. While well-chosen variables may yield precise bounds along individual directions of the parameter space, they often lead to weak constraints in other directions Brehmer et al. (2017a). The sensitivity to multiple parameters can be substantially improved by using the fully differential cross section. This is the forte of the Matrix Element Method Kondo (1988); Abazov et al. (2004); Artoisenet and Mattelaer (2008); Gao et al. (2010); Alwall et al. (2011); Bolognesi et al. (2012); Avery et al. (2013); Andersen et al. (2013); Campbell et al. (2013); Artoisenet et al. (2013); Gainer et al. (2013); Schouten et al. (2015); Martini and Uwer (2015); Gritsan et al. (2016); Martini and Uwer (2017) and Optimal Observables Atwood and Soni (1992); Davier et al. (1993); Diehl and Nachtmann (1994) techniques, which are based on the parton-level structure of a given process. Shower and event deconstruction Soper and Spannowsky (2011, 2013, 2014); Englert et al. (2016) extend this approach to the parton shower. But all of these methods still require some level of approximation of the parton shower, and they either neglect or crudely approximate the detector response. Moreover, even a simplified description of the detector effects requires the numerically expensive evaluation of complicated integrals for each observed event. None of these established approaches scales well to high-dimensional problems with many parameters and observables, such as SMEFT measurements.

In recent years there has been an increased appreciation that many real-world phenomena are better described by simulators that do not admit a tractable likelihood. This situation appears in fields as diverse as ecology, phylogenetics, epidemiology, cardiac modeling, quantum chemistry, and particle physics.
Inference in this setting is often referred to as likelihood-free inference, where the inference strategy is restricted to samples generated from the simulator. Implicitly, these techniques aim to estimate the likelihood. A particularly ubiquitous technique is Approximate Bayesian Computation (ABC) Rubin (1984); Beaumont et al. (2002); Marjoram et al. (2003); Sisson et al. (2007); Sisson and Fan (2011); Marin et al. (2012); Charnock et al. (2018). ABC is closely related to the traditional template histogram and kernel density estimation approach used by physicists. More recently, approximate inference techniques based on machine learning and neural networks have been proposed Cranmer et al. (2015); Cranmer and Louppe (2016); Fan et al. (2012); Papamakarios and Murray (2016); Paige and Wood (2016); Dutta et al. (2016); Gutmann et al. (2017); Tran et al. (2017); Louppe and Cranmer (2017); Dinh et al. (2014); Jimenez Rezende and Mohamed (2015); Dinh et al. (2016); Papamakarios et al. (2017); Uria et al. (2016); van den Oord et al. (2016a, b, c); Papamakarios et al. (2018). All of these techniques have in common that they only take into account simulated samples similar to the actual observations; they do not exploit the structure of the process that generates them.
We develop new simulation-based inference
techniques that are tailored to the structure of particle physics processes. The key insight behind these methods is that we can extract more information from the simulations than just samples, and that this additional information can be used to efficiently train neural networks that precisely estimate likelihood ratios, the preferred test statistics for LHC measurements. These methods are designed to scale to both high-dimensional parameter spaces and large numbers of observables. They do not require any simplifying assumptions about the underlying physics: they support state-of-the-art event generators with parton shower, reducible and irreducible backgrounds, and full detector simulations. After an upfront training phase, they are very efficient to evaluate. Our tools directly provide an estimator for the likelihood ratio, an intuitive and easily interpretable quantity. Finally, limits derived from these tools with toy experiments have the reassuring property that even if they might not be optimal, they are never wrong, i.e., no parameter points are claimed to be excluded that should not be excluded at a given confidence level.
In Ref. Brehmer et al. (2018a), the companion paper of this publication, we focus on the key ideas and the sensitivity enabled by these techniques. Reference Brehmer et al. (2018b) presents the methods in a more abstract setting. Here we describe the actual algorithms in detail, developing several different methods side by side. Given the number of variations discussed, this publication might have the look and feel of a review article; we present it as a guide for the interested practitioner. We focus on the main ideas and differences between the approaches and postpone many technical details to the appendices.
We evaluate the performance of these different methods on a specific example problem, the measurement of two dimension-six operators in Higgs production in weak boson fusion (WBF) in the four-lepton mode at the LHC. For part of this analysis, we work in an idealized setting in which we can access the true likelihood function, providing a ground truth for the comparison of the different analysis methods. After establishing the precision of the likelihood ratio estimation, we turn to the more physical question of how strongly the two operators can be constrained with the different techniques. We then repeat the analysis with a simplified detector response, for which the ground-truth likelihood is no longer tractable.
We begin by laying out the problem in Sec. II: we summarize the effective field theory idea, list the challenges posed by EFT measurements, translate the problem from a physics perspective into the language of statistics, and discuss its important structural properties. We also set up the example process used throughout the rest of the paper. The description of the analysis methods is split into two parts: in Sec. III we define the different techniques to estimate the likelihood ratio, which includes most of the conceptual work presented here. Section IV then explains how to set limits on the EFT parameters based on these tools. In Sec. V we evaluate the performance of the different tools in our example process. Finally, in Sec. VI we summarize our findings and give recommendations for practitioners. The appendices describe the different algorithms in more detail and provide additional results. The code and data used for this paper are available online at Ref. Brehmer et al. (2018c).
II The EFT measurement problem
II.1 Effective field theory
Effective field theories (EFTs) Coleman et al. (1969); Callan et al. (1969); Weinberg (1980) parameterize the effects of physics at a large energy scale $\Lambda$ on observables at smaller energies as a set of local operators. The form of these operators is fixed by the light particles and the symmetry structure of the theory and is entirely independent of the high-energy model. Systematically expanding the Lagrangian in $1/\Lambda$, equivalent to ordering the operators by their canonical dimension, leaves us with a finite set of operators weighted by Wilson coefficients that describe all possible new physics effects up to some order in $1/\Lambda$.
In the absence of new particles at the TeV scale, and assuming the symmetry structure of the SM, we can thus describe any new physics signature in LHC processes in terms of a set of higher-dimensional operators Burges and Schnitzer (1983); Leung et al. (1986); Buchmuller and Wyler (1986); Arzt et al. (1995); Hagiwara et al. (1993); Grzadkowski et al. (2010). In this SM Effective Field Theory (SMEFT), the leading effects beyond the SM come from 59 independent dimension-six operators $O_i$ with Wilson coefficients $f_i$,
  $\mathcal{L}_{\text{SMEFT}} = \mathcal{L}_{\text{SM}} + \sum_i \frac{f_i}{\Lambda^2}\, O_i$  (1)
where the SM corresponds to all $f_i = 0$, and any measurement of a deviation hints at new physics.
The dimension-six Wilson coefficients are perfectly suited as an interface between experimental measurements and theory interpretations. They are largely model-independent, can parameterize a wide range of observables, including novel kinematic features, and are theoretically consistent beyond tree level. On the technical side, dimension-six operators are implemented in standard Monte Carlo event generators Alwall et al. (2014), allowing us to generate predictions for rates and kinematic observables for any combination of Wilson coefficients. Measured values of the $f_i$ can easily be translated to the parameters of specific models through well-established matching procedures Henning et al. (2016). All in all, SMEFT measurements will likely be a key part of the legacy of the LHC experiments de Florian et al. (2016).
Let us briefly comment on the question of EFT validity. A hierarchy of energy scales is the key assumption behind the EFT construction, but in a bottom-up approach the cutoff scale $\Lambda$ cannot be known without additional model assumptions. From a measurement we can estimate the new physics scale only by assuming a characteristic size of the new physics couplings and comparing it to the energy scale of the experiment. It has been found that dimension-six operators often capture the dominant effects of new physics even when there is only a moderate scale separation Brehmer et al. (2016). These concerns matter less for the measurement of Wilson coefficients themselves than for the interpretation of the results in terms of specific UV theories.
II.2 Physics challenges and traditional methods
EFT measurements at the LHC face three fundamental challenges:

1. Individual scattering processes at the LHC are sensitive to several operators and require simultaneous inference over a multi-dimensional parameter space. While a naive parameter scan works well for one or two dimensions, it becomes prohibitively expensive for more than a few parameters.

2. Most operators introduce new coupling structures and predict non-trivial kinematic features. These do not translate one-to-one to traditional kinematic observables such as transverse momenta, invariant masses, or angular correlations. An analysis based on only one kinematic variable typically cannot constrain the full parameter space efficiently. Instead, most of the operator effects only become fully apparent when multiple such variables, including their correlations, are analysed Brehmer et al. (2017a, b).

3. The likelihood function of the observables is intractable, making this a setting for "likelihood-free inference" or "simulator-based inference". Simulators for the high-energy interaction, the parton shower, and detector effects can generate event samples for any theory parameter values, but they can only be run in the forward mode. Given a set of reconstruction-level observables, it is not possible to evaluate the likelihood of this observation for different theory parameters: this likelihood includes the integral over all possible parton shower histories and particle trajectories through the detector as a normalizing constant, which is infeasible to calculate in realistic situations. We will discuss this property in more detail in the following section.
The last two issues are typically addressed in one of three ways. Most commonly, a small set of discriminating variables (also referred to as summary statistics or engineered features) is hand-picked for a given problem. The likelihood in this low-dimensional space is then estimated, for instance, by filling histograms from simulations. While well-chosen variables may lead to good constraints along individual directions of the parameter space, there are typically directions in the parameter space with limited sensitivity Brehmer et al. (2017a, b).
The Matrix Element Method Kondo (1988); Abazov et al. (2004); Gao et al. (2010); Alwall et al. (2011); Bolognesi et al. (2012); Avery et al. (2013); Andersen et al. (2013); Campbell et al. (2013); Artoisenet et al. (2013); Gainer et al. (2013); Martini and Uwer (2015); Gritsan et al. (2016); Martini and Uwer (2017) and Optimal Observables Atwood and Soni (1992); Davier et al. (1993); Diehl and Nachtmann (1994) go beyond a few specific discriminating variables and use the matrix element for a particular process to estimate the likelihood ratio. While these techniques can be very powerful, they suffer from two serious limitations. The parton shower and detector response are either entirely neglected or approximated through ad-hoc transfer functions. Shower and event deconstruction Soper and Spannowsky (2011, 2013, 2014); Englert et al. (2016) allow for the calculation of likelihood ratios at the level of the parton shower, but still rely on transfer functions to describe the detector response. Finally, even with such a simplified description of the shower and detector, the evaluation of the likelihood ratio estimator requires the numerically expensive computation of large integrals for each observed event.
Finally, there is a class of generic methods for likelihood-free inference. For Bayesian inference, the best-known approach is Approximate Bayesian Computation (ABC) Rubin (1984); Beaumont et al. (2002); Marjoram et al. (2003); Sisson et al. (2007); Sisson and Fan (2011); Marin et al. (2012). Similar to the histogram approach, it relies on the choice of appropriate low-dimensional summary statistics, which can severely limit the sensitivity of the analysis. Different techniques based on machine learning have been developed recently. In particle physics, the most common examples are discriminative classifiers between two discrete hypotheses, such as a signal and a background process. This approach has recently been extended to parameter measurements
Cranmer et al. (2015); Cranmer and Louppe (2016). More generally, many techniques based on the idea of using a classification model, such as a neural network, for inference in the absence of a tractable likelihood function have been introduced in the machine learning community Fan et al. (2012); Papamakarios and Murray (2016); Paige and Wood (2016); Dutta et al. (2016); Gutmann et al. (2017); Tran et al. (2017); Louppe and Cranmer (2017); Dinh et al. (2014); Jimenez Rezende and Mohamed (2015); Dinh et al. (2016); Papamakarios et al. (2017); Uria et al. (2016); van den Oord et al. (2016a, b, c); Papamakarios et al. (2018). All of these methods only require samples of events generated at different parameter points. They do not make use of the structure of the particle physics processes, and thus do not use all of the available information. This generality thus comes at a price. We develop new techniques that

are tailored to particle physics measurements and leverage their structural properties,

scale well to highdimensional parameter spaces,

can accommodate many observables,

capture the information in the fully differential cross sections, including all correlations between observables,

fully support state-of-the-art simulators with parton showers and full detector simulations, and

are very efficient to evaluate after an upfront training phase.
II.3 Structural properties of EFT measurements
II.3.1 Particle-physics structure
One essential step to finding the optimal measurement strategy is identifying the structures and symmetries of the problem. Particle physics processes, in particular those described by effective field theories, typically have two key properties that we can exploit.
First, any high-energy particle physics process factorizes into the parton-level process, which contains the matrix element and with it the entire dependence on the EFT coefficients, and a residual part describing the parton shower and detector effects. In many plausible scenarios of new physics, neither the strong interactions in the parton shower nor the electromagnetic and strong interactions in the detector are affected by the parameters of interest. The likelihood function can then be written as
  $p(x|\theta) = \int dz_p\; p(x|z_p)\, p(z_p|\theta)$  (2)
Here and in the following, $x$ denotes the actual observables after the shower, detector, and reconstruction; $\theta$ the theory parameters of interest; and $z_p$ the parton-level momenta (a subset of the latent variables). Table 1 provides a dictionary of these and other important symbols that we use.
The first ingredient to this likelihood function is the distribution $p(z_p|\theta)$ of parton-level four-momenta,
  $p(z_p|\theta) = \frac{1}{\sigma(\theta)}\, \frac{d\sigma(\theta)}{dz_p}$  (3)
where $\sigma(\theta)$ and $d\sigma(\theta)/dz_p$ are the total and differential cross sections, respectively. Crucially, this function is tractable: the matrix element and the parton density functions can be evaluated for arbitrary four-momenta $z_p$ and parameter values $\theta$. In practice this means that matrix-element codes such as MadGraph Alwall et al. (2014) can not only be run in a forward, generative mode, but also define functions that return the squared matrix element for a given phase-space point $z_p$. Unfortunately, there is typically no user-friendly interface to these functions, so evaluating them requires some work.
Second, the conditional density $p(x|z_p)$ describes the probabilistic evolution from the parton-level four-momenta to observable particle properties. While this symbol looks innocuous, it represents the full parton shower, the interaction of particles with the detector material, the sensor response and readout, and the reconstruction of observables. Simulators such as Pythia Sjostrand et al. (2008), Geant4 Agostinelli et al. (2003), or Delphes de Favereau et al. (2014) are used to generate samples $x \sim p(x|z_p)$ for given parton-level momenta $z_p$. This sampling involves the Monte Carlo integration over the possible shower histories $z_s$ and detector interactions $z_d$,
  $p(x|z_p) = \int dz_d \int dz_s\; p(x|z_d)\, p(z_d|z_s)\, p(z_s|z_p)$  (4)
This enormous latent space can easily involve many millions of random numbers, and these integrals are clearly intractable. In other words, given a set of reconstruction-level observables $x$, we cannot calculate the likelihood function $p(x|z_p)$ that describes the compatibility of parton-level momenta $z_p$ with the observation. By extension, we also cannot evaluate $p(x|\theta)$, the likelihood function of the theory parameters given the observation. The intractable integrals in Eq. (4) are the crux of the EFT measurement problem.
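The structure of Eqs. (2)-(4) can be mimicked in a one-dimensional toy model in which both the "parton-level" density and the "detector" transfer density are Gaussians (purely illustrative assumptions), so the otherwise intractable marginal has a closed form that a Monte Carlo estimate can be checked against:

```python
import numpy as np

rng = np.random.default_rng(1)

def norm_pdf(x, mu, sig):
    """Density of a one-dimensional Gaussian."""
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

theta, sigma_det = 0.8, 0.5

def p_z_given_theta(z, theta):          # tractable "parton-level" density
    return norm_pdf(z, theta, 1.0)

def p_x_given_z(x, z):                  # "shower + detector" transfer density
    return norm_pdf(x, z, sigma_det)

x_obs = 1.2
# Monte Carlo estimate of the marginal p(x|theta) = integral dz p(x|z) p(z|theta)
z_samples = rng.normal(theta, 1.0, size=200_000)
p_mc = p_x_given_z(x_obs, z_samples).mean()
# Closed form for this Gaussian toy: N(x | theta, sqrt(1 + sigma_det^2))
p_exact = norm_pdf(x_obs, theta, np.sqrt(1 + sigma_det**2))
```

The joint density $p(x, z|\theta) = p(x|z)\,p(z|\theta)$ is cheap to evaluate, while the marginal requires the average over many latent samples; for a realistic simulator the latent space is far too large for such an average to converge.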
The factorization of Eq. (2) together with the tractability of the parton-level likelihood is immensely important. We will refer to the combination of these two properties as particle-physics structure. The far-reaching consequences of this structure for EFT measurements will be the topic of Sec. III.2. Many (but not all) of the inference strategies we discuss will rely on this condition.
Note that this Markov property holds even in the presence of reducible and irreducible backgrounds and when a matching scheme is used to combine different parton-level multiplicities. In these situations there may be different disjoint parts of $z_p$ space, even with different dimensionalities, for instance when events with different numbers of partons in the final state can lead to the same configuration of observed jets. The integral over $z_p$ then has to be replaced with a sum over these disjoint spaces and an integral over each of them, but the logic remains unchanged.
Symbol  Physics meaning  Machine learning abstraction

$x$  Set of all observables  Features
$v$  One or two kinematic variables  Low-dimensional summary statistics / engineered features
$z_p$  Parton-level four-momenta  Latent variables
$z_s$  Parton shower trajectories  Latent variables
$z_d$  Detector interactions  Latent variables
$z$  Full simulation history of event  All latent variables
$\theta$  Theory parameters (Wilson coefficients)  Parameters of interest
$\hat{\theta}$  Best fit for theory parameters  Estimator for parameters of interest
$p(x|\theta)$  Distribution of observables given theory parameters  Intractable likelihood
$p(z_p|\theta)$  Parton-level distributions from matrix element  Tractable likelihood of latent variables
$p(x|z_p)$  Effect of shower, detector, reconstruction  Intractable density defined through stochastic generative process
$r(x|\theta_0, \theta_1)$  Likelihood ratio between hypotheses $\theta_0$, $\theta_1$, see Eq. (11)
$\hat{r}(x|\theta_0, \theta_1)$  Estimator for likelihood ratio
$t(x|\theta_0)$  Score, see Eq. (14)
$\hat{t}(x|\theta_0)$  Estimator for score
$x_e$, $z_e$  Event  Data point
$\tilde{f}$  Wilson coefficient for one operator  Individual parameter of interest
$\theta_c$, $w_c$, $p_c$  Morphing basis points, coefficients, densities, see Eq. (6)
II.3.2 Operator morphing
Effective field theories (and other parameterisations of indirect signatures of new physics) typically contribute a finite number of amplitudes to a given process, each of which is multiplied by a function of the Wilson coefficients. (Exceptions can arise, for instance, when particle masses or widths depend on the parameters of interest. But in an EFT setting one can expand these quantities in the Wilson coefficients, restoring the factorization.) In this case the likelihood can be written as
  $p(x|\theta) = \sum_c w_c(\theta)\, f_c(x)$  (5)
where $c$ labels the different amplitude components, and the functions $f_c(x)$ are not necessarily positive definite or normalized.
The simplest example is a process in which one SM amplitude $\mathcal{M}_{\text{SM}}$ interferes with one new physics amplitude $\mathcal{M}_{\text{BSM}}$, which scales linearly with a new physics parameter $\theta$. The differential cross section, proportional to the squared matrix element, is then $|\mathcal{M}_{\text{SM}} + \theta\, \mathcal{M}_{\text{BSM}}|^2 = |\mathcal{M}_{\text{SM}}|^2 + 2\theta \operatorname{Re} \mathcal{M}_{\text{SM}}^* \mathcal{M}_{\text{BSM}} + \theta^2 |\mathcal{M}_{\text{BSM}}|^2$. There are three components, representing the SM, interference, and pure BSM terms, each with its own parameter dependence $w_c(\theta) \in \{1, \theta, \theta^2\}$ and momentum dependence $f_c(x)$.
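This decomposition is easy to verify numerically. The sketch below uses arbitrary made-up complex numbers for the SM and BSM amplitudes at one phase-space point and checks that the squared amplitude equals the sum of the three components with weights $1$, $\theta$, and $\theta^2$:

```python
import numpy as np

# Hypothetical fixed amplitudes at one phase-space point (illustration only).
m_sm = 1.0 + 0.3j
m_bsm = -0.4 + 0.8j

def sigma_direct(theta):
    """Squared total amplitude |M_SM + theta * M_BSM|^2."""
    return abs(m_sm + theta * m_bsm) ** 2

# Decomposition into SM, interference, and pure-BSM components:
f_sm = abs(m_sm) ** 2                         # weight w = 1
f_int = 2 * (m_sm.conjugate() * m_bsm).real   # weight w = theta
f_bsm = abs(m_bsm) ** 2                       # weight w = theta^2

def sigma_morphed(theta):
    """Same quantity rebuilt from the three theta-independent components."""
    return f_sm + theta * f_int + theta ** 2 * f_bsm
```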
We can then pick a number of basis parameter points $\theta_c$ equal to the number of components in Eq. (5). (Note that the morphing basis points are unrelated to the choice of an operator basis for the effective field theory.) They can always be chosen such that the matrix relating the component functions to the densities at the basis points is invertible, which allows us to rewrite Eq. (5) as a mixture model
  $p(x|\theta) = \sum_c w_c(\theta)\, p_c(x)$  (6)
with weights $w_c(\theta)$ and (now properly normalized) basis densities $p_c(x) = p(x|\theta_c)$. The weights depend on the choice of basis points and are analytically known. This "morphing" procedure therefore allows us to extract the full likelihood function from a finite set of evaluations of the basis densities $p_c(x)$.
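A minimal sketch of this morphing trick, with hypothetical component functions and the component structure $(1, \theta, \theta^2)$ of the single-operator example above: the weights follow from inverting a matrix built from the basis points, and the morphed prediction reproduces a direct evaluation at any parameter value.

```python
import numpy as np

basis = np.array([-1.0, 0.0, 1.0])          # morphing basis points theta_c
A = np.vander(basis, 3, increasing=True)    # A[c, k] = theta_c**k, invertible

def components(x):
    # Hypothetical momentum-dependent component functions f_k(x).
    return np.array([np.exp(-x**2), x * np.exp(-x**2), 0.5 * np.exp(-x**2 / 2)])

def sigma(x, theta):
    """'Full' distribution: sum_k theta^k f_k(x)."""
    return np.array([1.0, theta, theta**2]) @ components(x)

def morphing_weights(theta):
    # Solve sum_c w_c(theta) * theta_c**k = theta**k for the weights w_c(theta).
    return np.linalg.solve(A.T, np.array([1.0, theta, theta**2]))

x, theta = 0.7, 2.5
w = morphing_weights(theta)
sigma_morph = sum(wc * sigma(x, tc) for wc, tc in zip(w, basis))
```

By linearity, the same weights reproduce the distribution at every phase-space point simultaneously, which is what makes a finite set of basis evaluations sufficient.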
Calculating the full statistical model through morphing requires the likelihood to be tractable, which is true for parton-level momenta, as argued above. However, the same trick can be applied even when the exact likelihood is intractable, as long as we can estimate it. For instance, the marginal distribution of any individual kinematic variable $v$ can be reliably estimated through histograms or other density estimation techniques, even when shower and detector effects are taken into account. The morphing procedure then lets us evaluate the full conditional distribution $p(v|\theta)$ based on a finite number of Monte Carlo simulations Aad et al. (2015).
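For instance, morphing can be applied directly to histograms of a kinematic variable. In the toy below (with a made-up "differential cross section" that is quadratic in the parameter), histograms filled at three basis points are combined into the histogram at a new parameter point and agree with a direct evaluation:

```python
import numpy as np

rng = np.random.default_rng(2)

def dsigma(v, theta):
    # Toy differential cross section, quadratic in theta as in Eq. (5).
    return np.exp(-v**2) * (1.0 + theta * v + theta**2 * 0.25 * (1 + v**2))

basis = np.array([-1.0, 0.0, 1.0])
A = np.vander(basis, 3, increasing=True)    # A[c, k] = theta_c**k

def weights(theta):
    return np.linalg.solve(A.T, np.array([1.0, theta, theta**2]))

# Histograms of v at the basis points, from one common event sample
v = rng.uniform(-3, 3, size=100_000)
bins = np.linspace(-3, 3, 25)
hists = [np.histogram(v, bins=bins, weights=dsigma(v, tc))[0] for tc in basis]

theta = 1.8
h_morphed = sum(wc * hc for wc, hc in zip(weights(theta), hists))
h_direct = np.histogram(v, bins=bins, weights=dsigma(v, theta))[0]
```

Because histogram filling is linear in the event weights, the morphed histogram matches the direct one exactly, up to floating-point precision.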
Finally, note that Eq. (6) together with Eq. (2) imply
  $r(x|\theta_0, \theta_1) = \sum_c w_c(\theta_0)\, \frac{p_c(x)}{p(x|\theta_1)}$  (7)
even if the likelihood function $p(x|\theta)$ and the components $p_c(x)$ are intractable. This will later allow us to impose the morphing structure on likelihood ratio estimators.
Not all EFT amplitudes satisfy the morphing structure in Eq. (5), so we discuss both measurement strategies that make use of this property and more general ones that do not require it to hold.
II.4 Explicit example
II.4.1 Weak-boson-fusion Higgs to four leptons
As an explicit example LHC process we consider Higgs production in weak boson fusion (WBF) with a decay of the Higgs into four leptons,
  $p\, p \to H\, j\, j \to \ell^+ \ell^-\, \ell^+ \ell^-\; j\, j$  (8)
with $\ell = e, \mu$, as shown in Fig. 1.
While this process is rare and will likely only be observed during the high-luminosity run of the LHC, it has a few compelling features that make it a prime candidate for studying the efficient extraction of information. First, the two jets from the quarks and in particular the four leptons can be reconstructed quite precisely in the LHC detectors. Even when assuming on-shell conditions and energy-momentum conservation, the final-state momenta span a 16-dimensional phase space, giving rise to many potentially informative observables.
Second, both the production of the Higgs boson in weak boson fusion and its decay into four leptons are highly sensitive to the effects of new physics in the Higgs-gauge sector. We parameterize these with dimension-six operators in the SMEFT, following the conventions of the Hagiwara-Ishihara-Szalapski-Zeppenfeld basis Hagiwara et al. (1993). For simplicity, we limit our analysis to the two particularly relevant operators
  $O_W = \frac{ig}{2}\, (D^\mu \phi)^\dagger \sigma^a (D^\nu \phi)\, W^a_{\mu\nu}\,, \qquad O_{WW} = -\frac{g^2}{4}\, (\phi^\dagger \phi)\, W^a_{\mu\nu} W^{a\,\mu\nu}$  (9)
For convenience, we rescale the Wilson coefficients to the dimensionless parameters of interest
  $\tilde{f}_W = \frac{v^2}{\Lambda^2}\, f_W\,, \qquad \tilde{f}_{WW} = \frac{v^2}{\Lambda^2}\, f_{WW}$  (10)
where $v$ is the electroweak vacuum expectation value. As alluded to above, the validity range of the EFT cannot be determined in a model-independent way. For moderately weakly to moderately strongly coupled underlying new physics models, one would naively expect the rescaled Wilson coefficients to be at most of order one; this is the parameter range we analyse in this paper.
The interference between the Standard Model amplitudes and the dimension-six operators leads to an intricate relation between the observables and parameters in this process, which has been studied extensively. The precise measurement of the momenta of the four leptons provides access to a range of angular correlations that fully characterize the decay Dell'Aquila and Nelson (1986); Bolognesi et al. (2012). These variables are sensitive to the effects of dimension-six operators. But the momentum flow through the decay vertex is limited by the Higgs mass, and the relative effects of these dimension-six operators are suppressed by powers of the small ratio of the Higgs mass to the new physics scale. On the other hand, Higgs production through two off-shell gauge bosons with potentially high virtuality does not suffer from this suppression. The properties of the two jets recoiling against the Higgs boson are highly sensitive to operator effects in this vertex Plehn et al. (2002); Hankele et al. (2006); Hagiwara et al. (2009); Englert et al. (2013).
In Fig. 2 we show example distributions of two particularly informative observables, the transverse momentum $p_{T,j_1}$ of the leading (higher-momentum) jet and the azimuthal angle $\Delta\phi_{jj}$ between the two jets. The two quantities are sensitive to different directions in parameter space. Note also that the interference between the different amplitudes can give rise to non-trivial effects. The size of the dimension-six amplitudes grows with the momentum transfer, which is strongly correlated with the transverse momentum of the leading jet. If the interference of new-physics amplitudes with the SM diagrams is destructive, this can drive the total amplitude through zero Brehmer et al. (2016). The jet momentum distribution then dips and rises again at higher energies, as seen in the red curve in the left panel of Fig. 2. Such depleted regions of low probability can lead to very small or large likelihood ratios and potentially pose a challenge to inference methods.
By analysing the Fisher information in these distributions, it is possible to compare the discrimination power of these two observables to the information contained in the full multivariate distribution or in the total rate. It turns out that the full multivariate distribution contains significantly more information than the one-dimensional and two-dimensional marginal distributions of any standard kinematic variables Brehmer et al. (2017a). The total rate carries much less information on the two operators, in particular when systematic uncertainties on the cross sections are taken into account. In this study we therefore only analyse the kinematic distributions for a fixed number of observed events.
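The information loss from restricting to a summary statistic can be made concrete with the Fisher information. In a toy model with two observables that each carry the same information on a single parameter (a crude stand-in for the multivariate LHC case), a Monte Carlo estimate of the Fisher information from the score shows that a one-dimensional summary retains only half of it:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 0.5
# Toy "event": two independent observables, each informative about theta.
x = rng.normal(theta, 1.0, size=(100_000, 2))

# Score t(x|theta) = grad_theta log p(x|theta); for this Gaussian toy
# log p = -0.5 * sum((x - theta)^2) + const, so t = sum(x - theta).
t_full = (x - theta).sum(axis=1)
info_full = (t_full**2).mean()        # Fisher information of the full 2D data

t_summary = x[:, 0] - theta           # score of the 1D marginal ("summary")
info_summary = (t_summary**2).mean()  # the summary discards half the information
```

For this toy the exact values are 2 and 1; the Monte Carlo averages of the squared score recover them, mirroring how the full multivariate distribution outperforms individual kinematic variables.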
II.4.2 Sample generation
Already in the sample generation we can make use of the structural properties of the process discussed in Sec. II.3. The squared amplitude of this process factorizes into a sum of parameter-dependent factors times phase-space-dependent functions, as given in Eq. (5). The effect of the two operators on the total Higgs width breaks this decomposition, but this effect is tiny and in practice irrelevant when compared to the experimental resolution. The likelihood function of this process therefore follows the mixture model in Eq. (6) to good approximation, and the morphing weights $w_c(\theta)$ can be calculated. Since the parton-level likelihood function is tractable, we can reconstruct the entire likelihood function based on a finite number of simulator runs, as described in Sec. II.3.2.
To this end, we first generate a parton-level sample of events with MadGraph 5 Alwall et al. (2014) and its add-on MadMax Cranmer and Plehn (2007); Plehn et al. (2014); Kling et al. (2017), using the setup described in Ref. Brehmer et al. (2017a). With MadMax we can evaluate the parton-level likelihood for all events and for 15 different basis parameter points $\theta_c$. Calculating the morphing weights finally gives us the true parton-level likelihood function $p(z_p|\theta)$ for each generated phase-space point $z_p$.
In Fig. 3 we show the basis points and the dependence of two of the morphing weights on $\theta$. In some corners of parameter space the weights become very large, and there are large cancellations between positive and negative weights. This will pose a challenge for the numerical stability of every inference algorithm that directly uses the morphing structure of the process, as we will discuss later. Other basis choices have led to comparable or larger morphing weights.
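The growth of morphing weights away from the basis points can be reproduced in a one-parameter toy with five polynomial components: evaluating the weights well outside the region spanned by the basis points gives large, cancelling contributions even though they still sum to one.

```python
import numpy as np

degree = 4                        # components 1, theta, ..., theta^4
basis = np.linspace(-1.0, 1.0, degree + 1)
A = np.vander(basis, degree + 1, increasing=True)   # A[c, k] = theta_c**k

def weights(theta):
    # Solve sum_c w_c(theta) * theta_c**k = theta**k for the weights.
    return np.linalg.solve(A.T, theta ** np.arange(degree + 1))

w = weights(3.0)                  # evaluate far outside the basis region
# The weights sum to one (the constant component is reproduced exactly),
# but the individual weights are huge and alternate in sign.
```

This is essentially polynomial extrapolation: the weights are Lagrange coefficients, and their magnitude explodes outside the interval covered by the basis points, which is the numerical-stability issue referred to above.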
Parton shower and detector effects smear the observed particle properties with respect to the parton-level momenta and make the likelihood function in Eq. (2) intractable. We develop inference methods that can be applied exactly in this case and that do not require any simplifying assumptions about the shower and detector response. However, in this realistic scenario we cannot evaluate their performance by comparing them to the true likelihood ratio. We therefore first test them in an idealized scenario in which the four-momenta, flavors, and charges of the leptons, and the momenta of the partons, can be measured exactly, $x = z_p$. In this approximation we can evaluate the likelihood $p(x|\theta) = p(z_p|\theta)$.
After establishing the performance of the various algorithms in this idealized setup, we will analyse the effect of parton shower and detector simulation on the results. We generate an approximate detector-level sample by drawing events from a smearing distribution conditional on the parton-level momenta $z_p$. This smearing function is loosely motivated by the performance of the LHC experiments and is defined in Appendix A.1.
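As a sketch of such a smearing step, with an invented flat parton-level spectrum and a flat 10% Gaussian resolution (not the parametrization of Appendix A.1):

```python
import numpy as np

rng = np.random.default_rng(4)

def smear(z_pt, rel_resolution=0.1):
    """Draw a reconstructed transverse momentum from a Gaussian smearing
    distribution conditional on the parton-level value z_pt.
    Illustrative resolution only."""
    return rng.normal(z_pt, rel_resolution * z_pt)

parton_pt = rng.uniform(50.0, 300.0, size=10_000)   # toy parton-level jet pT in GeV
reco_pt = smear(parton_pt)
```

Drawing `reco_pt` from `parton_pt` is exactly the forward-only mode of Eq. (2): sampling from $p(x|z_p)$ is one line, while evaluating $p(x|\theta)$ for an observed `reco_pt` would require marginalizing over the unknown parton-level value.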
III Likelihood ratio estimation
According to the NeymanPearson lemma, the likelihood ratio
$$ r(x|\theta_0, \theta_1) = \frac{p(x|\theta_0)}{p(x|\theta_1)} \qquad (11) $$
is the most powerful test statistic to discriminate between two hypotheses $\theta_0$ and $\theta_1$. Unfortunately, the integral over the latent space makes the likelihood function as well as the likelihood ratio intractable. The first and crucial stage of all our EFT measurement strategies is therefore the construction of a likelihood ratio estimator $\hat{r}(x|\theta_0, \theta_1)$ that is as close to the true $r(x|\theta_0, \theta_1)$ as possible and thus maximizes the discrimination power between $\theta_0$ and $\theta_1$.
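A toy example makes the role of the likelihood ratio as a test statistic concrete. Assuming two simple hypotheses given by unit Gaussians with shifted means (a deliberately trivial stand-in for the intractable LHC likelihoods), the log ratio is analytic and separates the two hypotheses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two simple hypotheses for a single observable x (illustrative): unit
# Gaussians with means mu0 (hypothesis theta0) and mu1 (hypothesis theta1).
mu0, mu1 = 0.0, 1.0

def log_r(x):
    """log p(x|theta0) - log p(x|theta1) for the Gaussian toy model."""
    return -0.5 * (x - mu0) ** 2 + 0.5 * (x - mu1) ** 2

x0 = rng.normal(mu0, 1.0, 100_000)  # events drawn under theta0
x1 = rng.normal(mu1, 1.0, 100_000)  # events drawn under theta1

# log r is on average positive under theta0 and negative under theta1;
# its expectation under theta0 is the Kullback-Leibler divergence, here 0.5.
print(log_r(x0).mean(), log_r(x1).mean())
```

In the physics problem the same object is the target of inference, but neither the numerator nor the denominator of Eq. (11) can be evaluated, which is what the estimators below are designed to work around.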
This estimation problem has several different aspects that we try to disentangle as much as possible. The first choice is the overall structure of the likelihood ratio estimator and its dependence on the theory parameters $\theta$. We discuss this in Sec. III.1. Section III.2 analyses what information is available and useful to construct (train) the estimators for a given process. Here we will introduce the main ideas that harness the structure of the EFT to increase the information that is used in the training process.
These basic concepts are combined into concrete strategies for the estimation of the likelihood ratio in Sec. III.3. After training the estimators, there is an optional additional calibration stage, which we introduce in Sec. III.4. Section III.5 describes the technical implementation of these strategies in terms of neural networks. Finally, we discuss the challenges that the different algorithms face in Sec. III.6 and introduce diagnostic tools for the uncertainties.
III.1 Modeling likelihood ratios
III.1.1 Likelihood ratios
There are different approaches to the structure of this estimator, in particular to its dependence on the theory parameters $\theta$:
 Point by point (PbP)

A common strategy is to scan the parameter space, randomly or in a grid. To reduce the complexity of the scan one can keep the denominator fixed at $\theta_1 = \theta_{\text{ref}}$, while scanning only $\theta_0$. Likelihood ratios with other denominators can then be extracted trivially as $r(x|\theta_0, \theta_1) = r(x|\theta_0, \theta_{\text{ref}}) / r(x|\theta_1, \theta_{\text{ref}})$. Instead of a single reference value $\theta_{\text{ref}}$, we can also use a composite reference hypothesis $p(x|\theta_{\text{ref}}) = \int d\theta' \, \pi(\theta') \, p(x|\theta')$ with some prior $\pi(\theta')$. This can reduce the regions in feature space with small reference likelihood and improve the numerical stability.
For each pair $(\theta_0, \theta_1)$ separately, the likelihood ratio as a function of $x$ is estimated. Only the final results are interpolated between the scanned values of $\theta$. This approach is particularly simple, but discards all information about the structure and smoothness of the parameter space. For high-dimensional parameter spaces, the parameter scan can become prohibitively expensive. The final interpolation may introduce additional uncertainties.
 Agnostic parameterized estimators

Alternatively, we can train one estimator as the full model $\hat{r}(x|\theta_0, \theta_1)$, a function of both $x$ and the parameter combination $(\theta_0, \theta_1)$ Cranmer et al. (2015); Baldi et al. (2016). A modification is again to leave $\theta_1$ at a fixed reference value $\theta_{\text{ref}}$ (or a fixed composite reference hypothesis with a prior $\pi(\theta')$) and only learn the dependence on $x$ and $\theta_0$.
This parameterized approach leaves it to the estimator to learn the typically smooth dependence of the likelihood ratio on the physics parameters and does not require any interpolation in the end. No assumptions are made about the form of the dependence of the likelihood ratio on the parameters $\theta$.
 Morphingaware estimators

For problems that satisfy the morphing condition of Eq. (6) and thus also Eq. (7), we can impose this structure and the explicit knowledge of the weights $w_c(\theta)$ onto the estimator. Again, one option is to keep the denominator fixed at a reference value $\theta_{\text{ref}}$ (or a composite reference hypothesis), leading to
$$ \hat{r}(x|\theta, \theta_{\text{ref}}) = \sum_c w_c(\theta) \, \hat{r}_c(x) \,, \qquad (12) $$
where the basis estimators $\hat{r}_c(x)$ only depend on $x$.
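The structure of Eq. (12) can be verified in a toy model where the basis estimators are known exactly. Here we assume a one-dimensional coefficient $\theta$ and an invented differential cross section $d\sigma(x|\theta) = (1 + \theta x)^2 e^{-x^2/2}$, quadratic in $\theta$ so that three basis points suffice; this is a sketch of the morphing structure, not the paper's process:

```python
import numpy as np

# Toy with the morphing structure of Eqs. (6)/(12) (illustrative): the
# differential cross section dsigma(x|theta) = (1 + theta*x)^2 * exp(-x^2/2)
# is quadratic in a single coefficient theta, so three basis points suffice.
basis = np.array([-1.0, 0.0, 1.0])
theta_ref = 0.0

def dsigma(x, theta):
    return (1.0 + theta * x) ** 2 * np.exp(-0.5 * x**2)

def morphing_weights(theta):
    # Solve sum_c w_c(theta) * (1, theta_c, theta_c^2) = (1, theta, theta^2).
    M = np.vstack([np.ones_like(basis), basis, basis**2])
    return np.linalg.solve(M, np.array([1.0, theta, theta**2]))

def r_hat(x, theta):
    """Morphing-aware ratio estimator of Eq. (12). Here the 'basis
    estimators' r_c(x) = dsigma(x|theta_c) / dsigma(x|theta_ref) are exact,
    so the weighted sum reproduces the true (unnormalized) ratio."""
    r_c = np.array([dsigma(x, tc) / dsigma(x, theta_ref) for tc in basis])
    return morphing_weights(theta) @ r_c

x = np.linspace(-2.0, 2.0, 9)
assert np.allclose(r_hat(x, 0.7), dsigma(x, 0.7) / dsigma(x, theta_ref))
```

In the real problem the $\hat{r}_c(x)$ are learned rather than exact, and the large, partially cancelling weights make the combined estimator sensitive to even small errors in the individual basis estimators.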
III.1.2 Score and local model
One remarkably powerful quantity is the score, defined as the relative tangent vector
$$ t(x|\theta_0) = \nabla_\theta \log p(x|\theta) \Big|_{\theta_0} \,. \qquad (14) $$
It quantifies the relative change of the likelihood under infinitesimal changes in parameter space and can be seen as a local equivalent of the likelihood ratio.
In a small patch around $\theta_0$ in which we can approximate the score $t(x|\theta)$ as independent of $\theta$, Eq. (14) is solved by the local model
$$ p_{\text{local}}(x|\theta) = \frac{1}{Z(\theta)} \, p(x|\theta_0) \, \exp\!\left[ t(x|\theta_0) \cdot (\theta - \theta_0) \right] \qquad (15) $$
with a normalisation factor $Z(\theta)$. The local model is in the exponential family. Note that the $t(x|\theta_0)$ are the sufficient statistics for $\theta$. This is significant: if we can estimate the vector-valued function $t(x|\theta_0)$ (with one component per parameter of interest) of the high-dimensional $x$, we can reduce the dimensionality of our space dramatically without losing any information, at least in the local model approximation Alsing et al. (2018).
In fact, ignoring the normalization factors, in the local model the likelihood ratio between $\theta$ and $\theta_0$ only depends on the scalar product between the score $t(x|\theta_0)$ and $\theta - \theta_0$, which will allow us to take this dimensionality reduction one step further and compress high-dimensional data $x$ into a scalar without loss of power.

In our example process, we are interested in the Wilson coefficients of two dimension-six operators. The score vector therefore has two components. In Fig. 4 we show the relation between these two score components and two informative kinematic variables, the jet $p_T$ and the azimuthal angle between the two jets, $\Delta\phi_{jj}$. We find that the score vector is very closely related to these two kinematic quantities, but the relation is not quite one-to-one. Larger energy transfer, measured as larger jet $p_T$, increases the typical size of the score vector. One component of the score is particularly sensitive to the angular correlation variable, in agreement with detailed studies of this process Brehmer et al. (2017a).
III.2 Available information and its usefulness
III.2.1 General likelihood-free case
All measurement strategies have in common that the estimator is learned from data provided by Monte-Carlo simulations (the stochastic generative process). In the most general likelihood-free scenario, we can only generate samples of events $x \sim p(x|\theta)$ through the simulator, and base an estimator on these generated samples.
One strategy Cranmer et al. (2015) is based on training a classifier with decision function $\hat{s}(x)$ between two equal-sized samples $\{x\} \sim p(x|\theta_0)$, labelled $y = 0$, and $\{x\} \sim p(x|\theta_1)$, labelled $y = 1$. The cross-entropy loss functional
$$ L[\hat{s}] = - \int \! dx \left[ p(x|\theta_0) \, \log\big(1 - \hat{s}(x)\big) + p(x|\theta_1) \, \log \hat{s}(x) \right] \qquad (16) $$
is minimized by the optimal decision function
$$ s(x|\theta_0, \theta_1) = \frac{p(x|\theta_1)}{p(x|\theta_0) + p(x|\theta_1)} \,. \qquad (17) $$
From the decision function of a classifier we can therefore extract an estimator for the likelihood ratio as
$$ \hat{r}(x|\theta_0, \theta_1) = \frac{1 - \hat{s}(x)}{\hat{s}(x)} \,. \qquad (18) $$
This idea, sometimes called the likelihood ratio trick, is visualized in the left panel of Fig. 5.
As pointed out in Ref. Cranmer et al. (2015), we can rely on the weaker requirement that the loss functional is minimized by a decision function that is a strictly monotonic function of the likelihood ratio. The underlying reason is that the likelihood ratio is invariant under any transformation with this property. In practice, the output of any such classifier can be brought closer to the form of Eq. (17) through a calibration procedure, which we will discuss in Sec. III.4.
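The likelihood ratio trick can be demonstrated end to end on a toy problem where the true ratio is known. Here we assume unit Gaussians with shifted means as the two hypotheses and a plain logistic regression trained by gradient descent as the classifier; both choices are illustrative simplifications of the neural networks used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hypotheses (illustrative): unit Gaussians with means mu0 (label y=0)
# and mu1 (label y=1); the true ratio r(x) = p(x|theta0)/p(x|theta1)
# = exp(-x + 0.5) is known analytically.
mu0, mu1 = 0.0, 1.0
n = 50_000
x = np.concatenate([rng.normal(mu0, 1.0, n), rng.normal(mu1, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression trained with the cross-entropy loss of Eq. (16);
# for this toy the optimal decision function is sigmoid(x - 0.5).
w, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))  # decision function s(x)
    grad = s - y                            # d(cross-entropy)/d(logit)
    w -= 0.5 * np.mean(grad * x)
    b -= 0.5 * np.mean(grad)

# Eq. (18): extract the ratio estimator from the classifier output.
x_test = np.array([-1.0, 0.0, 1.0])
s_test = 1.0 / (1.0 + np.exp(-(w * x_test + b)))
r_hat = (1.0 - s_test) / s_test
r_true = np.exp(-x_test + 0.5)
print(np.round(r_hat, 2), np.round(r_true, 2))  # close agreement
```

The classifier never sees a likelihood evaluation, only samples from the two hypotheses; the ratio estimate emerges entirely from the shape of its decision function.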
III.2.2 Particle-physics structure
As we have argued in Sec. II.3, particle physics processes have a specific structure that allows us to extract additional information. Most processes satisfy the factorization of Eq. (2) with a tractable parton-level likelihood $p(z|\theta)$. The generators do not only provide samples $x$, but also the corresponding parton-level momenta $z$ (latent variables) with $x \sim p(x|z)$. By evaluating the matrix elements at the generated momenta $z$ for different hypotheses $\theta_0$ and $\theta_1$, we can extract the parton-level likelihood ratio $r(z|\theta_0, \theta_1)$. Since the distribution $p(x|z)$ is conditionally independent of the theory parameters, this is the same as the joint likelihood ratio $r(x, z|\theta_0, \theta_1)$
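The cancellation behind this statement can be spelled out in a toy generative process. As an illustrative assumption, take a one-dimensional "parton-level" variable $z$ with a tractable Gaussian likelihood and a parameter-independent Gaussian smearing standing in for shower and detector:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy generative process (illustrative): parton-level z ~ N(theta, 1) with a
# tractable 'matrix-element' likelihood, followed by parameter-independent
# smearing x ~ N(z, 0.5) standing in for shower and detector.
theta0, theta1, sigma = 0.0, 1.0, 0.5

def log_p_z(z, theta):           # tractable parton-level log likelihood
    return -0.5 * (z - theta) ** 2

def log_p_x_given_z(x, z):       # smearing, independent of theta
    return -0.5 * ((x - z) / sigma) ** 2

z = rng.normal(theta0, 1.0, 10)
x = rng.normal(z, sigma)

# The smearing factor cancels in the joint ratio, so the joint likelihood
# ratio equals the parton-level likelihood ratio:
log_r_joint = (log_p_z(z, theta0) + log_p_x_given_z(x, z)) \
            - (log_p_z(z, theta1) + log_p_x_given_z(x, z))
assert np.allclose(log_r_joint, log_p_z(z, theta0) - log_p_z(z, theta1))
```

The intractable piece, $p(x|z)$, appears in both numerator and denominator of the joint likelihood and drops out, which is why the generator's latent information gives access to the joint ratio even when the marginal ratio cannot be computed.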