Detecting new signals under background mismodelling

by Sara Algeri et al.
University of Minnesota

Searches for new astrophysical phenomena often involve several sources of non-random uncertainty which can lead to highly misleading results. Among these, model uncertainty arising from background mismodelling can dramatically compromise the sensitivity of the experiment under study. Specifically, overestimating the background distribution in the signal region increases the chances of missing new physics. Conversely, underestimating the background outside the signal region leads to an artificially enhanced sensitivity and a higher likelihood of claiming false discoveries. The aim of this work is to provide a unified statistical strategy to perform modelling, estimation, inference, and signal characterization under background mismodelling. The method proposed incorporates the (partial) scientific knowledge available on the background distribution and provides a data-updated version of it in a purely nonparametric fashion, without requiring the specification of prior distributions. Applications in the context of dark matter searches and radio surveys show how the tools presented in this article can be used to incorporate non-stochastic uncertainty due to instrumental noise and to overcome violations of classical distributional assumptions in stacking experiments.








I Introduction

When searching for new physics, a discovery claim is made if the data collected by the experiment provide sufficient statistical evidence in favor of the new phenomenon. If the background and signal distributions are specified correctly, this can be done by means of statistical hypothesis tests, upper limits, and confidence intervals.

The problem. In practice, even if a reliable description of the signal distribution is available, providing accurate background models may be challenging, as the behavior of the sources which contribute to it is often poorly understood. Some examples include searches for nuclear recoils of weakly interacting massive particles over electron-recoil backgrounds [aprile18, agnese18], for gravitational-wave signals over non-Gaussian backgrounds from stellar-mass binary black holes [smith18], and for a standard model-like Higgs boson over prompt diphoton production [CMS18].

Unfortunately, model uncertainty due to background mismodelling can significantly compromise the sensitivity of the experiment under study. Specifically, overestimating the background distribution in the signal region increases the chances of missing new physics. Conversely, underestimating the background outside the signal region leads to an artificially enhanced sensitivity, which can easily result in false discovery claims. Several methods have been proposed in the literature to address this problem [e.g., yellin, Priel, dauncey]. However, to the best of the author's knowledge, none of the methods available allows one to (i) assess the validity of existing models for the background, (ii) fully characterize the background distribution, (iii) perform signal detection even if the signal distribution is not available, (iv) characterize the signal distribution, and (v) detect additional signals of new unexpected sources.

Goal. The aim of this work is to present a unified strategy which integrates modelling, estimation, and inference under background mismodelling and allows (i)-(v) to be performed simultaneously. As a brief overview, given a source-free sample and the (partial) scientific knowledge available on the background distribution, a data-updated version of the latter is obtained in a purely nonparametric fashion, without requiring the specification of prior distributions. At this stage, a graphical tool is provided in order to assess if and where significant deviations between the true and the postulated background distributions occur. The "updated" background distribution is then used to assess whether the distribution of the data collected by the experiment deviates significantly from the background model. Also in this case, it is possible to assess graphically how the data distribution deviates from the expected background model. If a source-free sample is available or if control regions can be identified, the solution proposed does not require the specification of a model for the signal; however, if the signal distribution is known (up to some free parameters), the latter can be used to further improve the accuracy of the analysis and to detect the signals of unexpected new sources. The method discussed in this manuscript can easily be adjusted to cover situations in which a source-free sample or control regions are not available and the background is unknown or incorrectly specified, but a functional form of the signal distribution is known.

The key of the solution. The statistical methodologies involved rely on the novel LP approach to statistical modelling, first introduced by Mukhopadhyay and Parzen in 2014 [LPapproach]. This approach unifies many of the standard results of classical statistics by expressing them in terms of quantiles and comparison distributions, and it provides a simple and powerful framework for statistical learning and data analysis. The interested reader is directed to [LPmode, LPBayes, LPtime, LPFdr, LPdec] and references therein for recent advancements in mode detection, nonparametric time series, goodness-of-fit on prior distributions, and large-scale inference using the LP approach.

Organization. Section II is dedicated to a review of the main constructs of LP modelling. Section III highlights the practical advantages offered by modelling background distributions using an LP approach. Section IV introduces a novel LP-based framework for statistical inference. Section V outlines the main steps of a data-scientific learning strategy for signal detection and characterization. In Section VI, the methods proposed are applied to a realistic simulation of the Fermi-LAT gamma-ray satellite, where the source of uncertainty on the background distribution is due to ignoring the effect of the detector. Section VII is dedicated to model-denoising. Section VIII presents an application to data from the NVSS astronomical survey and discusses a simple approach to assess the validity of distributional assumptions on the polarized intensity in stacking experiments. A discussion of the main results and extensions is proposed in Section IX.

II LP Approach to Statistical Modelling

The LP approach to statistical modelling [LPapproach] is a novel statistical approach which provides an ideal framework to simultaneously assess the validity of the scientific knowledge available and fill the gap between the initial scientific belief and the evidence provided by the data. The main constructs of LP modelling are outlined below, whereas Section III discusses their usefulness in addressing the problem of background mismodelling.

II.1 The skew-G density model


Let x_1, ..., x_n be a sample of independent and identically distributed (i.i.d.) observations from a continuous random variable X. Let F and f be, respectively, the cumulative distribution function (cdf) and the probability density function (pdf) of X.[1] Since F is the true distribution of the data, it is typically unknown. However, suppose a suitable cdf G is available, and let g be the respective pdf. In order to understand if g is a good candidate for f, it is convenient to express the relationship between the two in a concise manner. The skew-G density model [LPmode] is a universal representation scheme defined by

    f(x) = g(x) d(G(x); G, F),    (1)

where d(u; G, F) is called the comparison density [manny], and it is such that

    d(G(x); G, F) = f(x) / g(x).    (2)

Specifically, setting u = G(x), we have that d(u; G, F) corresponds to the density of the random variable U = G(X) and takes values over [0, 1]. Thus, (2) can be rewritten as

    d(u; G, F) = f(G^{-1}(u)) / g(G^{-1}(u)),    (3)

where G^{-1} denotes the "postulated" quantile function of X. Letting u_i = G(x_i), for i = 1, ..., n, a sample of i.i.d. observations from U is given by u_1, ..., u_n.

[1] One can think of a pdf as a non-negative curve which is normalized to integrate to one, whereas its cdf corresponds to F(x) = \int_{-\infty}^{x} f(t) dt.
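As a concrete illustration of the transformation u_i = G(x_i), the following minimal numpy/scipy sketch (ours, not code from the paper; the exponential rates are arbitrary) draws data from one exponential distribution and maps it through a postulated exponential cdf with a different rate. The resulting u_i are non-uniform precisely because the postulated model misfits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# True distribution F: Exponential with rate 1.2 (scale = 1/1.2).
x = rng.exponential(scale=1 / 1.2, size=5000)

# Postulated model G: Exponential with rate 1.0 (slightly misspecified).
G = stats.expon(scale=1.0)

# u_i = G(x_i): under a correctly specified G these would be i.i.d. Uniform(0, 1).
u = G.cdf(x)

# A correct postulated model would give mean(u) ~ 0.5; here the rate mismatch
# pushes mass toward small u (the observations look "too small" under G).
print(u.min(), u.max(), u.mean())
```

Under a correct G the u_i would scatter uniformly; any systematic departure from uniformity is exactly what the comparison density captures.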

Practical remarks. Looking at (1), it is immediate to see that the comparison density d models the departure of the true density f from the postulated model g. Furthermore, if f is equivalent to g, then d(u; G, F) = 1 for all u in [0, 1], i.e., U is uniformly distributed over the interval [0, 1]. Consequently, an adequate estimate of d not only leads to an estimate of the true f based on (1), but it also allows us to identify the regions where g deviates substantially from f.

II.2 LP density estimate

Under the assumption that (3) is a square integrable function over the range [0, 1], i.e., d is in L^2[0, 1], it is possible to represent d by means of a series of normalized shifted Legendre polynomials,[2] namely {T_j(u)}, with coefficients LP_j, i.e.,

    d(u; G, F) = 1 + \sum_{j >= 1} LP_j T_j(u).    (4)

[2] Classical Legendre polynomials are defined over [-1, 1]; here, their "shifted" counterpart over the range [0, 1] is considered. Finally, the shifted Legendre polynomials are normalized using the Gram-Schmidt process. The first few normalized shifted Legendre polynomials are: T_0(u) = 1, T_1(u) = sqrt(3)(2u - 1), T_2(u) = sqrt(5)(6u^2 - 6u + 1), etc.
The coefficients in (4) can be estimated by

    \hat{LP}_j = (1/n) \sum_{i=1}^{n} T_j(u_i),    (5)

with mean E[\hat{LP}_j] = LP_j and variance Var(\hat{LP}_j) = Var(T_j(U))/n, where u_i = G(x_i) (see larry). Further, it can be shown [LPmode] that, when f is equivalent to g,

    E[\hat{LP}_j] = 0  and  Var(\hat{LP}_j) = 1/n,    (6)

and, as n -> infinity,

    sqrt(n) \hat{LP}_j ->_d N(0, 1),    (7)

where ->_d denotes convergence in distribution.

If (4) is approximated by the first M terms,[3] an estimate of the comparison density is given by

    \hat{d}(u; G, F) = 1 + \sum_{j=1}^{M} \hat{LP}_j T_j(u),    (8)

with standard error

    SE(\hat{d}(u)) = sqrt( \sum_{j=1}^{M} \widehat{Var}(\hat{LP}_j) T_j^2(u) ),    (9)

where \widehat{Var}(\hat{LP}_j) denotes the estimated variance of \hat{LP}_j.

[3] Recall that the first normalized shifted Legendre polynomial is T_0(u) = 1.

Estimators of the form (8) are also known in the literature as sample-moment approximants [provostA, provost]. An important property of this class of estimators is that the respective coefficients can be specified as a linear combination of the moments of the distribution under study [provost]. Similarly, each estimate \hat{LP}_j can be expressed as a linear combination of the first j sample moments of u_1, ..., u_n; e.g.,

    \hat{LP}_1 = sqrt(3) (2 \bar{u} - 1),

where \bar{u} = (1/n) \sum_{i=1}^{n} u_i. Therefore, the truncation point M can be interpreted as the order of the highest moment considered to characterize the distribution of U. (The reader is directed to Section IV.3 for a discussion on the choice of M.)
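The basis polynomials and the estimates in (5) and (8) are straightforward to implement. The following sketch (our own illustration, not the authors' code; the function names are ours) builds the normalized shifted Legendre polynomials on top of numpy's Legendre module:

```python
import numpy as np
from numpy.polynomial import legendre

def T(j, u):
    """Normalized shifted Legendre polynomial on [0, 1]:
    T_j(u) = sqrt(2j + 1) * P_j(2u - 1), so that the T_j are
    orthonormal with respect to the uniform measure on [0, 1]."""
    coef = np.zeros(j + 1)
    coef[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(2 * np.asarray(u) - 1.0, coef)

def lp_coefficients(u, M):
    """LP-hat_j = (1/n) * sum_i T_j(u_i), for j = 1, ..., M (cf. (5))."""
    u = np.asarray(u)
    return np.array([T(j, u).mean() for j in range(1, M + 1)])

def d_hat(u, lp):
    """Truncated comparison-density estimate 1 + sum_j LP-hat_j T_j(u) (cf. (8))."""
    return 1.0 + sum(c * T(j + 1, u) for j, c in enumerate(lp))

# Sanity check on a midpoint grid: for (nearly) uniform u the estimated
# coefficients are close to zero and d_hat is close to one.
grid = (np.arange(1000) + 0.5) / 1000
lp = lp_coefficients(grid, M=4)
print(np.round(lp, 4), np.round(d_hat(0.5, lp), 4))
```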

Finally, the skew-G density model in (1) implies that

    \hat{f}(x) = g(x) \hat{d}(G(x); G, F).    (10)

Figure 1: Upper panel: histogram of a source-free sample simulated from the tail of a Gaussian (green solid line; see (11)). The candidate background distribution is given by the best fit of a second-degree polynomial (red dashed line), and it is updated using the source-free data by means of (14) (purple chained line). Bottom panel: comparison density estimate (blue solid line) plotted on the u-scale and respective standard errors (light blue area) computed as in (9).

Practical remarks. Despite orthogonal polynomials being widely used tools for density estimation, when they are applied directly to the sample x_1, ..., x_n, the analysis is limited to a mere estimation of the true pdf of the data. Conversely, in LP modelling, the shifted Legendre polynomials are of fundamental importance in revealing the scientific insights enclosed in the comparison density. In this respect, G plays the role of an "oracle" distribution which provides a deeper understanding of the sensitivity of the analysis and enriches the scientist's knowledge of the nature of both the signal and the background distribution. This sets the ground for a novel data-scientific learning framework in which the data and the scientist iteratively learn from each other, and "convergence of the solution" is achieved when a mutual agreement between data-driven and scientific knowledge is reached (see Section V).

III Data-driven corrections for misspecified background models

Let the physics sample be the set of observations on which we would like to assess whether the signal of interest is present or not. Suppose that, in addition to the physics sample, a source-free dataset is given. The latter may be a sample of observations from control regions or the result of Monte Carlo simulations, assuming that no signal is present. Notice that, if no source is present, both samples are distributed according to the background distribution. Therefore, the source-free sample can be used to "learn" the unknown pdf of the background, namely f_b.

Despite the true background model being unknown, suppose that a candidate pdf, namely g_b, is available. The candidate model can be specified from previous experiments or theoretical results, or it can be obtained by fitting specific functions (e.g., polynomial, exponential, etc.) to the source-free sample. If g_b does not provide an accurate description of f_b, the sensitivity of the experiment can be strongly affected.

Consider, for instance, a source-free sample of observations whose true (unknown) distribution corresponds to the tail of a Gaussian with mean mu_b and width sigma_b over the search range, i.e.,

    f_b(x) proportional to exp{ -(x - mu_b)^2 / (2 sigma_b^2) },    (11)

suitably normalized over the range considered. Suppose that a candidate model for the background is obtained by fitting a second-degree polynomial to the source-free sample and adequately normalizing it in order to obtain a proper pdf, i.e.,

    g_b(x) proportional to beta_0 + beta_1 x + beta_2 x^2,    (12)

with fitted coefficients beta_0, beta_1, beta_2. For illustrative purposes, assume that the distribution of the signal is a Gaussian centered at mu_s, with width sigma_s and pdf

    f_s(x) proportional to exp{ -(x - mu_s)^2 / (2 sigma_s^2) },    (13)

normalized over the same range. The histogram of the source-free sample along with (11)-(13) is shown in Fig. 1. At the higher end of the spectrum, the postulated background (red dashed line) underestimates the true background distribution (green solid line). As a result, using (12) as the background model increases the chance of false discoveries in this region. Conversely, at the lower end of the spectrum, g_b overestimates f_b, reducing the sensitivity of the analysis.

It is important to point out that the discrepancy of g_b from f_b is typically due to the fact that the specific functional form imposed (in our example, a second-degree polynomial) is not adequate for the data. Thus, changing the values of the fitted parameters (or assigning priors to them) is unlikely to solve the problem. However, it is possible to "repair" g_b and obtain a suitable estimate of f_b by means of (10). Specifically, f_b can be estimated via

    \hat{f}_b(x) = g_b(x) \hat{d}(G_b(x); G_b, F_b),    (14)

where \hat{d} is the comparison density estimated via (8) on the sample u_1 = G_b(x_1), ..., u_n = G_b(x_n), whereas F_b and G_b are the true and the postulated background distributions, with pdfs as in (11) and (12), respectively.

In our example, choosing M = 2 (see Section IV.3), we obtain

    \hat{d}(u; G_b, F_b) = 1 + \hat{LP}_1 T_1(u) + \hat{LP}_2 T_2(u),    (15)

where T_1(u) and T_2(u) are the first and second normalized shifted Legendre polynomials evaluated at u = G_b(x). Notice that, by combining (14) and (15), we can easily write the background model as a series of shifted Legendre polynomials. This may be especially useful when dealing with complicated likelihoods for which a functional form is difficult to specify.

The upper panel of Fig. 1 shows the "calibrated" background model in (14) as a purple chained line; it matches almost exactly the true background density in (11) (green solid line). The plot of \hat{d} in the bottom panel of Fig. 1 provides important insights on the deficiencies of (12) as a candidate background model. Specifically, the magnitude and the direction of the departure of \hat{d} from one correspond to the estimated departure of g_b from f_b at each value of u. Therefore, if \hat{d} is below one in the region where we expect the signal to occur, using \hat{f}_b in place of g_b increases the sensitivity of the analysis. Conversely, if \hat{d} is above one outside the signal region, the use of \hat{f}_b instead of g_b prevents false discoveries.

Notice that in this article we only consider continuous data. In this respect, the goal is to learn the model of the background considered as a continuum, and no binning is applied. Therefore, the histograms presented here are only a graphical tool used to display the data distribution and are not intended to represent an actual binning of the data.
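To make the "repair" step concrete, here is a self-contained sketch of the full update g_b(x) * \hat{d}(G_b(x)) with M = 1. It is our own toy, deliberately simpler than the polynomial example above: the postulated background is Uniform(0, 1) and the true background is linearly tilted, so the exact comparison density is d(u) = 2u:

```python
import numpy as np

rng = np.random.default_rng(1)

def T1(u):
    # First normalized shifted Legendre polynomial on [0, 1].
    return np.sqrt(3) * (2 * np.asarray(u) - 1)

# "Source-free" sample from the (here known, in practice unknown) tilted
# density f_b(x) = 2x on [0, 1]; inverse-cdf sampling: F^{-1}(v) = sqrt(v).
x = np.sqrt(rng.uniform(size=20000))

# Postulated background: Uniform(0, 1), so G_b(x) = x and g_b(x) = 1,
# and the calibrated model reduces to f_b_hat(x) = d_hat(x).
u = x                      # u_i = G_b(x_i)
lp1 = T1(u).mean()         # \hat{LP}_1, cf. (5)

def f_b_hat(x):
    # Skew-G update with M = 1: g_b(x) * (1 + lp1 * T1(G_b(x))), cf. (14).
    return 1.0 + lp1 * T1(x)

# The exact comparison density is d(u) = 2u, i.e. LP_1 = 1/sqrt(3),
# so the calibrated model should recover the tilt almost exactly.
print(lp1, f_b_hat(0.25), f_b_hat(0.75))
```

With LP_1 = 1/sqrt(3), the M = 1 estimate 1 + \hat{LP}_1 T_1(u) reproduces 2u exactly in expectation, which is why a single term suffices in this toy.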

IV LP-based inference

When discussing the skew-G density model in (1), we have seen that d(u; G, F) = 1 for all u in [0, 1] if f is equivalent to g. Additionally, the graph of \hat{d} provides an exploratory tool to understand the nature of the deviation of g from f. This section introduces a novel inferential framework to test the significance of the departure of \hat{d} from one. Specifically, we aim to test the hypotheses

    H_0: d(u; G, F) = 1 for all u in [0, 1]  versus  H_1: d(u; G, F) != 1 for some u in [0, 1].    (16)

First, an overall test, namely the deviance test, is presented. The deviance test assesses whether f deviates significantly from g anywhere over the range of X considered. Second, adequate confidence bands are constructed in order to assess where significant departures occur.

IV.1 The deviance test

The quantity \sum_{j >= 1} LP_j^2 is known as the deviance [LPFdr] and quantifies the departure of d(u; G, F) from one. If the deviance is equal to zero, we may expect that f is approximately equivalent to g; hence, we test

    H_0: LP_1 = ... = LP_M = 0  versus  H_1: LP_j != 0 for some j = 1, ..., M,    (17)

by means of the test statistic

    D_n(M) = n \sum_{j=1}^{M} \hat{LP}_j^2.    (18)

From (7) it follows that, under H_0, D_n(M) is asymptotically chi-squared distributed with M degrees of freedom; hence, an asymptotic p-value for (17) is given by

    P( chi^2_M > d_obs ),    (19)

where d_obs is the value of D_n(M) observed on the data.

Practical remarks. Notice that H_1 in (17) implies H_1 in (16). Similarly, H_0 in (16) implies H_0 in (17); however, the opposite is not true in general, since there may be some non-zero coefficients LP_j for j > M. Therefore, choosing M small may lead to conservative, but yet valid, inference.
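A minimal implementation of the deviance test might look as follows (our own sketch, not the authors' code; recall that the chi-square null distribution is asymptotic):

```python
import numpy as np
from numpy.polynomial import legendre
from scipy import stats

def T(j, u):
    # Normalized shifted Legendre polynomial on [0, 1].
    coef = np.zeros(j + 1); coef[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(2 * np.asarray(u) - 1.0, coef)

def deviance_test(u, M):
    """Deviance statistic n * sum_j LP-hat_j^2 and its asymptotic
    chi-square p-value with M degrees of freedom (cf. (18)-(19))."""
    u = np.asarray(u)
    n = u.size
    lp = np.array([T(j, u).mean() for j in range(1, M + 1)])
    d_obs = n * np.sum(lp ** 2)
    return d_obs, stats.chi2.sf(d_obs, df=M)

rng = np.random.default_rng(2)
# Under H0 (u uniform) the p-value behaves like a Uniform(0, 1) draw;
# under a tilted alternative it collapses toward zero.
_, p_null = deviance_test(rng.uniform(size=2000), M=3)
_, p_alt = deviance_test(np.sqrt(rng.uniform(size=2000)), M=3)
print(p_null, p_alt)
```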

IV.2 Confidence bands

The estimator in (8) only accounts for the first M terms of the polynomial series in (4). Therefore, \hat{d} is a biased estimator of d. Specifically, its bias, namely Bias[\hat{d}(u)], is given by

    Bias[\hat{d}(u)] = - \sum_{j > M} LP_j T_j(u),    (20)

for u in [0, 1]. As a result, when (20) is large, confidence bands based on \hat{d} are shifted away from the true density d. Despite the fact that the bias cannot be easily quantified in the general setting, it follows from (6) that, when H_0 in (16) is true, Bias[\hat{d}(u)] = 0. Thus, we can exploit this property to construct reliable confidence bands under the null. Specifically, the goal is to identify c_alpha such that

    P( 1 - c_alpha SE_0(u) <= \hat{d}(u) <= 1 + c_alpha SE_0(u), for all u in [0, 1] ) = 1 - alpha,    (21)

where alpha is the desired significance level.[4]

[4] In astrophysics, the statistical significance is often expressed in terms of the number of sigma-deviations from the mean of a standard normal. For instance, a 2-sigma significance corresponds to alpha = 1 - Phi(2), where Phi denotes the cdf of a standard normal.

If the bias determines where the confidence bands are centered, the distribution and the variance of \hat{d} determine their width. Under H_0, the standard error of \hat{d} in (9) reduces to

    SE_0(u) = sqrt( (1/n) \sum_{j=1}^{M} T_j^2(u) ).    (22)

Additionally, (7) implies that \hat{d}(u) is asymptotically normally distributed; hence,

    ( \hat{d}(u) - 1 ) / SE_0(u) ->_d N(0, 1),    (23)

as n -> infinity, for all u in [0, 1], under H_0.

We can then construct confidence bands under H_0 which satisfy (21) by means of tube formulae (see [larry, Ch. 5] and [PL05]), i.e.,

    [ 1 - c_alpha SE_0(u), 1 + c_alpha SE_0(u) ],    (24)

where c_alpha is the solution of the tube-formula equation

    (kappa_0 / pi) exp( -c_alpha^2 / 2 ) + 2 (1 - Phi(c_alpha)) = alpha,    (25)

with kappa_0 a constant determined by the basis functions T_1, ..., T_M. If \hat{d} is within the bands in (24) over the entire range [0, 1], we conclude that there is no evidence that f deviates significantly from g anywhere over the range considered. Conversely, we expect significant departures to occur in regions where \hat{d} lies outside the confidence bands.
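The null standard error sqrt((1/n) * sum_j T_j(u)^2) is straightforward to compute. The sketch below (ours, illustrative only) builds pointwise normal bands around one; note that these are pointwise, whereas the simultaneous bands in (24) require the larger tube-formula constant c_alpha:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy import stats

def T(j, u):
    # Normalized shifted Legendre polynomial on [0, 1].
    coef = np.zeros(j + 1); coef[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(2 * np.asarray(u) - 1.0, coef)

def se0(u, n, M):
    """Standard error of d_hat(u) under H0: sqrt((1/n) * sum_j T_j(u)^2)."""
    return np.sqrt(sum(T(j, u) ** 2 for j in range(1, M + 1)) / n)

def pointwise_bands(u, n, M, alpha=0.05):
    # Pointwise normal bands around one; the simultaneous (tube-formula)
    # bands would replace z by the larger constant c_alpha.
    z = stats.norm.isf(alpha / 2)
    s = se0(u, n, M)
    return 1 - z * s, 1 + z * s

grid = np.linspace(0, 1, 5)
lo, hi = pointwise_bands(grid, n=1000, M=2, alpha=0.05)
print(np.round(lo, 3), np.round(hi, 3))
```

Because T_j(u) grows toward the endpoints of [0, 1], the bands widen there, a feature visible in the CD plots of Figs. 2-3.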

Practical remarks. Equations (9) and (20), together, imply that, as M increases, the bias of \hat{d} decreases while its variance inflates, since more and more terms contribute to it. However, when H_0 in (16) is true, Bias[\hat{d}(u)] = 0 regardless of the choice of M. This implies that the confidence bands in (24) are only affected by the variance and the asymptotic distribution of \hat{d} under H_0.

IV.3 Choice of M

The number M of estimates considered determines the smoothness of \hat{d}, with smaller values of M leading to smoother estimates. The deviance test can be used to select the value of M which maximizes the sensitivity of the analysis according to the following scheme:

  1. Choose a sufficiently large value M_max.

  2. Obtain the estimates \hat{LP}_1, ..., \hat{LP}_{M_max} as in (5).

  3. For m = 1, ..., M_max: calculate the deviance test p-value p_m as in (19), i.e.,

        p_m = P( chi^2_m > d_obs(m) ),    (26)

     with d_obs(m) = n \sum_{j=1}^{m} \hat{LP}_j^2.

  4. Choose M such that

        M = argmin_m p_m.    (27)
IV.3.1 Adjusting for post-selection

As with any data-driven selection process, the scheme presented above affects the distribution of (8) and can lead to overly optimistic inference [xiaotong, potscher]. Despite this aspect being often ignored in practical applications, correct coverage can only be guaranteed if adequate corrections are implemented. In our setting, the number of models under comparison is typically small; therefore, post-selection inference can be easily adjusted by means of Bonferroni's correction [bonferroni35]. Specifically, the adjusted deviance p-value is given by

    min( 1, M_max * p_{M*} ),    (28)

where M* is the value selected via (27), whereas confidence bands can be adjusted by substituting c_alpha in (24) with the constant obtained by solving the tube-formula equation at level alpha / M_max.
Notice that, since Bonferroni's correction leads to an upper bound for the overall significance, the resulting coverage will be higher than the nominal one. Alternatively, approximate post-selection confidence bands and inference can be constructed using Monte Carlo and/or resampling methods, repeating the selection process at each replicate.
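The selection scheme and the Bonferroni adjustment can be sketched as follows (our own illustrative code; the tilted sample and M_max = 6 are arbitrary choices of ours):

```python
import numpy as np
from numpy.polynomial import legendre
from scipy import stats

def T(j, u):
    # Normalized shifted Legendre polynomial on [0, 1].
    coef = np.zeros(j + 1); coef[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(2 * np.asarray(u) - 1.0, coef)

def select_M(u, M_max):
    """Deviance p-values p_m for m = 1..M_max, the minimizer M*, and the
    Bonferroni-adjusted p-value min(1, M_max * p_{M*})."""
    u = np.asarray(u)
    n = u.size
    lp = np.array([T(j, u).mean() for j in range(1, M_max + 1)])
    cum = n * np.cumsum(lp ** 2)                      # d_obs(m), m = 1..M_max
    pvals = stats.chi2.sf(cum, df=np.arange(1, M_max + 1))
    m_star = int(np.argmin(pvals)) + 1
    p_adj = min(1.0, M_max * pvals[m_star - 1])
    return m_star, pvals, p_adj

rng = np.random.default_rng(3)
u = np.sqrt(rng.uniform(size=2000))    # tilted sample: H0 is false
m_star, pvals, p_adj = select_M(u, M_max=6)
print(m_star, p_adj)
```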

Practical remarks. As noted in Section II, the estimate in (8) involves the first M sample moments of u_1, ..., u_n; therefore, M can be interpreted as the order of the highest moment which we expect to contribute in discriminating the distribution of U from uniformity. As a rule of thumb, M_max is typically chosen to be small relative to the sample size. Finally, Steps 1-4 aim to select the approximant based on the most significant moments, while excluding powers of higher order. A further note on model-denoising is given in Section VII.

V A data-scientific approach to signal searches

The tools presented in Sections II and IV provide a natural framework to simultaneously

  • assess the validity of the postulated background model and, if necessary, update it using the data (Section III);

  • perform signal detection on the physics sample;

  • characterize the signal when a model for it is not available.


Figure 2: Deviance test and CD plot for the source-free sample. The comparison density is estimated via (15) (solid blue line), and its standard error (light blue area) is computed according to (9). Finally, confidence bands have been constructed around one (grey areas) via (24), with the significance level adjusted as described in Section IV.3.1 to account for post-selection inference, leading to an increase of the nominal coverage.

Furthermore, if the model for the signal is known (up to some free parameters), it is possible to

  • further refine the background or signal distribution;

  • detect hidden signals from new unexpected sources.

Tasks (a)-(e) can be tackled in two main phases. In the first phase, the postulated background model is "calibrated" on a source-free sample in order to improve the sensitivity of the analysis and reduce the risk of false discoveries. The second phase focuses on searching for the signal of interest and involves both a nonparametric signal detection stage and a semiparametric stage for signal characterization. Both phases and the respective steps are described in detail below and summarized in Algorithm 1.

V.1 Background calibration

As discussed in Section III, deviations of \hat{d} from one suggest that a further refinement of the candidate background model is needed. However, as M increases, the deviations of \hat{d} from one may become more and more prominent while the variance inflates. Thus, it is important to assess if such deviations are indeed significant. In order to address this task, the analysis of Section III can be further refined in light of the inferential tools introduced in Section IV.

INPUTS: source-free sample;
             postulated background distribution G_b;
             physics sample.
             If available: signal distribution f_s.
PHASE A: background calibration
  1. Estimate \hat{d} on the source-free sample and test (16) via deviance test and CD plot.
  2. If H_0 is rejected, set the calibrated background \hat{g}_b as in (30); else set \hat{g}_b = g_b.
PHASE B: signal search
Stage 1: nonparametric signal detection
  3. Set the postulated model for the physics sample equal to \hat{g}_b.
  4. Estimate \hat{d} on the physics sample and test (16) via deviance test and CD plot.
  5. If H_0 is rejected, claim evidence in favor of the signal and go to Step 6; else claim that no signal is present and stop.
Stage 2: semiparametric signal characterization
  6. If f_s is given, fit (33); else use the CD plot of \hat{d} and the theory available to specify a suitable model for the signal, and fit (33).
  7. Estimate \hat{d} on the physics sample, with (33) as postulated model, and test (16) via deviance test and CD plot.
  8. If H_0 is rejected, claim evidence of an unexpected signal and use the CD plot of \hat{d} and the theory available to further investigate the nature of the deviation from (33); else go to Step 9.
  9. Compute the estimate in (34) and use it to refine the background or the signal model as in (35). Go back to Step 3.

Algorithm 1: a data-scientific signal search

For the toy example discussed in Section III, we have seen that g_b overestimates f_b in the signal region and underestimates it at the higher end of the range considered (Fig. 1). We can now assess whether any of these deviations are significant by implementing the deviance test in (18)-(19), whereas, to identify where the most significant departures occur, we construct confidence bands under the null model as in (24), i.e., assuming that no "update" of g_b is necessary.

The results are collected in the comparison density plot, or CD plot, presented in Fig. 2. First, a value of M has been selected as in (27), and the respective deviance test (adequately adjusted via Bonferroni) indicates that the deviation of f_b from g_b is significant (see the adjusted p-value reported in Fig. 2). Additionally, the estimated comparison density in (15) lies outside the confidence bands in the region where the signal specified in (13) is expected to occur; hence, using (14) instead of (12) is recommended in order to improve the sensitivity of the analysis in the signal region.

Important remarks on the CD plot. When comparing different models for the background, or when assessing if the data distribution deviates from the model expected when no signal is present (see Section V.2.1), it is common practice to visualize the results of the analysis by superimposing the models under comparison on the histogram of the data observed on the original scale (e.g., upper panel of Fig. 1). This corresponds to a data visualization in the density domain. Conversely, the CD plot (e.g., Fig. 2) provides a representation of the data in the comparison density domain, which offers the advantage of connecting the true density of the data with the quantiles of the postulated model (see (3)). Consequently, the most substantial departures of the data distribution from the expected model are magnified, and those due to random fluctuations are smoothed out (see also Section V.2.2). Furthermore, the deviance test and the CD plot together provide a powerful goodness-of-fit and exploratory tool which, differently from classical methods such as Anderson-Darling [anderson] and Shapiro-Wilk [shapiro], not only allows us to test whether the distributions under comparison differ, but also to assess how and where they differ. As a result, the CD plot can be used to characterize the unknown signal distribution (see Section V.2.2) and to identify exclusion regions (e.g., Case I in Section V.2.1).

Reliability of the calibrated background model. The size of the source-free sample plays a fundamental role in the validity of the calibrated model as a reliable description of the background. Specifically, the randomness involved in (14) only depends on the estimates \hat{LP}_1, ..., \hat{LP}_M. If the source-free sample size n is sufficiently large, by the strong law of large numbers, each \hat{LP}_j converges almost surely to LP_j. Therefore, despite the variance of \hat{d} becoming negligible as n -> infinity, one has to account for the fact that \hat{d} leads to a biased estimate of d when LP_j != 0 for some j > M (see Section IV.2). For sufficiently smooth densities, a visual inspection is often sufficient to assess if \hat{d} (and, consequently, \hat{f}_b) provides a satisfactory fit for the data, whereas, for more complex distributions, the effect of the bias can be mitigated by considering larger values of M and model-denoising (see Section VII.1).

V.2 Signal search

V.2.1 Nonparametric signal detection

The background calibration phase allows the specification of a well-tailored model for the background, namely \hat{g}_b, which simultaneously integrates the initial guess g_b and the information carried by the source-free data sample. Hereafter, we disregard the source-free data sample and focus on analyzing the physics sample.

Figure 3: Deviance test and CD plots for Case I, where no signal is present (left panel), and Case II, where the signal is present (right panel). In both cases, the postulated distribution corresponds to the cdf of the calibrated background model in (30). For the sake of comparison, \hat{d} has been estimated via (8) with the same value of M for both samples.


Figure 4: Histogram of a physics sample of observations from both background and signal, with pdf as in (31) (grey solid line). The true density has been estimated semiparametrically, as in (33) (pink dashed line), whereas the nonparametric estimates have been computed as in (10), by plugging in the comparison density estimates obtained with two different values of M (blue chained line and black dotted line).

Under the assumption that the source-free sample has no significant source contamination, we expect that, if the signal is absent, both the source-free and the physics sample follow the same distribution. Therefore, the calibrated background model \hat{g}_b plays the role of the postulated distribution for the physics sample, i.e., the model that we expect the data to follow when no signal is present.

Let f_p be the (unknown) true pdf of the physics sample, which may or may not carry evidence in favor of the source of interest. When no model for the signal is specified, it is reasonable to consider any significant deviation of f_p from \hat{g}_b as an indication that a signal of unknown nature may be present. In this setting, similarly to the background calibration phase, we can construct deviance tests and CD plots to assess if and where significant departures of f_p from \hat{g}_b occur. Two possible scenarios are considered: a physics sample which collects only background data (Case I), and a physics sample of observations from both background and signal (Case II).

Case I: background-only. Let the physics sample be a set of observations whose true (unknown) pdf is equivalent to f_b in (11). We set the postulated model to

    \hat{g}_b(x) = g_b(x) \hat{d}(G_b(x); G_b, F_b),    (30)

where g_b and \hat{d} are defined as in (12) and (15), respectively. The resulting CD plot and deviance test are reported in the left panel of Fig. 3.

When applying the scheme in Section IV.3, none of the values of m considered leads to significant results; therefore, for the sake of comparison with Case II below, we choose the same value of M used there. Not surprisingly, the estimated comparison density approaches one over the entire range and lies entirely within the confidence bands. This suggests that the true distribution of the data does not differ significantly from the model which accounts only for the background. Similarly, the deviance test leads to very low significance; hence, we conclude that our physics sample does not provide evidence in favor of the new source.

Case II: background + signal. Let the physics sample be a set of observations whose true (unknown) pdf is equal to

    f_p(x) = (1 - lambda) f_b(x) + lambda f_s(x),    (31)

with f_b and f_s defined as in (11) and (13), respectively, and lambda the proportion of signal present. The histogram of the data and the graph of f_p are plotted in Fig. 4. As in Case I, we set the postulated model as in (30).

Figure 5: Upper panels: deviance test and CD plot for Case IIa, where the signal is present and the postulated distribution corresponds to the cdf of the estimated background+signal model in (33). Bottom panels: deviance test and CD plots for Case III where, in addition to the signal of interest, an additional resonance is present. The data are first analyzed considering the background-only pdf in (12) as the postulated model (left panel). The analysis is then repeated by assuming the fitted background+signal model in (33) as the postulated distribution (right panel). All the comparison density estimates have been computed as in (8).

    The CD plot and deviance test in the right panel of Fig. 3 show a significant departure of the data distribution from the background-only model in (30). The maximum significance of the deviance is achieved at , leading to a rejection of the null hypothesis at a significance level (adjusted p-value). The CD plot shows a prominent peak at the lower end of the spectrum; hence, we conclude that there is evidence in favor of the signal, and we proceed to characterize its distribution as described in Section V.2.2.

    v.2.2 Semiparametric signal characterization

    The signal detection strategy proposed in Section V.2.1 does not require the specification of a distribution for the signal. However, if a model for the signal is known (up to some free parameters), the analysis can be further refined by providing a parametric estimate of the comparison density and assessing if additional signals from new unexpected sources are present.

    Case IIa: background + (known) signal. Assume that a model for the signal, , is given, with

    being a vector of unknown parameters. Since the CD plot in the right panel of Fig. 3 provides evidence in favor of the signal, we expect the data to be distributed according to the pdf


    where is the calibrated background distribution in (30) and and can be estimated via Maximum Likelihood (ML). Letting and be the ML estimates of and respectively, we specify


    as postulated model. For simplicity, we take to be fully specified as in (13); we then construct the deviance test and the CD plot to assess if (33) deviates significantly from the true distribution of the data. The scheme in Section IV.3 has been implemented with , and none of the values of considered led to significant results. The CD plot and deviance test for are reported in the upper left panel of Fig. 5. Both the large p-value of the deviance test (adjusted p-value) and the CD plot suggest that no significant deviations occur; thus, (33) is a reliable model for the physics sample.
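The ML step above can be sketched as follows. This is not the paper's fit of (33): the exponential background, the Gaussian signal shape, and the toy data are illustrative assumptions; only the structure (maximizing the likelihood of a background + signal mixture over the signal fraction and signal parameters) mirrors the text.

```python
import numpy as np
from scipy import stats, optimize

# Toy sample: 850 background draws plus 150 signal draws (illustrative).
rng = np.random.default_rng(2)
x = np.concatenate([rng.exponential(10.0, 850),
                    rng.normal(25.0, 1.8, 150)])

def nll(p):
    """Negative log-likelihood of the mixture (1 - eta) * g_b + eta * g_s."""
    eta, mu, sigma = p
    f = (1 - eta) * stats.expon.pdf(x, scale=10.0) \
        + eta * stats.norm.pdf(x, mu, sigma)
    return -np.sum(np.log(np.clip(f, 1e-300, None)))

res = optimize.minimize(nll, x0=[0.10, 24.0, 2.0],
                        bounds=[(1e-3, 0.5), (20.0, 30.0), (0.5, 5.0)])
eta_hat, mu_hat, sigma_hat = res.x
```

The fitted mixture can then serve as the postulated model in a second round of deviance tests and CD plots, as described in the text.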

    Furthermore, we can use (33) to further refine our or distributions. Specifically, we first construct a semiparametric estimate of , i.e.,


    and rewrite


    In the upper right panel of Fig. 5, the true comparison density (grey dashed line) for our physics sample is compared with its semiparametric estimate computed as in (34) (pink dashed line) with in (13). The graphs of two nonparametric estimates of computed via (8) with and (blue chained line and black dotted line), respectively, are added to the same plot. Not surprisingly, incorporating the information available on the signal distribution drastically improves the accuracy of the analysis. The semiparametric estimate matches almost exactly, whereas both nonparametric estimates show some discrepancies from the true comparison density. All the estimates suggest that there is only one prominent peak, in correspondence with the signal region.

    When moving from the comparison density domain to the density domain in Fig. 4, the discrepancies between the nonparametric estimates and the true density are substantially magnified. Specifically, when computing (8) and (10) with (blue chained line), the height of the signal peak is underestimated, whereas, when choosing , the estimate exhibits high bias at the boundaries (dotted black line). Boundary bias is a common problem among nonparametric density estimation procedures [e.g., larry, Ch. 5, Ch. 8]; when aiming for a nonparametric estimate of the data density , solutions exist to mitigate this problem [e.g., efromovich].

    Case IIb: background + (unknown) signal. When the signal distribution is unknown, the CD plot of can be used to guide the scientist in navigating across the different theories on the astrophysical phenomenon under study and specify a suitable model for the signal, i.e., . The model proposed can then be validated, as in Case IIa, by fitting (33) and constructing deviance tests and CD plots.

    At this stage, the scientist can iteratively query the data and explore the distribution of the signal by assuming different models. A viable signal characterization is achieved when no significant deviations of from one are observed (e.g., see the upper left panel of Fig. 5). Notice that a similar approach can also be followed in the background calibration stage (Section V.1) to provide a parametric characterization of the background distribution.

    Case III: background + (known) signal + unexpected source. The tools proposed so far can also be used to detect signals from unexpected sources whose pdfs are, by design, unknown.


    Figure 6: Deviance test and CD plot for the source-free simulated Fermi-LAT sample.


    Figure 7: Histogram of two simulated Fermi-LAT physics samples of observations. The grey histogram corresponds to the background-only sample, whereas the blue histogram corresponds to the dark matter signal sample.

    Suppose that the physics sample contains observations whose true (unknown) pdf is equal to


    where is the pdf of the unexpected signal, assumed to be normal with center at 37 and width 1.8. Let and be defined as in (11) and (13), respectively, and let and .

    We can start with a nonparametric signal detection stage by setting in (30), with defined as in (13) and estimated via MLE. The respective CD plot and deviance tests are reported in the bottom left panel of Fig. 5.

    Figure 8: Deviance test and CD plots for the simulated Fermi-LAT background-only sample of size 200 (left panel) and the simulated Fermi-LAT dark matter signal sample of size 200 (right panel). In both cases, the postulated distribution corresponds to the cdf of the calibrated background model in (38). For the sake of comparison, has been estimated via (8), with in both cases.

    Choosing , as in (27), both the CD plot and deviance test indicate a significant departure from the expected background-only model, and a prominent peak is observed in correspondence with the signal of interest centered around 25. A second but weaker peak appears right on the edge of our confidence bands, suggesting the possibility of an additional source. At this stage, if were unknown, we could proceed with a semiparametric signal characterization as in Case IIb. Assuming instead that the distribution of the signal of interest is known and given by (13), we fit (33), aiming to capture a significant deviation in correspondence with the second bump. This is precisely what we observe in the bottom right panel of Fig. 5. Here the estimated comparison density deviates from (30) around 35, providing evidence in favor of an additional signal in this region. We can then proceed as in Case IIb by exploring the theories available and/or collecting more data to further investigate the nature and cause of the unanticipated bump.

    Vi Background mismodelling due to instrumental noise and signal detection on small samples

    When conducting real data analyses, one must account for the fact that the data-generating process is affected by both statistical uncertainty and non-random uncertainty due to instrumental noise. As a result, even when a model for the background is known, the data distribution may deviate from it because of the smearing introduced by the detector [e.g., lyonsPHY]. To account for the instrumental error affecting the data, it is common practice to consider folded distributions, where the errors due to the detector are often modelled assuming a normal distribution or estimated via nonparametric methods [e.g., PHY, PHY2]. Here, it is shown how the same approach described in Sections V.1 and V.2.1 can be used to account for background mismodelling due to ignoring the instrumental noise.
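The "folding" idea can be sketched numerically: convolve the nominal background pdf with a Gaussian resolution kernel on a grid. The power-law shape, the energy window, and the resolution sigma below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Grid-based "folding" of a nominal background pdf with Gaussian detector
# smearing; power-law index, window, and resolution are illustrative.
grid = np.linspace(1.0, 100.0, 2000)
dx = grid[1] - grid[0]
pdf = grid**-2.4
pdf /= pdf.sum() * dx                       # normalize on the window

sigma = 1.5                                 # assumed detector resolution
K = np.exp(-(grid[:, None] - grid[None, :])**2 / (2 * sigma**2))
K /= np.sqrt(2 * np.pi) * sigma             # Gaussian kernel N(e | e', sigma)
folded = (K * pdf[None, :]).sum(axis=1) * dx    # integral of N(e|e',s) f(e') de'
folded /= folded.sum() * dx                 # renormalize (mass lost at edges)
```

The folded density is smoother than the nominal one, and the mismatch between the two is exactly the kind of background mismodelling the calibration stage is meant to absorb.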

    The data considered come from a simulated observation by the Fermi Large Area Telescope atwood with realistic representations of the effects of the detector and of the backgrounds present meJINST, meMNRAS. The Fermi-LAT is a pair-conversion gamma-ray telescope on board the Earth-orbiting Fermi satellite. It measures the energies of, and images, gamma-rays from about 100 MeV up to several TeV. The goal of the analysis is to assess if the data could result from the self-annihilation of a dark matter particle.

    First, we calibrate our “folded” background distribution over a source-free sample of 35,157 i.i.d. observations from a power-law distributed background source with index 2.4 (i.e., in (37)) and contaminated by instrumental errors of unknown distribution. Let the distribution of the astrophysical background be


    where is a normalizing constant and energies are expressed in giga electron volts (GeV). We proceed by fitting (37) via maximum likelihood and setting it as the postulated background distribution; this is equivalent to assuming that the effect of the detector is irrelevant. We then estimate and as in (8) and (14), respectively, with (and chosen as in Section IV.3). The deviance tests and CD plot are reported in Fig. 6. The resulting estimate for the background distribution is


    where is the cdf of (37) and is the ML estimate of in (37).
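The maximum-likelihood fit of a power-law index on a truncated support can be sketched as follows. The window [1, 100] GeV, the sample (drawn by inverse-cdf sampling rather than a detector simulation), and the true index 2.4 are illustrative stand-ins for the Fermi-LAT setup described above.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
a, b, alpha = 1.0, 100.0, 2.4   # support and true index (illustrative)

# Inverse-cdf sampling of a power law truncated to [a, b], index != 1.
u = rng.random(35157)
x = (a**(1 - alpha) + u * (b**(1 - alpha) - a**(1 - alpha)))**(1 / (1 - alpha))

def nll(al):
    """Negative log-likelihood of x**(-al) / norm on [a, b]."""
    norm = (b**(1 - al) - a**(1 - al)) / (1 - al)
    return al * np.log(x).sum() + len(x) * np.log(norm)

alpha_hat = optimize.minimize_scalar(nll, bounds=(1.5, 4.0),
                                     method="bounded").x
```

With tens of thousands of observations, the index is recovered to within a few hundredths, so residual misfit in the calibration sample is dominated by the (unmodelled) detector smearing rather than by estimation error.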

    Second, we proceed with the signal detection phase by setting . Similarly to Section V.2.1, two physics samples are given: one containing 200 observations from the background source, distributed as in (37), and the other containing 200 observations from a dark matter emission. The signal distribution from which the data have been simulated is the pdf of gamma-ray dark matter energies in [bergstrom, Eq. 28]. Both physics samples include the contamination due to the instrumental noise, and the respective histograms are shown in Fig. 7.

    Figure 9: Comparison of the estimators (blue solid lines) and (pink dashed lines) for the toy example in Section III and the Fermi-LAT analysis in Section VI. When available, the true comparison densities are displayed as dotted black curves.

    The selection scheme in Section IV.3 suggests that no significant departure from (38) occurs on the background-only physics sample, whereas, for the signal sample, the strongest significance is observed at ; therefore, for the sake of comparison, we choose in both cases. The respective deviance tests and CD plots are reported in Fig. 8. As expected, the left panel of Fig. 8 shows a flat estimate of the comparison density on the background-only sample. Conversely, the right panel of Fig. 8 suggests that an extra bump is present over the region with significance (adjusted p-value=). As in (34), it is possible to proceed with the signal characterization stage; in this setting, however, one has to account for the fact that the signal distribution must also include the smearing effect of the detector.

    Vii Model-denoising

    When dealing with complex background distributions, a large value of may be necessary to reduce the bias of the estimated comparison density. However, since larger values of lead to rougher estimates of the true density, it is often convenient to denoise the estimator in (8). Section VII.1 reviews the model-denoising approach proposed by LPapproach, LPmode, whereas Section VII.2 briefly discusses inference and model selection in this setting. Finally, Section VII.3 compares the results obtained with the full and the denoised solutions on the examples in Sections V and VI.

    vii.1 AIC denoising

    Let be the estimate of the first coefficients of the expansion in (4). The most “significant” coefficients are selected by sorting the respective estimates so that

    and choosing the value for which in (39) is maximum.


    The AIC-denoised estimator of is given by


    where is the estimate whose square is the largest among , is the respective shifted Legendre polynomial and


    Practical remarks. Recall that the first coefficients can be expressed as a linear combination of the first moments of . Thus, the AIC-denoising approach selects the coefficients which carry all the “sufficient” information on the first moments of the distribution.
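The selection rule can be sketched as follows. Since (39) is not reproduced here, the AIC-style criterion below (sum of the top-k sorted squared coefficients minus 2k/n) is an assumption in the spirit of the text, not the paper's exact penalty.

```python
import numpy as np

def aic_denoise(lp, n, penalty=2.0):
    """Keep the k coefficients with the largest squares, where k maximizes
    the AIC-style criterion sum(lp_sorted**2) - penalty * k / n."""
    order = np.argsort(lp**2)[::-1]          # indices by decreasing lp^2
    crit = np.cumsum(lp[order]**2) - penalty * np.arange(1, len(lp) + 1) / n
    k = int(np.argmax(crit)) + 1             # number of coefficients retained
    keep = np.zeros_like(lp)
    keep[order[:k]] = lp[order[:k]]          # zero out the discarded ones
    return keep, k

# Two strong and two weak coefficients: only the strong pair survives.
lp = np.array([0.5, 0.01, 0.3, 0.005])
keep, k = aic_denoise(lp, n=100)
# k == 2; keep == [0.5, 0.0, 0.3, 0.0]
```

The denoised comparison density is then rebuilt from the retained coefficients only, which smooths away the wiggles contributed by the near-zero terms.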

    Example                          Method      selected    Deviance    Adjusted
                                                             p-values    p-values
    Toy example, Calibration         Full
                                     Denoised
    Toy example, Case I (M=18)       Full
                                     Denoised
    Toy example, Case II             Full
                                     Denoised
    Toy example, Case III            Full
                                     Denoised
    Fermi-LAT example, Calibration   Full
                                     Denoised
    Fermi-LAT example, Bkg-only      Full
                                     Denoised
    Fermi-LAT example                Full