
Hypothesis Testing For Densities and High-Dimensional Multinomials: Sharp Local Minimax Rates

06/30/2017
by Sivaraman Balakrishnan, et al.

We consider the goodness-of-fit testing problem of distinguishing whether the data are drawn from a specified distribution, versus a composite alternative separated from the null in the total variation metric. In the discrete case, we consider goodness-of-fit testing when the null distribution has a possibly growing or unbounded number of categories. In the continuous case, we consider testing a Lipschitz density, with possibly unbounded support, in the low-smoothness regime where the Lipschitz parameter is not assumed to be constant. In contrast to existing results, we show that the minimax rate and critical testing radius in these settings depend strongly, and in a precise way, on the null distribution being tested, and this motivates the study of the (local) minimax rate as a function of the null distribution. For multinomials the local minimax rate was recently studied in the work of Valiant and Valiant. We re-visit and extend their results and develop two modifications to the chi-squared test whose performance we characterize. For testing Lipschitz densities, we show that the usual binning tests are inadequate in the low-smoothness regime and we design a spatially adaptive partitioning scheme that forms the basis for our locally minimax optimal tests. Furthermore, we provide the first local minimax lower bounds for this problem, which yield a sharp characterization of the dependence of the critical radius on the null hypothesis being tested. In the low-smoothness regime we also provide adaptive tests that adapt to the unknown smoothness parameter. We illustrate our results with a variety of simulations that demonstrate the practical utility of our proposed tests.


1 Introduction

Hypothesis testing is one of the pillars of modern mathematical statistics with a vast array of scientific applications. There is a well-developed theory of hypothesis testing starting with the work of Neyman and Pearson [22], and their framework plays a central role in the theory and practice of statistics. In this paper we re-visit the classical goodness-of-fit testing problem of distinguishing the hypotheses:

(1)

for some set of distributions . This fundamental problem has been widely studied (see for instance [19] and references therein).

A natural choice of the composite alternative, one that has a clear probabilistic interpretation, excludes a total variation neighborhood around the null, i.e. we take . This is equivalent to , and we use this representation in the rest of this paper. However, there exist no consistent tests that can distinguish an arbitrary distribution from alternatives separated in ; see [17, 2]. Hence, we impose structural restrictions on and . We focus on two cases:

  1. Multinomial testing: When the null and alternate distributions are multinomials.

  2. Lipschitz testing: When the null and alternate distributions have Lipschitz densities.

The problem of goodness-of-fit testing for multinomials has a rich history in statistics and popular approaches are based on the χ²-test [24] or the likelihood ratio test [32, 5, 22]; see, for instance, [11, 21, 9, 23, 25] and references therein. Motivated by connections to property testing [26], there is also a recent literature developing in computer science; see [13, 30, 3, 10]. Testing Lipschitz densities is one of the basic non-parametric hypothesis testing problems and tests are often based on the Kolmogorov-Smirnov or Cramér-von Mises statistics [27, 7, 31]. This problem was originally studied from the minimax perspective in the work of Ingster [15, 14]. See [14, 12, 1] for further references.

In the goodness-of-fit testing problem in (1), previous results use the (global) critical radius as a benchmark. Roughly, this global critical radius is a measure of the minimal separation between the null and alternate hypotheses that ensures distinguishability, as the null hypothesis is varied over a large class of distributions (for instance over the class of distributions with Lipschitz densities or over the class of all multinomials on categories). Remarkably, as shown in the work of Valiant and Valiant [30] for the case of multinomials and as we show in this paper for the case of Lipschitz densities, there is considerable heterogeneity in the critical radius as a function of the null distribution. In other words, even within the class of Lipschitz densities, testing certain null hypotheses can be much easier than testing others. Consequently, the local minimax rate, which describes the critical radius for each individual null distribution, provides a much more nuanced picture. In this paper, we provide (near) matching upper and lower bounds on the critical radii for Lipschitz testing as a function of the null distribution, i.e. we precisely upper and lower bound the critical radius for each individual Lipschitz null hypothesis. Our upper bounds are based on χ²-type tests, performed on a carefully chosen spatially adaptive binning, and highlight the fact that the standard prescriptions of choosing bins with a fixed width [28] can yield sub-optimal tests.

The distinction between local and global perspectives is reminiscent of similar effects that arise in some estimation problems, for instance in shape-constrained inference [4], in constrained least-squares problems [6] and in classical Fisher Information-Cramér-Rao bounds [18].

The remainder of this paper is organized as follows. In Section 2 we provide some background on the minimax perspective on hypothesis testing, and formally describe the local and global minimax rates. We provide a detailed discussion of the problem of study and finally provide an overview of our main results. In Section 3 we review the results of [30] and present a new globally-minimax test for testing multinomials, as well as a (nearly) locally-minimax test. In Section 4 we consider the problem of testing a Lipschitz density against a total variation neighbourhood. We present the body of our main technical result in Section 4.3 and defer technical aspects of this proof to the Appendix. In each of Sections 3 and 4 we present simulation results that demonstrate the superiority of the tests we propose and their potential practical applicability. In the Appendix, we also present several other results including a brief study of limiting distributions of the test statistics under the null, as well as tests that are adaptive to various parameters.

2 Background and Problem Setup

We begin with some basic background on hypothesis testing, the testing risk and minimax rates, before providing a detailed treatment of some related work.

2.1 Hypothesis testing and minimax rates

Our focus in this paper is on the one-sample goodness-of-fit testing problem. We observe samples which are independent and identically distributed with distribution P. In this context, for a fixed distribution P_0, we want to test the hypotheses:

(2)

Throughout this paper we use P_0 to denote the null distribution and P to denote an arbitrary alternate distribution. We use the total variation distance (or equivalently the ℓ1 distance) between two distributions P_0 and P, defined by

$$\mathrm{TV}(P_0, P) = \sup_{A} \left| P_0(A) - P(A) \right|, \tag{3}$$

where the supremum is over all measurable sets. If P_0 and P have densities p_0 and p with respect to a common dominating measure μ, then

$$\mathrm{TV}(P_0, P) = \frac{1}{2}\int |p_0 - p| \, d\mu = \frac{1}{2}\|p_0 - p\|_1. \tag{4}$$
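As a concrete illustration of (3) and (4) for discrete distributions, the following is a minimal Python sketch (assuming NumPy; the function name is ours).

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete distributions.

    Uses the L1 formula TV(P, Q) = (1/2) * sum_i |p_i - q_i|, which for a
    finite domain coincides with the supremum over sets in (3).
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Example: TV between a fair and a biased three-sided die.
print(tv_distance([1/3, 1/3, 1/3], [0.5, 0.3, 0.2]))  # approximately 0.167
```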

We consider the total variation distance because it has a clear probabilistic meaning and because it is invariant under one-to-one transformations [8]. The ℓ2 metric is often easier to work with, but in the context of distribution testing its interpretation is less intuitive. Of course, other metrics (for instance Hellinger or Kullback-Leibler) can be used as well, but we focus on TV (or ℓ1) throughout this paper. It is well understood [2, 17] that without further restrictions there are no uniformly consistent tests for distinguishing these hypotheses. Consequently, we focus on two restricted variants of this problem:

  1. Multinomial testing: In the multinomial testing problem, the domain of the distributions is a finite set of categories, and the null and alternate distributions are equivalently characterized by vectors of category probabilities. Formally, we define the relevant class of multinomials and consider the multinomial testing problem of distinguishing:

    (5)

    In contrast to classical “fixed-cells” asymptotic theory [25], we focus on high-dimensional multinomials where the number of categories can grow with, and potentially exceed, the sample size.

  2. Lipschitz testing: In the Lipschitz density testing problem, we restrict our attention to distributions with Lipschitz densities, i.e. letting p_0 and p denote the densities of the null and alternate with respect to the Lebesgue measure, we consider the set of densities:

    and consider the Lipschitz testing problem of distinguishing:

    (6)

    We emphasize that, unlike prior work [15, 1, 12], we do not require the null to be uniform. We also do not restrict the domain of the densities, and we consider the low-smoothness regime where the Lipschitz parameter is allowed to grow with the sample size.

Hypothesis testing and risk. Returning to the setting described in (2), we define a test as a Borel measurable map from the data to {0, 1}. For a fixed null distribution, we define the set of level-α tests:

(7)

The worst-case risk (Type II error) of a test, over a restricted class of distributions which contains the null, is defined in the natural way. The local minimax risk is (although our proofs are explicit in their dependence on the level α, we suppress this dependence in our notation and in our main results, treating α as a fixed strictly positive universal constant):

(8)

It is common to study the minimax risk via a coarse lens by studying instead the critical radius or the minimax separation. The critical radius is the smallest value for which a hypothesis test has non-trivial power to distinguish the null from the set of alternatives. Formally, we define the local critical radius as:

(9)

The constant 1/2 is arbitrary; we could use any number in (0, 1).

The local minimax risk and critical radius depend on the null distribution . A more common quantity of interest is the global minimax risk

(10)

The corresponding global critical radius is

(11)

In typical non-parametric problems, the local minimax risk and the global minimax risk match up to constants, and this has led researchers in past work to focus on the global minimax risk. We show that for the distribution testing problems we consider, the local critical radius in (9) can vary considerably as a function of the null distribution. As a result, the global critical radius provides only a partial understanding of the intrinsic difficulty of this family of hypothesis testing problems. In this paper, we focus on producing tight bounds on the local minimax separation. These bounds yield, as a simple corollary, sharp bounds on the global minimax separation, but are in general considerably more refined.

Poissonization: In constructing upper bounds on the minimax risk, we work under a simplifying assumption that the sample size is random, drawn as Poisson(n). This assumption is standard in the literature [30, 1], and simplifies several calculations. When the sample size is distributed as Poisson(n), it is straightforward to verify that for any fixed set, the numbers of samples falling in the set and in its complement are distributed independently as Poisson random variables with the corresponding means.

In the Poissonized setting, we consider the averaged minimax risk, where we additionally average the risk in (8) over the random sample size. The Poisson distribution is tightly concentrated around its mean, so this additional averaging only affects constant factors in the minimax risk, and we ignore it in the rest of the paper.
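For illustration, the following minimal Python sketch (assuming NumPy; variable names are ours) shows the Poissonized sampling model and the equivalent independent-Poisson-counts description.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p0 = 10_000, np.array([0.5, 0.3, 0.2])

# Poissonized sampling: first draw a random sample size N ~ Poisson(n),
# then draw N i.i.d. samples from the multinomial p0.
N = rng.poisson(n)
counts = rng.multinomial(N, p0)

# Equivalent model: the category counts are independent Poisson(n * p0_i).
counts_direct = rng.poisson(n * p0)
```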

2.2 Overview of our results

With the basic framework in place we now provide a high-level overview of the main results of this paper. In the context of testing multinomials, the results of [30] characterize the local and global minimax rates. We provide the following additional results:

  • In Theorem 2 we characterize a simple and practical globally minimax test. In Theorem 4, building on the results of [10], we provide a simple (near) locally minimax test.

In the context of testing Lipschitz densities we make advances over classical results [14, 12] by eliminating several unnecessary assumptions (uniform null, bounded support, fixed Lipschitz parameter). We provide the first characterization of the local minimax rate for this problem. In studying the Lipschitz testing problem in its full generality we find that the critical testing radius can exhibit a wide range of possible behaviours, based roughly on the tail behaviour of the null hypothesis.

  • In Theorem 5 we provide a characterization of the local minimax rate for Lipschitz density testing. In Section 4.1, we consider a variety of concrete examples that demonstrate the rich scaling behaviour exhibited by the critical radius in this problem.

  • Our upper and lower bounds are based on a novel spatially adaptive partitioning scheme. We describe this scheme and derive some of its useful properties in Section 4.2.

In the Supplementary Material we provide the technical details of the proofs. We briefly consider the limiting behaviour of our test statistics under the null in Appendix A. Our results show that the critical radius is determined by a certain functional of the null hypothesis. In Appendix D we study certain important properties of this functional pertaining to its stability. Finally, we study tests which are adaptive to various parameters in Appendix F.

3 Testing high-dimensional multinomials

Given a sample, define the counts of the number of samples falling in each category. The local minimax critical radii for the multinomial problem were characterized by Valiant and Valiant [30]. We begin by summarizing these results.

Without loss of generality we assume that the entries of the null multinomial are sorted so that . For any , we denote the -tail of the multinomial by:

(12)

The -bulk is defined to be

(13)

Note that is excluded from the -bulk. The minimax rate depends on the functional:

(14)

For a given multinomial, our goal is to upper and lower bound the local critical radius in (9). We define two critical radii as the solutions to the following equations (these equations always have a unique solution, since the right-hand side monotonically decreases while the left-hand side monotonically increases from 0 to 1):

(15)

With these definitions in place, we are now ready to state the result of [30]. We use to denote positive universal constants.

Theorem 1 ([30]).

The local critical radius for multinomial testing is upper and lower bounded as:

(16)

Furthermore, the global critical radius is bounded as:

Remarks:

  • The local critical radius is roughly determined by the (truncated) 2/3-rd norm of the multinomial . This norm is maximized when is uniform and is small when is sparse, and at a high-level captures the “effective sparsity” of .

  • The global critical radius can shrink to zero even when the number of categories far exceeds the sample size. In this regime almost all categories of the multinomial are unobserved, but it is still possible to reliably distinguish the null from an ℓ1-neighborhood. This phenomenon is noted for instance in the work of [23]. We also note the work of Barron [2], which shows that, under related conditions, no test can have power that approaches 1 at an exponential rate.

  • The local critical radius can be much smaller than the global minimax radius. If the multinomial is nearly (or exactly) -sparse then the critical radius is upper and lower bounded up to constants by . Furthermore, these results also show that it is possible to design consistent tests for sufficiently structured null hypotheses: in cases when and even in cases when is infinite.

  • Except for certain pathological multinomials, the upper and lower critical radii match up to constants. We revisit this issue in Appendix D, in the context of Lipschitz densities, where we present examples where the solutions to critical equations similar to (15) are stable and examples where they are unstable.

In the remainder of this section we consider a variety of tests, including the test presented in [30] and several alternatives. The test of [30] is a composite test that requires knowledge of and the analysis of their test is quite intricate. We present an alternative, simple test that is globally minimax, and then present an alternative composite test that is locally minimax but simpler to analyze. Finally, we present a few illustrative simulations.

3.1 The truncated test

We begin with a simple globally minimax test. From a practical standpoint, the most popular test for multinomials is Pearson's χ²-test. However, in the high-dimensional regime where the dimension of the multinomial is not treated as fixed, the χ²-test can have poor power, because the variance of the χ² statistic is dominated by the small entries of the multinomial (see [30, 20]).

A natural thought then is to truncate the normalization factors of the χ² statistic in order to limit the contribution to the variance from each cell of the multinomial. Recalling that the observed counts are as defined above, we propose the test statistic:

(17)

and the corresponding test,

(18)

This test statistic truncates the usual χ² normalization factor for any entry of the null which falls below the truncation level, and thus ensures that very small entries of the null do not have a large effect on the variance of the statistic. We emphasize the simplicity and practicality of this test. We have the following result, which bounds the power and size of the truncated test; in its statement we use a generic positive universal constant.

Theorem 2.

Consider the testing problem in (5). The truncated test has size at most the nominal level. Furthermore, there is a universal constant such that if for any alternative we have that,

(19)

then the Type II error of the test is bounded by the prescribed level.

Remarks:

  • A straightforward consequence of this result together with the result in Theorem 1 is that the truncated test is globally minimax optimal.

  • The classical χ² and likelihood ratio tests are not generally consistent (and thus not globally minimax optimal) in the high-dimensional regime (see also Figure 2).

  • At a high-level the proof follows by verifying that when the alternate hypothesis is true, under the condition on the critical radius in (19), the test statistic is larger than the threshold in (18). To verify this, we lower bound the mean and upper bound the variance of the test statistic under the alternate and then use standard concentration results. We defer the details to the Supplementary Material (Appendix B).
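To make the truncated statistic in (17) concrete, here is a minimal Python sketch. It assumes the normalization is truncated at 1/d and uses a centered χ²-type numerator; the exact truncation level, centering, and rejection threshold in (17)-(18) may differ from what is shown.

```python
import numpy as np

def truncated_chi2_stat(counts, null_probs):
    """Truncated chi-squared-type statistic (sketch).

    Assumes the per-category normalization max(theta_i, 1/d) and the
    centered numerator (X_i - n*theta_i)^2 - X_i, which has mean zero
    under the null in the Poissonized model.  The constants used in the
    paper's statistic (17) may differ.
    """
    counts = np.asarray(counts, dtype=float)
    theta = np.asarray(null_probs, dtype=float)
    n, d = counts.sum(), len(theta)
    denom = np.maximum(theta, 1.0 / d)  # truncate very small null entries
    return np.sum(((counts - n * theta) ** 2 - counts) / denom)
```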

3.2 The 2/3-rd + tail test

The truncated test described in the previous section, although globally minimax, is not locally adaptive. The test from [30] achieves the local minimax upper bound in Theorem 1. We refer to it as the 2/3-rd + tail test. We use a slightly modified version of their test when testing Lipschitz goodness-of-fit in Section 4, and provide a description here.

The test is a composite two-stage test, and has a tuning parameter . Recalling the definitions of and (see (12)), we define two test statistics and corresponding test thresholds :

We define two tests:

  1. The tail test:

  2. The 2/3-test: .

The composite test is then given as:

(20)

With these definitions in place, the following result is essentially from the work of [30]. We use to denote a positive universal constant.

Theorem 3.

Consider the testing problem in (5). The composite test has size at most the nominal level. Furthermore, if we choose the tuning parameter and critical radius as in (15), then for any alternative, if

(21)

then the Type II error of the test is bounded by the prescribed level.

Remarks:

  • The test is also motivated by deficiencies of the χ²-test. In particular, it includes two main modifications which limit the contribution of the small entries of the null: some of the small entries are dealt with via a separate tail test, and the normalization of the χ² statistic is changed from the null probabilities to their 2/3-rd power.

  • This result provides the upper bound of Theorem 1. It requires that the tuning parameter is chosen as . In the Supplementary Material (Appendix F) we discuss adaptive choices for .

  • The proof essentially follows from the paper of [30], but we maintain an explicit bound on the power and size of the test, which we use in later sections. We provide the details in Appendix B.
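For intuition, here is a minimal Python sketch of the two statistics behind this test. It assumes the tail collects the smallest-probability entries of the null holding a prescribed fraction of the mass, and that the bulk statistic is normalized by the 2/3-rd power of the null probabilities; the exact definition of the tail in (12), the centering, and the thresholds differ in their constants.

```python
import numpy as np

def two_thirds_plus_tail_stats(counts, null_probs, tail_mass=0.1):
    """Sketch of the tail statistic and the 2/3-normalized bulk statistic."""
    counts = np.asarray(counts, dtype=float)
    p = np.asarray(null_probs, dtype=float)
    order = np.argsort(-p)                    # sort entries: largest first
    p, counts = p[order], counts[order]
    n = counts.sum()
    bulk = np.cumsum(p) <= 1.0 - tail_mass    # high-probability entries
    tail = ~bulk
    # Tail statistic: excess number of samples landing in the tail set.
    t_tail = counts[tail].sum() - n * p[tail].sum()
    # Bulk statistic: centered chi-squared-type terms with p^(2/3) weights.
    t_23 = np.sum(((counts[bulk] - n * p[bulk]) ** 2 - counts[bulk])
                  / p[bulk] ** (2.0 / 3.0))
    return t_tail, t_23
```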

While the 2/3-rd norm test is locally minimax optimal, its analysis is quite challenging. In the next section, we build on results from a recent paper of Diakonikolas and Kane [10] to provide an alternative (nearly) locally minimax test with a simpler analysis.

3.3 The Max Test

An important insight, one that is seen for instance in Figure 1, is that many simple tests are optimal when the null is uniform, and that careful modifications to the χ²-test are required only when the null is far from uniform. This suggests the following strategy: partition the multinomial into nearly uniform groups, apply a simple test within each group, and combine the tests with an appropriate Bonferroni correction. We refer to this as the max test. Such a strategy was used by Diakonikolas and Kane [10], but their test is quite complicated and involves many constants. Furthermore, it is not clear how to ensure that their test controls the Type I error at the desired level. Motivated by their approach, we present a simple test that controls the Type I error as required and is (nearly) locally minimax.

As with the test in the previous section, the test has to be combined with the tail test. The test is defined to be

where is defined as follows. We partition into sets for , where

We can bound the total number of sets by noting that every entry under consideration has probability bounded below, so that the number of sets grows at most logarithmically. Within each set we use a modified χ² statistic. Let

(22)

for each group. Unlike the traditional χ² statistic, each term in this statistic is re-centered so that it has mean zero under the null. As observed in [30], this results in the statistic having smaller variance when the expected counts are small. Let

(23)

where

(24)
Theorem 4.

Consider the testing problem in (5). Suppose we choose the thresholds as in (23)-(24); then the composite test has size at most the nominal level. Furthermore, there is a universal constant such that, for the critical radius as in (15), if for any alternative we have that,

(25)

then the Type II error of the test is bounded by the prescribed level.

Remarks:

  • Comparing the critical radii in Equations (25) and (16), we conclude that the max test is locally minimax optimal, up to a logarithmic factor.

  • In contrast to the analysis of the 2/3-rd + tail test in [30], the analysis of the max test involves mostly elementary calculations. We provide the details in Appendix B. As emphasized in the work of [10], the reduction of testing problems to simpler testing problems (in this case, testing uniformity) is a more broadly useful idea. Our upper bound for the Lipschitz testing problem (in Section 4) proceeds by reducing it to a multinomial testing problem through a spatially adaptive binning scheme.
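To convey the structure of the max test concretely, here is a minimal Python sketch: group the entries of the null so that probabilities within a group differ by at most a factor of two, compute a centered χ²-type statistic within each group, standardize it by an estimate of its null standard deviation, and take the maximum over groups. The grouping rule, centering, and standardization below are plausible stand-ins for the exact quantities in (22)-(24); the Bonferroni-corrected threshold and the accompanying tail test are omitted.

```python
import numpy as np

def max_test_stat(counts, null_probs):
    """Maximum over groups of standardized, centered chi-squared-type
    statistics (sketch).  Assumes all null probabilities are positive."""
    counts = np.asarray(counts, dtype=float)
    p = np.asarray(null_probs, dtype=float)
    n = counts.sum()
    # Group j collects indices with p_i in (2^{-j}, 2^{-(j-1)}].
    groups = np.ceil(-np.log2(p)).astype(int)
    stats = []
    for j in np.unique(groups):
        idx = groups == j
        # Centered term (X_i - n p_i)^2 - X_i has mean 0 and variance
        # 2 (n p_i)^2 under the null in the Poissonized model.
        t_j = np.sum((counts[idx] - n * p[idx]) ** 2 - counts[idx])
        sd_j = np.sqrt(2.0 * np.sum((n * p[idx]) ** 2))
        stats.append(t_j / sd_j)
    return max(stats)
```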

3.4 Simulations

In this section, we report some simulation results that demonstrate the practicality of the proposed tests. We focus on two simulation scenarios and compare the globally minimax truncated test and the 2/3-rd + tail test [30] with the classical χ²-test and the likelihood ratio test. The χ² statistic is,

and the likelihood ratio test statistic is
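For concreteness, the following minimal Python sketch gives the standard forms of these two statistics (with the convention 0·log 0 = 0; function names are ours).

```python
import numpy as np

def pearson_chi2(counts, null_probs):
    """Classical Pearson chi-squared statistic:
    sum_i (X_i - n*theta_i)^2 / (n*theta_i)."""
    counts = np.asarray(counts, dtype=float)
    theta = np.asarray(null_probs, dtype=float)
    n = counts.sum()
    return np.sum((counts - n * theta) ** 2 / (n * theta))

def likelihood_ratio(counts, null_probs):
    """Likelihood ratio statistic: 2 * sum_i X_i * log(X_i / (n*theta_i)),
    with the convention that terms with X_i = 0 contribute zero."""
    counts = np.asarray(counts, dtype=float)
    theta = np.asarray(null_probs, dtype=float)
    n = counts.sum()
    nonzero = counts > 0
    return 2.0 * np.sum(counts[nonzero]
                        * np.log(counts[nonzero] / (n * theta[nonzero])))
```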

In Appendix G, we consider a few additional simulations as well as a comparison with statistics based on the and distances.

In each setting described below, we set the threshold for the desired level via simulation (by sampling from the null 1000 times) and we calculate the power under particular alternatives by averaging over 1000 trials.
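A minimal sketch of this calibration and power-estimation procedure (in Python with NumPy; `stat_fn` can be any of the statistics sketched above) might look as follows.

```python
import numpy as np

def calibrate_threshold(stat_fn, null_probs, n, alpha=0.05, reps=1000, seed=0):
    """Monte Carlo threshold: the (1 - alpha) quantile of the statistic
    computed on `reps` samples drawn from the null."""
    rng = np.random.default_rng(seed)
    null_stats = [stat_fn(rng.multinomial(n, null_probs), null_probs)
                  for _ in range(reps)]
    return np.quantile(null_stats, 1.0 - alpha)

def estimate_power(stat_fn, null_probs, alt_probs, n, threshold, reps=1000, seed=1):
    """Fraction of samples drawn from the alternative whose statistic
    exceeds the calibrated threshold."""
    rng = np.random.default_rng(seed)
    rejections = [stat_fn(rng.multinomial(n, alt_probs), null_probs) > threshold
                  for _ in range(reps)]
    return float(np.mean(rejections))
```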

     
Figure 1: A comparison between the truncated test, the 2/3-rd + tail test [30], the χ²-test and the likelihood ratio test. The null is chosen to be uniform, and the alternate is either a dense or sparse perturbation of the null. The power of the tests is plotted against the distance between the null and alternate. Each point in the graph is an average over 1000 trials. Despite the high dimensionality, the tests have high power and perform comparably.
  1. Figure 1 considers a high-dimensional setting where , the null distribution is uniform, and the alternate is either dense (perturbing each coordinate by a scaled Rademacher) or sparse (perturbing only two coordinates).

    In each case we observe that all the tests perform comparably, indicating that a variety of tests are optimal around the uniform distribution, a fact that we exploit in designing the max test. The test from [30] performs slightly worse than the others due to the Bonferroni correction from applying a two-stage test.

  2. Figure 2 considers a power-law null where . Again we take , and compare against dense and sparse alternatives. In this setting, we choose the sparse alternative to perturb only the first two coordinates of the distribution.

    We observe two notable effects. First, we see that when the alternate is dense, the truncated test, although consistent in the high-dimensional regime, is outperformed by the other tests, highlighting the need to study the local minimax properties of tests. Perhaps more surprising is that when the alternate is sparse, the classical χ² and likelihood ratio tests can fail dramatically.

The locally minimax test is remarkably robust across simulation settings. However, it requires that we specify , a drawback shared by the max test. In Appendix F we provide adaptive alternatives that adapt to unknown .

     
Figure 2: A comparison between the truncated test, the 2/3-rd + tail test [30], the χ²-test and the likelihood ratio test. The null is chosen to be a power law, and the alternate is either a dense or sparse perturbation of the null. The power of the tests is plotted against the distance between the null and alternate. Each point in the graph is an average over 1000 trials. The truncated test, despite being globally minimax optimal, can perform poorly for a particular fixed null. The χ² and likelihood ratio tests can fail to be consistent even when the separation is quite large.

4 Testing Lipschitz Densities

In this section, we focus our attention on the Lipschitz testing problem (6). As is standard in non-parametric problems, throughout this section we treat the dimension as a fixed (universal) constant. Our emphasis is on understanding the local critical radius while making minimal assumptions. In contrast to past work, we do not assume that the null is uniform or even that its support is compact. We would like to be able to detect more subtle deviations from the null as the sample size gets large, and hence we do not assume that the Lipschitz parameter is fixed as the sample size grows.

The classical method for constructing lower and upper bounds on the critical radius, due to Ingster [15, 16], is based on binning the domain of the density. In particular, upper bounds were obtained by considering χ² tests applied to the multinomial that results from binning the null distribution. Ingster focused on the case when the null distribution is uniform, noting that the testing problem for a general null distribution could be “reduced” to testing uniformity by modifying the observations via the quantile transformation corresponding to the null distribution (see also [12]). We emphasize that such a reduction alters the smoothness class, tailoring it to the null distribution. The quantile transformation forces the deviations from the null distribution to be more smooth in regions where the null density is small and less smooth where it is large, i.e. we need to re-interpret smoothness of the alternative density as an assumption about its composition with the quantile function of the null distribution. We find this assumption to be unnatural and instead aim to directly test the hypotheses in (6).

We begin with some high-level intuition for our upper and lower bounds.

  • Upper bounding the critical radius: The strategy of binning the domain of the null, and then testing the resulting multinomial against an appropriate neighborhood using a locally minimax test, is natural even when the null is not uniform. However, there is considerable flexibility in how precisely to bin the domain. Essentially, the only constraint in the choice of bin-widths is that the approximation error (of approximating the density by its piecewise constant, histogram approximation) is sufficiently well controlled. When the null is not uniform the choice of fixed bin-widths is arbitrary and, as we will see, sub-optimal. Much of the technical effort in constructing our optimal tests is then in determining the optimal inhomogeneous, spatially adaptive partition of the domain in order to apply a multinomial test.

  • Lower bounding the critical radius: At a high level, the construction of Ingster is similar to standard lower bounds in non-parametric problems. Roughly, we create a collection of possible alternate densities by evenly partitioning the domain of the null, and then perturbing each cell of the partition by adding or subtracting a small (sufficiently smooth) bump. We then analyze the optimal likelihood ratio test for the (simple versus simple) testing problem of distinguishing the null from a uniform mixture of the set of possible alternate densities. We observe that when the null is not uniform, once again a fixed bin-width partition is not optimal. The optimal choice is to use larger bin-widths where the null density is large and smaller bin-widths where it is small. Intuitively, this choice allows us to perturb the null distribution more where the density is large, without violating the constraint that the alternate distribution remain sufficiently smooth. Once again, we create an inhomogeneous, spatially adaptive partition of the domain, and then use this partition to construct the optimal perturbation of the null (a schematic version of this perturbation construction is sketched below).
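The following Python sketch gives a schematic one-dimensional version of the perturbation construction, assuming an even partition of [0, 1] and a sine bump on each cell; the paper's construction instead uses the spatially adaptive partition described below and carefully scaled smooth bumps.

```python
import numpy as np

def perturbed_density(p0, n_cells=32, amplitude=0.05, seed=0):
    """Build one alternative density on [0, 1] by adding a zero-mean bump
    with a random sign to each cell of an even partition (schematic).

    Each sine bump integrates to zero over its cell, so the perturbation
    preserves total mass; `amplitude` must be small enough to keep the
    resulting density non-negative and sufficiently smooth.
    """
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n_cells)

    def p_alt(x):
        x = np.asarray(x, dtype=float)
        j = np.clip((x * n_cells).astype(int), 0, n_cells - 1)  # cell index
        u = x * n_cells - j                                      # position within cell
        return p0(x) + amplitude * signs[j] * np.sin(2.0 * np.pi * u)

    return p_alt

# Example: a perturbation of the uniform density on [0, 1].
p_alt = perturbed_density(lambda x: np.ones_like(np.asarray(x, dtype=float)))
```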

Define,

(26)

and, for a given threshold, denote the collection of sets of probability mass at least that threshold accordingly. Define the functional,

(27)

We refer to this as the truncated -functional (although the set that achieves the minimum in its definition need not be unique, the functional itself is well-defined). The functional is the analog of the functional in (14), and roughly characterizes the local critical radius. We return to study this functional in light of several examples in Section 4.1 (and Appendix D).

In constructing lower bounds we will assume that the null density lies in the interior of the Lipschitz ball, i.e. that it satisfies the Lipschitz constraint with some constant slack. This assumption avoids certain technical issues that arise in creating perturbations of the null density when it lies on the boundary of the Lipschitz ball.

Finally, we define for two universal constants (that are explicit in our proofs) the upper and lower critical radii:

(28)

With these preliminaries in place we now state our main result on testing Lipschitz densities. We let denote two positive universal constants (different from the ones above).

Theorem 5.

The local critical radius for testing Lipschitz densities is upper bounded as:

(29)

Furthermore, if the null density lies in the interior of the Lipschitz ball as described above, then the critical radius is lower bounded as

(30)

Remarks:

  • A natural question of interest is to understand the worst-case rate for the critical radius, or equivalently to understand the largest that the -functional can be. Since the -functional can be infinite if the support is unrestricted, we restrict our attention to Lipschitz densities with a bounded support . In this case, letting denote the Lebesgue measure of and using Hölder’s inequality (see Appendix D) we have that for any ,

    (31)

    Up to constants involving this is attained when is uniform on the set . In other words, the critical radius is maximal for testing the uniform density against a Lipschitz, neighborhood. In this case, we simply recover a generalization of the result of [15] for testing when is uniform on .

  • The main discrepancy between the upper and lower bounds is in the truncation level, i.e. the upper and lower bounds depend on the functional for different values of the parameter . This is identical to the situation in Theorem 1 for testing multinomials. In most non-pathological examples this functional is stable with respect to constant factor discrepancies in the truncation level and consequently our upper and lower bounds are typically tight (see the examples in Section 4.1). In the Supplementary Material (see Appendix D) we formally study the stability of the -functional. We provide general bounds and relate the stability of the -functional to the stability of the level-sets of .

The remainder of this section is organized as follows: we first consider various examples, calculate the -functional and develop the consequences of Theorem 5 for these examples. We then turn our attention to our adaptive binning, describing both a recursive partitioning algorithm for constructing it as well as developing some of its useful properties. Finally, we provide the body of our proof of Theorem 5 and defer more technical aspects to the Supplementary Material. We conclude with a few illustrative simulations.

4.1 Examples

The result in Theorem 5 provides a general characterization of the critical radius for testing any density , against a Lipschitz, neighborhood. In this section we consider several concrete examples. Although our theorem is more generally applicable, for ease of exposition we focus on the setting where , highlighting the variability of the -functional and consequently of the critical radius as the null density is changed. Our examples have straightforward -dimensional extensions.

When , we have that so the -functional is simply:

where is as before. Our interest in general is in the setting where (which happens as ), so in some examples we will simply calculate . In other examples however, the truncation at level will play a crucial role and in those cases we will compute .

Example 1 (Uniform null).

Suppose that the null distribution is uniform; then,

Example 2 (Gaussian null).

Suppose that the null distribution is a Gaussian, i.e. for some ,

In this case, a simple calculation (see Appendix C) shows that,

Example 3 (Beta null).

Suppose that the null density is a Beta distribution:

where and denote the gamma and beta functions respectively. It is easy to verify that,

To get some sense of the behaviour of this functional, we consider a special case, for which we show (see Appendix C) that,

In particular, we have that .

Remark:

  • These examples illustrate that in the simplest settings when the density is close to uniform, the -functional is roughly the effective support of . In each of these cases, it is straightforward to verify that the truncation of the -functional simply affects constants so that the critical radius scales as:

    where in each case scales as roughly the size of the -support of the density , i.e. as the Lebesgue measure of the smallest set that contains the prescribed probability mass. This motivates understanding the Lipschitz density with smallest effective support, and we consider this next.

Example 4 (Spiky null).

Suppose that the null hypothesis is:

then we have that

Remark:

  • For the spiky null distribution we obtain an extremely fast rate: the critical radius is independent of the Lipschitz parameter (although we note that the null becomes more spiky as this parameter increases). This is the fastest rate we obtain for Lipschitz testing. In settings where the tail decay is slow, the truncation of the -functional can be crucial and the rates can be much slower. We consider these examples next.

Example 5 (Cauchy distribution).

The mean-zero Cauchy distribution with a given scale parameter has pdf:

As we show (see Appendix C), the -functional without truncation is infinite, i.e. . However, the truncated -functional is finite. In the Supplementary Material we show that for any (recall that our interest is in cases where ),

i.e. we have that .

Remark:

  • When the null distribution is Cauchy as above, the rate for the critical radius is no longer the typical one, even when the other problem-specific parameters (the Lipschitz parameter and the Cauchy parameter) are held fixed; we instead obtain a slower rate. Our final example shows that we can obtain an entire spectrum of slower rates.

Example 6 (Pareto null).

For a fixed and for , suppose that the null distribution is

For appropriate parameter values this distribution has thicker tails than the Cauchy distribution. The -functional without truncation is infinite, and we can further show (see Appendix C) that:

In the regime of interest when , we have that

Remark:

  • We observe that, once again, the critical radius no longer follows the typical rate. Instead we obtain a slower rate, and indeed the rate becomes slower as the tails become heavier, indicating the difficulty of testing heavy-tailed distributions against a Lipschitz neighborhood.

We conclude this section by emphasizing the value of the local minimax perspective and of studying the goodness-of-fit problem beyond the uniform null. We are able to provide a sharp characterization of the critical radius for a broad class of interesting examples, and we obtain faster (than at uniform) rates when the null is spiky and non-standard rates in cases when the null is heavy-tailed.

4.2 A recursive partitioning scheme

At the heart of our upper and lower bounds are spatially adaptive partitions of the domain of . The partitions used in our upper and lower bounds are similar but not identical. In this section, we describe an algorithm for producing the desired partitions and then briefly describe some of the main properties of the partition that we leverage in our upper and lower bounds.

We begin by describing the desiderata for the partition from the perspective of the upper bound. Our goal is to construct a test for the hypotheses in (6), and we do so by constructing a partition (consisting of cells) of . Each cell for will be a cube, while the cell will be arbitrary but will have small total probability content. We let,

(32)

We form the multinomial corresponding to the partition where . We then test this multinomial using the counts of the number of samples falling in each cell of the partition.

Requirement 1: A basic requirement of the partition is that it must ensure that a density that is at least far away in distance from should remain roughly away from when converted to a multinomial. Formally, for any such that we require that for some small constant ,

(33)

Of course, there are several ways to ensure this condition is met. In particular, if we restrict attention to densities supported on a fixed compact set, then it suffices for instance to choose roughly even-width bins. This is precisely the partition considered in prior work [15, 16, 1]. When we do not restrict attention to compactly supported, uniform densities, an even-width partition is no longer optimal, and a careful optimization of the upper and lower bounds with respect to the partition yields the optimal choice. The optimal partition has bin-widths that grow with the local size of the null density, where the constant of proportionality is chosen to ensure that the condition in (33) is satisfied. Precisely determining the constant of proportionality turns out to be quite subtle, so we defer a discussion of this to the end of this section.

Requirement 2: A second requirement that arises in both our upper and lower bounds is that the cells of our partition (except ) are not chosen too wide. In particular, we must choose the cells small enough to ensure that the density is roughly constant on each cell, i.e. on each cell we need that for any ,

(34)

Using the Lipschitz property of , this condition is satisfied if any point is in a cell of diameter at most .

Taken together, the first two requirements suggest that we need to create a partition such that, for every point, the diameter of the cell containing that point should be roughly proportional to the null density at that point divided by the Lipschitz parameter,

where one constant is to be chosen sufficiently small, and the other is chosen to ensure that Requirement 1 is satisfied.

Algorithm 1 constructively establishes the existence of a partition satisfying these requirements. The upper and lower bounds use this algorithm with slightly different parameters. The key idea is to recursively partition cells that are too large by halving each side. This is illustrated in Figure 3. The proof of correctness of the algorithm uses the smoothness of in an essential fashion. Indeed, were the density not sufficiently smooth then such a partition would likely not exist.

In order to ensure that the algorithm has a finite termination, we choose two parameters (these are chosen sufficiently small to not affect subsequent results):

  • We restrict our attention to the effective support of the null, i.e. we define the initial cube to be the smallest cube centered at the mean of the null that captures all but a negligible fraction of the probability mass. We begin with this cube.

  • If the density in any cell is sufficiently small we do not split the cell further; rather, we add it to the left-over part of the partition. By construction, such cells have small total probability content.

For each cube for we let denote its centroid, and we let denote the number of cubes created by Algorithm 1.

  1. Input: Parameters

  2. Set and .

  3. For each cube do:

    • If

      (35)

      then remove from the partition and let .

    • If

      (36)

      then do nothing to .

    • If fails to satisfy (35) or (36), then replace it by a set of cubes obtained by halving the original along each of its axes.

  4. If no cubes are split or removed, STOP. Else go to step 3.

  5. Output: Partition .

Algorithm 1 Adaptive Partition
Figure 3: (a) A density evaluated on a grid. (b) The corresponding spatially adaptive partition produced by Algorithm 1. Cells of the partition are larger in regions where the density is higher.
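A schematic one-dimensional version of this recursive halving, with a stand-in splitting rule (keep a cell once its width is at most a constant times the density at its centroid divided by the Lipschitz parameter, and send negligible-density cells to a separate left-over bucket), might look as follows in Python; the paper's actual conditions (35)-(36) involve different constants.

```python
import numpy as np

def adaptive_partition_1d(p0, lo, hi, L, c=0.5, density_floor=1e-6, max_depth=12):
    """Recursive halving sketch of Algorithm 1 in one dimension.

    A cell is kept once its width is at most c * p0(centroid) / L (so the
    Lipschitz density is roughly constant on it); cells where the density
    is negligible are collected separately as the 'tail' of the partition.
    """
    cells, tail = [], []

    def split(a, b, depth):
        mid = 0.5 * (a + b)
        if p0(mid) <= density_floor:                  # negligible density
            tail.append((a, b))
        elif (b - a) <= c * p0(mid) / L or depth >= max_depth:
            cells.append((a, b))                      # density roughly constant
        else:                                         # halve and recurse
            split(a, mid, depth + 1)
            split(mid, b, depth + 1)

    split(float(lo), float(hi), 0)
    return cells, tail
```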

Requirement 3: The final major requirement is two-fold: (1) we require that the -norm of the density over the support of the partition should be upper bounded by the truncated -functional, and (2) that the density over the cells of the partition be sufficiently large. This necessitates a further pruning of the partition, where we order cells by their probability content and successively eliminate (adding them to ) cells of low probability until we have eliminated mass that is close to the desired truncation level. This is accomplished by Algorithm 2.

  1. Input: Unpruned partition and a target pruning level. Without loss of generality we assume

  • For any let . Let denote the smallest positive integer such that,

  • If :

    • Set and .

  • If :

    • Set and .