1 Introduction
Hypothesis testing is one of the pillars of modern mathematical statistics, with a vast array of scientific applications. There is a well-developed theory of hypothesis testing starting with the work of Neyman and Pearson [22], and their framework plays a central role in the theory and practice of statistics. In this paper we revisit the classical goodness-of-fit testing problem of distinguishing the hypotheses:
(1) $H_0 : P = P_0, \quad \text{versus} \quad H_1 : P \in \mathcal{P},$
for some set of distributions $\mathcal{P}$. This fundamental problem has been widely studied (see for instance [19] and references therein).
A natural choice of the composite alternative, one that has a clear probabilistic interpretation, excludes a total variation neighborhood around the null, i.e. we take $\mathcal{P} = \{P : \mathrm{TV}(P, P_0) \geq \epsilon_n\}$. This is equivalent to an $\ell_1$ neighborhood, and we use this representation in the rest of this paper. However, there exist no consistent tests that can distinguish an arbitrary distribution $P_0$ from alternatives separated in $\ell_1$; see [17, 2]. Hence, we impose structural restrictions on $P_0$ and $\mathcal{P}$. We focus on two cases:

Multinomial testing: When the null and alternate distributions are multinomials.

Lipschitz testing: When the null and alternate distributions have Lipschitz densities.
The problem of goodness-of-fit testing for multinomials has a rich history in statistics, and popular approaches are based on the $\chi^2$ test [24] or the likelihood ratio test [32, 5, 22]; see, for instance, [11, 21, 9, 23, 25] and references therein. Motivated by connections to property testing [26], there is also a recent literature developing in computer science; see [13, 30, 3, 10]. Testing Lipschitz densities is one of the basic nonparametric hypothesis testing problems, and tests are often based on the Kolmogorov–Smirnov or Cramér–von Mises statistics [27, 7, 31]. This problem was originally studied from the minimax perspective in the work of Ingster [15, 14]. See [14, 12, 1] for further references.
In the goodness-of-fit testing problem in (1), previous results use the (global) critical radius as a benchmark. Roughly, this global critical radius is a measure of the minimal separation between the null and alternate hypotheses that ensures distinguishability, as the null hypothesis is varied over a large class of distributions (for instance over the class of distributions with Lipschitz densities, or over the class of all multinomials on $d$ categories). Remarkably, as shown in the work of Valiant and Valiant [30] for the case of multinomials, and as we show in this paper for the case of Lipschitz densities, there is considerable heterogeneity in the critical radius as a function of the null distribution $P_0$. In other words, even within the class of Lipschitz densities, testing certain null hypotheses can be much easier than testing others. Consequently, the local minimax rate, which describes the critical radius for each individual null distribution, provides a much more nuanced picture. In this paper, we provide (near) matching upper and lower bounds on the critical radii for Lipschitz testing as a function of the null distribution, i.e. we precisely upper and lower bound the critical radius for each individual Lipschitz null hypothesis. Our upper bounds are based on $\chi^2$-type tests, performed on a carefully chosen spatially adaptive binning, and highlight the fact that the standard prescription of choosing bins of fixed width [28] can yield suboptimal tests.
The distinction between local and global perspectives is reminiscent of similar effects that arise in some estimation problems, for instance in shape-constrained inference [4], in constrained least-squares problems [6] and in classical Fisher information–Cramér–Rao bounds [18].
The remainder of this paper is organized as follows. In Section 2 we provide some background on the minimax perspective on hypothesis testing, and formally describe the local and global minimax rates. We provide a detailed discussion of the problem of study and finally provide an overview of our main results. In Section 3 we review the results of [30] and present a new globally minimax test for testing multinomials, as well as a (nearly) locally minimax test. In Section 4 we consider the problem of testing a Lipschitz density against a total variation neighbourhood. We present the body of our main technical result in Section 4.3 and defer technical aspects of this proof to the Appendix. In each of Sections 3 and 4 we present simulation results that demonstrate the superiority of the tests we propose and their potential practical applicability. In the Appendix, we also present several other results, including a brief study of the limiting distributions of the test statistics under the null, as well as tests that are adaptive to various parameters.
2 Background and Problem Setup
We begin with some basic background on hypothesis testing, the testing risk and minimax rates, before providing a detailed treatment of some related work.
2.1 Hypothesis testing and minimax rates
Our focus in this paper is on the one-sample goodness-of-fit testing problem. We observe samples $X_1, \ldots, X_n$, which are independent and identically distributed with distribution $P$. In this context, for a fixed distribution $P_0$, we want to test the hypotheses:
(2) $H_0 : P = P_0, \quad \text{versus} \quad H_1 : \|P - P_0\|_1 \geq \epsilon_n.$
Throughout this paper we use $P_0$ to denote the null distribution and $P$ to denote an arbitrary alternate distribution. Throughout the paper, we use the total variation distance (or equivalently the $\ell_1$ distance) between two distributions $P$ and $Q$, defined by
(3) $\mathrm{TV}(P, Q) = \sup_{A} |P(A) - Q(A)|,$
where the supremum is over all measurable sets $A$. If $P$ and $Q$ have densities $p$ and $q$ with respect to a common dominating measure $\nu$, then
(4) $\mathrm{TV}(P, Q) = \frac{1}{2} \int |p - q| \, d\nu = \frac{1}{2} \|p - q\|_1.$
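The equivalence between (3) and (4) is easy to verify numerically for discrete distributions. The following sketch (ours, not from the paper) computes the distance both ways; for discrete distributions the supremum in (3) is attained by the set $A = \{i : p_i > q_i\}$:

```python
import numpy as np

def tv_sup(p, q):
    """TV via the supremum in (3): for discrete distributions the supremum
    is attained by the set A = {i : p_i > q_i}."""
    diff = p - q
    return diff[diff > 0].sum()

def tv_l1(p, q):
    """TV as half the l1 distance between probability vectors, as in (4)."""
    return 0.5 * np.abs(p - q).sum()

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.4, 0.4])
# Both representations give the same value (here, 0.3).
```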
We consider the total variation distance because it has a clear probabilistic meaning and because it is invariant under one-to-one transformations [8]. The $\ell_2$ metric is often easier to work with, but in the context of distribution testing its interpretation is less intuitive. Of course, other metrics (for instance Hellinger or Kullback–Leibler) can be used as well, but we focus on TV (or $\ell_1$) throughout this paper. It is well-understood [2, 17] that without further restrictions there are no uniformly consistent tests for distinguishing these hypotheses. Consequently, we focus on two restricted variants of this problem:

Multinomial testing: In the multinomial testing problem, the domain of the distributions is $\mathcal{X} = \{1, \ldots, d\}$, and the distributions $P_0$ and $P$ are equivalently characterized by vectors $p_0, p \in \mathbb{R}^d$. Formally, we define the set of all multinomials on $d$ categories, and consider the multinomial testing problem of distinguishing:
(5) In contrast to classical “fixed-cells” asymptotic theory [25], we focus on high-dimensional multinomials where the dimension $d$ can grow with, and potentially exceed, the sample size $n$.

Lipschitz testing: In the Lipschitz density testing problem the set $\mathcal{X} = \mathbb{R}^d$, and we restrict our attention to distributions with Lipschitz densities, i.e. letting $p_0$ and $p$ denote the densities of $P_0$ and $P$ with respect to the Lebesgue measure, we consider the set of Lipschitz densities:
and consider the Lipschitz testing problem of distinguishing:
(6) We emphasize that, unlike prior work [15, 1, 12], we do not require $p_0$ to be uniform. We also do not restrict the domain of the densities, and we consider the low-smoothness regime where the Lipschitz parameter $L_n$ is allowed to grow with the sample size.
Hypothesis testing and risk. Returning to the setting described in (2), we define a test $\phi$ as a Borel measurable map of the samples to $\{0, 1\}$. For a fixed null distribution $P_0$, we define the set of level $\alpha$ tests:
(7) 
The worst-case risk (Type II error) of a test $\phi$ over a restricted class $\mathcal{P}$ which contains $P_0$ is its maximal probability of failing to reject, taken over alternates in $\mathcal{P}$ that are $\epsilon_n$-separated from $P_0$. The local minimax risk (although our proofs are explicit in their dependence on $\alpha$, we suppress this dependence in our notation and in our main results, treating $\alpha$ as a fixed strictly positive universal constant) is:
(8) 
It is common to study the minimax risk via a coarse lens by studying instead the critical radius, or minimax separation. The critical radius is the smallest separation at which some hypothesis test has non-trivial power to distinguish $P_0$ from the set of alternatives. Formally, we define the local critical radius as:
(9) 
The constant 1/2 is arbitrary; we could use any constant in $(0, 1)$.
The local minimax risk and critical radius depend on the null distribution $P_0$. A more common quantity of interest is the global minimax risk:
(10) 
The corresponding global critical radius is
(11) 
In typical nonparametric problems, the local minimax risk and the global minimax risk match up to constants, and this has led researchers in past work to focus on the global minimax risk. We show that for the distribution testing problems we consider, the local critical radius in (9) can vary considerably as a function of the null distribution $P_0$. As a result, the global critical radius provides only a partial understanding of the intrinsic difficulty of this family of hypothesis testing problems. In this paper, we focus on producing tight bounds on the local minimax separation. These bounds yield, as a simple corollary, sharp bounds on the global minimax separation, but are in general considerably more refined.
Poissonization: In constructing upper bounds on the minimax risk, we work under a simplifying assumption that the sample size is random: $N \sim \text{Poisson}(n)$. This assumption is standard in the literature [30, 1], and simplifies several calculations. When the sample size is chosen to be distributed as $\text{Poisson}(n)$, it is straightforward to verify that for any fixed set $A$, under $P$ the number of samples falling in $A$ and in its complement are distributed independently as $\text{Poisson}(nP(A))$ and $\text{Poisson}(nP(A^c))$ respectively.
In the Poissonized setting, we consider the averaged minimax risk, where we additionally average the risk in (8) over the random sample size. The Poisson distribution is tightly concentrated around its mean, so this additional averaging affects only constant factors in the minimax risk, and we ignore this averaging in the rest of the paper.
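The independence property that Poissonization buys can be checked empirically. The following sketch (ours) draws a Poisson-distributed sample size, tabulates category counts, and verifies that each count behaves like an independent Poisson variable (mean approximately equal to variance, and negligible cross-correlation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, np.array([0.5, 0.3, 0.2])

def poissonized_counts(rng, n, p, trials):
    """Draw N ~ Poisson(n), then N samples from p; return category counts."""
    counts = np.empty((trials, len(p)), dtype=int)
    for t in range(trials):
        N = rng.poisson(n)
        draws = rng.choice(len(p), size=N, p=p)
        counts[t] = np.bincount(draws, minlength=len(p))
    return counts

C = poissonized_counts(rng, n, p, trials=2000)
means, variances = C.mean(axis=0), C.var(axis=0)
# Each column behaves like Poisson(n * p_i): mean approximately equals
# variance. Under multinomial sampling with fixed n the counts would be
# negatively correlated; under Poissonization they are independent.
corr = np.corrcoef(C[:, 0], C[:, 1])[0, 1]
```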
2.2 Overview of our results
With the basic framework in place, we now provide a high-level overview of the main results of this paper. In the context of testing multinomials, the results of [30] characterize the local and global minimax rates. We provide the following additional results:
In the context of testing Lipschitz densities we make advances over classical results [14, 12] by eliminating several unnecessary assumptions (uniform null, bounded support, fixed Lipschitz parameter). We provide the first characterization of the local minimax rate for this problem. In studying the Lipschitz testing problem in its full generality we find that the critical testing radius can exhibit a wide range of possible behaviours, based roughly on the tail behaviour of the null hypothesis.

Our upper and lower bounds are based on a novel spatially adaptive partitioning scheme. We describe this scheme and derive some of its useful properties in Section 4.2.
In the Supplementary Material we provide the technical details of the proofs. We briefly consider the limiting behaviour of our test statistics under the null in Appendix A. Our results show that the critical radius is determined by a certain functional of the null hypothesis. In Appendix D we study certain important properties of this functional pertaining to its stability. Finally, we study tests which are adaptive to various parameters in Appendix F.
3 Testing highdimensional multinomials
Given a sample $X_1, \ldots, X_n \sim P$, define the counts $X(j)$, where $X(j)$ denotes the number of samples falling in category $j \in \{1, \ldots, d\}$. The local minimax critical radii for the multinomial problem have been found in Valiant and Valiant [30]. We begin by summarizing these results.
Without loss of generality, we assume that the entries of the null multinomial $p$ are sorted in decreasing order. We denote the tail of the multinomial by:
(12) 
The bulk is defined to be
(13) 
Note that the largest entry of $p$ is excluded from the bulk. The minimax rate depends on the functional:
(14) 
For a given multinomial $p$, our goal is to upper and lower bound the local critical radius in (9). We define two separation levels, which will serve as the upper and lower bounds on the critical radius, to be the solutions to the equations (these equations always have a unique solution, since the right-hand side monotonically decreases to 0 as the left-hand side monotonically increases from 0 to 1):
(15) 
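Since the displays in (12)–(15) are elided above, the key quantities can be sketched in code. This is our reading of the definitions in [30], and the exact conventions (tie-breaking, handling of the largest entry) are assumptions here: the tail collects the smallest entries with total mass at most a threshold, the bulk is the rest excluding the largest entry, and the functional in (14) is the 2/3-norm $\|v\|_{2/3} = (\sum_i v_i^{2/3})^{3/2}$ of the bulk:

```python
import numpy as np

def tail_bulk(p, eps):
    """Split a multinomial into an eps-tail (the smallest entries, with
    total mass at most eps) and a bulk (the remaining entries, excluding
    the single largest). This follows our reading of [30]; the exact
    conventions are assumptions."""
    p = np.sort(p)[::-1]                      # p[0] >= p[1] >= ...
    tail_mass = np.cumsum(p[::-1])[::-1]      # mass of p[i:]
    tail = p[tail_mass <= eps]                # smallest entries, mass <= eps
    bulk = p[1:len(p) - len(tail)]            # drop largest entry and tail
    return tail, bulk

def two_thirds_norm(v):
    """The 2/3-'norm' ||v||_{2/3} = (sum_i v_i^{2/3})^{3/2}."""
    return np.sum(v ** (2.0 / 3.0)) ** 1.5

d = 100
uniform = np.full(d, 1.0 / d)
spiky = np.concatenate(([0.901], np.full(d - 1, 0.099 / (d - 1))))
_, bulk_u = tail_bulk(uniform, eps=0.05)
_, bulk_s = tail_bulk(spiky, eps=0.05)
# The functional is much larger at the uniform (hard) null than at the
# spiky (easy) null, reflecting the heterogeneity of the critical radius.
```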
With these definitions in place, we are now ready to state the result of [30]. We use $c_1, c_2, \ldots$ to denote positive universal constants.
Theorem 1 ([30]).
The local critical radius for multinomial testing is upper and lower bounded as:
(16) 
Furthermore, the global critical radius is bounded as:
Remarks:

The local critical radius is roughly determined by the (truncated) 2/3rd norm of the multinomial $p$. This norm is maximized when $p$ is uniform and is small when $p$ is sparse, and at a high level it captures the “effective sparsity” of $p$.

The global critical radius can shrink to zero even when $d \gg n$. In this regime almost all categories of the multinomial are unobserved, but it is still possible to reliably distinguish any $p$ from an $\ell_1$ neighborhood. This phenomenon is noted for instance in the work of [23]. We also note the work of Barron [2], which shows conditions under which no test can have power that approaches 1 at an exponential rate.

The local critical radius can be much smaller than the global minimax radius. If the multinomial $p$ is nearly (or exactly) sparse, then the critical radius is upper and lower bounded, up to constants, by the same rate. Furthermore, these results also show that it is possible to design consistent tests for sufficiently structured null hypotheses: in cases when $d \gg n$, and even in cases when $d$ is infinite.

Except for certain pathological multinomials, the upper and lower critical radii match up to constants. We revisit this issue in Appendix D, in the context of Lipschitz densities, where we present examples where the solutions to critical equations similar to (15) are stable and examples where they are unstable.
In the remainder of this section we consider a variety of tests, including the test presented in [30] and several alternatives. The test of [30] is a composite test that requires knowledge of the separation $\epsilon_n$, and the analysis of their test is quite intricate. We present an alternative, simple test that is globally minimax, and then present an alternative composite test that is locally minimax but simpler to analyze. Finally, we present a few illustrative simulations.
3.1 The truncated test
We begin with a simple globally minimax test. From a practical standpoint, the most popular test for multinomials is Pearson's $\chi^2$ test. However, in the high-dimensional regime where the dimension of the multinomial is not treated as fixed, the $\chi^2$ test can have poor power, because the variance of the $\chi^2$ statistic is dominated by small entries of the multinomial (see [30, 20]). A natural remedy is to truncate the normalization factors of the $\chi^2$ statistic in order to limit the contribution to the variance from each cell of the multinomial. Recalling that $X(1), \ldots, X(d)$ denote the observed counts, we propose the test statistic:
(17) 
and the corresponding test,
(18) 
This test statistic truncates the usual normalization factor of the $\chi^2$ test for any entry of $p$ which falls below the truncation level, and thus ensures that very small entries in $p$ do not have a large effect on the variance of the statistic. We emphasize the simplicity and practicality of this test. We have the following result, which bounds the power and size of the truncated $\chi^2$ test. We use $c$ to denote a positive universal constant.
Theorem 2.
Consider the testing problem in (5). The truncated $\chi^2$ test has size at most $\alpha$. Furthermore, there is a universal constant $c$ such that if we have that,
(19) 
then the Type II error of the test is bounded by , i.e.
Remarks:

A straightforward consequence of this result together with the result in Theorem 1 is that the truncated test is globally minimax optimal.

The classical $\chi^2$ and likelihood ratio tests are not generally consistent (and thus not globally minimax optimal) in the high-dimensional regime (see also Figure 2).

At a high level, the proof follows by verifying that when the alternate hypothesis is true, under the condition on the critical radius in (19), the test statistic is larger than the threshold in (18). To verify this, we lower bound the mean and upper bound the variance of the test statistic under the alternate, and then use standard concentration results. We defer the details to the Supplementary Material (Appendix B).
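Since the display in (17) is elided above, we sketch our reading of the truncated statistic in code: the usual normalization $p(i)$ is floored at $1/d$, and the numerator $(X(i) - np(i))^2 - X(i)$ is unbiased under Poissonized sampling. The threshold below is set by simulation rather than by the analytic choice in (18), and all constants are illustrative, not the paper's:

```python
import numpy as np

def truncated_chisq(counts, p0, n):
    """Truncated chi-square statistic (our reading): the normalization
    floors p0(i) at 1/d so tiny null entries cannot dominate the variance,
    and the numerator (X - n p0)^2 - X is unbiased under Poissonization."""
    d = len(p0)
    denom = np.maximum(p0, 1.0 / d)
    return np.sum(((counts - n * p0) ** 2 - counts) / denom)

rng = np.random.default_rng(1)
d, n = 500, 300
p0 = np.full(d, 1.0 / d)
# Under the (Poissonized) null, counts are independent Poisson(n * p0(i)).
null_stats = np.array([truncated_chisq(rng.poisson(n * p0), p0, n)
                       for _ in range(500)])
threshold = np.quantile(null_stats, 0.95)   # simulated level-0.05 threshold
# A dense alternative: perturb every coordinate by +/- 1/d and renormalize.
signs = rng.choice([-1.0, 1.0], size=d)
p1 = np.clip(p0 + signs / d, 0.0, None)
p1 /= p1.sum()
alt_stats = np.array([truncated_chisq(rng.poisson(n * p1), p0, n)
                      for _ in range(500)])
power = np.mean(alt_stats > threshold)
```

Even with $d > n$, this dense alternative is detected with high power, illustrating the consistency of the truncated test in the high-dimensional regime.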
3.2 The 2/3rd + tail test
The truncated $\chi^2$ test described in the previous section, although globally minimax, is not locally adaptive. The test from [30] achieves the local minimax upper bound in Theorem 1. We refer to this as the 2/3rd + tail test. We use a slightly modified version of their test when testing Lipschitz goodness-of-fit in Section 4, and provide a description here.
The test is a composite two-stage test, and has a tuning parameter $\epsilon$. Recalling the definitions of the tail and bulk (see (12) and (13)), we define two test statistics and corresponding test thresholds:
We define two tests:

The tail test:

The 2/3 test: .
The composite test is then given as:
(20) 
With these definitions in place, the following result is essentially from the work of [30]. We use $c$ to denote a positive universal constant.
Theorem 3.
Remarks:

The 2/3rd + tail test is also motivated by deficiencies of the $\chi^2$ test. In particular, it includes two main modifications to the $\chi^2$ test which limit the contribution of the small entries of $p$: some of the small entries of $p$ are dealt with via a separate tail test, and further, the normalization of the $\chi^2$ test is changed from $p(i)$ to $p(i)^{2/3}$.
While the 2/3rd norm test is locally minimax optimal, its analysis is quite challenging. In the next section, we build on results from a recent paper of Diakonikolas and Kane [10] to provide an alternative (nearly) locally minimax test with a simpler analysis.
3.3 The Max Test
An important insight, one that is seen for instance in Figure 1, is that many simple tests are optimal when $p$ is uniform, and that careful modifications are required only when $p$ is far from uniform. This suggests the following strategy: partition the multinomial into nearly uniform groups, apply a simple test within each group, and combine the tests with an appropriate Bonferroni correction. We refer to this as the max test. Such a strategy was used by Diakonikolas and Kane [10], but their test is quite complicated and involves many constants. Furthermore, it is not clear how to ensure that their test controls the Type I error at level $\alpha$. Motivated by their approach, we present a simple test that controls the Type I error as required and is (nearly) locally minimax. As with the test in the previous section, the test has to be combined with the tail test. The test is defined to be
where the max statistic is defined as follows. We partition the bulk of the multinomial into sets within which the null entries are nearly uniform. The total number of such sets grows only logarithmically, since the null entries within the bulk span a bounded range. Within each set we use a modified $\chi^2$ statistic. Let
(22) 
for each group. Unlike the traditional $\chi^2$ statistic, each term in this statistic is recentered using the observed count. As observed in [30], this results in the statistic having smaller variance in the high-dimensional regime. Let
(23) 
where
(24) 
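The max test just described can be sketched as follows. This is a minimal caricature of the idea, with our own (assumed) grouping rule and simulated Bonferroni-corrected thresholds; the paper's exact definitions in (22)–(24) are elided above:

```python
import numpy as np

def near_uniform_groups(p0, base=2.0):
    """Group coordinates j by floor(log_base(1 / p0_j)), so that within a
    group the null entries differ by at most a factor of `base` (a sketch
    of the partitioning idea; the paper's exact grouping is elided)."""
    levels = np.floor(np.log(1.0 / p0) / np.log(base)).astype(int)
    return [np.where(levels == l)[0] for l in np.unique(levels)]

def centered_group_stat(counts, p0, n, idx):
    """Modified chi-square on one group: the numerator (X - n p0)^2 - X is
    unbiased under Poissonized sampling and has smaller variance than the
    classical statistic when counts are small."""
    x, q = counts[idx], p0[idx]
    return np.sum(((x - n * q) ** 2 - x) / q)

def max_test(counts, p0, n, thresholds, groups):
    """Reject if any group statistic exceeds its Bonferroni-corrected
    threshold."""
    return any(centered_group_stat(counts, p0, n, g) > t
               for g, t in zip(groups, thresholds))

rng = np.random.default_rng(2)
d, n, alpha = 200, 400, 0.05
p0 = np.arange(1, d + 1, dtype=float) ** -1.5
p0 /= p0.sum()                         # a power-law null, far from uniform
groups = near_uniform_groups(p0)
K = len(groups)
# Simulate per-group thresholds at level alpha / K under the null.
draws = [rng.poisson(n * p0) for _ in range(400)]
sims = np.array([[centered_group_stat(c, p0, n, g) for g in groups]
                 for c in draws])
thresholds = np.quantile(sims, 1 - alpha / K, axis=0)
size = np.mean([max_test(c, p0, n, thresholds, groups) for c in draws])
```

Because each group is nearly uniform, a simple statistic suffices within it, and the union bound over the (logarithmically many) groups controls the overall Type I error.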
Theorem 4.
Remarks:

In contrast to the analysis of the 2/3rd + tail test in [30], the analysis of the max test involves mostly elementary calculations. We provide the details in Appendix B. As emphasized in the work of [10], the reduction of testing problems to simpler testing problems (in this case, testing uniformity) is a more broadly useful idea. Our upper bound for the Lipschitz testing problem (in Section 4) proceeds by reducing it to a multinomial testing problem through a spatially adaptive binning scheme.
3.4 Simulations
In this section, we report some simulation results that demonstrate the practicality of the proposed tests. We focus on two simulation scenarios and compare the globally minimax truncated $\chi^2$ test and the 2/3rd + tail test [30] with the classical $\chi^2$ test and the likelihood ratio test. The $\chi^2$ statistic is,
and the likelihood ratio test statistic is
In Appendix G, we consider a few additional simulations, as well as a comparison with statistics based on the $\ell_1$ and $\ell_2$ distances.
In each setting described below, we set the level $\alpha$ threshold via simulation (by sampling from the null 1000 times), and we calculate the power under particular alternatives by averaging over 1000 trials.
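The two classical statistics referred to above have the standard forms sketched below, and the threshold-by-simulation recipe is the one used in our experiments (the specific seed and parameters here are illustrative):

```python
import numpy as np

def pearson_chisq(counts, p0, n):
    """Classical Pearson chi-square statistic: sum (X_j - n p_j)^2 / (n p_j)."""
    expected = n * p0
    return np.sum((counts - expected) ** 2 / expected)

def lrt_stat(counts, p0, n):
    """Likelihood ratio statistic 2 * sum X_j log(X_j / (n p_j)), with the
    convention 0 * log 0 = 0."""
    expected = n * p0
    x = counts.astype(float)
    mask = x > 0
    return 2.0 * np.sum(x[mask] * np.log(x[mask] / expected[mask]))

# Setting the level threshold by sampling from the null, as in the text:
rng = np.random.default_rng(3)
d, n = 50, 500
p0 = np.full(d, 1.0 / d)
null_stats = np.array([pearson_chisq(rng.multinomial(n, p0), p0, n)
                       for _ in range(1000)])
threshold = np.quantile(null_stats, 0.95)
# In the classical fixed-d regime this is close to the chi-square(d - 1)
# 0.95 quantile (about 66.3 for d = 50).
```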

Figure 1 considers a highdimensional setting where , the null distribution is uniform, and the alternate is either dense (perturbing each coordinate by a scaled Rademacher) or sparse (perturbing only two coordinates).
In each case we observe that all the tests perform comparably, indicating that a variety of tests are optimal around the uniform distribution, a fact that we exploit in designing the max test. The test from [30] performs slightly worse than the others due to the Bonferroni correction from applying a two-stage test.
Figure 2 considers a power-law null. Again we compare against a dense and a sparse alternative. In this setting, we choose the sparse alternative to perturb only the first two coordinates of the distribution.
We observe two notable effects. First, we see that when the alternate is dense, the truncated $\chi^2$ test, although consistent in the high-dimensional regime, is outperformed by the other tests, highlighting the need to study the local minimax properties of tests. Perhaps more surprising is that in the setting where the alternate is sparse, the classical $\chi^2$ and likelihood ratio tests can fail dramatically.
The locally minimax test is remarkably robust across simulation settings. However, it requires that we specify the separation $\epsilon_n$, a drawback shared by the max test. In Appendix F we provide alternatives that adapt to an unknown $\epsilon_n$.
4 Testing Lipschitz Densities
In this section, we focus our attention on the Lipschitz testing problem (6). As is standard in nonparametric problems, throughout this section we treat the dimension $d$ as a fixed (universal) constant. Our emphasis is on understanding the local critical radius while making minimal assumptions. In contrast to past work, we do not assume that the null is uniform or even that its support is compact. We would like to be able to detect more subtle deviations from the null as the sample size gets large, and hence we do not assume that the Lipschitz parameter $L_n$ is fixed as $n$ grows.
The classical method, due to Ingster [15, 16], for constructing lower and upper bounds on the critical radius is based on binning the domain of the density. In particular, upper bounds were obtained by considering $\chi^2$ tests applied to the multinomial that results from binning the null distribution. Ingster focused on the case when the null distribution was taken to be uniform on $[0, 1]$, noting that the testing problem for a general null distribution could be “reduced” to testing uniformity by modifying the observations via the quantile transformation corresponding to the null distribution (see also [12]). We emphasize that such a reduction alters the smoothness class, tailoring it to the null distribution $P_0$. The quantile transformation forces the deviations from the null distribution to be more smooth in regions where $p_0$ is small and less smooth where $p_0$ is large, i.e. we need to reinterpret smoothness of the alternative density as an assumption about its composition with the quantile function of the null distribution $P_0$. We find this assumption to be unnatural and instead aim to directly test the hypotheses in (6). We begin with some high-level intuition for our upper and lower bounds.

Upper bounding the critical radius: The strategy of binning the domain of $p_0$, and then testing the resulting multinomial against an appropriate $\ell_1$ neighborhood using a locally minimax test, is natural even when $p_0$ is not uniform. However, there is considerable flexibility in how precisely to bin the domain of $p_0$. Essentially, the only constraint in the choice of binwidths is that the approximation error (of approximating the density by its piecewise constant, histogram approximation) is sufficiently well-controlled. When the null is not uniform, the choice of fixed binwidths is arbitrary and, as we will see, suboptimal. A bulk of the technical effort in constructing our optimal tests is then in determining the optimal inhomogeneous, spatially adaptive partition of the domain in order to apply a multinomial test.

Lower bounding the critical radius: At a high level, the construction of Ingster is similar to standard lower bounds in nonparametric problems. Roughly, we create a collection of possible alternate densities by evenly partitioning the domain of $p_0$, and then perturbing each cell of the partition by adding or subtracting a small (sufficiently smooth) bump. We then analyze the optimal likelihood ratio test for the (simple versus simple) testing problem of distinguishing $P_0$ from a uniform mixture of the set of possible alternate densities. We observe that when $p_0$ is not uniform, once again creating a fixed binwidth partition is not optimal. The optimal choice is to use larger binwidths where $p_0$ is large and smaller binwidths where $p_0$ is small. Intuitively, this choice allows us to perturb the null distribution more where the density is large, without violating the constraint that the alternate density remain sufficiently smooth. Once again, we create an inhomogeneous, spatially adaptive partition of the domain, and then use this partition to construct the optimal perturbation of the null.
Define,
(26) 
and for any $0 \leq \gamma \leq 1$, denote the collection of sets of probability mass at least $1 - \gamma$ as $\mathcal{B}_\gamma$. Define the functional:
(27)
We refer to this as the truncated functional (although the set that achieves the minimum in its definition need not be unique, the functional itself is well-defined). The functional is the analog of the functional in (14), and roughly characterizes the local critical radius. We return to study this functional in light of several examples in Section 4.1 (and Appendix D).
In constructing lower bounds we will assume that the null density lies in the interior of the Lipschitz ball, i.e. that its Lipschitz constant is bounded by a constant multiple, strictly less than one, of $L_n$. This assumption avoids certain technical issues that arise in creating perturbations of the null density when it lies on the boundary of the Lipschitz ball.
Finally, we define for two universal constants (that are explicit in our proofs) the upper and lower critical radii:
(28) 
With these preliminaries in place, we now state our main result on testing Lipschitz densities. We let $c_1$ and $c_2$ denote two positive universal constants (different from the ones above).
Theorem 5.
The local critical radius for testing Lipschitz densities is upper bounded as:
(29) 
Furthermore, if for some constant we have that, then the critical radius is lower bounded as
(30) 
Remarks:

A natural question of interest is to understand the worst-case rate for the critical radius, or equivalently to understand how large the functional can be. Since the functional can be infinite if the support is unrestricted, we restrict our attention to Lipschitz densities with a bounded support $S$. In this case, letting $\mu(S)$ denote the Lebesgue measure of $S$ and using Hölder’s inequality (see Appendix D), we have that for any density $p_0$ supported on $S$,
(31) Up to constants involving $\mu(S)$, this is attained when $p_0$ is uniform on the set $S$. In other words, the critical radius is maximal for testing the uniform density against a Lipschitz, $\ell_1$ neighborhood. In this case, we simply recover a generalization of the result of [15] for testing when $p_0$ is uniform.

The main discrepancy between the upper and lower bounds is in the truncation level, i.e. the upper and lower bounds depend on the functional for different constant values of the truncation parameter. This is identical to the situation in Theorem 1 for testing multinomials. In most non-pathological examples this functional is stable with respect to constant-factor discrepancies in the truncation level, and consequently our upper and lower bounds are typically tight (see the examples in Section 4.1). In the Supplementary Material (see Appendix D) we formally study the stability of the functional. We provide general bounds and relate the stability of the functional to the stability of the level sets of $p_0$.
The remainder of this section is organized as follows: we first consider various examples, calculate the functional and develop the consequences of Theorem 5 for these examples. We then turn our attention to our adaptive binning, describing both a recursive partitioning algorithm for constructing it as well as developing some of its useful properties. Finally, we provide the body of our proof of Theorem 5 and defer more technical aspects to the Supplementary Material. We conclude with a few illustrative simulations.
4.1 Examples
The result in Theorem 5 provides a general characterization of the critical radius for testing any density $p_0$ against a Lipschitz, $\ell_1$ neighborhood. In this section we consider several concrete examples. Although our theorem is more generally applicable, for ease of exposition we focus on the setting where $d = 1$, highlighting the variability of the functional, and consequently of the critical radius, as the null density is changed. Our examples have straightforward higher-dimensional extensions.
When $d = 1$, the functional takes a particularly simple form. Our interest in general is in the setting where the truncation level tends to zero (which happens as $n \to \infty$), so in some examples we will simply calculate the untruncated functional. In other examples, however, the truncation will play a crucial role, and in those cases we will compute the truncated functional.
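Since the display defining the one-dimensional functional is elided above, we illustrate its qualitative behaviour with a numerical sketch. We assume, purely for illustration, that the functional integrates $\sqrt{p_0}$ over a highest-density set carrying mass $1 - \gamma$; the exact exponent and constants in (27) are not reproduced here. This assumed form reproduces the behaviour described in Examples 1–6 below: a finite, stable value for the Gaussian, and divergence as the truncation vanishes for the Cauchy:

```python
import numpy as np

def truncated_functional(pdf, grid, gamma):
    """Numerically evaluate an assumed truncated functional: the integral
    of sqrt(p0) over the highest-density region carrying mass 1 - gamma.
    The square-root exponent is an illustrative assumption, not the
    paper's exact (elided) definition."""
    dx = grid[1] - grid[0]
    f = pdf(grid)
    order = np.argsort(f)[::-1]          # highest-density points first
    mass = np.cumsum(f[order]) * dx
    keep = order[mass <= 1.0 - gamma]
    return np.sum(np.sqrt(f[keep])) * dx

grid = np.linspace(-2000, 2000, 400001)
gauss = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
cauchy = lambda x: 1.0 / (np.pi * (1 + x ** 2))

t_gauss = [truncated_functional(gauss, grid, g) for g in (0.1, 0.01, 0.001)]
t_cauchy = [truncated_functional(cauchy, grid, g) for g in (0.1, 0.01, 0.001)]
# t_gauss stabilizes as gamma decreases; t_cauchy keeps growing, mirroring
# the divergence of the untruncated functional for heavy tails.
```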
Example 1 (Uniform null).
Suppose that the null distribution is uniform. Then,
Example 2 (Gaussian null).
Suppose that the null distribution is a Gaussian with mean zero and standard deviation $\sigma > 0$.
In this case, a simple calculation (see Appendix C) shows that,
Example 3 (Beta null).
Suppose that the null density is a Beta distribution:
where and denote the gamma and beta functions respectively. It is easy to verify that,
To get some sense of the behaviour of this functional, we consider a special case, for which we show (see Appendix C) that,
In particular, we have that .
Remark:

These examples illustrate that, in the simplest settings when the density is close to uniform, the functional is roughly the effective support of $p_0$. In each of these cases, it is straightforward to verify that the truncation of the functional simply affects constants, so that the critical radius scales as:
where the functional scales as roughly the size of the support of the density $p_0$, i.e. as the Lebesgue measure of the smallest set that contains most of the probability mass. This motivates understanding the Lipschitz density with smallest effective support, and we consider this next.
Example 4 (Spiky null).
Suppose that the null hypothesis is:
then we have that
Remark:

For the spiky null distribution we obtain an extremely fast rate, i.e. the critical radius is independent of the Lipschitz parameter $L_n$ (although we note that the null is more spiky as $L_n$ increases). This is the fastest rate we obtain for Lipschitz testing. In settings where the tail decay is slow, the truncation of the functional can be crucial, and the rates can be much slower. We consider these examples next.
Example 5 (Cauchy distribution).
The mean-zero Cauchy distribution with a scale parameter has pdf:
As we show (see Appendix C), the functional without truncation is infinite. However, the truncated functional is finite. In the Supplementary Material we show that, for small truncation levels (recall that our interest is in cases where the truncation level tends to zero),
i.e. we have that .
Remark:

When the null distribution is Cauchy as above, we note that the rate for the critical radius is no longer the typical nonparametric rate, even when the other problem-specific parameters (the Lipschitz and Cauchy parameters) are held fixed. We instead obtain a slower rate. Our final example shows that we can obtain an entire spectrum of slower rates.
Example 6 (Pareto null).
For a fixed and for , suppose that the null distribution is
For small values of its tail parameter, this distribution has thicker tails than the Cauchy distribution. The functional without truncation is again infinite, and we can further show that (see Appendix C):
In the regime of interest, when the truncation level is small, we have that
Remark:

We observe that, once again, the critical radius no longer follows the typical nonparametric rate. Instead we obtain a slower rate, and indeed obtain much slower rates as the tails become heavier, indicating the difficulty of testing heavy-tailed distributions against a Lipschitz, $\ell_1$ neighborhood.
We conclude this section by emphasizing the value of the local minimax perspective and of studying the goodness-of-fit problem beyond the uniform null. We are able to provide a sharp characterization of the critical radius for a broad class of interesting examples, and we obtain faster (than at uniform) rates when the null is spiky, and non-standard rates when the null is heavy-tailed.
4.2 A recursive partitioning scheme
At the heart of our upper and lower bounds are spatially adaptive partitions of the domain of . The partitions used in our upper and lower bounds are similar but not identical. In this section, we describe an algorithm for producing the desired partitions and then briefly describe some of the main properties of the partition that we leverage in our upper and lower bounds.
We begin by describing the desiderata for the partition from the perspective of the upper bound. Our goal is to construct a test for the hypotheses in (6), and we do so by constructing a partition (consisting of cells) of . Each cell for will be a cube, while the cell will be arbitrary but will have small total probability content. We let,
(32) 
We form the multinomial corresponding to the partition where . We then test this multinomial using the counts of the number of samples falling in each cell of the partition.
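The reduction to a multinomial test can be sketched in code. The example below (our own illustration, not the paper's construction) bins one-dimensional samples into the cells of an even-width partition and computes a plain Pearson chi-square statistic against the null cell probabilities; the paper's actual statistic is a modified chi-square, so this only illustrates the binning step.

```python
import numpy as np

def chi_square_binned_test(samples, edges, null_probs):
    """Bin samples into the cells defined by `edges` and compute a
    Pearson chi-square statistic against the null cell probabilities.

    NOTE: a plain Pearson chi-square on the binned counts, used only to
    illustrate the reduction to a multinomial test; the paper's actual
    statistic is a truncated/modified chi-square.
    """
    n = len(samples)
    counts, _ = np.histogram(samples, bins=edges)
    expected = n * np.asarray(null_probs)
    return np.sum((counts - expected) ** 2 / expected)

# Toy usage: uniform null on [0, 1] with 4 equal-width cells.
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=1000)
edges = np.linspace(0.0, 1.0, 5)      # cell boundaries
null_probs = np.full(4, 0.25)         # P(K_j) under the null
stat = chi_square_binned_test(samples, edges, null_probs)
```

Under the null, `stat` is approximately chi-squared distributed with 3 degrees of freedom, so a large value is evidence against the null.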
Requirement 1: A basic requirement of the partition is that it must ensure that a density that is at least far away from in distance remains roughly as far away when converted to a multinomial. Formally, for any such that we require that for some small constant ,
(33) 
Of course, there are several ways to ensure this condition is met. In particular, if we restrict attention to densities supported on , then it suffices, for instance, to choose roughly even-width bins. This is precisely the partition considered in prior work [15, 16, 1]. When we do not restrict attention to compactly supported, uniform densities, an even-width partition is no longer optimal, and a careful optimization of the upper and lower bounds with respect to the partition yields the optimal choice. The optimal partition has bin-widths taken roughly proportional to , where the constant of proportionality is chosen to ensure that the condition in (33) is satisfied. Precisely determining this constant turns out to be quite subtle, so we defer a discussion of it to the end of this section.
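The effect of bin width on Requirement 1 can be illustrated numerically. In the toy example below (our own, not from the paper), the alternative differs from a uniform null by a sign-alternating perturbation that is constant on intervals of width 1/8; bins finer than that scale preserve essentially all of the L1 distance between the densities, while bins spanning a full period of the perturbation lose it entirely.

```python
import numpy as np

# Null p0 = uniform on [0, 1]; alternate q adds a sign-alternating
# perturbation of magnitude 0.5 that is constant on intervals of
# width 1/8.  (A toy example of our own construction.)
xs = np.linspace(0.0, 1.0, 80001)
w = xs[1] - xs[0]
p0 = np.ones_like(xs)
q = 1.0 + 0.5 * np.sign(np.sin(2 * np.pi * 4 * xs))

def l1_binned(p, q, xs, n_bins):
    """L1 distance between the multinomials induced by an even-width partition."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.searchsorted(edges, xs, side="right") - 1, 0, n_bins - 1)
    dp, dq = np.zeros(n_bins), np.zeros(n_bins)
    np.add.at(dp, idx, p * w)   # crude Riemann mass of each cell
    np.add.at(dq, idx, q * w)
    return np.sum(np.abs(dp - dq))

total = np.sum(np.abs(p0 - q)) * w   # L1 distance between densities, about 0.5
fine = l1_binned(p0, q, xs, 16)      # bins finer than the perturbation scale
coarse = l1_binned(p0, q, xs, 4)     # each bin spans a full period: distance lost
```

Here `fine` stays close to `total` while `coarse` collapses to nearly zero, which is exactly why a condition of the form (33) constrains the partition.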
Requirement 2: A second requirement that arises in both our upper and lower bounds is that the cells of our partition (except ) are not chosen too wide. In particular, we must choose the cells small enough to ensure that the density is roughly constant on each cell, i.e. on each cell we need that for any ,
(34) 
Using the Lipschitz property of , this condition is satisfied if any point is in a cell of diameter at most .
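The step from the Lipschitz property to near-constancy on a cell can be made explicit. The following one-line derivation is a sketch in our own notation, writing $p_0$ for the null density, $L_n$ for its Lipschitz constant, and $c < 1$ for a generic small constant; the precise constants in (34) may differ.

```latex
% If x and y lie in a common cell K with diam(K) <= c * p_0(x) / L_n,
% then the Lipschitz property gives
|p_0(x) - p_0(y)| \;\le\; L_n \,\operatorname{diam}(K) \;\le\; c\, p_0(x),
\qquad\text{so}\qquad
(1 - c)\, p_0(x) \;\le\; p_0(y) \;\le\; (1 + c)\, p_0(x).
```

Thus the density is constant up to a multiplicative factor $1 \pm c$ on every sufficiently small cell, which is the form of closeness required in (34).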
Taken together, the first two requirements suggest that we need to create a partition such that, for every point , the diameter of the cell containing the point should be roughly
where is to be chosen to be smaller than , and is chosen to ensure that Requirement 1 is satisfied.
Algorithm 1 constructively establishes the existence of a partition satisfying these requirements. The upper and lower bounds use this algorithm with slightly different parameters. The key idea is to recursively partition cells that are too large by halving each side. This is illustrated in Figure 3. The proof of correctness of the algorithm uses the smoothness of in an essential fashion. Indeed, were the density not sufficiently smooth then such a partition would likely not exist.
In order to ensure that the algorithm terminates after finitely many steps, we choose two parameters (these are chosen sufficiently small so as not to affect subsequent results):

We restrict our attention to the effective support of , i.e. we define to be the smallest cube centered at the mean of such that . We begin with .

If the density in a cell is sufficiently small, we do not split that cell further: for a parameter , if then, rather than splitting the cell, we add it to . By construction, such cells have total probability content at most .
For each cube for we let denote its centroid, and we let denote the number of cubes created by Algorithm 1.


Input: Parameters

Set and .

If no cubes are split or removed, STOP. Else go to step 2.

Output: Partition .
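The recursive halving idea behind Algorithm 1 can be sketched in one dimension as follows. This is a minimal sketch under our own assumptions: the target-width rule `c * p0(mid) / L` (motivated by Requirement 2) and the `density_floor` threshold are stand-ins for the paper's elided parameters.

```python
def recursive_partition(lo, hi, p0, L, c=0.5, density_floor=1e-3, max_depth=30):
    """Sketch of the recursive halving scheme in one dimension.

    A cell [lo, hi] is split in half until its width falls below a target
    of the form c * p0(midpoint) / L (a stand-in for the paper's elided
    bin-width rule); cells where the density falls below `density_floor`
    are set aside in the low-mass bucket instead of being refined.
    """
    cells, bucket = [], []
    stack = [(lo, hi, 0)]
    while stack:
        a, b, depth = stack.pop()
        mid = 0.5 * (a + b)
        if p0(mid) < density_floor:
            bucket.append((a, b))               # goes to the low-mass cell
        elif b - a <= c * p0(mid) / L or depth >= max_depth:
            cells.append((a, b))                # small enough: keep as a cube
        else:
            stack.append((a, mid, depth + 1))   # halve each side
            stack.append((mid, b, depth + 1))
    return cells, bucket

# Toy null: a triangular (hence Lipschitz) density on [-1, 1].
p0 = lambda x: max(0.0, 1.0 - abs(x))
cells, bucket = recursive_partition(-1.0, 1.0, p0, L=1.0)
```

Cells are wider where the null density is large and are refined near the edges of the support, until the low-density tail cells are swept into the bucket.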
Requirement 3: The final major requirement is twofold: (1) we require that the norm of the density over the support of the partition should be upper bounded by the truncated functional, and (2) that the density over the cells of the partition be sufficiently large. This necessitates a further pruning of the partition, where we order cells by their probability content and successively eliminate (adding them to ) cells of low probability until we have eliminated mass that is close to the desired truncation level. This is accomplished by Algorithm 2.
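The pruning step just described can be sketched as follows. This is our simplified stand-in for Algorithm 2: cells are ordered by probability content and the lightest are moved into the bucket until roughly the desired truncation mass has been removed (the paper's exact stopping rule is elided here).

```python
def prune_partition(cell_probs, truncation_mass):
    """Move the lowest-probability cells into the bucket until roughly
    `truncation_mass` has been removed.  Simplified stand-in for the
    paper's Algorithm 2."""
    order = sorted(range(len(cell_probs)), key=lambda j: cell_probs[j])
    removed, pruned = 0.0, set()
    for j in order:
        if removed + cell_probs[j] > truncation_mass:
            break
        pruned.add(j)
        removed += cell_probs[j]
    kept = [j for j in range(len(cell_probs)) if j not in pruned]
    return kept, sorted(pruned), removed

# Toy usage: six cells, prune up to 11% of the total mass.
probs = [0.30, 0.25, 0.20, 0.15, 0.06, 0.04]
kept, pruned, removed = prune_partition(probs, truncation_mass=0.11)
```

On this toy input the two lightest cells (with combined mass 0.10) are pruned, and the remaining cells carry the bulk of the probability, so the density on the kept cells is bounded away from zero.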