1 Introduction
The literature on nonparametric monotonicity testing usually deals with the model

Y = f(X) + ε,

where Y is a scalar dependent random variable, X a scalar independent random variable, f an unknown function, and ε an unobserved scalar random variable with E(ε | X) = 0. We are interested in testing the null hypothesis H0, that f is increasing, against the alternative H1, that there are x1 and x2 such that x1 < x2 and f(x1) > f(x2). The decision is to be made based on the i.i.d. sample (X1, Y1), ..., (Xn, Yn) from the distribution of (X, Y). Typical applications of monotonicity testing are related to econometric models; see, e.g., Chetverikov [4]. Usual approaches to this problem have simple heuristic ideas and assumptions at their core. Thus, the tests proposed in Gijbels et al.
[9] and Ghosal, Sen, and van der Vaart [8] are based on the signs of (Y_{i+k} − Y_i). Hall and Heckman [10] developed a test based on the slopes of local linear estimates of f. Along with these papers, we can cite Schlee [15]; Bowman, Jones, and Gijbels [2]; Dümbgen and Spokoiny [6]; Durot [7]; Baraud, Huet, and Laurent [1]; Wang and Meyer [17]; and Chetverikov [4]. As for the typical hypotheses about f, it is often assumed that f is a Lipschitz function, i.e.,

|f(x) − f(y)| ≤ L|x − y|,

where the constant L may be known or unknown.
In this paper, we look at the problem of monotonicity testing from a slightly different and less intuitive viewpoint. As we will see below, our approach permits us, in particular, to understand links between this problem and the detection of sparse vectors and to construct new powerful tests. In order to simplify technical details and to get rid of supplementary assumptions, we begin with monotonicity testing of an unknown function f in the so-called white noise model similar to the one considered in [6]. So, it is assumed that we have at our disposal the noisy data

dY(x) = f(x) dx + σ dW(x),  x ∈ [0, 1],   (1)
where W is a standard white Gaussian noise and σ is a known noise level. With the help of these observations, we want to test
the null hypothesis H0: f is increasing
vs. the alternative H1: there exist x1 < x2 such that f(x1) > f(x2).
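Since everything below takes place in this white noise model, a minimal numerical sketch may help fix ideas. The discretization, the signals, and the noise level in the following snippet are illustrative assumptions, not choices made in the paper.

```python
import math
import random

def simulate_increments(f, n, sigma, seed=0):
    """Discretized white noise model: the increment of Y over the i-th cell
    of a uniform grid on [0, 1] is f(x_i)/n + sigma * sqrt(1/n) * N(0, 1)."""
    rng = random.Random(seed)
    dx = 1.0 / n
    return [f((i + 0.5) * dx) * dx + sigma * math.sqrt(dx) * rng.gauss(0.0, 1.0)
            for i in range(n)]

# An increasing signal (consistent with the null) and one with a local dip.
f_null = lambda x: x
f_alt = lambda x: x - 2.0 * max(0.0, 0.2 - abs(x - 0.5))

y = simulate_increments(f_null, n=1024, sigma=0.1)
# The increments sum to Y(1), which equals the integral of f plus sigma * W(1).
```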
Our approach to this problem is based on estimating the following linear functionals:
for all that are admissible, i.e., such that . It is clear that may be interpreted as approximations of the derivative since
for any given .
With the help of (1), the functionals are estimated as follows:
and these estimates admit the obvious representation
(2) 
where
Notice that if is true, then for all admissible ; otherwise (if is true), there exist such that . That is why, in what follows, we will focus on testing
(3) 
based on the observations (2).
Let us denote for brevity
In order to explain our approach to problem (3), we begin with the simple case, assuming that are given. So, we have to test the two composite hypotheses
Intuitively, the most powerful test with the type I error probability
rejects if
(4) 
where is
value of the standard Gaussian distribution, i.e., a solution to
where
Of course, there are many motivations for this test. In this paper, we make use of the so-called improper Bayes approach, assuming that in (2) is a random variable uniformly distributed on the interval , if is true, and on if is true. So, we observe a random variable with the probability density

and
Thus, we deal with a simple hypothesis testing problem and, by the Neyman-Pearson lemma, the most powerful test at significance level rejects when
Taking the limit in this equation as , we arrive at the improper Bayes test that rejects if
(5) 
where
(6) 
Since is decreasing in , the tests (4) and (5) are obviously equivalent.
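The equivalence rests only on monotonicity: thresholding any statistic that is decreasing in the observation is the same as a one-sided test on the observation itself. In the sketch below, the statistic L(x) = Φ(−x)/Φ(x) is a hypothetical stand-in for the decreasing function defined in (6), used purely to illustrate this point.

```python
import math

def Phi(x):
    # Standard Gaussian CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def L(x):
    # A hypothetical decreasing statistic (a stand-in for the one in (6)).
    return Phi(-x) / Phi(x)

xs = [-4.0 + 0.01 * i for i in range(801)]
vals = [L(x) for x in xs]
assert all(a > b for a, b in zip(vals, vals[1:]))  # L is strictly decreasing

# Rejecting when L(x) exceeds L(t) is the same as rejecting when x < t.
t = -1.5
reject_bayes_form = [x for x in xs if L(x) > L(t)]
reject_gauss_form = [x for x in xs if x < t]
assert reject_bayes_form == reject_gauss_form
```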
In what follows, we will make use of the following asymptotic result:
(7) 
Along with this method, one can apply the maximum likelihood (ML) or minimax approaches. Ultimately, all these methods result in (4), but their initial forms are different. For instance, the ML test rejects when
(8) 
We emphasize that, from the viewpoint of testing vs. , there is no difference between (8) and (5); however, the aggregation of these methods for testing vs. from (3) results in different tests. In this paper, we make use of the tests defined by (5), since their aggregation is simpler.
In order to aggregate the statistical tests, we will make use of the so-called multiresolution approach, assuming that

belongs to the following set of dyadic bandwidths

belongs to the family of dyadic grids , defined by
There are simple arguments motivating these assumptions.
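For concreteness, here is one plausible way such dyadic bandwidth sets and grids could look; the exact index ranges used in the paper are not reproduced here.

```python
import math

def dyadic_bandwidths(n):
    # Hypothetical set of bandwidths h = 2^(-j), j = 1, ..., log2(n).
    J = int(math.log2(n))
    return [2.0 ** (-j) for j in range(1, J + 1)]

def dyadic_grid(h):
    # Grid of points {k * h} in the open interval (0, 1).
    m = round(1.0 / h)
    return [k * h for k in range(1, m)]

H = dyadic_bandwidths(64)
assert H == [0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625]
assert dyadic_grid(0.25) == [0.25, 0.5, 0.75]
```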
2 Testing at a given resolution level
Let us fix some bandwidth and denote for brevity by . In this section, we focus on testing
the null hypothesis  
vs. the alternative  
In order to construct Bayes and MAP tests, we assume that for given

the set contains only one negative entry ;

is an unobservable random variable uniformly distributed on .
2.1 A Bayes test
With the arguments used in deriving (5), we get the following Bayes test: is rejected if
where is defined by (6). The critical level is defined in a conservative way, i.e., as a solution to
where stands for the measure generated by the observations defined by (2) for given .
It follows from Mudholkar’s theorem [12], see also Theorem 6.2.1 in [16], that for any with nonnegative entries
(9) 
and, thus, may be computed as a solution to
(10) 
Therefore our next step is to study the following random variable:
2.1.1 A weak approximation of
We begin with computing a weak limit of as . Recall some standard definitions (see, e.g., [13]).
Definition.
Let and be independent copies of a random variable . Then is said to be stable if for any constants and the random variable has the same distribution as for some constants and .
Within the class of stable distributions, there is an interesting subclass: the so-called stable distributions with index of stability equal to 1. For brevity, we will call them 1-stable distributions. The formal definition of this class is as follows:
Definition.
The next theorem shows that the weak limit of is a 1-stable distribution.
Theorem 1.
where is Euler’s constant.
In other words, this theorem states that
where is a 1-stable random variable (see (11)) with
(12) 
Apparently, first appeared in [5]. We also emphasize that this random variable usually arises in Bayes hypothesis testing related to sparse vectors; see, e.g., [3], [11].
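The classical example of a 1-stable law is the standard Cauchy distribution (the limit variable in Theorem 1 is a different, skewed member of this class). The defining stability property, here X1 + X2 distributed as 2X, can be checked by simulation; the sample size and tolerance below are arbitrary.

```python
import math
import random

rng = random.Random(42)

def cauchy():
    # Standard Cauchy via the inverse CDF: the textbook 1-stable law.
    return math.tan(math.pi * (rng.random() - 0.5))

n = 100_000
sums = sorted(cauchy() + cauchy() for _ in range(n))   # X1 + X2
scaled = sorted(2.0 * cauchy() for _ in range(n))      # 2 * X

# A Cauchy law has no moments, so compare central empirical quantiles instead.
for q in (0.25, 0.5, 0.75):
    assert abs(sums[int(q * n)] - scaled[int(q * n)]) < 0.1
```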
The probability distribution of has the following invariance property that plays an important role in Bayes tests aggregation.
Proposition 1.
Let be i.i.d. copies of and let be a probability distribution on with bounded entropy. Then
(13) 
2.1.2 A strong approximation of
Theorem 1 is not very informative about the tail behavior of the distribution of . However, for obtaining a good approximation of in (10), this behavior may play a crucial role: in some applications, may be very small (of order ), and so the Monte-Carlo method and Theorem 1 may not be adequate in this case.
Therefore, our goal is to find an approximation of that controls the tail of its distribution well. Fortunately, this can be done easily. It is clear that
where are i.i.d. random variables uniformly distributed on . Hence
where is a nondecreasing permutation of . The distribution of can be easily obtained with the help of Pyke's theorem [14]:
(14) 
where
is the cumulative sum of i.i.d. standard exponentially distributed random variables
In other words, . With this in mind, we obtain
(15) 
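The representation just used, uniform order statistics as normalized cumulative sums of standard exponentials, is easy to verify by simulation; the sample size and tolerance below are arbitrary.

```python
import random

rng = random.Random(7)

def uniform_order_stats(n):
    """Pyke's representation: U_(k) = G_k / G_(n+1), where G_k is the k-th
    cumulative sum of i.i.d. standard exponential random variables."""
    g, cums = 0.0, []
    for _ in range(n + 1):
        g += rng.expovariate(1.0)
        cums.append(g)
    return [c / cums[-1] for c in cums[:-1]]

u = uniform_order_stats(5)
assert all(0.0 < a < b < 1.0 for a, b in zip(u, u[1:]))  # sorted, in (0, 1)

# Sanity check: E U_(k) = k / (n + 1); for n = 5 the median has mean 1/2.
m = sum(uniform_order_stats(5)[2] for _ in range(20_000)) / 20_000
assert abs(m - 0.5) < 0.02
```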
Next, we make use of the following simple equations:
and
So, substituting them in (15), we arrive at the following theorem.
Theorem 2.
Let
(16) 
Then
(17) 
where is such that
(18) 
Remark.
Figure 1 illustrates Theorem 2 and the above remark numerically, showing the log-tail approximation error
computed with the help of the Monte-Carlo method with replications. This figure shows that even for small , the approximation (17) works very well.
2.2 A MAP test
Similarly to the Bayes test, we can construct the MAP test that rejects if
where is defined as a solution to
Similarly to (9), may be obtained from
3 Multilevel testing
3.1 MAP multilevel tests
A heuristic idea behind our construction of multilevel MAP tests for (3) is related to (19) and consists in computing a positive deterministic function bounding from above the random process , where are independent standard exponential random variables. In other words, we are looking for such that
would be a nondegenerate random variable.
Let be value of , i.e., solution to
Therefore, with (19), upper bounding the random process by , we arrive at the test that rejects if
(20) 
Computing is based on the following simple fact. Assume that
Then
(21) 
The proof of this identity is very simple. Indeed,
Proposition 2.
Let be a probability distribution on . Then
(22) 
Summarizing (see (20)), the MAP multilevel test rejects if
(23) 
where
and is a probability distribution on .
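The bounding of the exponential process above rests on elementary facts about independent standard exponentials. As an illustrative plausibility check (the bounding sequence b_k below is arbitrary, and this product formula is not claimed to be identity (21) itself), the distribution of max_k (e_k − b_k) is explicit:

```python
import math
import random

rng = random.Random(1)

b = [0.5, 1.0, 2.0, 3.0]   # an arbitrary deterministic bounding sequence
x = 0.7

# For independent standard exponentials e_k and b_k + x >= 0:
# P(max_k (e_k - b_k) < x) = prod_k (1 - exp(-(b_k + x))).
exact = math.prod(1.0 - math.exp(-(bk + x)) for bk in b)

# Monte-Carlo check of the same probability.
n = 200_000
hits = sum(max(rng.expovariate(1.0) - bk for bk in b) < x for _ in range(n))
assert abs(hits / n - exact) < 0.01
```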
In order to study the performance of this method, we analyze the type II error probability. For given
and define
(24) 
In other words, we consider the situation where all shifts in (2) are positive except one. The position of the negative entry and its amplitude are unknown, but it is assumed that are random variables with the distribution defined by

,

,
where is a probability distribution on with bounded entropy.
In what follows, we will deal with priors with large uncertainty, assuming that , or, more precisely, but such that
(25) 
In particular, we will consider the following class of prior distributions:
(26) 
This class is characterized by the bandwidth and the probability density , which is assumed to be continuous, bounded, and with
(27) 
A typical example of such a distribution is the uniform one, which corresponds to .
It is clear that as and that Condition (25) holds.
Let us begin with the case where the prior distribution is known; the case of unknown will be considered later, in Section 4.
The type II error probability over of the MAP test (23) is defined as follows:
Our goal is to study the average type II error probability
where here and below .
Denote for brevity
and
The next theorem shows that is a critical signal/noise ratio. Roughly speaking, this means that if
for any given , then the MAP multilevel test cannot discriminate between and . Otherwise, if
for some , then reliable testing is possible.
In the next theorem, stands for the expectation w.r.t. .
Theorem 3.
3.2 Multilevel Bayes tests
To construct these tests, let us consider the following statistics:
When all , in view of Theorem 2, these random variables are approximated by the family of independent and identically distributed random variables , defined by (16). An important property of this family is provided by (13), which is used in our construction of multilevel Bayes tests. More precisely, the multilevel Bayes test rejects if
where is value of .
The type II error probability over (see (24)) is defined by
and our goal is to analyze the average type II error probability
Theorem 4.
4 Adaptive multilevel tests
The main drawback of the MAP and Bayes tests is their dependence on the prior distribution , which is hardly ever known in practice. Therefore, our next goal is to construct a test that, on the one hand, does not depend on but, on the other hand, has a nearly optimal critical signal-to-noise ratio.
In order to simplify our presentation, we will deal with the class of prior distributions defined by (26). The entropy of obviously satisfies
(36) 
and therefore we denote for brevity
(37) 
Corollary 1.
If for some and
then
If for some
then
In order to construct an adaptive test, let us compute a nearly minimal function in (21). We begin with
and then iterate this function times
Finally, for given , define
(38) 
Since , it is clear that
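The iteration of the logarithm described above can be sketched as follows; the starting points and the number of iterations are illustrative.

```python
import math

def iterated_log(x, m):
    # Apply the logarithm m times: ln_m(x) = ln(ln(...ln(x)...)).
    for _ in range(m):
        if x <= 0.0:
            raise ValueError("iterated logarithm leaves its domain")
        x = math.log(x)
    return x

# ln(ln(e^e)) = ln(e) = 1, up to floating-point error.
assert abs(iterated_log(math.e ** math.e, 2) - 1.0) < 1e-9
# Iterated logarithms grow extremely slowly.
assert iterated_log(10.0 ** 100, 2) < 6.0
```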