The literature on non-parametric monotonicity testing usually deals with the model
\[ Y=f(X)+\varepsilon, \]
where $Y$ is a scalar dependent random variable, $X$ a scalar independent random variable, $f$ an unknown function, and $\varepsilon$ an unobserved scalar random variable with $\mathbf{E}(\varepsilon\mid X)=0$. We are interested in testing the null hypothesis that $f$ is increasing against the alternative that there are $x_1$ and $x_2$ such that $x_1<x_2$ and $f(x_1)>f(x_2)$. The decision is to be made based on an i.i.d. sample $(X_1,Y_1),\dots,(X_n,Y_n)$ from the distribution of $(X,Y)$. Typical applications of monotonicity testing are related to econometric models; see, e.g., Chetverikov.
Usual approaches to this problem have simple heuristic ideas and assumptions at their core. Thus, the tests proposed by Gijbels et al. and by Ghosal, Sen, and van der Vaart are based on the signs of differences of the observations. Hall and Heckman developed a test based on the slopes of local linear estimates of $f$. Along with these papers we can cite Schlee; Bowman, Jones, and Gijbels; Dümbgen and Spokoiny; Durot; Baraud, Huet, and Laurent; Wang and Meyer; and Chetverikov. As to typical assumptions about $f$, it is often assumed that $f$ is a Lipschitz function, i.e.,
\[ |f(x)-f(y)|\le L|x-y|\quad\text{for all }x,y, \]
where the constant $L$ may be known or unknown.
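The sign-based idea is easy to illustrate numerically. The sketch below is not the exact statistic of the cited papers; it simply counts the fraction of "discordant" pairs, i.e., pairs ordered in $x$ but reversed in $y$, which tends to be larger when the regression function has a decreasing piece. The test function, bump depth, noise level, and sample size are all illustrative choices.

```python
import random

def sign_statistic(xs, ys):
    """Fraction of discordant pairs: pairs i < j with
    (y_j - y_i) * (x_j - x_i) < 0, i.e., ordered in x but reversed in y."""
    discordant = 0
    total = 0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            if xs[i] == xs[j]:
                continue
            total += 1
            if (ys[j] - ys[i]) * (xs[j] - xs[i]) < 0:
                discordant += 1
    return discordant / total

random.seed(0)
n = 200
x = [random.random() for _ in range(n)]
# Increasing regression function f(x) = x with small noise:
y_mono = [xi + 0.02 * random.gauss(0, 1) for xi in x]
# Same function with a decreasing bump around x = 0.5:
y_bump = [xi - 2.0 * max(0.0, 0.1 - abs(xi - 0.5)) + 0.02 * random.gauss(0, 1)
          for xi in x]

print(sign_statistic(x, y_mono), sign_statistic(x, y_bump))
```

Under the alternative the discordance fraction is visibly larger, which is the heuristic these sign-based tests exploit.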
In this paper, we look at the problem of monotonicity testing from a slightly different and less intuitive viewpoint. As we will see below, our approach permits, in particular, understanding the links between this problem and the detection of sparse vectors, and constructing new powerful tests. In order to simplify technical details and to avoid supplementary assumptions, we begin with monotonicity testing of an unknown function $f\colon[0,1]\to\mathbf{R}$ in the so-called white noise model. So, it is assumed that we have at our disposal the noisy data
\[ Y(dt)=f(t)\,dt+\sigma W(dt),\quad t\in[0,1], \tag{1} \]
where $W$ is a standard white Gaussian noise and $\sigma>0$ is a known noise level. With the help of these observations we want to test
the null hypothesis $H_0\colon f$ is increasing

vs. the alternative $H_1\colon f(x_1)>f(x_2)$ for some $x_1<x_2$.
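For numerical experiments, the continuous white noise model can be discretized. Below is a minimal sketch, assuming a standard Euler scheme on $n$ equal bins; the test function $f$, the grid size, and the noise level are illustrative choices, not taken from the text.

```python
import math
import random

def simulate_white_noise_model(f, sigma, n, seed=None):
    """Euler discretization of Y(dt) = f(t) dt + sigma W(dt): the increment
    over the k-th bin of width 1/n is f(t_k)/n + sigma * sqrt(1/n) * N(0, 1)."""
    rng = random.Random(seed)
    dt = 1.0 / n
    t = [(k + 0.5) * dt for k in range(n)]  # bin midpoints
    dY = [f(tk) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1) for tk in t]
    return t, dY

t, dY = simulate_white_noise_model(lambda u: u * u, sigma=0.01, n=1024, seed=1)
# The sum of the increments estimates the integral of f over [0, 1]
# (here 1/3) up to a Gaussian error of size sigma.
print(sum(dY))
```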
Our approach to this problem is based on estimating the following linear functionals:
\[ \Phi_h(x)=\frac1h\Bigl[\frac1h\int_x^{x+h}f(t)\,dt-\frac1h\int_{x-h}^{x}f(t)\,dt\Bigr] \]
for all $(x,h)$ that are admissible, i.e., such that $h\le x\le 1-h$. It is clear that $\Phi_h(x)$ may be interpreted as an approximation of the derivative $f'(x)$ since
\[ \lim_{h\to0}\Phi_h(x)=f'(x) \]
for any given $x$ at which $f$ is differentiable.
With the help of (1), the functionals $\Phi_h(x)$ are estimated as follows:
\[ \hat\Phi_h(x)=\frac{1}{h^{2}}\Bigl[\int_x^{x+h}Y(dt)-\int_{x-h}^{x}Y(dt)\Bigr], \]
and these estimates admit the obvious representation
\[ \hat\Phi_h(x)=\Phi_h(x)+\sigma\sqrt{\frac{2}{h^{3}}}\,\xi(x,h),\tag{2} \]
where $\xi(x,h)$ is a standard Gaussian random variable.
Notice that if $H_0$ is true, then $\Phi_h(x)\ge0$ for all admissible $(x,h)$; otherwise ($H_1$ is true) there exists an admissible pair $(x,h)$ such that $\Phi_h(x)<0$. That is why in what follows we will focus on testing
\[ H_0\colon\ \Phi_h(x)\ge0\ \text{for all admissible }(x,h)\quad\text{vs.}\quad H_1\colon\ \Phi_h(x)<0\ \text{for some admissible }(x,h) \tag{3} \]
based on the observations (2).
Let us denote for brevity
In order to explain our approach to the problem (3), we begin with the simple case where $x$ and $h$ are given. So, we have to test the two composite hypotheses
Intuitively, the most powerful test with the type I error probability $\alpha$ rejects $H_0$ if
\[ \hat\Phi_h(x)\le-\sigma\sqrt{\frac{2}{h^{3}}}\,z_\alpha, \]
where $z_\alpha$ is the $\alpha$-value of the standard Gaussian distribution, i.e., a solution to
\[ \frac{1}{\sqrt{2\pi}}\int_{z_\alpha}^{\infty}e^{-u^{2}/2}\,du=\alpha. \]
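Numerically, the $\alpha$-value solving $\mathbf{P}(N(0,1)>z_\alpha)=\alpha$ is just the upper $\alpha$-quantile of the standard normal law. A minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

def z_value(alpha):
    """Upper alpha-point of the standard Gaussian: P(N(0,1) > z_alpha) = alpha."""
    return NormalDist().inv_cdf(1.0 - alpha)

print(z_value(0.05))  # ≈ 1.645
```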
Of course, there are many motivations for this test. In this paper, we make use of the so-called improper Bayes approach, assuming that $\Phi_h(x)$ in (2) is a random variable uniformly distributed on the interval $[0,T]$ if $H_0$ is true, and on $[-T,0]$ if $H_1$ is true. So, we observe a random variable with the probability density
Thus, we deal with simple hypothesis testing, and by the Neyman–Pearson lemma, the most powerful test at significance level $\alpha$ rejects $H_0$ when
Taking the limit in this equation as $T\to\infty$, we arrive at the improper Bayes test that rejects $H_0$ if
In what follows, we will make use of the following asymptotic result:
Along with this method, one can apply the maximum likelihood (ML) or minimax approaches. Finally, all these methods result in (4), but their initial forms are different. For instance, the ML test rejects $H_0$ when
Let us emphasize that from the viewpoint of testing $H_0$ vs. $H_1$ there is no difference between (8) and (5), but the aggregation of these methods for testing the hypotheses in (3) results in different tests. In this paper, we make use of the tests defined by (5) since their aggregation is simple.
In order to aggregate the statistical tests, we will make use of the so-called multi-resolution approach, assuming that
- the bandwidth $h$ belongs to the following set of dyadic bandwidths:
\[ \mathcal{H}=\bigl\{2^{-k},\ k=1,2,\dots\bigr\}; \]
- the point $x$ belongs to the family of dyadic grids $\mathcal{X}_h$ defined by
\[ \mathcal{X}_h=\bigl\{kh,\ k=1,\dots,1/h-1\bigr\}. \]
There are simple arguments motivating these assumptions.
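The dyadic structure is easy to generate. A sketch, assuming bandwidths $h=2^{-k}$ and grids $\{h,2h,\dots\}$ strictly inside $(0,1)$; the exact index ranges are assumptions here, not taken from the omitted displays.

```python
def dyadic_bandwidths(k_max):
    """Dyadic bandwidths h = 2^{-k}, k = 1, ..., k_max (assumed indexing)."""
    return [2.0 ** (-k) for k in range(1, k_max + 1)]

def dyadic_grid(h):
    """Dyadic grid {h, 2h, ...} strictly inside (0, 1) attached to bandwidth h."""
    n = int(round(1.0 / h))
    return [k * h for k in range(1, n)]

for h in dyadic_bandwidths(3):
    print(h, dyadic_grid(h))
```

Note that the number of grid points doubles (roughly) from one resolution level to the next, which is what makes the multi-resolution aggregation below non-trivial.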
2 Testing at a given resolution level
Let us fix some bandwidth $h\in\mathcal{H}$ and denote for brevity $\theta_k=\Phi_h(kh)$, $k=1,\dots,n$, where $n=1/h-1$. In this section, we focus on testing
the null hypothesis $H_0\colon\ \theta_k\ge0$ for all $k$

vs. the alternative $H_1\colon\ \theta_k<0$ for some $k$.
In order to construct Bayes and MAP tests, we assume that for a given amplitude $A>0$:
- the set $\{\theta_k\}$ contains only one negative entry, equal to $-A$;
- its position is an unobservable random variable uniformly distributed on $\{1,\dots,n\}$.
2.1 A Bayes test
With the arguments used in deriving (5), we get the following Bayes test: $H_0$ is rejected if
where the quantity involved is defined by (6). The critical level is defined in a conservative way, i.e., as a solution to
where $\mathbf{P}_0$ stands for the measure generated by the observations defined by (2) with all $\theta_k=0$,
and, thus, the critical level may be computed as a solution to
Therefore our next step is to study the following random variable:
2.1.1 A weak approximation
We begin with computing the weak limit of this statistic as $n\to\infty$. Let us first recall some standard definitions.
Let $\xi_1$ and $\xi_2$ be independent copies of a random variable $\xi$. Then $\xi$ is said to be stable if for any constants $a>0$ and $b>0$ the random variable $a\xi_1+b\xi_2$ has the same distribution as $c\xi+d$ for some constants $c>0$ and $d$.
In the class of stable distributions there is an interesting sub-class, the so-called stable distributions with the index of stability $\alpha=1$. For brevity, we will call them 1-stable distributions. The formal definition of this class is as follows: a random variable $\zeta$ is called 1-stable if its characteristic function can be written as
\[ \mathbf{E}\exp(it\zeta)=\exp\Bigl\{i\mu t-c|t|\Bigl[1+i\beta\frac{2}{\pi}\operatorname{sign}(t)\log|t|\Bigr]\Bigr\},\tag{11} \]
where $\mu\in\mathbf{R}$, $c>0$, and $\beta\in[-1,1]$.
The next theorem shows that the weak limit of this statistic is a 1-stable distribution.
where $\gamma$ is Euler's constant.
In other words, this theorem states that
where $\zeta$ is a 1-stable random variable (see (11)) with
The probability distribution of $\zeta$ has the following invariance property, which plays an important role in the aggregation of Bayes tests.
Let $\zeta_1,\zeta_2,\dots$ be i.i.d. copies of $\zeta$ and let $\pi$ be a probability distribution on $\{1,2,\dots\}$ with a bounded entropy. Then
\[ \sum_{k}\pi_k\zeta_k\ \overset{d}{=}\ \zeta+\frac{2c\beta}{\pi}\,\mathcal{E}(\pi),\qquad \mathcal{E}(\pi)=-\sum_{k}\pi_k\log\pi_k. \tag{13} \]
2.1.2 A strong approximation
Theorem 1 is not very informative about the tail behavior of the limiting distribution. However, for obtaining a good approximation of the critical level in (10), this behavior may play a crucial role because in some applications $\alpha$ may be very small, and so the Monte-Carlo method and Theorem 1 may not work well in this case.
Therefore our goal is to find an approximation that controls well the tail of the distribution. Fortunately, this can be done easily. It is clear that
where $U_1,\dots,U_n$ are i.i.d. random variables uniformly distributed on $[0,1]$. Hence
where $U_{(1)}\le\dots\le U_{(n)}$ is the non-decreasing rearrangement of $U_1,\dots,U_n$. The distribution of these order statistics can be easily obtained with the help of Pyke's theorem:
\[ \bigl(U_{(1)},\dots,U_{(n)}\bigr)\ \overset{d}{=}\ \Bigl(\frac{\Gamma_1}{\Gamma_{n+1}},\dots,\frac{\Gamma_n}{\Gamma_{n+1}}\Bigr), \]
where $\Gamma_k=e_1+\dots+e_k$ is the cumulative sum of i.i.d. standard exponentially distributed random variables $e_1,e_2,\dots$
In other words, $U_{(k)}\overset{d}{=}\Gamma_k/\Gamma_{n+1}$. With this in mind, we obtain
Next, we make use of the following simple equations:
So, substituting them into (15), we arrive at the following theorem.
where is such that
computed with the help of the Monte-Carlo method. The figure shows that even for small $n$ the approximation (17) works very well.
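Pyke's representation of uniform order statistics through cumulative sums of exponentials is also convenient for simulation. A minimal sketch; the sample size and seed are illustrative.

```python
import random

def uniform_order_stats_via_exponentials(n, rng):
    """Pyke's representation: (Gamma_1/Gamma_{n+1}, ..., Gamma_n/Gamma_{n+1})
    has the same distribution as the order statistics of n i.i.d. Uniform(0,1)
    variables, where Gamma_k = e_1 + ... + e_k with i.i.d. standard
    exponentials e_1, e_2, ..."""
    gamma = []
    total = 0.0
    for _ in range(n + 1):
        total += rng.expovariate(1.0)
        gamma.append(total)
    g_last = gamma[-1]
    return [g / g_last for g in gamma[:-1]]

rng = random.Random(42)
u = uniform_order_stats_via_exponentials(10, rng)
print(u)  # already sorted, all values in (0, 1)
```

One Monte-Carlo replication of the statistic at a given resolution level thus costs only $n+1$ exponential draws, which is what makes the method practical.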
2.2 A MAP test
Similarly to the Bayes test, we can construct the MAP test that rejects $H_0$ if
where the critical level is defined as a solution to
Similarly to (9), it may be obtained from
3 Multi-level testing
3.1 MAP multi-level tests
A heuristic idea behind our construction of multi-level MAP tests for (3) is related to (19) and consists in computing a positive deterministic function that bounds from above the random process $\Gamma_k=e_1+\dots+e_k$, where $e_1,e_2,\dots$ are independent standard exponential random variables. In other words, we are looking for a function such that
would be a non-degenerate random variable.
Let us define the corresponding $\alpha$-value as the solution to
Therefore, using (19) together with the upper bound on the random process, we arrive at the test that rejects $H_0$ if
The computation of this bound is based on the following simple fact. Assume that
The proof of this identity is very simple. Indeed,
Let us denote
then (21) can be rewritten in the following form:
Let $\pi$ be a probability distribution on the set of bandwidths $\mathcal{H}$. Then
Summarizing (see (20)), the MAP multi-level test rejects $H_0$ if
and $\pi$ is a probability distribution on $\mathcal{H}$.
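Schematically, a multi-level test combines per-level tests whose thresholds are weighted by the prior $\pi$ over bandwidths. The sketch below is a Bonferroni-style stand-in, not the paper's exact thresholds: level $h$ is allotted significance $\alpha\pi(h)$, and the aggregated test rejects as soon as some level does. The levels, prior, and statistic values are illustrative.

```python
from statistics import NormalDist

def multilevel_test(stats_by_level, prior, alpha):
    """Bonferroni-style multi-level aggregation (a schematic stand-in):
    level h rejects if its standardized statistic falls below
    -z_{alpha * prior[h]}; the multi-level test rejects if any level does."""
    nd = NormalDist()
    for h, stat in stats_by_level.items():
        threshold = -nd.inv_cdf(1.0 - alpha * prior[h])
        if stat < threshold:
            return True  # reject H0
    return False

levels = [0.5, 0.25, 0.125]
prior = {h: 1.0 / len(levels) for h in levels}
print(multilevel_test({0.5: -1.0, 0.25: -2.0, 0.125: -4.0}, prior, 0.05))
print(multilevel_test({0.5: 0.5, 0.25: -1.0, 0.125: 0.0}, prior, 0.05))
```

Splitting the level $\alpha$ proportionally to $\pi$ is exactly where the entropy of the prior enters the picture: very spread-out priors force more negative per-level thresholds.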
In order to study the performance of this method, we analyze the type II error probability. For given $h$ and $A$, define
In other words, we consider the situation where all shifts in (2) are positive except only one. The position of the negative entry and its amplitude are unknown, but it is assumed that they are random variables with the distribution defined by
where $\pi$ is a probability distribution with a bounded entropy.
In what follows, we will deal with priors with large uncertainties assuming that , or more precisely, but such that
In particular, we will consider the following class of prior distributions:
This class is characterized by the bandwidth and the probability density , which is assumed to be continuous, bounded, and with
A typical example of such a distribution is the uniform one, which corresponds to
It is clear that as and that Condition (25) holds.
Let us begin with the case where the prior distribution is known; the case of an unknown prior will be considered later, in Section 4.
The type II error probability over of the MAP test (23) is defined as follows:
Our goal is to study the average type II error probability
where here and below .
Denote for brevity
The next theorem shows that this quantity is a critical signal-to-noise ratio. Roughly speaking, this means that if
for any given , then the MAP multi-level test cannot discriminate between and . Otherwise, if
for some , then reliable testing is possible.
In the next theorem, stands for the expectation w.r.t. .
Suppose (25) holds. If for some and
If for some
3.2 Multi-level Bayes tests
To construct these tests, let us consider the following statistics:
When all $\theta_k=0$, in view of Theorem 2, these random variables are approximated by a family of independent and identically distributed random variables defined by (16). An important property of this family is provided by (13), which is used in our construction of multi-level Bayes tests. More precisely, the multi-level Bayes test rejects $H_0$ if
where the critical value is defined as the corresponding $\alpha$-value.
The type II error probability over (see (24)) is defined by
and our goal is to analyze the average type II error probability
Suppose (25) holds and for some and
If for some
4 Adaptive multi-level tests
The main drawback of the MAP and Bayes tests is their dependence on the prior distribution, which is rarely known in practice. Therefore our next goal is to construct a test that, on the one hand, does not depend on the prior but, on the other hand, has a nearly optimal critical signal-to-noise ratio.
In order to simplify our presentation, we will deal with the class of prior distributions defined by (26). The entropy of such a prior obviously satisfies
and therefore denote for brevity
If for some and
If for some
In order to construct an adaptive test, let us compute a nearly minimal function in (21). We begin with
and then iterate this function a given number of times
Finally, for given , define
Since , it is clear that
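The construction above iterates the logarithm. A small helper showing the $m$-fold iteration; the name `iterated_log` and the domain check are ours, and the exact constants of the adaptive construction are in the omitted displays.

```python
import math

def iterated_log(x, m):
    """m-fold iterated logarithm log(log(...log(x)...)), defined for x large
    enough that every intermediate value stays positive."""
    for _ in range(m):
        if x <= 0:
            raise ValueError("iterated log undefined: intermediate value <= 0")
        x = math.log(x)
    return x

print(iterated_log(math.e ** math.e, 2))  # log(log(e^e)) ≈ 1
```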