1 Introduction
The idea of using quantiles in classification is relatively recent and largely unexplored. The median classifier for high-dimensional problems proposed by Hall, Titterington and Xue (2009), which calculates the distance of the coordinates of a multivariate data point from componentwise medians (rather than centroids), is particularly advantageous when data exhibit heavy-tailed or skewed distributions. Building on Hall, Titterington and Xue's (2009) idea, Hennig and Viroli (2016a) proposed quantile classifiers, which hinge on the sum of distances from componentwise quantiles at a generic quantile level. The ensemble quantile classifier by Lai and McLeod (2020) assigns weights to the componentwise distances by minimising a regularised loss function, where the regularisation parameter is determined by cross-validation.
In all the studies mentioned above, quantiles are calculated marginally for each input variable (componentwise). This implies that their calculation ignores the possible interdependence among variables. In this study, we consider directional quantiles for multivariate distributions (Kong and Mizera, 2012) to address this limitation. Our choice is motivated by several reasons. First, as already mentioned, the dependence among variables is taken into account by computing linear combinations of input variables. Second, directional quantiles have a simple interpretation, since the projections' weights embody the relative importance of the variables involved in the classification problem. Finally, in the special case of the canonical directions (with the number of directions equal to the number of variables), the use of directional quantiles leads to the componentwise quantile classifier (Hennig and Viroli, 2016a), and thus inherits its asymptotic optimality properties, as shown in the Appendix. Directional quantiles have already found application in risk classification problems (Geraci et al., 2020) and proved to be a worthwhile alternative to risk classification based on componentwise quantile thresholds.
In general, the application of our methods does not require any assumption on the shape of the population distributions. We derive asymptotic theoretical properties of the proposed classifier under the assumption that the distributions of the alternative populations differ by at most a location shift. While this assumption may be unrealistic in practice, empirical results support the merit of the proposed classifier even when the distributions differ in shape and not just in location.
The rest of the paper is organised as follows. In the next section, we introduce notation and basic definitions, followed by our proposal of directional quantile classifiers. Theoretical results are stated in Section 3. We report the results of a simulation study in Section 4 and of a real data analysis in Section 5. Concluding remarks are given in Section 6. All proofs of theoretical results are reported in Appendix A. A software implementation of our approach can be found in the package Qtools (Geraci, 2016), freely available on the Comprehensive R Archive Network (R Core Team, 2020).
2 Methods
2.1 Notation and definitions
Let X^(1) and X^(2) denote two p-variate random variables with absolutely continuous distribution functions F_1 and F_2, defined on the same space for two populations Π₁ and Π₂, respectively. The marginal distributions of the components of X^(k) are denoted by F_kj, for k = 1, 2 and j = 1, …, p. Further, I(·) denotes the indicator function, which is equal to 1 if its argument is true, and 0 otherwise.
Our goal is to assign a new observation x to either Π₁ or Π₂ according to how close the point is to one or the other population. In quantile-based classification (Hennig and Viroli, 2016a), the distance is first calculated for each component of x using the asymmetrically weighted loss function
Φ_θ(x_j − q_kj(θ)),   with   Φ_θ(v) = v(θ − I(v < 0)),     (1)
for j = 1, …, p and k = 1, 2, where q_kj(θ) is the componentwise quantile at level θ for the kth population, which can be obtained by inversion of F_kj. Subsequently, x is assigned to Π₂ if the discrepancy
D(x) = Σ_{j=1}^p { Φ_θ(x_j − q_1j(θ)) − Φ_θ(x_j − q_2j(θ)) }     (2)
is positive, and to Π₁ otherwise. The quantile classifier reduces to the componentwise median classifier of Hall, Titterington and Xue (2009) for θ = 0.5. An extension of (2) to more than two populations is straightforward.
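A minimal sketch of the componentwise rule in (1)–(2), assuming the standard quantile ("check") loss; names such as `cqc_predict` are illustrative and not taken from the paper's software:

```python
import numpy as np

def check_loss(v, theta):
    """Asymmetrically weighted (quantile) loss: v*(theta - I(v < 0))."""
    return v * (theta - (v < 0))

def cqc_predict(x, q1, q2, theta):
    """Componentwise quantile classifier: assign x to the population whose
    componentwise theta-quantiles (q1 or q2) are closer in quantile loss."""
    d1 = check_loss(x - q1, theta).sum()
    d2 = check_loss(x - q2, theta).sum()
    return 1 if d1 <= d2 else 2

# Toy usage: population 2 is a location-shift version of population 1.
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, size=(500, 3))
x2 = rng.normal(2.0, 1.0, size=(500, 3))
theta = 0.5  # theta = 0.5 gives the median classifier
q1 = np.quantile(x1, theta, axis=0)
q2 = np.quantile(x2, theta, axis=0)
label = cqc_predict(np.array([0.1, -0.2, 0.3]), q1, q2, theta)
```

For `theta = 0.5` this is the median classifier of Hall, Titterington and Xue (2009); other levels weight over- and under-shoots asymmetrically.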
The classification rule based on (2) does not acknowledge the possible interdependence among the variables, since quantiles are obtained marginally for each variable. We address this limitation by using directional quantiles for multivariate data (Kong and Mizera, 2012). We now explain our idea informally and, in the next section, give a rigorous treatment.
Define u to be a vector with unit norm in R^p. Throughout this paper, our focus will be on the projected random variables Z^(k) = u'X^(k), k = 1, 2, defined on R. By assumption, the Z^(k)'s are continuous. We denote the corresponding distribution and density functions by G_k and g_k, respectively.
Our goal is to develop a classifier where the quantities in (1) are opportunely redefined on the corresponding projections along u to capture the multivariate nature of the distributions, namely
Φ_θ(u'x − q_k(θ, u)),     (3)
for k = 1, 2, where q_k(θ, u) is the θth quantile of Z^(k). The latter is obtained by inverting G_k and can be recognised as the θth directional quantile of X^(k) in the direction u (Kong and Mizera, 2012).
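Empirically, a directional quantile is simply the marginal quantile of the projected sample. A minimal sketch (function name illustrative):

```python
import numpy as np

def directional_quantile(X, u, theta):
    """theta-th quantile of the projections u'x_i, i.e. the empirical
    directional quantile in direction u (Kong and Mizera, 2012)."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)          # normalise the direction
    return np.quantile(X @ u, theta)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
# A canonical direction recovers the marginal quantile of the first variable.
q_dir = directional_quantile(X, [1.0, 0.0], 0.25)
q_marg = np.quantile(X[:, 0], 0.25)
```

The canonical-direction check above mirrors the remark in the Introduction: with the standard basis as directions, directional quantiles reduce to componentwise quantiles.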
By working with projections, we essentially reduce a multivariate problem to a univariate one. Clearly, one difficulty to address is how many, and which, directions should be considered. To this end, we should note that not all directions are equally useful for classification. To exemplify, consider Figure 1, which depicts bivariate normal samples from two independent populations, Π₁ and Π₂, centred at (1,1) and (3,3), respectively, and with the same variance. We want to assign a new observation x to one of the two populations. The log-densities at x of two bivariate normal distributions, with sample means and covariance matrices estimated separately from the two samples, suggest that x is more likely to have been generated from Π₁ than from Π₂. Now compute the distances (3) for four normalised directions. The results are reported in Table 1. Based on a principle of minimum distance, we assign x to Π₁, consistently with the maximum likelihood principle, for three, though not all four, directions.
Table 1. Distances (3) of x from the two populations along four normalised directions.
0.27  0.01
1.58  0.07
0.30  0.08
0.03  0.23
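The example can be reproduced in spirit with the following sketch; the samples, the test point, and the four directions are illustrative stand-ins, since the exact values behind Figure 1 and Table 1 are not given here:

```python
import numpy as np

def check_loss(v, theta):
    """Quantile loss rho_theta(v) = v*(theta - I(v < 0))."""
    return v * (theta - (v < 0))

rng = np.random.default_rng(2)
s1 = rng.multivariate_normal([1, 1], np.eye(2), size=200)  # population 1
s2 = rng.multivariate_normal([3, 3], np.eye(2), size=200)  # population 2
x = np.array([1.2, 1.4])   # hypothetical new point, close to population 1
theta = 0.5

directions = np.array([[1, 0], [0, 1], [1, 1], [1, -1]], dtype=float)
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

votes = []
for u in directions:
    q1, q2 = np.quantile(s1 @ u, theta), np.quantile(s2 @ u, theta)
    d1 = check_loss(x @ u - q1, theta)
    d2 = check_loss(x @ u - q2, theta)
    votes.append(1 if d1 <= d2 else 2)
```

The direction orthogonal to the line joining the two centres carries essentially no class information, which is exactly why not all directions are equally useful.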
2.2 Directional quantile classifier
Let Θ be a set of m distinct quantile levels on (0, 1). Also, define the set U_h containing the normalised directions associated with θ_h ∈ Θ, h = 1, …, m, and let U = U_1 ∪ … ∪ U_m. (Note that for convenience one may set U_h = U for all h.)
As mentioned in the previous section, we need to be wary of particular directions that may lead us to a classification error. Therefore, we introduce weights associated with each direction to decrease (or increase) their relative importance. Let w denote the vector of all such weights. We propose the discrepancy
D(x; w) = Σ_{h=1}^m Σ_{u ∈ U_h} w_{h,u} { Φ_{θ_h}(u'x − q_1(θ_h, u)) − Φ_{θ_h}(u'x − q_2(θ_h, u)) },     (4)
where Φ_θ is defined in (3). Then our directional quantile classifier (DQC) assigns the observation x to Π₂ if the discrepancy is positive, or to Π₁ otherwise. Note that if the weights are all equal to one and the directions are the p vectors of the standard basis in R^p, then (4) reduces to (2).
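A sketch of the weighted discrepancy in (4), with user-supplied weights (their data-driven choice is discussed next; function names are illustrative):

```python
import numpy as np

def check_loss(v, theta):
    """Quantile ('check') loss rho_theta(v) = v*(theta - I(v < 0))."""
    return v * (theta - (v < 0))

def dqc_discrepancy(x, X1, X2, directions, theta, weights):
    """Weighted sum over directions of the difference in quantile losses
    from the two training samples; positive values favour population 2."""
    d = 0.0
    for w, u in zip(weights, directions):
        u = np.asarray(u, float) / np.linalg.norm(u)
        q1 = np.quantile(X1 @ u, theta)
        q2 = np.quantile(X2 @ u, theta)
        d += w * (check_loss(x @ u - q1, theta) - check_loss(x @ u - q2, theta))
    return d

rng = np.random.default_rng(7)
X1 = rng.normal(0.0, 1.0, (300, 2))
X2 = rng.normal(2.0, 1.0, (300, 2))
dirs = [[1, 0], [0, 1]]          # canonical directions
w = [1.0, 1.0]                   # unit weights: recovers the rule in (2)
d = dqc_discrepancy(np.array([0.1, 0.2]), X1, X2, dirs, 0.5, w)
```

With unit weights and the canonical directions, as in this usage example, the discrepancy coincides with the componentwise rule (2).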
A difficulty associated with the calculation of (4) is the selection of the quantile levels, directions, and weights in the training data that give the best performance on the test data. For some prior probabilities π₁ and π₂, let
Λ(θ, u, w) = π₁ Pr{D(X^(1); w) ≤ 0} + π₂ Pr{D(X^(2); w) > 0}     (5)
denote the population probability of correct classification by the DQC. Note that maximising (5) is equivalent to minimising the theoretical misclassification rate. For any given level θ and direction u, minimising the misclassification rate is equivalent to minimising
(6)
In the general problem with more than two populations, the minimum misclassification rate is obtained when
(7)
Given a sample of n observations and corresponding class labels, we aim to solve
(8) 
Problem (8) may seem daunting, but fortunately we can solve for the weights rather easily. Given the quantile levels and the directions, problem (8) is linear in the weights with a unit-norm constraint and can be minimised by using the method of Lagrange multipliers, which yields a closed-form solution for the weight vector.
We now turn to how to choose the directions and quantile levels. A crude solution would consist of a multidimensional grid search over all levels and directions. However, such a solution becomes computationally prohibitive even at modest dimensions. Thankfully, we are able to mitigate the computational cost of a naïve numerical solution with some theoretical results (Section 3); in particular, with Theorem 1, which guarantees that for each projection there exists (at least) one quantile level that leads to the optimal Bayes misclassification probability, and Theorem 2, which, conversely, identifies the best direction for a given quantile level. Unfortunately, a theoretical result for the simultaneous optimisation with respect to both quantile levels and directions does not exist. Nevertheless, we show that our DQC is asymptotically optimal (i.e. the misclassification rate goes to zero) as the number of directions increases with the sample size and the dimension (Theorem 3), under certain assumptions.
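To fix ideas, a grid search over quantile levels scored by five-fold cross-validation can be sketched as follows. For simplicity the directions are drawn uniformly from the unit sphere rather than from the optimal hyperplane of Theorem 2, whose construction needs quantities not reproduced here; all names are illustrative:

```python
import numpy as np

def sample_directions(p, m, rng):
    """m random directions on the unit sphere in R^p."""
    U = rng.normal(size=(m, p))
    return U / np.linalg.norm(U, axis=1, keepdims=True)

def cv_error(X, y, theta, directions, n_folds=5):
    """Five-fold cross-validated misclassification rate of an unweighted DQC."""
    rng = np.random.default_rng(0)          # fixed folds across theta values
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    errs = []
    for fold in folds:
        test = np.zeros(len(y), dtype=bool)
        test[fold] = True
        X1, X2 = X[~test & (y == 1)], X[~test & (y == 2)]
        wrong = 0
        for xi, yi in zip(X[test], y[test]):
            d = 0.0
            for u in directions:
                r1 = xi @ u - np.quantile(X1 @ u, theta)
                r2 = xi @ u - np.quantile(X2 @ u, theta)
                d += r1 * (theta - (r1 < 0)) - r2 * (theta - (r2 < 0))
            wrong += (1 if d <= 0 else 2) != yi
        errs.append(wrong / test.sum())
    return float(np.mean(errs))

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
y = np.repeat([1, 2], 100)
dirs = sample_directions(4, 20, rng)
grid = np.linspace(0.1, 0.9, 5)
best_theta = min(grid, key=lambda t: cv_error(X, y, t, dirs))
```

Keeping the fold assignment fixed across candidate levels makes the cross-validated errors directly comparable.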
In summary, there are different possible approaches, including randomly selecting one or more directions and using the optimal quantile levels associated with those directions, or spanning a grid of quantile levels and using the optimal directions associated with those quantiles. After some empirical investigation, we found that the following strategy gives satisfactory results in different settings. First, we define a grid of quantile levels spanning the unit interval and, for each of these values, randomly draw a set of normalised directions from the hyperplane that is identified as optimal according to Theorem 2. The performance of a DQC based on each single level is then evaluated using five-fold cross-validation. In the end, we use a single quantile level (optimal according to cross-validation), with the corresponding directions sampled from the optimal hyperplane. In particular, this strategy improves over the use of an asymptotically optimal quantile level when the sample size is small. Moreover, when the dimension is not too large, a similar strategy can be used to select an approximately optimal hyperplane.
3 Theoretical results
In this section, we present theoretical results concerning our DQC. The proofs of lemmas and theorems are reported in the Appendix.
3.1 Optimal quantile level
We derive the theoretical rate of correct classification as a function of the quantile level θ, for a given direction u. We assume two populations, although the results can be generalised to more.
Lemma 1.
For given θ and u, let G₁ be the distribution function of u'X^(1), with corresponding inverse G₁⁻¹, density g₁, and prior probability π₁, and let G₂ be the distribution function of u'X^(2), with corresponding inverse G₂⁻¹, density g₂, and prior probability π₂. The probability of correct classification of the directional quantile classifier is
(9)
Analogously, the theoretical misclassification rate is
(10)
Theorem 1.
Assume that the density functions g₁ and g₂ exist and are nonzero on the same compact domain. Further assume that there is a point x* with g₁(x*) = g₂(x*), so that g₁ > g₂ on one side of x* and g₁ < g₂ on the other side of x*. Then the quantile classifier using the quantile level that minimises the theoretical misclassification probability achieves the optimal Bayes misclassification probability.
The consistency of the classifier may be illustrated with an example. Consider a two-class decision problem where one population is a location-shift version of the other. Figure 2 shows two distributions with the same right skewness. The quantiles of the two populations at a given level are marked by dashed lines. The median classifier (Hall, Titterington and Xue, 2009) in the upper panel leads to a non-optimal misclassification probability equal to 0.30. However, the misclassification probability is reduced to 0.28 by setting θ to the optimal level.
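This behaviour is easy to check numerically. The sketch below uses two right-skewed log-normal populations differing by a location shift (an illustrative choice, not the distributions of Figure 2) and estimates, by Monte Carlo, the misclassification rate of the univariate quantile classifier over a grid of levels:

```python
import numpy as np

def error_rate(theta, z1, z2):
    """Monte Carlo misclassification rate of the univariate quantile classifier
    assigning z to population 1 iff rho_theta(z - q1) <= rho_theta(z - q2)."""
    q1, q2 = np.quantile(z1, theta), np.quantile(z2, theta)
    rho = lambda v: v * (theta - (v < 0))
    e1 = np.mean(rho(z1 - q1) > rho(z1 - q2))    # population-1 points misclassified
    e2 = np.mean(rho(z2 - q1) <= rho(z2 - q2))   # population-2 points misclassified
    return 0.5 * (e1 + e2)

rng = np.random.default_rng(4)
z1 = rng.lognormal(0.0, 1.0, 200_000)          # right-skewed population
z2 = rng.lognormal(0.0, 1.0, 200_000) + 1.0    # location-shifted copy
errs = {t: error_rate(t, z1, z2) for t in np.round(np.linspace(0.1, 0.9, 9), 2)}
# For these right-skewed populations, the best level lies below the median.
```

Here the median (θ = 0.5) is beaten by a lower quantile level by several percentage points of error, mirroring the 0.30 versus 0.28 comparison above.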
3.2 Optimal direction
The next lemma and theorem give the optimal direction that minimises the misclassification rate at a given quantile level.
Lemma 2.
Let z be a realisation of either of the two projected variables; then
where and , .
Theorem 2.
Let X be a p-variate random variable, and let u be a vector of constants with unit norm. We assume that the projection u'X is continuous with probability distribution function G. Moreover, assume that u'x ≥ q(θ), where q(θ) is the θth quantile of u'X. (Notice that there is no loss of generality with this assumption, since the case u'x < q(θ) can be reformulated as the former.) Under these assumptions, the normalised direction that minimises the misclassification error (5) is
(11)
The generalisation of Theorem 2 to more than two populations involves optimal directions for each of all the possible pairwise comparisons.
3.3 Asymptotic misclassification rate
In this section, we show that, under certain assumptions, the correct classification probability converges to unity as the number of dimensions grows to infinity along with the sample size and the number of projections. The proof follows a strategy similar to that used by Hall, Titterington and Xue (2009, Theorem 2), although our premises start from milder assumptions. In particular, the projections are not required to obey a mixing condition (Bradley, 2005), which is rather strict in practice. Our theorem is developed for any quantile level, unit weights, and any given direction. Thus, the asymptotic result holds for the subcomponents of the summation in (8), which are then weighted and summed to minimise the misclassification rate. Hence, the overall criterion inherits the optimal properties of its additive components.
As we did with the theorems in the previous sections, we present this theorem for two classes. Its extension to more classes requires contrasting each class against the remaining classes, consistently with (7).
Theorem 3.
Consider a set of directions sampled from a unit sphere and let n = n₁ + n₂, with n₁ and n₂ denoting the sample sizes of the two groups in the training set. Assume the following:
(i) For a constant C, … .
(ii) The projected variables have each the same distribution as the corresponding population projections. Moreover, … .
(iii) The first moments of the projections are uniformly bounded in a strong sense. This implies that … .
(iv) For some θ, the proportion of values for which … , multiplied by … , say π(θ), is of larger order than … , which goes to zero as n and p increase.
Under the previous assumptions, the directional quantile classifier makes the correct choice asymptotically. More specifically, the classifier makes the correct decision with probability converging to 1 if both n and the number of directions diverge with p, where the probability is computed under the assumption that x is drawn from either population.
4 Simulation study
We assessed the performance of the proposed classifier in a simulation study under three scenarios with two populations. In the first scenario, observations were generated independently from a multivariate Student's t distribution with 3 degrees of freedom, with either uncorrelated or correlated variables. In the second scenario, observations were generated as in the first scenario, but each variable was subsequently transformed to induce asymmetry. In both cases, the two populations differed by a location shift equal to 0.4. Finally, in the third scenario, observations were generated as in the first scenario, but each variable was subsequently transformed according to one of two different functions, depending on whether observations belonged to one or the other population.
Data were generated for each combination of overall sample size (with an equal number of observations in each class) and dimension. The scale matrix used in the multivariate t distribution with correlated variables was generated randomly for each replication using the function rcorrmatrix with default settings, as provided in the package clusterGeneration (Qiu and Joe, 2015; Joe, 2006). This resulted in non-constant pairwise correlations. Observations in the training and test datasets were generated in the same way. Data generation under each setting was replicated 100 times.
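For concreteness, data under the first scenario can be generated along the following lines (the identity scale matrix is used here in place of the random correlation matrix, since `rcorrmatrix` is an R facility; names and sizes are illustrative):

```python
import numpy as np

def rmvt(n, p, df, shift, rng):
    """n draws from a p-variate Student's t with df degrees of freedom,
    identity scale matrix, and a common location shift."""
    z = rng.normal(size=(n, p))
    chi = rng.chisquare(df, size=(n, 1))   # one chi-square draw per observation
    return z * np.sqrt(df / chi) + shift

rng = np.random.default_rng(5)
X1 = rmvt(100, 50, 3, 0.0, rng)   # population 1
X2 = rmvt(100, 50, 3, 0.4, rng)   # population 2, location shift 0.4
```

Sharing one chi-square draw across the p coordinates of each row is what makes the draw genuinely multivariate t rather than p independent univariate t variables.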
We compared the directional quantile classifier (DQC), in terms of misclassification rate on the test data, with the centroid classifier (Centroid) (Tibshirani et al., 2002), the median classifier (Median) (Hall, Titterington and Xue, 2009), the componentwise quantile classifier (CQC) (Hennig and Viroli, 2016a), the ensemble quantile classifier (EQC) (Lai and McLeod, 2020), Fisher's linear discriminant analysis (LDA), k-nearest neighbour (KNN) (Cover and Hart, 1967), penalised logistic regression (PLR) (Park and Hastie, 2008), support vector machines (SVM) (Cortes and Vapnik, 1995; Wang, Zhu and Zou, 2008), and the naïve Bayes classifier (Bayes) (Hand and Yu, 2001). Tuning parameters for PLR, KNN, and SVM were selected using cross-validation. For the CQC, the Galton correction was used to reduce skewness, and the optimal quantile level was selected by minimising the error rate on the training set (Hennig and Viroli, 2016a).
We used the package Qtools (Geraci, 2016, 2020) for the directional quantile classifier; the package quantileDA (Hennig and Viroli, 2016b) for the centroid, median, and componentwise quantile classifiers; the package eqc (Lai and McLeod, 2019) for the ensemble quantile classifier; the package MASS (Venables and Ripley, 2002) for linear discriminant analysis; the package class (Venables and Ripley, 2002) for k-nearest neighbour; the package e1071 (Meyer et al., 2019) for support vector machines and the naïve Bayes classifier; and the package stepPlr (Park and Hastie, 2018) for penalised logistic regression. All analyses were carried out in R version 4.0.0 (R Core Team, 2020).
The misclassification rates averaged over 100 replications for all simulation cases are reported in Tables 2–4. The results indicate that the performance of our proposed classifier improves as the sample size and dimension increase, in agreement with the theoretical results. In the first two scenarios, our classifier outperforms the competitors when variables are uncorrelated. When variables are correlated, the proposed classifier still performs very well, even if it is not uniformly the best. In the third scenario, where the class distributions have different shapes, the performance of our classifier is often, but not always, the best.
Uncorrelated  Correlated  
Dimension  
Sample size  
DQC  0.334  0.187  0.120  0.020  0.315  0.202  0.128  0.028 
Centroid  0.355  0.232  0.168  0.049  0.349  0.277  0.189  0.059 
Median  0.372  0.230  0.153  0.043  0.357  0.252  0.170  0.047 
CQC  0.362  0.273  0.220  0.180  0.367  0.284  0.222  0.177 
EQC  0.373  0.240  0.172  0.044  0.339  0.253  0.168  0.055 
LDA  0.365  0.382  0.295  0.313  0.245  0.252  0.308  0.339 
KNN  0.362  0.287  0.263  0.212  0.360  0.300  0.271  0.211 
PLR  0.348  0.199  0.134  0.023  0.275  0.154  0.103  0.025 
SVM  0.413  0.252  0.140  0.046  0.401  0.263  0.140  0.049 
Bayes  0.390  0.333  0.302  0.225  0.395  0.327  0.287  0.237 
Sample size  
DQC  0.306  0.145  0.089  0.015  0.283  0.146  0.076  0.017 
Centroid  0.325  0.181  0.114  0.025  0.331  0.210  0.132  0.036 
Median  0.334  0.194  0.129  0.032  0.339  0.211  0.136  0.033 
CQC  0.343  0.213  0.151  0.076  0.341  0.223  0.157  0.092 
EQC  0.337  0.213  0.135  0.039  0.329  0.193  0.138  0.040 
LDA  0.338  0.236  0.393  0.182  0.226  0.055  0.105  0.240 
KNN  0.346  0.214  0.184  0.102  0.325  0.223  0.191  0.108 
PLR  0.329  0.182  0.113  0.019  0.240  0.092  0.056  0.020 
SVM  0.370  0.176  0.106  0.032  0.382  0.153  0.069  0.034 
Bayes  0.367  0.284  0.227  0.179  0.371  0.291  0.242  0.179 
Sample size  
DQC  0.286  0.128  0.069  0.010  0.263  0.126  0.058  0.010 
Centroid  0.300  0.145  0.080  0.014  0.288  0.154  0.077  0.016 
Median  0.320  0.173  0.101  0.020  0.315  0.178  0.099  0.019 
CQC  0.327  0.176  0.108  0.023  0.320  0.183  0.103  0.025 
EQC  0.324  0.177  0.106  0.021  0.296  0.149  0.085  0.021 
LDA  0.302  0.160  0.106  0.367  0.196  0.027  0.000  0.036 
KNN  0.326  0.163  0.098  0.018  0.283  0.166  0.097  0.022 
PLR  0.301  0.161  0.104  0.013  0.194  0.044  0.018  0.008 
SVM  0.300  0.142  0.084  0.018  0.238  0.066  0.020  0.014 
Bayes  0.329  0.198  0.145  0.077  0.326  0.199  0.140  0.076 
Uncorrelated  Correlated  
Dimension  
Sample size  
DQC  0.313  0.170  0.096  0.052  0.306  0.169  0.095  0.059 
Centroid  0.323  0.212  0.145  0.097  0.330  0.220  0.140  0.113 
Median  0.334  0.206  0.147  0.105  0.334  0.215  0.140  0.106 
CQC  0.350  0.235  0.187  0.234  0.360  0.245  0.193  0.248 
EQC  0.340  0.180  0.089  0.015  0.333  0.178  0.102  0.019 
LDA  0.317  0.383  0.238  0.228  0.315  0.397  0.233  0.237 
KNN  0.382  0.275  0.210  0.064  0.364  0.282  0.213  0.078 
PLR  0.313  0.183  0.095  0.002  0.322  0.186  0.098  0.004 
SVM  0.330  0.224  0.150  0.021  0.332  0.240  0.150  0.021 
Bayes  0.378  0.281  0.220  0.153  0.377  0.272  0.223  0.161 
Sample size  
DQC  0.293  0.129  0.060  0.012  0.280  0.118  0.058  0.010 
Centroid  0.310  0.168  0.106  0.057  0.307  0.161  0.104  0.065 
Median  0.328  0.177  0.110  0.067  0.316  0.171  0.106  0.071 
CQC  0.317  0.173  0.110  0.135  0.314  0.160  0.113  0.137 
EQC  0.310  0.135  0.071  0.006  0.296  0.126  0.062  0.006 
LDA  0.301  0.218  0.374  0.084  0.281  0.203  0.395  0.089 
KNN  0.358  0.242  0.188  0.047  0.353  0.256  0.186  0.045 
PLR  0.300  0.163  0.079  0.001  0.284  0.153  0.078  0.001 
SVM  0.318  0.177  0.089  0.005  0.333  0.168  0.079  0.005 
Bayes  0.330  0.229  0.167  0.095  0.334  0.225  0.166  0.098 
Sample size  
DQC  0.273  0.097  0.038  0.000  0.265  0.097  0.035  0.000 
Centroid  0.282  0.119  0.059  0.007  0.275  0.116  0.056  0.008 
Median  0.295  0.128  0.074  0.017  0.286  0.124  0.069  0.019 
CQC  0.272  0.099  0.053  0.019  0.261  0.092  0.050  0.018 
EQC  0.267  0.088  0.035  0.001  0.244  0.079  0.032  0.001 
LDA  0.279  0.116  0.060  0.374  0.266  0.114  0.057  0.372 
KNN  0.323  0.206  0.140  0.016  0.310  0.207  0.140  0.015 
PLR  0.279  0.121  0.060  0.000  0.266  0.119  0.056  0.000 
SVM  0.283  0.109  0.046  0.000  0.274  0.107  0.044  0.000 
Bayes  0.273  0.129  0.080  0.020  0.266  0.125  0.077  0.021 
Uncorrelated  Correlated  
Dimension  
Sample size  
DQC  0.199  0.171  0.166  0.159  0.237  0.172  0.110  0.023 
Centroid  0.228  0.176  0.169  0.160  0.362  0.265  0.190  0.066 
Median  0.321  0.283  0.273  0.264  0.359  0.240  0.166  0.045 
CQC  0.236  0.112  0.087  0.073  0.371  0.279  0.215  0.181 
EQC  0.315  0.279  0.256  0.234  0.349  0.239  0.162  0.051 
LDA  0.277  0.450  0.253  0.161  0.248  0.270  0.298  0.349 
KNN  0.277  0.213  0.192  0.173  0.365  0.284  0.214  0.074 
PLR  0.259  0.252  0.213  0.173  0.317  0.189  0.100  0.003 
SVM  0.231  0.175  0.170  0.159  0.338  0.240  0.157  0.018 
Bayes  0.229  0.132  0.123  0.106  0.373  0.288  0.227  0.145 
Sample size  
DQC  0.188  0.162  0.165  0.166  0.195  0.133  0.071  0.016 
Centroid  0.214  0.167  0.167  0.166  0.336  0.212  0.128  0.033 
Median  0.314  0.287  0.285  0.275  0.341  0.215  0.132  0.032 
CQC  0.214  0.086  0.071  0.058  0.346  0.226  0.159  0.091 
EQC  0.296  0.254  0.256  0.241  0.334  0.198  0.132  0.039 
LDA  0.237  0.300  0.456  0.174  0.222  0.056  0.110  0.238 
KNN  0.246  0.204  0.191  0.184  0.351  0.251  0.189  0.044 
PLR  0.234  0.252  0.235  0.187  0.284  0.156  0.079  0.001 
SVM  0.221  0.169  0.170  0.166  0.323  0.174  0.083  0.004 
Bayes  0.187  0.112  0.105  0.095  0.343  0.232  0.169  0.092 
Sample size  
DQC  0.182  0.166  0.162  0.159  0.177  0.111  0.053  0.010 
Centroid  0.209  0.170  0.165  0.160  0.283  0.153  0.076  0.015 
Median  0.312  0.288  0.283  0.279  0.312  0.178  0.099  0.020 
CQC  0.203  0.069  0.052  0.041  0.316  0.182  0.104  0.024 
EQC  0.282  0.249  0.241  0.236  0.293  0.148  0.085  0.021 
LDA  0.212  0.194  0.212  0.474  0.194  0.027  0.000  0.032 
KNN  0.193  0.173  0.176  0.178  0.311  0.206  0.138  0.015 
PLR  0.213  0.201  0.226  0.237  0.266  0.118  0.057  0.001 
SVM  0.209  0.172  0.167  0.163  0.274  0.107  0.043  0.001 
Bayes  0.164  0.102  0.096  0.086  0.269  0.126  0.080  0.019 
5 Clinical trial on Crohn’s disease
We analyse data from a matched case-control study in first-degree relatives (FDRs) of Crohn's disease (CD) patients, originally published by Sorrentino et al. (2014). The goal of the study was to identify asymptomatic FDRs with early CD signs using several intestinal inflammatory markers. The latter included hemoglobin, erythrocyte sedimentation rate, C-reactive protein, fecal calprotectin, and average mature ileum score. In our analysis, we grouped subjects into two classes, one with signs of inflammation (subjects with early or frank CD) and one with normal values of the markers (subjects with no signs of inflammation, including healthy controls). In a separate analysis, we augment the dataset with 45 artificial markers generated from independent standard normal distributions to investigate the impact of uninformative noise on the performance of the DQC. We approach the data analysis with leave-one-out validation and evaluate the misclassification rate as the proportion of subjects that are misclassified when each is left out of the analysis.
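Leave-one-out validation as used here can be sketched generically; the hypothetical `fit_predict` argument stands in for any of the classifiers considered, and a nearest-centroid rule is used below purely as a stand-in:

```python
import numpy as np

def loo_error(X, y, fit_predict):
    """Leave-one-out misclassification rate: each subject is classified by a
    model trained on all remaining subjects."""
    n = len(y)
    wrong = 0
    for i in range(n):
        keep = np.arange(n) != i
        wrong += fit_predict(X[keep], y[keep], X[i]) != y[i]
    return wrong / n

# Toy usage with a nearest-centroid stand-in classifier.
def centroid_rule(Xtr, ytr, x):
    labels = np.unique(ytr)
    dists = [np.linalg.norm(x - Xtr[ytr == c].mean(axis=0)) for c in labels]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.repeat([0, 1], 30)
err = loo_error(X, y, centroid_rule)
```

Refitting the classifier n times keeps the test subject strictly out of training, which is what makes the leave-one-out error an honest estimate with small samples such as this one.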
We estimated the classification error for all the classifiers included in our simulation study (Section 4). The results are reported in Table 5. The proposed DQC outperforms its competitors in both the original and the noise-augmented versions of the dataset.
DQC  0.229  0.229 
Centroid  0.286  0.286 
Median  0.400  0.400 
CQC  0.314  0.343 
EQC  0.314  0.314 
LDA  0.257  0.543 
KNN  0.371  0.343 
PLR  0.286  0.343 
SVM  0.257  0.257 
Bayes  0.286  0.257 
6 Conclusions
We proposed directional quantile classifiers whose predictive ability is consistently good in both simulated and real data studies, on both small- and large-dimensional classification problems. In particular, the empirical results show that our approach either outperforms its competitors or, when this is not the case, its performance is still in the ballpark of the best classifiers. Such reliable behaviour across different scenarios is not shared by the other selected classifiers. Moreover, the directional quantile classifiers enjoy optimal theoretical properties under certain assumptions.
A limitation of the approach is that the number of directions needed to span a sphere with a regular grid becomes prohibitive already at modest dimensions. On the other hand, our theoretical results indicate that one can sample directions from an optimal hyperplane, thus reducing the computational burden without sacrificing the classifier's performance. Our strategy allows us to balance the importance of the quantile levels and directions used for classification by means of weights, which can be optimised using a convenient closed-form expression.
Appendix A  Proofs of Theorems
A.1 Proofs of Lemma 1 and Theorem 1
Proof.
The proofs of Lemma 1 and Theorem 1 follow the arguments given in Hennig and Viroli (2016a, Supplementary Material). Here, we briefly sketch the main idea. The optimal value of θ that minimises the theoretical misclassification probability can be obtained by setting the first derivative of (10) to zero, from which
By assumption, there exists a crossing point of the two densities. Hence, the identity above is satisfied because the two quantile functions are continuous in θ and converge to the lower and upper bounds of the domain for θ approaching 0 or 1, respectively. Furthermore, under the assumptions of Theorem 1, the optimal Bayesian classifier has a single decision boundary at the crossing point. ∎
A.2 Proof of Lemma 2
Proof.
Without loss of generality, assume . Let and consider three possible, distinct cases: , , and .
If , then
by definition. If , then
Finally, if , then
∎
A.3 Proof of Theorem 2
A.4 Proof of Theorem 3
Proof.
Let be the empirical quantile computed on the projected training data . We write
where . Let denote the vector of quantiles of , and put for and write . By the triangle inequality
where and satisfy , . Hence
where , and .
Given the convergence of the empirical quantiles to the respective population quantiles, it follows that
for any , where
Now define
Given , let denote the set of indices such that