The analysis of the stochastic behavior of empirical processes is a key ingredient in learning theory. The supremum of empirical processes is of particular interest, playing a central role in various application areas, including the theory of empirical processes, VC theory, and Rademacher complexity theory, to name only a few. Powerful Bennett-type concentration inequalities on the sup-norm of empirical processes introduced in Talagrand (1996) (see also Bousquet, 2002a) are at the heart of many recent advances in statistical learning theory, including local Rademacher complexities (Bartlett et al., 2005; Koltchinskii, 2011b) and related localization strategies (cf. Steinwart and Christmann, 2008, Chapter 4), which can yield fast rates of convergence on the excess risk.
These inequalities are based on the assumption of independent and identically distributed random variables, commonly assumed in the inductive setting of learning theory and thus implicitly underlying many prevalent machine learning algorithms such as support vector machines (Cortes and Vapnik, 1995; Steinwart and Christmann, 2008). However, in many cases the i.i.d. assumption breaks down, and substitutes for Talagrand's inequality are required. For instance, the i.i.d. assumption is violated when training and test data come from different distributions, or when data points exhibit (e.g., temporal) interdependencies (e.g., Steinwart et al., 2009). Both scenarios are typical in visual recognition, computational biology, and many other application areas.
Another example where the i.i.d. assumption is void, and the focus of the present paper, is the transductive setting of learning theory, where training examples are sampled independently and without replacement from a finite population, instead of being sampled i.i.d. with replacement. The learner in this case is provided with both a labeled training set and an unlabeled test set, and the goal is to predict the labels of the test points. This setting naturally appears in almost all popular application areas, including text mining, computational biology, recommender systems, visual recognition, and computer malware detection, since the samples are effectively constrained by being realized within a common global system. As an example, consider image categorization, an important task in visual recognition. An object of study here could be the set of all images disseminated on the internet, only some of which are already reliably labeled (e.g., by manual inspection by a human); the goal is to predict the unknown labels of the unlabeled images, in order to, e.g., make them accessible to search engines for content-based image retrieval.
From a theoretical view, however, the transductive learning setting is not yet fully understood. Several transductive error bounds were presented in a series of works (Vapnik, 1982, 1998; Blum and Langford, 2003; Derbeko et al., 2004; Cortes and Mohri, 2006; El-Yaniv and Pechyony, 2009; Cortes et al., 2009), including the first analysis based on global Rademacher complexities, presented in El-Yaniv and Pechyony (2009). However, the theoretical analysis of the performance of transductive learning algorithms still remains less illuminated than in the classic inductive setting: to the best of our knowledge, existing results do not provide fast rates of convergence in the general transductive setting. (An exception is the work of Blum and Langford (2003) and Cortes and Mohri (2006), which considers, however, the case where the Bayes hypothesis has zero error and is contained in the hypothesis class. This assumption is clearly too restrictive in practice, where the Bayes hypothesis usually cannot be assumed to be contained in the class.)
In this paper, we consider the transductive learning setting with arbitrary bounded nonnegative loss functions. The main result is an excess risk bound for transductive learning based on the localized complexity of the hypothesis class. This bound holds under general assumptions on the loss function and hypothesis class and can be viewed as a transductive analogue of Corollary 5.3 in Bartlett et al. (2005). The bound is very generally applicable with loss functions such as the squared loss and common hypothesis classes. By exemplarily applying our bound to kernel classes, we achieve, for the first time in the transductive setup, an excess risk bound in terms of the tail sum of the eigenvalues of the kernel, similar to the best known results in the inductive setting. In addition, we also provide new transductive generalization error bounds that take the variances of the functions into account and thus can yield sharper estimates.
The localized excess risk bound is achieved by proving two novel concentration inequalities for suprema of empirical processes when sampling without replacement. Their application goes far beyond the transductive learning setting: these concentration inequalities could serve as a fundamental mathematical tool in proving results in various other areas of machine learning and learning theory. For instance, arguably the most prominent example in machine learning of an empirical process where sampling without replacement is employed is cross-validation (Stone, 1974), where training and test folds are sampled without replacement from the overall pool of examples, and the new inequalities could help gain a non-asymptotic understanding of cross-validation procedures. However, the investigation of further applications of the novel concentration inequalities beyond the transductive learning setting is outside the scope of the present paper.
2 The Transductive Learning Setting and State of the Art
From a statistical point of view, the main difference between the transductive and inductive learning settings lies in the protocols used to obtain the training sample . Inductive learning assumes that the training sample is drawn i.i.d. from some fixed and unknown distribution on the product space , where is the input space and is the output space. The learning algorithm chooses a predictor from some fixed hypothesis set based on the training sample, and the goal is to minimize the true risk for a fixed, bounded, and nonnegative loss function .
We will use one of the two transductive settings considered in Vapnik (1998), which is also used in Derbeko et al. (2004) and El-Yaniv and Pechyony (2009). (The second setting assumes that both training and test sets are sampled i.i.d. from the same unknown distribution and that the learner is provided with the labeled training and unlabeled test sets. It is pointed out by Vapnik (1998) that any upper bound on in the setting we consider directly implies a bound also for the second setting.) Assume that a set consisting of arbitrary input points is given (without any assumptions regarding its underlying source). We then sample objects uniformly without replacement from (which makes the inputs in dependent). Finally, for the input examples we obtain their outputs by sampling, for each input , the corresponding output from some unknown distribution . The resulting training set is denoted by . The remaining unlabeled set is the test set. Note that both Derbeko et al. (2004) and El-Yaniv and Pechyony (2009) consider a special case where the labels are obtained using some unknown but deterministic target function , so that . We will adopt the same assumption here. The learner then chooses a predictor from some fixed hypothesis set (not necessarily containing ) based on both the labeled training set and the unlabeled test set . For convenience let us denote . We define the test and training error, respectively, of a hypothesis as follows: , , where the hat emphasizes the fact that the training (empirical) error can be computed from the data. For technical reasons that will become clear later, we also define the overall error of a hypothesis with regard to the union of the training and test sets as (this quantity will play a crucial role in the upcoming proofs). Note that for a fixed hypothesis the quantity is not random, as it is invariant under the partition into training and test sets.
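The sampling protocol above can be sketched numerically. The following minimal illustration uses our own toy data and a hypothetical fixed predictor (all concrete names and values are ours, not from the paper); it checks the fact just stated, namely that the overall error is a fixed mixture of the training and test errors and hence invariant under the partition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population of N labeled points and a fixed
# predictor h; all names and data here are illustrative only.
N, m = 10, 4
X = np.arange(N)                               # inputs (indices suffice)
y = rng.integers(0, 2, size=N)                 # labels from a fixed target
h = lambda x: 0                                # some fixed hypothesis
loss = np.array([float(h(x) != yx) for x, yx in zip(X, y)])  # 0/1 loss

# Transductive protocol: m training points sampled uniformly WITHOUT
# replacement; the remaining u = N - m points form the test set.
perm = rng.permutation(N)
train, test = perm[:m], perm[m:]

err_train = loss[train].mean()   # computable from the labeled data
err_test = loss[test].mean()     # the quantity one wants to bound
err_overall = loss.mean()        # overall error, partition-invariant

# The overall error is a fixed mixture of training and test errors.
assert np.isclose(err_overall, (m * err_train + (N - m) * err_test) / N)
```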
The main goal of the learner in the transductive setting is to select a hypothesis minimizing the test error , which we will denote by .
Since the labels of the test set examples are unknown, we cannot compute and need to estimate it based on the training sample . A common choice is to replace test error minimization by empirical risk minimization and to use its solution, which we denote by , as an approximation of . For let us define the excess risk of :
A natural question is: how well does the hypothesis chosen by the ERM algorithm approximate the theoretically optimal hypothesis ?
To this end, we use as a measure of the goodness of fit. Obtaining tight upper bounds on , so-called excess risk bounds, is thus the main goal of this paper. Another goal commonly considered in the learning literature is that of obtaining upper bounds on in terms of , which measures the generalization performance of empirical risk minimization. Such bounds are known as generalization error bounds. Note that both and are random, since they depend on the training and test sets, respectively. Note, moreover, that for any fixed its excess risk is also random. Thus both tasks (of obtaining excess risk and generalization bounds, respectively) deal with random quantities and require bounds that hold with high probability.
The most common way to obtain generalization error bounds for is to introduce uniform deviations over the class :
The random variable appearing on the right side is directly related to the sup-norm of the empirical process (Boucheron et al., 2013). It should be clear that, in order to analyze the transductive setting, it is of fundamental importance to obtain high-probability bounds for functions , where are random variables sampled without replacement from some fixed finite set. Of particular interest are concentration inequalities for sup-norms of empirical processes, which we present in Section 3.
2.1 State of the Art and Related Work
Error bounds for transductive learning were considered by several authors in recent years; here we name only a few (for an extensive review of transductive error bounds we refer to Pechyony (2008)). The first general bound for binary loss functions, presented in Vapnik (1982), was implicit in the sense that the value of the bound was specified as the outcome of a computational procedure. A somewhat refined version of this implicit bound also appears in Blum and Langford (2003). It is well known that generalization error bounds with fast rates of convergence can be obtained under certain restrictive assumptions on the problem at hand. For instance, Blum and Langford (2003) provide a bound of order in the realizable case, i.e., when (meaning that the hypothesis having zero error belongs to ). However, such an assumption is usually unrealistic: in practice it is usually impossible to avoid overfitting when choosing so large that it contains the Bayes classifier.
Cortes and Mohri (2006) consider a transductive regression problem with bounded squared loss and obtain a generalization error bound of order , which also does not attain a fast rate. Several PAC-Bayesian bounds were presented in Blum and Langford (2003), Derbeko et al. (2004), and others. However, their tightness critically depends on the prior distribution over the hypothesis class, which has to be fixed by the learner prior to observing the training sample. Transductive bounds based on algorithmic stability were presented for classification in El-Yaniv and Pechyony (2006) and for regression in Cortes et al. (2009); however, both are roughly of order . Finally, we mention the results of El-Yaniv and Pechyony (2009) based on transductive Rademacher complexities. That analysis, however, was based on the global Rademacher complexity combined with a McDiarmid-style concentration inequality for sampling without replacement and thus does not yield fast convergence rates.
3 Novel Concentration Inequalities for Sampling Without Replacement
In this section, we present two new concentration inequalities for suprema of empirical processes when sampling without replacement. The first one is a sub-Gaussian inequality that is based on a result by Bobkov (2004) and closely related to the entropy method (Boucheron et al., 2013). The second inequality is an analogue of Bousquet’s version of Talagrand’s concentration inequality (Bousquet, 2002b, a; Talagrand, 1996) and is based on the reduction method first suggested in Hoeffding (1963).
Next we state the setting and introduce the necessary notation. Let be some finite set. For let and be sequences of random variables sampled uniformly without and with replacement from , respectively. Let be a countable class of functions , such that and for all and . (All results can be extended to uncountable classes, for instance, if the empirical process is separable, meaning that contains a dense countable subset; we refer to page 314 of Boucheron et al. (2013) or page 72 of Bousquet (2002b).) It follows that , since and are identically distributed. Define the variance . Note that . Let us define the supremum of the empirical process for sampling with and without replacement, respectively (the results presented in this section can also be generalized to using the same techniques):
The random variable is well studied in the literature, and there are remarkable Bennett-type concentration inequalities for , including Talagrand's inequality (Talagrand, 1996) and its versions due to Bousquet (2002a, b) and others. (For completeness we present one such inequality for as Theorem A in Appendix A; for a detailed review of concentration inequalities for we refer to Section 12 of Boucheron et al. (2013).) The random variable , on the other hand, is much less understood, and no Bennett-style concentration inequalities for it are known to date.
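To build intuition for the relation between the two suprema, here is a small Monte Carlo sketch comparing the supremum of the centered empirical process under sampling without and with replacement. The finite population and function class are toy choices of our own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy finite set C of n reals and a small centered function class F;
# all concrete choices here are ours, for illustration only.
n, m, trials = 20, 10, 2000
C = rng.uniform(-1, 1, size=n)

def center(f):
    # Shift f so its mean over C is zero (E f(Z) = 0 for Z uniform on C).
    c = np.mean([f(z) for z in C])
    return lambda z, f=f, c=c: f(z) - c

F = [center(f) for f in (lambda z: z, lambda z: z ** 2, np.sin)]

def sup_process(sample):
    # Supremum over F of the absolute (unnormalized) centered sum.
    return max(abs(sum(f(z) for z in sample)) for f in F)

Q_wo = [sup_process(rng.choice(C, size=m, replace=False)) for _ in range(trials)]
Q_w = [sup_process(rng.choice(C, size=m, replace=True)) for _ in range(trials)]

# Sampling without replacement concentrates at least as well on average.
print(np.mean(Q_wo), np.mean(Q_w))
```

The smaller average for the without-replacement supremum reflects the reduced variance of sums drawn from a finite population.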
3.1 The New Concentration Inequalities
In this section, we address the lack of Bennett-type concentration inequalities for by presenting two novel inequalities for suprema of empirical processes when sampling without replacement. [Sub-Gaussian concentration inequality for sampling without replacement] For any ,
The same bound also holds for . Also, for all , the following holds with probability greater than :
[Talagrand-type concentration inequality for sampling without replacement] Define . For define , . Then for any :
Also, for any , the following holds with probability greater than :
The appearance of in the last theorem might seem unexpected at first view. Indeed, one usually wants to control the concentration of a random variable around its expectation. However, the lemma below shows that in many cases will be close to :
The above lemma is proved in Appendix B. It shows that for the order of does not exceed , and thus Theorem 3.1 can be used to control the deviations of above its expectation at a fast rate. In general, however, could be smaller than , which may lead to a significant gap, in which case Theorem 3.1 is the preferred choice to control the deviations of around .
It is worth comparing the two novel inequalities for to the best known results in the literature. To this end, we compare our inequalities with the McDiarmid-style inequality recently obtained in El-Yaniv and Pechyony (2009) (and slightly improved in Cortes et al. (2009)): [El-Yaniv and Pechyony (2009); this bound does not appear explicitly in El-Yaniv and Pechyony (2009) or Cortes et al. (2009), but can be immediately obtained using Lemma 2 of El-Yaniv and Pechyony (2009) for with .] For all :
The same bound also holds for .
To begin with, let us notice that Theorem 7 does not account for the variance , while Theorems 3.1 and 3.1 do. As will turn out in Section 4, this refined treatment of the variance allows us to use localization techniques, facilitating sharp estimates (and, potentially, fast rates) also in the transductive learning setup. The comparison between concentration inequalities (2) of Theorem 3.1 and (4) of Theorem 7 is as follows: note that the term is negligible for large , so that after slight rewriting the inequalities boil down to comparing and . For (which in a way transforms sampling without replacement into sampling with replacement), the second inequality clearly outperforms the first one. However, when (as frequently used in the transductive setting), say , the comparison depends on the relation between and , and the result of Theorem 3.1 outperforms that of El-Yaniv and Pechyony for . The comparison between Theorems 3.1 and 7 in both cases ( and ) depends on the value of .
Theorem 3.1 is a direct analogue of Bousquet's version of Talagrand's inequality (see Theorem A in Appendix A of the supplementary material), frequently used in the learning literature. It states that the upper bound on provided by Bousquet's inequality also holds for . Now we compare Theorems 3.1 and 3.1. First of all, note that the deviation bound (3) does not have the term under the square root, in contrast to Theorem 3.1. As will be shown later, in some cases this fact can result in improved constants when applying Theorem 3.1. Another advantage of Theorem 3.1 is that it provides upper bounds for both and , while Theorem 3.1 upper bounds only . The main drawback of Theorem 3.1 is the factor appearing in the exponent; later we will see that in some cases this makes Theorem 3.1 preferable.
We also note that, if or , we can control the deviations of around with inequalities similar to those of the i.i.d. case. It is an open question, however, whether this can also be done for other regimes of and . It should be clear, though, that using Theorem 3.1 we can obtain rates at least as good as in the inductive setting. To summarize the discussion: when is large and , Theorems 3.1 or 7 (depending on and the order of ) can be significantly tighter than Theorem 3.1. However, if , Theorem 3.1 is preferable. Further discussion is presented in Appendix C.
3.3 Proof Sketch
Theorem 3.1 is a direct consequence of Bousquet's inequality and Hoeffding's reduction method. It was shown in Theorem 4 of Hoeffding (1963) that, for any convex function , the following inequality holds: . This reduction to the i.i.d. setting, together with some minor technical results, is enough to bound the moment generating function of and to obtain a concentration inequality using Chernoff's method (for which we refer to Section 2.2 of Boucheron et al. (2013)).
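Hoeffding's reduction can be illustrated numerically. The sketch below, with an illustrative centered population of our own choosing, estimates both sides of the inequality for the convex function psi(s) = exp(lam * s), which is exactly the moment generating function used in Chernoff's method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative centered finite population (our own choice).
C = rng.uniform(-1, 1, size=15)
C = C - C.mean()
m, lam, trials = 6, 0.5, 50000

def psi(s):
    return np.exp(lam * s)   # convex: the MGF used in Chernoff's method

S_wo = np.array([rng.choice(C, size=m, replace=False).sum() for _ in range(trials)])
S_w = np.array([rng.choice(C, size=m, replace=True).sum() for _ in range(trials)])

# Hoeffding (1963, Theorem 4): E[psi(S_without)] <= E[psi(S_with)].
print(psi(S_wo).mean(), psi(S_w).mean())
```

Since the MGF without replacement is dominated by the i.i.d. one, any Chernoff tail bound derived for sampling with replacement transfers to sampling without replacement.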
The proof of Theorem 3.1 is more involved. It is based on the sub-Gaussian inequality presented in Theorem 2.1 of Bobkov (2004), which is related to the entropy method introduced by M. Ledoux (see Boucheron et al. (2013) for references). Consider a function defined on the partitions of a fixed finite set of cardinality into two disjoint subsets and of cardinalities and , respectively, where . Bobkov’s inequality states that, roughly speaking, if is such that the Euclidean length of its discrete gradient is bounded by a constant , and if the partitions are sampled uniformly from the set of all such partitions, then is sub-Gaussian with parameter .
3.4 Applications of the New Inequalities
The novel concentration inequalities presented above can be used as a general mathematical tool in various areas of machine learning and learning theory where suprema of empirical processes under sampling without replacement are of interest, including the analysis of cross-validation and low-rank matrix factorization procedures, as well as the transductive learning setting. Exemplifying their application, we show in the next section, for the first time in the transductive setting of learning theory, excess risk bounds in terms of localized complexity measures, which can yield sharper estimates than global complexities.
4 Excess Risk Bounds for Transductive Learning via Localized Complexities
We start with some preliminary generalization error bounds that show how to apply the concentration inequalities of Section 3 to obtain risk bounds in the transductive learning setting. Note that (1) can be written in the following way:
where we used the fact that . Note that for any fixed we have , where is sampled uniformly without replacement from . Note that we clearly have and . Thus we can use the setting described in Section 3, with playing the role of and with the function class associated with , to obtain high-probability bounds for . Note that in Section 3 we considered unnormalized sums, hence the factor of in the above equation. As already noted, for fixed , is not random; also, centering a random variable does not change its variance. Keeping this in mind, we define
Using Theorems 3.1 and 3.1, we can obtain the following results, which hold without any assumptions on the learning problem at hand, except for the boundedness of the loss function in the interval . Our first result of this section follows immediately from the new concentration inequality of Theorem 3.1: For any , with probability greater than , the following holds:
where was defined in (5). Let be random variables sampled with replacement from and denote
The following result follows from Theorem 3.1 by simple calculus; we provide the detailed proof in the supplementary material. For any , with probability greater than , the following holds:
where was defined in (5). Note that is the expected sup-norm of the empirical process naturally appearing in inductive learning. Using the well-known symmetrization inequality (see Section 11.3 of Boucheron et al. (2013)), we can upper bound it by twice the expected value of the supremum of the Rademacher process. In this case, the last theorem thus gives exactly the same upper bound on the quantity as Theorem 2.1 of Bartlett et al. (2005) (with and ). Here we provide some discussion of the two generalization error bounds presented above. Note that , since is the variance of a random variable bounded in the interval . We conclude that the bound of Theorem 4 is of order , since the typical order of is also (for instance, if is finite, this follows from Theorems 2.1 and 3.5 of Koltchinskii (2011a)). Note that by repeating the proof of Lemma 3.1 we immediately obtain the following corollary: Let be random variables sampled with replacement from . For any countable set of functions defined on , the following holds:
The corollary shows that for the bound of Theorem 4 also has order . However, if , the convergence becomes slower, and the bound can even diverge for . The last corollary also enables us to use in the transductive setting all the established techniques related to the inductive Rademacher process, including symmetrization and contraction inequalities. Later in this section, we will employ this result to derive excess risk bounds for kernel classes in terms of the tail sum of the eigenvalues of the kernel, which can yield a fast rate of convergence. However, we should keep in mind that there might be a significant gap between and , in which case such a reduction can be loose.
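As an illustration of the symmetrization route just mentioned, the following sketch estimates the expected supremum of the Rademacher process by Monte Carlo. The finite class and the sample are toy choices of our own:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy finite class F evaluated on a sample of size n (our own choices).
n, trials = 50, 5000
X = rng.uniform(-1, 1, size=n)
F_vals = np.stack([X, X ** 2, np.abs(X)])   # row i holds (f_i(x_1), ..., f_i(x_n))

sups = []
for _ in range(trials):
    sigma = rng.choice([-1.0, 1.0], size=n)       # Rademacher signs
    sups.append(np.max(np.abs(F_vals @ sigma)) / n)
rad = np.mean(sups)   # Monte Carlo estimate of the Rademacher complexity
print(rad)
```

For a finite class such as this one, the estimate decays on the order of 1/sqrt(n), matching the typical order of the expected supremum discussed above.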
4.1 Excess Risk Bounds
The main goal of this section is to analyze to what extent the known results on localized risk bounds presented in a series of works (Koltchinskii and Panchenko, 1999; Massart, 2000; Bartlett et al., 2005; Koltchinskii, 2006) can be generalized to the transductive learning setting. These results essentially show that the rate of convergence of the excess risk is related to the fixed point of the modulus of continuity of the empirical process associated with the hypothesis class. Our main tools to this end will be the sub-Gaussian and Bennett-style concentration inequalities of Theorems 3.1 and 3.1 presented in the previous section.
From now on it will be convenient to introduce the following operators, mapping functions defined on to :
Using this notation we have: and .
Define the excess loss class . Throughout this section, we will assume that the loss function and hypothesis class satisfy the following assumptions:
There is a function satisfying .
There is a constant such that for every we have .
As before, the loss function is bounded in the interval .
Here we briefly discuss these assumptions. Assumption 4.1.1 is quite common and not restrictive. Assumption 4.1.2 can be satisfied, for instance, when the loss function is Lipschitz and there is a constant such that for all . These conditions are satisfied, for example, for the quadratic loss with uniformly bounded convex classes (for other examples we refer to Section 5.2 of Bartlett et al. (2005) and Section 2.1 of Bartlett et al. (2010)). Assumption 4.1.3 could possibly be relaxed using analogues of Theorems 3.1 and 3.1 that hold for classes of unbounded functions (Adamczak (2008) shows a version of Talagrand's inequality for unbounded functions in the i.i.d. case).
Next we present the main results of this section, which can be considered as analogues of Corollary 5.3 of Bartlett et al. (2005). The results come in pairs, depending on whether Theorem 3.1 or 3.1 is used in the proof. We will need the notion of a sub-root function: a nondecreasing and nonnegative function such that is nonincreasing for . It can be shown that any sub-root function has a unique positive fixed point. Let and be such that Assumptions 4.1 are satisfied. Assume there is a sub-root function such that
Let be a fixed point of . Then for any with probability greater than we have:
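The fixed point of a sub-root function is easy to compute in practice. Here is a minimal sketch with an illustrative sub-root function of our own choosing, using plain fixed-point iteration (which converges for sub-root functions) and checking the result against the closed form:

```python
import math

# Illustrative sub-root function (our choice): psi(r) = A*sqrt(r) + B.
# psi is nonnegative, nondecreasing, and psi(r)/sqrt(r) is nonincreasing,
# so it has a unique positive fixed point r* = psi(r*).
A, B = 2.0, 0.5

def psi(r):
    return A * math.sqrt(r) + B

# Plain fixed-point iteration converges for sub-root functions.
r = 1.0
for _ in range(200):
    r = psi(r)

# Closed form for this psi: s = sqrt(r*) solves s^2 - A*s - B = 0.
s = (A + math.sqrt(A * A + 4.0 * B)) / 2.0
assert abs(r - s * s) < 1e-9
```

The iteration is a practical substitute when, unlike in this toy case, no closed form for the fixed point is available.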
We emphasize that the constants appearing in Theorem 4.1 are slightly better than those appearing in Corollary 5.3 of Bartlett et al. (2005). This result is based on Theorem 3.1 and thus shares the disadvantages of Theorem 4 discussed above: the bound does not converge for . However, by using Theorem 3.1 instead of Theorem 3.1 in the proof, we can replace the factor of appearing in the bound by a factor of , at the price of slightly worse constants: Let and be such that Assumptions 4.1 are satisfied. Let be random variables sampled with replacement from . Assume there is a sub-root function such that
Let be a fixed point of . Then for any , with probability greater than , we have:
We also note that in Theorem 4.1 the modulus of continuity of the empirical process under sampling without replacement appearing in the left-hand side of (6) is replaced with its inductive analogue. As follows from Corollary 8, the fixed point of Theorem 4.1 can be smaller than that of Theorem 4.1, and thus for large and the first bound can be tighter. Otherwise, if , Theorem 4.1 can be preferable.
Proof sketch: We now briefly outline the proof of Theorem 4.1. It is based on the peeling technique and consists of the steps described below (similar to the proof of the first part of Theorem 3.3 in Bartlett et al. (2005)). The proof of Theorem 4.1 repeats the same steps, with the only difference being that Theorem 3.1 is used in Step 2 instead of Theorem 3.1. The detailed proofs are presented in Section D of the supplementary material.
First we fix an arbitrary and introduce the rescaled version of the centered loss class: , where is chosen such that the variances of the functions contained in do not exceed .
We can use Theorem 3.1 to obtain the following upper bound on which holds with probability greater than : .
Using the peeling technique (which consists in dividing the class into slices of functions having variances within a certain range), we are able to show that . Also, using the definition of sub-root functions, we conclude that for any , which gives us .
Now we can show that by properly choosing we can get that, for any , it holds . Using the definition of , we obtain that the following holds, with probability greater than :
Finally it remains to upper bound for the two cases and (which can be done using Assumption 4.1.2), to combine those two results, and to notice that for .
The following version is based on Theorem 4.1 and replaces the factors and appearing in the previous excess risk bound by and , respectively: Under the assumptions of Theorem 4.1, for any , with probability greater than , we have:
Proof sketch: Corollary 4.1 can be proved by noticing that is an empirical risk minimizer (similar to , but computed on the test set). Thus, repeating the proof of Theorem 4.1, we immediately obtain the same bound for as in Theorem 4.1, with and replaced by and , respectively. This shows that the overall errors and are close to each other. It remains to apply an intermediate step obtained in the proof of Theorem 4.1. Corollary 4.1 is proved in a similar way. The detailed proofs are presented in Appendix D.
In order to get a more concrete grasp of the key quantities and in Corollary 4.1, we can directly apply the machinery developed in the inductive case by Bartlett et al. (2005) to get an upper bound. For concreteness, we consider below the case of a kernel class. Observe that, by an application of Corollary 8 to the left-hand side of (6), the bounds below for the inductive quantities of Corollary 4.1 are valid as well for their transductive siblings in Corollary 4.1, though by doing so we lose essentially any potential advantage (apart from tighter multiplicative constants) of using Theorem 4.1/Corollary 4.1 over Theorem 4.1/Corollary 4.1. As pointed out in Remark 4, the regime of sampling without replacement could potentially lead to an improved bound (at least when ). Whether it is possible to take advantage of this fact and develop tighter bounds specifically for the fixed point of (6) is an open question left for future work.
Let be a positive semidefinite kernel on with , and the associated reproducing kernel Hilbert space. Let , and the associated excess loss class. Suppose that Assumptions 4.1 are satisfied and assume moreover that the loss function is -Lipschitz in its first variable and also that for all . Let be the normalized kernel Gram matrix with entries , where ; denote its ordered eigenvalues. Then, for or :
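The quantities in the kernel bound can be computed directly from data. Here is a minimal sketch (the Gaussian RBF kernel and synthetic inputs are our own illustrative choices) that forms the normalized Gram matrix, extracts its ordered eigenvalues, and evaluates the eigenvalue tail sums appearing in the bound:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic inputs and a Gaussian RBF kernel (illustrative assumptions).
N = 200
X = rng.normal(size=(N, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                 # Gram matrix, k(x, x) = 1
Kn = K / N                            # normalized Gram matrix
lam = np.sort(np.linalg.eigvalsh(Kn))[::-1]   # ordered eigenvalues

# Tail sums: tail[h] = sum of the eigenvalues from index h onward.
tail = lam[::-1].cumsum()[::-1]
print(tail[:5])   # a fast eigenvalue decay gives small tail sums
```

Since k(x, x) = 1 here, the full sum of eigenvalues equals the trace of the normalized Gram matrix, which is 1; a fast decay of the spectrum makes the minimizing tail sum, and hence the bound, small.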
(The proof follows the inductive argument of Bartlett et al. (2005), the only important point being that the generating distribution is the uniform distribution on .) Similarly to the discussion there, we note that and are at most of order and , respectively, and possibly much smaller if the eigenvalues exhibit a fast decay.
The question of transductive convergence rates is somewhat delicate, since all results stated here assume a fixed set , as reflected, for instance, in the bound of Corollary 4.1 depending on the eigenvalues of the kernel Gram matrix of the set . In order to give a precise meaning to rates, one has to specify how evolves as grows. A natural setting for this is Vapnik (1998)'s second transductive setting, where is sampled i.i.d. from some generating distribution. In that case we believe it is possible to adapt once again the results of Bartlett et al. (2005) in order to relate the quantities to asymptotic counterparts as , though we do not pursue this avenue in the present work.
In this paper, we have considered the setting of transductive learning over a broad class of bounded and nonnegative loss functions. We provide excess risk bounds for the transductive learning setting based on the localized complexity of the hypothesis class, which hold under general assumptions on the loss function and the hypothesis class. When applied to kernel classes, the transductive excess risk bound can be formulated in terms of the tail sum of the eigenvalues of the kernel, similar to the best known estimates in inductive learning. The localized excess risk bound is achieved by proving two novel and very general concentration inequalities for suprema of empirical processes when sampling without replacement, which are of potential interest also in various other application areas of machine learning and learning theory, where they may serve as a fundamental mathematical tool.
For instance, sampling without replacement is commonly employed in the Nyström method (Kumar et al., 2012), which is an efficient technique to generate low-rank matrix approximations in large-scale machine learning. Another potential application area of our novel concentration inequalities could be the analysis of randomized sequential algorithms such as stochastic gradient descent and randomized coordinate descent, practical implementations of which often deploy sampling without replacement (Recht and Re, 2012). It would also be very interesting to explore whether the proposed techniques could be used to generalize matrix Bernstein inequalities (Tropp, 2012) to the case of sampling without replacement, which could be used to analyze matrix completion problems (Koltchinskii et al., 2011). The investigation of application areas beyond the transductive learning setting is, however, outside of the scope of the present paper.
The authors are thankful to Sergey Bobkov, Stanislav Minsker, and Mehryar Mohri for stimulating discussions and to the anonymous reviewers for their helpful comments. Marius Kloft acknowledges a postdoctoral fellowship by the German Research Foundation (DFG).
- Adamczak  R. Adamczak. A tail inequality for suprema of unbounded empirical processes with applications to Markov chains. Electronic Journal of Probability, 13(34), 2008.
- Bardenet and Maillard  R. Bardenet and O.-A. Maillard. Concentration inequalities for sampling without replacement. http://arxiv.org/abs/1309.4029, 2013.
- Bartlett et al.  P. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. The Annals of Statistics, 33(4):1497–1537, 2005.
- Bartlett et al.  P. L. Bartlett, S. Mendelson, and P. Phillips. On the optimality of sample-based estimates of the expectation of the empirical minimizer. ESAIM: Probability and Statistics, 2010.
- Blum and Langford  A. Blum and J. Langford. PAC-MDL bounds. In Proceedings of the International Conference on Computational Learning Theory (COLT), 2003.
- Bobkov  S. G. Bobkov. Concentration of normalized sums and a central limit theorem for noncorrelated random variables. Annals of Probability, 32(4), 2004.
- Boucheron et al.  S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
- Bousquet [2002a] O. Bousquet. A Bennett concentration inequality and its application to suprema of empirical processes. C. R. Acad. Sci. Paris, Ser. I, 334:495–500, 2002a.
- Bousquet [2002b] O. Bousquet. Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. PhD thesis, Ecole Polytechnique, 2002b.
- Cortes and Mohri  C. Cortes and M. Mohri. On transductive regression. In Advances in Neural Information Processing Systems (NIPS), 2006.
- Cortes and Vapnik  C. Cortes and V. Vapnik. Support-vector networks. Mach. Learn., 20:273–297, September 1995. ISSN 0885-6125.
- Cortes et al.  C. Cortes, M. Mohri, D. Pechyony, and A. Rastogi. Stability analysis and learning bounds for transductive regression algorithms. http://arxiv.org/abs/0904.0814, 2009.
- Derbeko et al.  P. Derbeko, R. El-Yaniv, and R. Meir. Explicit learning curves for transduction and application to clustering and compression algorithms. Journal of Artificial Intelligence Research, 22, 2004.
- El-Yaniv and Pechyony  R. El-Yaniv and D. Pechyony. Stable transductive learning. In Proceedings of the International Conference on Computational Learning Theory (COLT), 2006.
- El-Yaniv and Pechyony  R. El-Yaniv and D. Pechyony. Transductive Rademacher complexity and its applications. Journal of Artificial Intelligence Research, 2009.
- Gross and Nesme  D. Gross and V. Nesme. Note on sampling without replacing from a finite collection of matrices. http://arxiv.org/abs/1001.2738v2, 2010.
- Hoeffding  W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
- Klein and Rio  T. Klein and E. Rio. Concentration around the mean for maxima of empirical processes. The Annals of Probability, 33(3):1060–1077, 2005.
- Koltchinskii  V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006.
- Koltchinskii [2011a] V. Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008. École d'été de probabilités de Saint-Flour. Springer, 2011a.
- Koltchinskii [2011b] V. Koltchinskii. Oracle inequalities in empirical risk minimization and sparse recovery problems. École d’été de probabilités de Saint-Flour XXXVIII-2008. Springer Verlag, 2011b.
- Koltchinskii and Panchenko  V. Koltchinskii and D. Panchenko. Rademacher processes and bounding the risk of function learning. In E. Giné, D. Mason, and J. Wellner, editors, High Dimensional Probability, II, pages 443–457. Birkhäuser, 1999.
- Koltchinskii et al.  V. Koltchinskii, K. Lounici, and A. B. Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Annals of Statistics, 39:2302–2329, 2011. doi: 10.1214/11-AOS894.
- Kumar et al.  S. Kumar, M. Mohri, and A. Talwalkar. Sampling methods for the Nyström method. Journal of Machine Learning Research, 13(1):981–1006, Apr. 2012.
- Massart  P. Massart. Some applications of concentration inequalities to statistics. Ann. Fac. Sci. Toulouse Math., 9(6):245–303, 2000.
- Mendelson  S. Mendelson. On the performance of kernel classes. J. Mach. Learn. Res., 4:759–771, December 2003.
- Pechyony  D. Pechyony. Theory and Practice of Transductive Learning. PhD thesis, Technion, 2008.
- Recht and Re  B. Recht and C. Re. Toward a noncommutative arithmetic-geometric mean inequality: conjectures, case-studies, and consequences. In COLT, 2012.
- Serfling  R. J. Serfling. Probability inequalities for the sum in sampling without replacement. The Annals of Statistics, 2(1):39–48, 1974.
- Steinwart and Christmann  I. Steinwart and A. Christmann. Support Vector Machines. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387772413.
- Steinwart et al.  I. Steinwart, D. R. Hush, and C. Scovel. Learning from dependent observations. Journal of Multivariate Analysis, 100(1):175–194, 2009.
- Stone  M. Stone. Cross-validatory choice and assessment of statistical predictors (with discussion). Journal of the Royal Statistical Society, B36:111–147, 1974.
- Talagrand  M. Talagrand. New concentration inequalities in product spaces. Inventiones Mathematicae, 126, 1996.
- Tropp  J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
- Vapnik  V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag New York, Inc., 1982.
- Vapnik  V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
Appendix A Bousquet’s version of Talagrand’s concentration inequality
We also have, for any $t \ge 0$, with probability at least $1 - e^{-t}$:
$$Z \le \mathbb{E}Z + \sqrt{2t\,(n\sigma^2 + 2\,\mathbb{E}Z)} + \frac{t}{3},$$
where $Z = \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} f(X_i)$ and $\sigma^2 \ge \sup_{f \in \mathcal{F}} \mathbb{E}\, f^2(X_i)$. Noting that $\sqrt{a + b} \le \sqrt{a} + \sqrt{b}$ and $2\sqrt{ab} \le \varepsilon a + b/\varepsilon$ for $a, b \ge 0$ and $\varepsilon > 0$, one can derive the following more illustrative version: for all $\varepsilon > 0$ and $t \ge 0$, the following holds with probability greater than $1 - e^{-t}$:
$$Z \le (1 + \varepsilon)\,\mathbb{E}Z + \sigma\sqrt{2tn} + \left(\frac{1}{3} + \frac{1}{\varepsilon}\right) t.$$
Note that Bousquet [2002a] provides similar bounds also for $\sup_{f \in \mathcal{F}} \bigl|\sum_{i=1}^{n} f(X_i)\bigr|$.
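For the reader's convenience, the elementary step behind the more illustrative version is the standard splitting of the square root in the Bennett-type bound; with the usual variance proxy $v = n\sigma^2 + 2\,\mathbb{E}Z$ of Bousquet's inequality and any $\varepsilon > 0$, a sketch:

```latex
\sqrt{2tv}
  = \sqrt{2t\,(n\sigma^2 + 2\,\mathbb{E}Z)}
  \le \sigma\sqrt{2tn} + 2\sqrt{t\,\mathbb{E}Z}
  \le \sigma\sqrt{2tn} + \varepsilon\,\mathbb{E}Z + \frac{t}{\varepsilon},
```

using $\sqrt{a+b} \le \sqrt{a} + \sqrt{b}$ and $2\sqrt{ab} \le \varepsilon a + b/\varepsilon$; plugging this into the bound $Z \le \mathbb{E}Z + \sqrt{2tv} + t/3$ yields the version with the additive term $(1/3 + 1/\varepsilon)\,t$.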
Appendix B Proofs from Section 3
First we are going to prove Theorem 3.1, which is a direct consequence of Bousquet's inequality of Theorem A. It is based on the following reduction theorem due to Hoeffding [1963].

[Hoeffding]¹⁰ Let $\mathbf{X} = (X_1, \dots, X_n)$ and $\mathbf{Y} = (Y_1, \dots, Y_n)$ be sampled uniformly from a finite set $\mathcal{C}$ of $d$-dimensional vectors with and without replacement, respectively. Then, for any continuous and convex function $f \colon \mathbb{R}^d \to \mathbb{R}$, the following holds:
$$\mathbb{E}\, f\Bigl(\sum_{i=1}^{n} Y_i\Bigr) \le \mathbb{E}\, f\Bigl(\sum_{i=1}^{n} X_i\Bigr).$$

¹⁰Hoeffding initially stated this result only for real-valued random variables. However, all the steps of the proof also hold for vector-valued random variables; for a reference see Section D of Gross and Nesme [2010].
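Hoeffding's reduction can be checked numerically; the following Monte Carlo sketch (our own toy example, with the population, sample size, and test function chosen for illustration) compares the two sides of the inequality for the convex function $f(s) = s^2$.

```python
import numpy as np

# Monte Carlo sanity check of Hoeffding's reduction: for a convex f,
# the expectation of f applied to the sum of a WITHOUT-replacement
# sample is at most that of a WITH-replacement sample from the same
# finite population.
rng = np.random.default_rng(0)
population = rng.normal(size=50)  # a finite population of N = 50 reals
n = 10                            # sample size

def f(s):
    return s ** 2                 # a continuous convex test function

trials = 20000
with_repl = np.empty(trials)
without_repl = np.empty(trials)
for t in range(trials):
    with_repl[t] = f(rng.choice(population, size=n, replace=True).sum())
    without_repl[t] = f(rng.choice(population, size=n, replace=False).sum())

# Sampling without replacement concentrates more: the convex moment is smaller.
print(without_repl.mean(), "<=", with_repl.mean())
```

The gap reflects the reduced variance of the without-replacement sum, by the factor $(N-n)/(N-1)$ familiar from Serfling (1974).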
Also we will need the following technical lemma.

Let $\lambda > 0$. Then the following function is convex on $(\mathbb{R}^d)^n$:
$$\Phi(\mathbf{v}_1, \dots, \mathbf{v}_n) = \exp\Bigl(\lambda \max_{j \le d} \sum_{i=1}^{n} v_i^{(j)}\Bigr).$$

Let us show that, if $\varphi$ is a convex and nondecreasing function and $g$ is convex, then $\varphi \circ g$ is also convex. Indeed, for $\theta \in [0, 1]$ and any $\mathbf{x}, \mathbf{y}$:
$$\varphi\bigl(g(\theta \mathbf{x} + (1 - \theta)\mathbf{y})\bigr) \le \varphi\bigl(\theta g(\mathbf{x}) + (1 - \theta) g(\mathbf{y})\bigr) \le \theta\, \varphi\bigl(g(\mathbf{x})\bigr) + (1 - \theta)\, \varphi\bigl(g(\mathbf{y})\bigr).$$
Considering the fact that $x \mapsto e^{\lambda x}$ is convex and increasing for $\lambda > 0$, it remains to show that $g(\mathbf{v}_1, \dots, \mathbf{v}_n) = \max_{j \le d} \sum_{i=1}^{n} v_i^{(j)}$ is convex. For all $\theta \in [0, 1]$ and $\mathbf{v}, \mathbf{w}$, the following holds:
$$\max_{j \le d} \sum_{i=1}^{n} \bigl(\theta v_i^{(j)} + (1 - \theta) w_i^{(j)}\bigr) \le \theta \max_{j \le d} \sum_{i=1}^{n} v_i^{(j)} + (1 - \theta) \max_{j \le d} \sum_{i=1}^{n} w_i^{(j)},$$
which concludes the proof.
We will prove Theorem 3.1 for a finite class of functions $\mathcal{F}$. The result for the uncountable case follows by taking a limit along a sequence of finite sets.
Using Chernoff's method, we obtain for all $\lambda > 0$ and $t > 0$:
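For completeness, Chernoff's method here is the usual exponential Markov argument; Hoeffding's reduction theorem, applied to the convex nondecreasing transform $x \mapsto e^{\lambda x}$ of the supremum, then transfers the moment generating function from the without-replacement to the with-replacement setting (the notation $Z$, $Z'$ for the two suprema is ours):

```latex
\mathbb{P}\{Z \ge t\}
  = \mathbb{P}\bigl\{ e^{\lambda Z} \ge e^{\lambda t} \bigr\}
  \le e^{-\lambda t}\, \mathbb{E}\, e^{\lambda Z}
  \le e^{-\lambda t}\, \mathbb{E}\, e^{\lambda Z'},
```

where $Z$ is the supremum for the without-replacement sample and $Z'$ its with-replacement counterpart; the last expectation is controlled via Bousquet's inequality, and one concludes by optimizing over $\lambda > 0$.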