Local Rademacher Complexity Bounds based on Covering Numbers

10/06/2015 · Yunwen Lei et al. · Wuhan University; NetEase, Inc.

This paper provides a general result on controlling local Rademacher complexities, which captures in an elegant form the relationship between complexities constrained by the expected norm and the corresponding ones constrained by the empirical norm. This result is convenient to apply in real applications and yields refined local Rademacher complexity bounds for function classes satisfying general entropy conditions. We demonstrate the power of our complexity bounds by applying them to derive effective generalization error bounds.


1 Introduction

Machine learning refers to the process of inferring an underlying input-output relationship from a previously chosen hypothesis class, on the basis of some scattered, noisy examples [11, 29]. Generalization analysis of learning algorithms occupies a central place in machine learning, since it is important to understand the factors influencing a model's behavior, as well as to suggest ways to improve it [5, 2, 7, 6, 3, 20]. One seminal example can be found in the multiple kernel learning (MKL) context, where Cortes et al. [7] established a framework showing how the generalization analysis in [12, 25, 13] could motivate two novel MKL algorithms.

Vapnik and Chervonenkis [30] pioneered the research on learning theory by relating generalization errors to the supremum of an empirical process: $\sup_{f \in \mathcal{F}} (Pf - P_n f)$, where $\mathcal{F}$ is the associated loss class induced from the hypothesis space, and $P$, $P_n$ are the true probability measure and the empirical probability measure, respectively. It was then observed that this supremum is closely connected with the “size” of the space $\mathcal{F}$ [29, 30]. For a finite class of functions, its size can simply be measured by its cardinality. Vapnik [29] introduced a novel concept called the VC dimension to characterize the complexity of $\{0,1\}$-valued function classes, by noticing that the quantity of significance is the number of distinct projections obtained when restricting the function class to the sample. Other quantities like covering numbers, which measure the number of balls required to cover the original class, have been introduced to capture, on a finer scale, the “size” of real-valued function classes [33, 8, 34, 14]. With the recent development in concentration inequalities and empirical process theory, it is possible to obtain a slightly tighter estimate on the “size” of $\mathcal{F}$ through the remarkable concept called Rademacher complexity [1, 2, 32, 15].

However, all the above-mentioned approaches provide only global estimates on the complexity of function classes, and they do not reflect how a learning algorithm explores the function class and interacts with the examples [5, 4]. Moreover, they are designed to control the deviation of empirical errors from true errors simultaneously over the whole class, while the quantity of primary importance is only that deviation for the particular function picked by the learning algorithm, which may be far from attaining this supremum [2, 16, 26]. Therefore, an analysis based on a global complexity would give a rather conservative estimate. On the other hand, most learning algorithms are inclined towards choosing functions possessing small empirical errors and hopefully also small generalization errors [5]. Furthermore, if there holds a relationship between variances and expectations such as $Pf^2 \le B\,Pf$, these functions will also admit small variances. That is to say, the obtained prediction rule is likely to fall into a subclass with small variances [2]. Due to the seminal work of Koltchinskii and Panchenko [16] and Massart [22], it turns out that the notion of Rademacher complexity can be naturally modified to take this into account, yielding the so-called local Rademacher complexity [16]. Since the local Rademacher complexity is always smaller than its global counterpart, analyses based on local Rademacher complexities can yield significantly better learning rates under variance-expectation conditions.

Mendelson [24, 23] initiated the discussion of estimating local Rademacher complexities with covering numbers, and these complexity bounds are very effective in establishing fast learning rates. However, the discussions in [24, 23] are somewhat dispersed in the sense that the author did not provide a general result applicable to all function classes. Indeed, Mendelson [24, 23] derived local Rademacher complexity bounds for several function classes satisfying different entropy conditions case by case, and the involved deduction also relies on the specific entropy conditions. Mendelson [25] also derived, for a general Reproducing Kernel Hilbert Space (RKHS), an interesting local Rademacher complexity bound based on the eigenvalues of the associated integral operator, which was later generalized to the $\ell_p$-norm MKL context [12, 13, 21]. These results are exclusively developed for RKHSs and it remains unknown whether they can be extended to general function classes. In this paper, we refine these discussions by providing general and sharp results on controlling local Rademacher complexities by covering numbers. A distinctive property of our result is that it relates, in an elegant form, local Rademacher complexities to the associated empirical local Rademacher complexities, which allows us to improve the existing local Rademacher complexity bounds for function classes with different entropy conditions in a systematic manner. We also demonstrate the effectiveness of these complexity bounds by applying them to refine existing learning rates.

The paper is organized as follows. Section 2 formulates the problem. Section 3 provides a general local Rademacher complexity bound as well as its applications to different function classes. Section 4 applies our complexity bounds to generalization analysis. All proofs are presented in Section 5. Some conclusions are presented in Section 6.

2 Statement of the problem

We first introduce some notation which will be used throughout this paper. For a measure $\mu$ and a positive number $p$, the notation $L_p(\mu)$ means the collection of functions $f$ for which the norm $\|f\|_{L_p(\mu)} := (\int |f|^p \, d\mu)^{1/p}$ is finite. For a class $\mathcal{F}$ of functions, we denote by

(2.1)   $\mathcal{F} - \mathcal{F} := \{f - g : f, g \in \mathcal{F}\}$

the class consisting of those elements which can be represented as the difference of two elements in $\mathcal{F}$. For a real number $t$, $\lceil t \rceil$ indicates the least integer not less than $t$, and $\log t$ represents the natural logarithm of $t$. By $c$ we denote a quantity equal to a constant multiple of the involved arguments; its exact value may change from line to line, or even within the same line.

Definition 1 (Empirical measure).

Let $\mathcal{Z}$ be a set and let $z_1, \ldots, z_n$ be points in $\mathcal{Z}$; then the empirical measure supported on $z_1, \ldots, z_n$ is defined as

(2.2)   $P_n(A) := \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}_A(z_i),$

where $\mathbb{1}_A$ is the characteristic function defined by $\mathbb{1}_A(z) = 1$ if $z \in A$ and $\mathbb{1}_A(z) = 0$ if $z \notin A$.

If $\mu$ is a measure and $f$ is a measurable function, it is convenient [5] to use the notation $\mu f := \int f \, d\mu$. Now, for the empirical measure $P_n$ supported on $z_1, \ldots, z_n$, the empirical average of $f$ can be abbreviated as $P_n f = \frac{1}{n} \sum_{i=1}^{n} f(z_i)$.

Definition 2 (Covering number [14]).

Let $(\mathcal{M}, d)$ be a metric space and let $T \subset \mathcal{M}$. For any $\epsilon > 0$, a set $S$ is called an $\epsilon$-cover of $T$ if for every $t \in T$ we can find an element $s \in S$ satisfying $d(s, t) \le \epsilon$. An $\epsilon$-cover is called a proper $\epsilon$-cover if $S \subset T$. The covering number $N(\epsilon, T, d)$ is the cardinality of a minimal proper $\epsilon$-cover of $T$, that is,

$N(\epsilon, T, d) := \min\{|S| : S \text{ is a proper } \epsilon\text{-cover of } T\}.$

We also refer to the logarithm of the covering number, $\log N(\epsilon, T, d)$, as the entropy number.
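To make the definition concrete, here is a minimal numerical sketch (in Python, with hypothetical names; functions are identified with their evaluation vectors on the sample, and the metric is the empirical $L_2(P_n)$ distance). A greedy pass produces a proper $\epsilon$-cover whose size upper-bounds the covering number; computing the exact minimum is combinatorial and is not attempted here.

```python
import numpy as np

def greedy_proper_cover(T, eps):
    """Greedily pick cover centers from the rows of T (each row is a
    function's evaluation vector on the sample).  Every row ends up within
    eps of some chosen center, and centers are themselves rows of T, so the
    result is a proper eps-cover; its size upper-bounds N(eps, T, L2(P_n))."""
    def dist(u, v):                      # the L2(P_n) metric
        return np.sqrt(np.mean((u - v) ** 2))
    centers = []
    for i in range(len(T)):
        if all(dist(T[i], T[j]) > eps for j in centers):
            centers.append(i)            # no existing center is eps-close
    return centers

# usage: 200 random "functions" observed at n = 50 sample points
rng = np.random.default_rng(0)
T = rng.normal(size=(200, 50))
print("greedy proper 1.0-cover size:", len(greedy_proper_cover(T, 1.0)))
```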

For brevity, when $\mathcal{M}$ is a normed space with norm $\|\cdot\|$, we also denote by $N(\epsilon, T, \|\cdot\|)$ the covering number of $T$ with respect to the metric $d(s, t) = \|s - t\|$. Introduce the notation:

(2.3)
Definition 3 (Rademacher complexity [1]).

Let $P$ be a probability measure on $\mathcal{Z}$ from which the examples $z_1, \ldots, z_n$ are independently drawn, and let $\epsilon_1, \ldots, \epsilon_n$ be independent Rademacher random variables, taking the values $+1$ and $-1$ with equal probability. For a class $\mathcal{F}$ of functions, introduce the notation

$R_n \mathcal{F} := \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \epsilon_i f(z_i).$

The Rademacher complexity and the empirical Rademacher complexity are defined by

$\mathfrak{R}(\mathcal{F}) := \mathbb{E}[R_n \mathcal{F}] \quad \text{and} \quad \hat{\mathfrak{R}}(\mathcal{F}) := \mathbb{E}_{\epsilon}[R_n \mathcal{F}],$

respectively, where $\mathbb{E}_{\epsilon}$ denotes the expectation with respect to the Rademacher variables only.

In this paper we concentrate our attention on local Rademacher complexities. The word local means that the class over which the Rademacher process is defined is a subset of the original class. We consider here local Rademacher complexities of the following form:

$\mathfrak{R}\{f \in \mathcal{F} : Pf^2 \le r\} \quad \text{and} \quad \hat{\mathfrak{R}}\{f \in \mathcal{F} : P_n f^2 \le r\}.$

We refer to the former as the local Rademacher complexity and the latter as the empirical local Rademacher complexity. The parameter $r$ is used to filter out those functions with large variances [25], which are of little significance in the learning process since learning algorithms are unlikely to pick them.
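As an illustration of these definitions, the following sketch (a hypothetical setup: a finite function class stored as an evaluation matrix) estimates the empirical local Rademacher complexity by Monte Carlo over the Rademacher draws; the constraint $P_n f^2 \le r$ is enforced by simply discarding functions outside the ball.

```python
import numpy as np

def empirical_local_rademacher(F_vals, r, n_mc=2000, seed=0):
    """Monte Carlo estimate of E_sigma sup_{f: P_n f^2 <= r} (1/n) sum_i sigma_i f(z_i)
    for a finite class whose rows are the vectors (f(z_1), ..., f(z_n))."""
    rng = np.random.default_rng(seed)
    n = F_vals.shape[1]
    local = F_vals[np.mean(F_vals ** 2, axis=1) <= r]   # subclass {f : P_n f^2 <= r}
    if local.shape[0] == 0:
        return 0.0                                      # empty subclass
    sigma = rng.choice([-1.0, 1.0], size=(n_mc, n))     # Rademacher draws
    sups = (sigma @ local.T / n).max(axis=1)            # sup over subclass, per draw
    return sups.mean()                                  # average over the draws

# usage: the complexity shrinks as the radius r decreases
rng = np.random.default_rng(1)
F_vals = rng.uniform(-1.0, 1.0, size=(100, 50))
for r in (1.0, 0.3, 0.1):
    print(r, empirical_local_rademacher(F_vals, r))
```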

3 Estimating local Rademacher complexities

This section is devoted to establishing a general local Rademacher complexity bound. For this purpose, we first show how to control empirical local Rademacher complexities. The empirical radii are then connected with the true radii via the contraction property of Rademacher averages (Lemma A.4). Some examples illustrating the power of our result are also presented.

3.1 Local Rademacher complexity bounds

Mendelson [23, 24] studied the local Rademacher complexity $\mathfrak{R}\{f \in \mathcal{F} : Pf^2 \le r\}$ by relating it to its empirical counterpart

(3.1)

the latter of which involves an empirical radius defined w.r.t. the empirical measure and can be further tackled by the standard entropy integral [10], yielding a bound of the following form:

(3.2)

Although the expectation of this empirical radius can be controlled by $r$ plus the local Rademacher complexity itself [17],

(3.3)

it is generally not trivial to control the integral in Eq. (3.2), since a random variable appears in the upper limit of the integral (the bound in Eq. (3.3) cannot be directly used to control the r.h.s. of Eq. (3.2)). Mendelson's [24, 23] idea is, under different entropy conditions, to construct different upper bounds on the involved integral in which the random variable appears in a relatively simple term. For example, for function classes satisfying a certain entropy condition, Mendelson [24] established the following bound on the integral:

(3.4)

The resulting term turns out to be concave w.r.t. the empirical radius, and hence, together with Jensen's inequality, can be controlled by applying the standard upper bound (3.3). Although these deductions are elegant, they do not allow for general bounds on local Rademacher complexities, and sometimes yield unsatisfactory results due to the looseness introduced by constructing an additional artificial upper bound for the integral in Eq. (3.2) (e.g., Eq. (3.4)).
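To spell out this reasoning in generic notation (a sketch only; the precise statement and constants are those of [24]): if $g$ is concave and $\hat{r}$ denotes the random empirical radius, then Jensen's inequality followed by a bound of the form (3.3) gives

$\mathbb{E}[g(\hat{r})] \;\le\; g(\mathbb{E}[\hat{r}]) \;\le\; g\bigl(r + c\,\mathfrak{R}\{f \in \mathcal{F} : Pf^2 \le r\}\bigr),$

with $c$ an absolute constant, so the randomness in the upper integration limit is absorbed into a deterministic argument of $g$.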

We overcome these drawbacks by providing a general result on controlling local Rademacher complexities. The stepping stone is the following lemma, which controls the local Rademacher complexity of a subclass defined by a random radius in terms of the local Rademacher complexity of a subclass defined by a deterministic, adjustable parameter, plus a linear function of the random radius. This allows for a direct use of the standard upper bound (3.3) and removes the need to construct non-trivial bounds for the integral in Eq. (3.2). Our basic strategy, analogous to [18, 28, 19], is to approximate the original function class with an $\epsilon$-cover, thus relating the local Rademacher complexity of $\mathcal{F}$ to those of two related function classes. One class is of finite cardinality and can be handled by the Massart lemma (Lemma A.1), while the other is of small magnitude and is defined by empirical radii.

Lemma 1.

Let $\mathcal{F}$ be a function class and let $P_n$ be the empirical measure supported on the points $z_1, \ldots, z_n$; then we have the following complexity bound (the radius $\hat{r}$ may be stochastic w.r.t. the sample; a typical choice of $\hat{r}$ is the term defined in Eq. (3.1)):

Theorem 2 (Main theorem).

Let $\mathcal{F}$ be a uniformly bounded function class. There holds the following inequality:

(3.5)
Remark 1.

An advantage of Theorem 2 over the existing local Rademacher complexity bounds consists in the fact that it provides a general framework for controlling local Rademacher complexities, from which, as we show in Section 3.2, one can trivially derive explicit local Rademacher complexity bounds when the entropy information is available. Furthermore, since Theorem 2 does not involve an artificial upper bound for the integral in Eq. (3.2) (e.g., Eq. (3.4)), it can yield sharper local Rademacher complexity bounds (see Remarks 2, 3 and 4) when compared to the results in [24, 23]. ∎

3.2 Some examples

We now demonstrate the effectiveness of Theorem 2 by applying it to some interesting classes satisfying general entropy conditions. Our discussion is based on the refined entropy integral (A.2), which can be used to handle situations in which the standard entropy integral [10] diverges.

Corollary 1.

Let $\mathcal{F}$ be a uniformly bounded function class. Assume that there exist three positive numbers for which the corresponding entropy condition holds for every radius; then there holds that

Remark 2.

For function classes meeting the condition of Corollary 1, Mendelson [23, Lemma 2.3] derived the following complexity bound

(3.6)

It is interesting to compare the bound (3.6) with ours and the difference can be seen in the following three aspects:

  1. Firstly, it is obvious that the r.h.s. of Eq. (3.6) is of the same order of magnitude as ours for a particular choice of the free parameter. Consequently, our bound can be no worse than Eq. (3.6).

  2. Furthermore, as we will see in Section 4, the upper bound in Eq. (3.6) is not a sub-root function, which adds additional difficulty in applying it to generalization analysis. As a comparison, our upper bound satisfies the sub-root condition (see the definition of sub-root functions in Section 4) and is thus convenient to use in the generalization analysis.

  3. Thirdly, Eq. (3.6) is not consistent with natural expectations of what a complexity bound should be. For example, as the radius $r$ approaches $0$, one expects the complexity term to decrease monotonically to a limiting point. However, the upper bound in Eq. (3.6) diverges to $\infty$ as $r \to 0$. As a comparison, our result does not violate this consistency, since our bound is always an increasing function of $r$. ∎

Corollary 2.

Let $\mathcal{F}$ be a uniformly bounded function class. Assume that there exist two constants such that

(3.7)

then we have the following complexity bound:

(3.8)

where the prefactor is a constant depending on the two constants in Eq. (3.7).

Remark 3.

We now compare Corollary 2 with the following inequality, established in [24, Eq. (3.5)] under the entropy condition (3.7):

(3.9)

The upper bound in Eq. (3.9) is not a sub-root function. Furthermore, our bound is monotonically increasing w.r.t. $r$, while the bound (3.9) diverges as $r \to 0$, violating the natural property a local Rademacher complexity bound should admit. ∎

Corollary 3.

Let $\mathcal{F}$ be a uniformly bounded function class. Assume that there exist two constants for which the corresponding entropy condition holds; then we have the following complexity bound:

(3.10)
Remark 4.

As compared with the following inequality established in [24, Eq. (3.4)]

(3.11)

Corollary 3 generalizes Eq. (3.11) to a wider range of the exponent on the one hand, and on the other hand provides a competitive result in the case covered by [24]. For example, in the first case one can choose the free parameter in Eq. (3.10) appropriately to show that

which is no larger than Eq. (3.11) for such exponents. Furthermore, in the remaining case one can also choose the free parameter in Eq. (3.10) to obtain that

which is again no larger than Eq. (3.11) in this case. Therefore, our result is competitive with Eq. (3.11) for any admissible exponent. ∎

4 Applications to generalization analysis

We now show how to apply the previous local Rademacher complexity bounds to study the generalization performance of learning algorithms. In the learning context, we are given an input space $\mathcal{X}$ and an output space $\mathcal{Y}$, along with a probability measure $P$ defined on $\mathcal{Z} := \mathcal{X} \times \mathcal{Y}$. Given a sequence of examples $z_i = (x_i, y_i)$, $i = 1, \ldots, n$, independently drawn from $P$, our goal is to find a prediction rule (model) $f$ to perform prediction as accurately as possible. The error incurred from using $f$ for prediction on an example $z = (x, y)$ can be quantified by a non-negative real-valued loss function $\ell(f(x), y)$. The generalization performance of a model $f$ can then be measured by its generalization error [31, 9], $\mathcal{E}(f) := \mathbb{E}_z[\ell(f(x), y)]$. Since the measure $P$ is often unknown to us, the Empirical Risk Minimization (ERM) principle first establishes the so-called empirical error $\mathcal{E}_n(f) := \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i)$ to approximate $\mathcal{E}(f)$, and then searches for the prediction rule by minimizing it over a specified class $\mathcal{H}$ called the hypothesis space; that is, $f_n := \arg\min_{f \in \mathcal{H}} \mathcal{E}_n(f)$. Denoting by $f^*$ the best prediction rule attained in $\mathcal{H}$, generalization analysis aims to relate the excess generalization error $\mathcal{E}(f_n) - \mathcal{E}(f^*)$ to the empirical behavior of $f_n$ over the sample.
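As a toy instance of the ERM principle just described (a sketch with hypothetical names; the hypothesis space is a finite grid of threshold rules and the loss is the squared loss), the empirical minimizer is found by direct enumeration:

```python
import numpy as np

def erm(H, X, y, loss):
    """Empirical Risk Minimization over a finite hypothesis class H (a list
    of callables): return the f minimizing (1/n) sum_i loss(f(x_i), y_i)."""
    emp_errors = [np.mean(loss(f(X), y)) for f in H]
    best = int(np.argmin(emp_errors))
    return H[best], emp_errors[best]

# toy 1-d problem: the true rule is a threshold at 0.4
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=200)
y = (X > 0.4).astype(float)
# hypothesis space: 21 threshold rules on a grid (t=t freezes each threshold)
H = [(lambda x, t=t: (x > t).astype(float)) for t in np.linspace(0.0, 1.0, 21)]
f_n, emp_err = erm(H, X, y, lambda pred, target: (pred - target) ** 2)
print("empirical error of the ERM rule:", emp_err)
```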

Our generalization analysis is based on Theorem 3 in Bartlett et al. [2], which justifies using the Rademacher complexity associated with a small subset of the original class as the complexity term in an error bound. We call a function $\psi : [0, \infty) \to [0, \infty)$ sub-root if it is nonnegative and nondecreasing, and if $\psi(r)/\sqrt{r}$ is nonincreasing for $r > 0$. If $\psi$ is a sub-root function, then it can be checked [2, 3] that the equation $\psi(r) = r$ has a unique positive solution $r^*$, which is referred to as the fixed point of $\psi$.
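Since for a sub-root $\psi$ one has $\psi(r) > r$ below the fixed point and $\psi(r) < r$ above it, $r^*$ can be located numerically by bisection. A minimal sketch, using the hypothetical sub-root function $\psi(r) = a\sqrt{r} + b$, whose fixed point has the closed form $r^* = \bigl((a + \sqrt{a^2 + 4b})/2\bigr)^2$:

```python
import numpy as np

def fixed_point(psi, hi=1e6, tol=1e-10):
    """Bisection for the unique positive solution of psi(r) = r.  For a
    sub-root psi, psi(r) > r below the fixed point and psi(r) < r above it,
    so the sign of psi(r) - r brackets r*."""
    lo = tol
    assert psi(hi) < hi, "increase hi so that [lo, hi] brackets the fixed point"
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if psi(mid) > mid:
            lo = mid                      # still below the fixed point
        else:
            hi = mid                      # at or above the fixed point
    return 0.5 * (lo + hi)

a, b = 2.0, 0.5                           # psi(r) = a*sqrt(r) + b is sub-root
r_star = fixed_point(lambda r: a * np.sqrt(r) + b)
closed_form = ((a + np.sqrt(a ** 2 + 4 * b)) / 2) ** 2
print(r_star, closed_form)                # the two values agree
```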

Lemma 3 ([2]).

Let $\mathcal{F}$ be a class of functions taking values in a bounded interval, and assume that there exist a functional $T$ and a constant $B$ such that $\operatorname{Var}(f) \le T(f) \le B\,Pf$ for every $f \in \mathcal{F}$. Let $\psi$ be a sub-root function with fixed point $r^*$. If, for any $r \ge r^*$, $\psi$ satisfies

$\psi(r) \ge B\,\mathfrak{R}\{f \in \mathcal{F} : T(f) \le r\},$

then for any $K > 1$ and any $\delta \in (0, 1)$, the following inequality holds with probability at least $1 - \delta$:

(4.1)
Theorem 4.

Let $\mathcal{H}$ be the hypothesis space and let

$\mathcal{F} := \{\ell(f(x), y) - \ell(f^*(x), y) : f \in \mathcal{H}\}$

be the shifted loss class. Suppose that the loss function is Lipschitz continuous, and that there exist three positive constants for which the entropy condition of Corollary 1 is satisfied. Suppose the variance-expectation condition holds for functions in $\mathcal{F}$, i.e., there exists a constant $B$ such that $Pf^2 \le B\,Pf$ for all $f \in \mathcal{F}$. Then, for any $\delta \in (0, 1)$, $f_n$ satisfies the following inequality with probability at least $1 - \delta$:

where the constant depends only on the quantities introduced above.

Remark 5.

It is possible to derive generalization error bounds using the local Rademacher complexity bound given in [24] (Eq. (3.6)) under the same entropy condition. An obstacle in the way of applying Lemma 3 is that the r.h.s. of Eq. (3.6) is not a sub-root function. The trick for circumventing this problem is to consider the local Rademacher complexity of a slightly larger function class (the star-shaped space, or star-hull, of $\mathcal{F}$), which always satisfies the sub-root property and can be related to the original class by the following inequality due to Mendelson [24, Lemma 3.9]:

With this trick, plugging Eq. (3.6) into Lemma 3 yields a generalization bound, holding with probability at least $1 - \delta$, that is slightly worse than the bound in Theorem 4. Furthermore, notice that our upper bound on local Rademacher complexities is always a sub-root function, which is more convenient to use in Lemma 3 and does not require the trick of introducing an additional star-hull.
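For reference, a sketch of the object involved in this trick, under the standard definition (our notation; see [2, 24] for the precise statements): the star-hull of $\mathcal{F}$ around $0$ is

$\operatorname{star}(\mathcal{F}) := \{\alpha f : f \in \mathcal{F},\ \alpha \in [0, 1]\},$

and it is a standard fact [2] that the localized Rademacher complexity $r \mapsto \mathfrak{R}\{f \in \operatorname{star}(\mathcal{F}) : Pf^2 \le r\}$ is a sub-root function of $r$, which is exactly what makes the star-hull argument work.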

Theorem 5.

Under the same conditions as in Theorem 4, but with the entropy condition replaced by Eq. (3.7), the following inequality holds with probability at least $1 - \delta$:

where the prefactor is a constant depending on the quantities introduced above.

Remark 6.

Since the local Rademacher complexity bound given in Eq. (3.9) is not sub-root, applying it to study generalization performance also requires the star-hull argument. Indeed, with this trick one can show that the bound (3.9) yields a generalization guarantee, holding with probability at least $1 - \delta$, that is slightly worse than the bound given in Theorem 5.

5 Proofs

5.1 Proofs on general local Rademacher complexity bounds

Proof of Lemma 1.

For a temporarily fixed $\epsilon > 0$, let $\mathcal{F}_\epsilon$ be a minimal proper $\epsilon$-cover of the class $\mathcal{F}$ with respect to the $L_2(P_n)$ metric. According to the definition of covering numbers, we know that $|\mathcal{F}_\epsilon| = N(\epsilon, \mathcal{F}, L_2(P_n))$. Furthermore, Lemma A.3 provides a bound on this covering number. For any $f \in \mathcal{F}$, let $f_\epsilon$ be an element of $\mathcal{F}_\epsilon$ satisfying $\|f - f_\epsilon\|_{L_2(P_n)} \le \epsilon$. Then, we have

(5.1)

where the last inequality is due to the inclusion relationship between the classes involved.

For this choice, the definition of the cover and the stated fact guarantee the desired control. Moreover, the construction of the cover implies that

Consequently, we have

Plugging the above inequality into Eq. (5.1) gives

(5.2)

Taking conditional expectations on both sides of Eq. (5.2) and using Lemma A.1 to bound the contribution of the finite cover, we derive that

Since the above inequality holds for any $\epsilon > 0$, the desired inequality follows immediately. ∎

Proof of Theorem 2.

We first fix the sample $z_1, \ldots, z_n$. For any $f \in \mathcal{F}$ with $Pf^2 \le r$, there holds that

Consequently, the following result holds almost surely

(5.3)

Using the inclusion relationship (5.3), one can control local Rademacher complexities as follows:

(5.4)

where the second inequality is a direct corollary of Lemma 1 and the last inequality follows from Eq. (2.3).

The concavity of the square-root function, coupled with Jensen's inequality, implies that

(5.5)

where the second inequality follows from the standard symmetrization inequality for Rademacher averages [2, e.g., Lemma A.5] and the third inequality comes from a direct application of the contraction property (Lemma A.4) with the square function, which is Lipschitz on the bounded range of $\mathcal{F}$.

Combining Eqs. (5.4) and (5.5), it follows directly that

Solving the above inequality (a quadratic inequality in the square root of the local Rademacher complexity) gives that

The proof is complete if we take the infimum over all $\epsilon > 0$. ∎
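For completeness, the elementary step used here, stated in generic form (the symbols $A$, $B$, $C$ are placeholders, not the paper's notation): for nonnegative $A$, $B$, $C$,

$A \le B\sqrt{A} + C \;\Longrightarrow\; \sqrt{A} \le \tfrac{1}{2}\bigl(B + \sqrt{B^2 + 4C}\bigr) \;\Longrightarrow\; A \le B^2 + 2C,$

where the last implication uses $(u + v)^2 \le 2u^2 + 2v^2$.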

5.2 Proofs on explicit local Rademacher complexity bounds

Proof of Corollary 1.

It follows directly from Theorem 2 that

(5.6)

where the class $\mathcal{F} - \mathcal{F}$ is defined by Eq. (2.1). Lemma A.2 and the condition on covering numbers imply that

(5.7)

Now one can resort to Lemma A.5 to address the remaining term. Indeed, applying Lemma A.5 with the appropriate assignment and using the inequality

the following inequality holds for any admissible parameter:

(5.8)

where the third inequality follows from a standard estimate and the last inequality is due to the fact noted above.

Choosing the parameter appropriately in Eq. (5.8) and noticing Eq. (5.6), one derives that