# Local Rademacher Complexity Bounds based on Covering Numbers

This paper provides a general result on controlling local Rademacher complexities, which elegantly relates complexities defined with a constraint on the expected norm to the corresponding ones defined with a constraint on the empirical norm. This result is convenient to apply in real applications and yields refined local Rademacher complexity bounds for function classes satisfying general entropy conditions. We demonstrate the power of our complexity bounds by applying them to derive effective generalization error bounds.

## 1 Introduction

Machine learning refers to a process of inferring the underlying relationship among input-output variables from a previously chosen hypothesis class, on the basis of some scattered, noisy examples [11, 29]. Generalization analysis of learning algorithms occupies a central place in machine learning since it is important to understand the factors influencing models' behavior, as well as to suggest ways to improve them [5, 2, 7, 6, 3, 20]. One seminal example can be found in the multiple kernel learning (MKL) context, where Cortes et al. [7] established a framework showing how the generalization analysis in [12, 25, 13] could motivate two novel MKL algorithms.

Vapnik and Chervonenkis [30] pioneered the research on learning theory by relating generalization errors to the supremum of an empirical process: supf∈F(Pf−Pnf), where F is the associated loss class induced from the hypothesis space, and P and Pn are the true probability measure and the empirical probability measure, respectively. It was then indicated that this supremum is closely connected with the "size" of the space F [29, 30]. For a finite class of functions, its size can be simply measured by its cardinality. Vapnik [29] provided a novel concept called the VC dimension to characterize the complexity of {0,1}-valued function classes, by noticing that the quantity of significance is the number of distinct projections of the function class onto the sample. Other quantities like covering numbers, which measure the number of balls required to cover the original class, have been introduced to capture, on a finer scale, the "size" of real-valued function classes [33, 8, 34, 14]. With the recent development of concentration inequalities and empirical process theory, it is possible to obtain a tighter estimate on the "size" of F through the remarkable concept called Rademacher complexity [1, 2, 32, 15].

However, all the above-mentioned approaches provide only global estimates on the complexity of function classes, and they do not reflect how a learning algorithm explores the function class and interacts with the examples [5, 4]. Moreover, they bound the deviation of empirical errors from the true errors uniformly over the whole class, while the quantity of primary importance is only that deviation for the particular function picked by the learning algorithm, which may be far from attaining this supremum [2, 16, 26]. Therefore, an analysis based on a global complexity would give a rather conservative estimate. On the other hand, most learning algorithms are inclined towards choosing functions possessing small empirical errors and hopefully also small generalization errors [5]. Furthermore, if there holds a relationship between variances and expectations such as Pf2≤BPf, these functions will also admit small variances. That is to say, the obtained prediction rule is likely to fall into a subclass with small variances [2]. Due to the seminal work of Koltchinskii and Panchenko [16] and Massart [22], it turns out that the notion of Rademacher complexity can be naturally modified to take this into account, yielding the so-called local Rademacher complexity [16]. Since the local Rademacher complexity is always smaller than its global counterpart, analyses based on local Rademacher complexities can yield significantly better learning rates under variance-expectation conditions.

Mendelson [24, 23] initiated the discussion of estimating local Rademacher complexities with covering numbers, and these complexity bounds are very effective in establishing fast learning rates. However, the discussions in [24, 23] are somewhat dispersed in the sense that the author did not provide a general result applicable to all function classes. Indeed, Mendelson [24, 23] derived local Rademacher complexity bounds for several function classes satisfying different entropy conditions case by case, and the involved deduction also relies on the specific entropy conditions. Mendelson [25] also derived, for a general Reproducing Kernel Hilbert Space (RKHS), an interesting local Rademacher complexity bound based on the eigenvalues of the associated integral operator, which was later generalized to the ℓp-norm MKL context [12, 13, 21]. These results are exclusively developed for RKHSs, and it still remains unknown whether they can be extended to general function classes. In this paper, we try to refine these discussions by providing a general and sharp result on controlling local Rademacher complexities by covering numbers. A distinctive property of our result is that it elegantly relates local Rademacher complexities to the associated empirical local Rademacher complexities, which allows us to improve the existing local Rademacher complexity bounds for function classes with different entropy conditions in a systematic manner. We also demonstrate the effectiveness of these complexity bounds by applying them to refine the existing learning rates.

The paper is organized as follows. Section 2 formulates the problem. Section 3 provides a general local Rademacher complexity bound as well as its applications to different function classes. Section 4 applies our complexity bounds to generalization analysis. All proofs are presented in Section 5. Some conclusions are presented in Section 6.

## 2 Statement of the problem

We first introduce some notation which will be used throughout this paper. For a measure P and a positive number p, the notation Lp(P) means the collection of functions f for which the norm ∥f∥Lp(P):=(P|f|p)1/p is finite. For a class F of functions, we denote by

 ˜F:={f−g:f,g∈F} (2.1)

the class consisting of those elements which can be represented as the difference of two elements of F. For a real number x, ⌈x⌉ indicates the least integer not less than x, and logx represents the natural logarithm of x. By c we denote a positive constant depending only on the involved arguments; its exact value may change from line to line, or even within the same line.

###### Definition 1 (Empirical measure).

Let S be a set and let s1,…,sn be points in S; then the empirical measure supported on s1,…,sn is defined as

 Pn(A):=1nn∑i=1χA(si),for any A⊂S, (2.2)

where χA is the characteristic function of the set A, defined by χA(s)=1 if s∈A and χA(s)=0 if s∉A.

If Q is a measure and f is a measurable function, it is convenient [5] to use the notation Qf:=∫f dQ. Now, for the empirical measure Pn supported on s1,…,sn, the empirical average of f can be abbreviated as Pnf=(1/n)∑ni=1f(si).
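As a small illustration of Definition 1 and the Pnf notation, the following minimal Python sketch (the sample points and the test functions are hypothetical) computes Pn(A) and Pnf for an explicit sample:

```python
import numpy as np

def empirical_measure(sample):
    """Return the map A -> P_n(A) = (1/n) * sum_i chi_A(s_i), where the set A
    is passed in via its characteristic (indicator) function chi_A."""
    sample = np.asarray(sample, dtype=float)
    return lambda chi_A: float(np.mean([chi_A(s) for s in sample]))

def empirical_average(f, sample):
    """P_n f = (1/n) * sum_i f(s_i), the empirical average of f."""
    return float(np.mean([f(s) for s in sample]))

sample = [0.1, 0.4, 0.5, 0.9]
Pn = empirical_measure(sample)
print(Pn(lambda s: s <= 0.5))                       # P_n([0, 0.5]) = 3/4
print(empirical_average(lambda s: s ** 2, sample))  # P_n f for f(s) = s^2
```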

###### Definition 2 (Covering number [14]).

Let (S,d) be a metric space and let F⊆S. For any ϵ>0, a set F△ is called an ϵ-cover of F if for every f∈F we can find an element g∈F△ satisfying d(f,g)≤ϵ. An ϵ-cover F△ is called a proper ϵ-cover if F△⊆F. The covering number N(ϵ,F,d) is the cardinality of a minimal proper ϵ-cover of F, that is

 N(ϵ,F,d):=min{|F△|:F△⊆F is an ϵ-cover of F}.

We also refer to the logarithm of the covering number as the entropy number.

For brevity, when F is a subset of a normed space with norm ∥⋅∥, we also denote by N(ϵ,F,∥⋅∥) the covering number of F with respect to the metric d(f,g):=∥f−g∥. Introduce the notation:

 N(ϵ,F,∥⋅∥p):=supnsupPnN(ϵ,F,∥⋅∥Lp(Pn)). (2.3)
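The following sketch illustrates Definition 2 for a finite class of functions represented by their values on a sample, so that distances are empirical L2(Pn) norms. The greedy procedure below is a standard heuristic, not part of the paper: it returns a proper ϵ-cover contained in the class, whose cardinality therefore upper-bounds the covering number N(ϵ,F,∥⋅∥L2(Pn)). The class of scaled linear functions is hypothetical.

```python
import numpy as np

def l2_pn(f, g):
    """Empirical L2(P_n) distance between two functions given by their
    values (f(s_1), ..., f(s_n)) on the sample."""
    return float(np.sqrt(np.mean((np.asarray(f) - np.asarray(g)) ** 2)))

def greedy_cover(F, eps):
    """Greedily build a proper eps-cover of F: keep f only if it is more than
    eps away from every function already kept. Every f in F then lies within
    eps of some kept function, so len(cover) >= N(eps, F, L2(P_n))."""
    cover = []
    for f in F:
        if all(l2_pn(f, g) > eps for g in cover):
            cover.append(f)
    return cover

# Hypothetical class: functions f_t(x) = t * x evaluated on a fixed sample.
x = np.linspace(0, 1, 50)
F = [t * x for t in np.linspace(0, 1, 101)]
cover = greedy_cover(F, eps=0.1)
# Every f in F is within eps of some cover element, by construction.
assert all(min(l2_pn(f, g) for g in cover) <= 0.1 for f in F)
print(len(cover))
```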
###### Definition 3 (Rademacher complexity [1]).

Let P be a probability measure on a set S from which the examples X1,…,Xn are independently drawn. Let σ1,…,σn be independent Rademacher random variables, that is, random variables that have equal probability of being +1 or −1. For a class F of functions, introduce the notations:

 Rnf=1nn∑i=1σif(Xi),RnF=supf∈FRnf.

In this paper we concentrate our attention on local Rademacher complexities. The word local means that the class over which the Rademacher process is defined is a subset of the original class. We consider here local Rademacher complexities of the following form:

 ERn{f∈F:Pf2≤r}orEσRn{f∈F:Pnf2≤r}.

We refer to the former as the local Rademacher complexity and to the latter as the empirical local Rademacher complexity. The parameter r is used to filter out those functions with large variances [25], which are of little significance in the learning process since learning algorithms are unlikely to pick them.
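For a finite class, the empirical local Rademacher complexity EσRn{f∈F:Pnf2≤r} can be estimated directly by Monte Carlo over the Rademacher variables. The following sketch (with a hypothetical class of scaled linear functions) also illustrates how the radius r filters the class:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_local_rademacher(F_values, r, n_draws=2000, rng=rng):
    """F_values: array of shape (num_functions, n); each row holds the values
    (f(X_1), ..., f(X_n)) of one function f on the sample."""
    F_values = np.asarray(F_values, dtype=float)
    n = F_values.shape[1]
    # Keep only the local subclass {f : P_n f^2 <= r}.
    local = F_values[np.mean(F_values ** 2, axis=1) <= r]
    if local.size == 0:
        return 0.0
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    # R_n f = (1/n) sum_i sigma_i f(X_i); sup over f, then average over sigma.
    return float(np.mean(np.max(sigma @ local.T / n, axis=1)))

x = np.linspace(-1, 1, 30)
F_values = np.stack([t * x for t in np.linspace(-1, 1, 21)])
print(empirical_local_rademacher(F_values, r=0.05))
print(empirical_local_rademacher(F_values, r=1.0))
```

Enlarging r enlarges the subclass over which the supremum is taken, so the estimate is nondecreasing in r, in line with the localization discussion above.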

## 3 Estimating local Rademacher complexities

This section is devoted to establishing a general local Rademacher complexity bound. For this purpose, we first show how to control empirical local Rademacher complexities. The empirical radii are then connected with the true radii via the contraction property of Rademacher averages (Lemma A.4). Some examples illustrating the power of our result are also presented.

### 3.1 Local Rademacher complexity bounds

Mendelson [23, 24] studied ERn{f∈F:Pf2≤r} by relating it with

 ERn{f∈F:Pnf2≤^r},^r:=supf∈F:Pf2≤rPnf2, (3.1)

the latter of which involves an empirical radius ^r defined w.r.t. the empirical measure Pn and can be further tackled by the standard entropy integral [10], yielding a bound of the following form:

 ERn{f∈F:Pf2≤r}≤c⋅E∫^r0log12N(ϵ,F,∥⋅∥L2(Pn))dϵ. (3.2)

Although the expectation E^r can be controlled by r plus a multiple of the local Rademacher complexity itself [17]

 E^r≤r+4supf∈F∥f∥∞ERn{f∈F:Pf2≤r}, (3.3)

it is generally not trivial to control the integral in Eq. (3.2) since the random variable ^r appears in the upper limit of the integral (the bound Eq. (3.3) cannot be used directly to control the r.h.s. of Eq. (3.2)). Mendelson's [24, 23] idea is, under different entropy conditions, to construct different upper bounds on the involved integral in which the random variable ^r appears in a relatively simple term. For example, for a function class satisfying a logarithmic entropy condition (as in Corollary 1 below), Mendelson [24] established the following bound on the integral:

 E∫^r0log12N(ϵ,F,∥⋅∥L2(Pn))dϵ≤E∫√^r0logp2γϵdϵ≤2E[√^rlogp2c(p,γ)√^r]. (3.4)

The term √^r logp/2(c(p,γ)/√^r) turns out to be concave w.r.t. ^r, which, together with Jensen's inequality, allows it to be controlled by applying the standard upper bound (3.3). Although these deductions are elegant, they do not allow for general bounds on local Rademacher complexities, and sometimes yield unsatisfactory results due to the looseness introduced by constructing an additional artificial upper bound for the integral in Eq. (3.2) (e.g., Eq. (3.4)).
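To see why localization helps, the entropy integral appearing in Eq. (3.2) can be evaluated numerically. The sketch below uses hypothetical parameters d, p, γ and a 1/√n normalization for scale (an illustration, not the paper's bound); under the logarithmic entropy condition of Corollary 1, the integral shrinks with the radius:

```python
import numpy as np

def entropy_integral(radius, d=5.0, p=1.0, gamma=1.0, n=1000, grid=10_000):
    """Approximate (1/sqrt(n)) * int_0^radius sqrt(log N(eps)) d(eps) by a
    Riemann sum, with log N(eps) = d * log^p(2 * gamma / eps)."""
    eps = np.linspace(radius / grid, radius, grid)
    log_N = d * np.log(2 * gamma / eps) ** p
    return float(np.sqrt(log_N).sum() * (radius / grid) / np.sqrt(n))

# The localized integral shrinks as the radius shrinks.
print(entropy_integral(0.5), entropy_integral(0.05))
```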

We overcome these drawbacks by providing a general result on controlling local Rademacher complexity bounds. The stepping stone is the following lemma, which controls a local Rademacher complexity on a sub-class involving a random radius r by a local Rademacher complexity on a sub-class involving a deterministic and adjustable parameter ϵ plus a term proportional to √r, which allows for a direct use of the standard upper bound (3.3) and excludes the necessity of constructing non-trivial bounds for the integral in Eq. (3.2). Our basic strategy, analogous to [18, 28, 19], is to approximate the original function class with an ϵ-cover, thus relating the local Rademacher complexity of F to that of two related function classes. One class is of finite cardinality and can be handled by Massart's lemma (Lemma A.1), while the other is of small magnitude and is defined by empirical radii.

###### Lemma 1.

Let F be a function class and let Pn be the empirical measure supported on the points X1,…,Xn. Then we have the following complexity bound (r can be stochastic w.r.t. X1,…,Xn; a typical choice of r is the term ^r defined in Eq. (3.1)):

 EσRn{f∈F:Pnf2≤r}≤infϵ>0⎡⎢⎣EσRn{f∈˜F:Pnf2≤ϵ2}+√2rlogN(ϵ/2,F,∥⋅∥L2(Pn))n⎤⎥⎦.
###### Theorem 2 (Main theorem).

Let F be a function class satisfying supf∈F∥f∥∞≤b. There holds the following inequality:

 ERn{f∈F:Pf2≤r}≤infϵ>0[2ERn{f∈˜F:Pnf2≤ϵ2}+8blogN(ϵ/2,F,∥⋅∥2)n+√2rlogN(ϵ/2,F,∥⋅∥2)n]. (3.5)
###### Remark 1.

An advantage of Theorem 2 over the existing local Rademacher complexity bounds is that it provides a general framework for controlling local Rademacher complexities, from which, as we will show in Section 3.2, one can readily derive explicit local Rademacher complexity bounds when the entropy information is available. Furthermore, since Theorem 2 does not involve an artificial upper bound for the integral in Eq. (3.2) (e.g., Eq. (3.4)), it can yield sharper local Rademacher complexity bounds (see Remarks 2, 3 and 4) when compared to the results in [24, 23]. ∎

### 3.2 Some examples

We now demonstrate the effectiveness of Theorem 2 by applying it to some interesting classes satisfying general entropy conditions. Our discussion is based on the refined entropy integral (A.2), which can be used to tackle the situation where the standard entropy integral [10] diverges.

###### Corollary 1.

Let F be a function class with supf∈F∥f∥∞≤b. Assume that there exist three positive numbers d, p and γ such that logN(ϵ,F,∥⋅∥2)≤dlogp(2γ/ϵ) for any 0<ϵ≤2γ; then for any r>0 and any n∈ℕ there holds that

 ERn{f∈F:Pf2≤r}≤c(b,p,γ)min[(√drlogp(2γr−1/2)n+dlogp(2γr−1/2)n),(dlogp(2γn1/2)n+√rdlogp(2γn1/2)n)].
###### Remark 2.

For function classes meeting the condition of Corollary 1, Mendelson [23, Lemma 2.3] derived the following complexity bound

 ERn{f∈F:Pf2≤r}≤c(b,p,γ)max[dnlogp1√r,√drnlogp/21√r]. (3.6)

It is interesting to compare the bound (3.6) with ours and the difference can be seen in the following three aspects:

1. Firstly, it is obvious that the r.h.s. of Eq. (3.6) is of the same order of magnitude as the first term in the minimum of our bound. Consequently, our bound can be no worse than Eq. (3.6).

2. Furthermore, as we will see in Section 4, the upper bound in Eq. (3.6) is not a sub-root function, which adds some additional difficulty in applying it to the generalization analysis. As a comparison, the second term in the minimum of our upper bound satisfies the sub-root condition (see the definition of sub-root functions in Section 4) and is thus convenient to use in the generalization analysis.

3. Thirdly, Eq. (3.6) is not consistent with the natural intuition on what a complexity bound should be. For example, when r approaches 0 it is expected that the term ERn{f∈F:Pf2≤r} should monotonically decrease. However, the upper bound in Eq. (3.6) diverges to ∞ as r→0. As a comparison, our result does not violate this consistency since our bound is nondecreasing in r and remains finite as r→0 (the second term in the minimum tends to dlogp(2γn1/2)/n). ∎

###### Corollary 2.

Let F be a function class with supf∈F∥f∥∞≤b. Assume that there exist two constants p,γ>0 such that

 logN(ϵ,F,∥⋅∥2)≤γϵ−plog22ϵ, (3.7)

then we have the following complexity bound:

 ERn{f∈F:Pf2≤r}≤cinfϵ>0[n−1/2ϵ1−p/2log(1/ϵ)+ϵ−pn−1log2(4/ϵ)+√(rϵ−pn−1log2(4/ϵ))],if 0<p<2, (3.8)

where c is a constant depending on b, p and γ.

###### Remark 3.

We now compare Corollary 2 with the following inequality, established in [24, Eq. (3.5)] under the entropy condition (3.7) with 0<p<2:

 ERn{f∈F:Pf2≤r}≤c(b,p,γ)(n−2/(p+2)log4/(2+p)(2/r)+n−1/2r(2−p)/4log(2/r)),0<p<2. (3.9)

The upper bound in Eq. (3.9) is not a sub-root function. Furthermore, our bound is monotonically increasing w.r.t. r, while the bound (3.9) diverges to ∞ as r→0, which violates the natural property a local Rademacher complexity bound should admit. ∎

###### Corollary 3.

Let F be a function class with supf∈F∥f∥∞≤b. Assume that there exist two constants p,γ>0 such that logN(ϵ,F,∥⋅∥2)≤γϵ−p; then we have the following complexity bound:

 ERn{f∈F:Pf2≤r}≤c(b,p,γ)infϵ>0[n−1/2ϵ1−p/2+ϵ−pn−1+√(rϵ−pn−1)],if 0<p<2. (3.10)
###### Remark 4.

As compared with the following inequality established in [24, Eq. (3.4)]

 ERn{f∈F:Pf2≤r}≤c(b,p,γ)(n−2/(p+2)+n−1/2r(2−p)/4),0<p<2, (3.11)

Corollary 3 generalizes Eq. (3.11) to the case p≥2 on the one hand, and on the other hand provides a competitive result for the case 0<p<2. For example, when r≤n−2/(p+2) one can take ϵ=n−1/(p+2) in Eq. (3.10) to show that

 ERn{f∈F:Pf2≤r}≤c(b,p,γ)[n−2/(p+2)+√rn−1/(p+2)],

which is no larger than Eq. (3.11) since √r≤n−1/(p+2) for such r. Furthermore, for the case r≥n−2/(p+2) one can also choose ϵ=√r in Eq. (3.10) to obtain that

 ERn{f∈F:Pf2≤r}≤c(b,p,γ)[n−1/2r(2−p)/4+r−p/2n−1],

which is again no larger than Eq. (3.11) since r−p/2n−1≤n−1/2r(2−p)/4 in this case. Therefore, our result is competitive with Eq. (3.11) for any r>0. ∎
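The two choices of ϵ discussed in this remark can also be checked numerically. The sketch below evaluates the bracket inside the infimum of Eq. (3.10) (with the p- and γ-dependent constants dropped, and hypothetical values of n, p and r) and compares the closed-form choices ϵ=n−1/(p+2) and ϵ=√r against a grid approximation of the infimum:

```python
import numpy as np

def bracket(eps, r, n, p):
    """The bracket inside the infimum of Eq. (3.10), constants dropped."""
    return eps ** (1 - p / 2) / np.sqrt(n) + eps ** (-p) / n + np.sqrt(r * eps ** (-p) / n)

n, p = 10_000, 1.0
eps_grid = np.logspace(-6, 1, 4001)

# Small radius: eps = n^{-1/(p+2)} is near-optimal.
r = 1e-6
ratio_small = bracket(n ** (-1 / (p + 2)), r, n, p) / bracket(eps_grid, r, n, p).min()
# Large radius: eps = sqrt(r) is near-optimal.
r = 1e-1
ratio_large = bracket(np.sqrt(r), r, n, p) / bracket(eps_grid, r, n, p).min()
print(ratio_small, ratio_large)  # both close to 1
```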

## 4 Applications to generalization analysis

We now show how to apply the previous local Rademacher complexity bounds to study the generalization performance of learning algorithms. In the learning context, we are given an input space X and an output space Y, along with a probability measure P on Z:=X×Y. Given a sequence of examples z1=(x1,y1),…,zn=(xn,yn) independently drawn from P, our goal is to find a prediction rule (model) h:X→Y to perform prediction as accurately as possible. The error incurred from using h to do the prediction on an example z=(x,y) can be quantified by a non-negative real-valued loss function ℓ(h(x),y). The generalization performance of a model h can be measured by its generalization error [31, 9] E(h):=∫Zℓ(h(x),y)dP. Since the measure P is often unknown to us, the Empirical Risk Minimization (ERM) principle firstly establishes the so-called empirical error En(h):=(1/n)∑ni=1ℓ(h(xi),yi) to approximate E(h), and then searches for the prediction rule by minimizing the empirical error over a specified class H called the hypothesis space. That is, ^hn:=argminh∈H En(h). Denoting by h∗ the best prediction rule attained in H, i.e., h∗:=argminh∈H E(h), generalization analysis aims to relate the excess generalization error E(^hn)−E(h∗) to the empirical behavior of ^hn over the sample.

Our generalization analysis is based on Theorem 3 in Bartlett et al. [2], which justifies the use of the Rademacher complexity associated with a small subset of the original class as a complexity term in an error bound. We call a function ψ:[0,∞)→[0,∞) sub-root if it is nonnegative, nondecreasing and if ψ(r)/√r is nonincreasing for r>0. If ψ is a sub-root function, then it can be checked [2, 3] that the equation ψ(r)=r has a unique positive solution r∗, which is referred to as the fixed point of ψ.
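The fixed point r∗ of a sub-root function can be computed by a simple bisection, since ψ(r)−r changes sign exactly once on (0,∞). A minimal sketch with a hypothetical sub-root function:

```python
import numpy as np

def fixed_point(psi, lo=1e-12, hi=1e12, tol=1e-10):
    """Find r* with psi(r*) = r* for a sub-root psi: psi(r) > r near 0 and
    psi(r) < r for large r, so we can bisect on the sign of psi(r) - r."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)  # geometric bisection suits the sqrt scaling
        lo, hi = (mid, hi) if psi(mid) > mid else (lo, mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical sub-root function: psi(r) = sqrt(r)/10 + 1/1000; its fixed
# point solves r = sqrt(r)/10 + 1/1000.
psi = lambda r: np.sqrt(r) / 10 + 1e-3
r_star = fixed_point(psi)
print(r_star)
assert abs(psi(r_star) - r_star) < 1e-6
```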

###### Lemma 3 ([2]).

Let F be a class of functions taking values in [a,b] and assume that there exist some functional T:F→ℝ+ and some constant B>0 such that Pf2≤T(f)≤BPf for every f∈F. Let ψ be a sub-root function with the fixed point r∗. If, for any r≥r∗, ψ satisfies

 ψ(r)≥BERn{f∈F:T(f)≤r},

then for any K>1 and any t>0, the following inequality holds with probability at least 1−e−t:

 Pf≤KK−1Pnf+704KBr∗+t(11(b−a)+26BK)n,∀f∈F. (4.1)
###### Theorem 4.

Let H be the hypothesis space and

 F:={Z=(X,Y)→ℓ(h(X),Y)−ℓ(h∗(X),Y):h∈H}

be the shifted loss class. Suppose that ℓ is L-Lipschitz in its first argument, and that there exist three positive constants d, p and γ satisfying logN(ϵ,H,∥⋅∥2)≤dlogp(2γ/ϵ). Suppose the variance-expectation condition holds for functions in F, i.e., there exists a constant B>0 such that Pf2≤BPf for any f∈F. Then, for any δ∈(0,1), ^hn satisfies the following inequality with probability at least 1−δ:

 E(^hn)−E(h∗)≤c[dlogpnn+log(1/δ)n],

where c is a constant independent of n and δ.

###### Remark 5.

It is possible to derive generalization error bounds using the local Rademacher complexity bound given in [24] (Eq. (3.6)) under the same entropy condition. An obstacle in the way of applying Lemma 3 is that the r.h.s. of Eq. (3.6) is not a sub-root function. The way around this problem is to consider the local Rademacher complexity of a slightly larger function class (the star-shaped space, or star-hull, of F), which always satisfies the sub-root property and can be related to the original class by the following inequality due to Mendelson [24, Lemma 3.9]:

 logN(2ϵ,star(F),∥⋅∥2)≤log2ϵ+logN(ϵ,F,∥⋅∥2).

With this trick and plugging Eq. (3.6) into Lemma 3, one can derive the following generalization bound, which holds with probability at least 1−δ:

 E(^hn)−E(h∗)≤c[dlogmax(1,p)nn+log(1/δ)n],

which is slightly worse than the bound in Theorem 4 for p<1. Furthermore, notice that our upper bound on local Rademacher complexities is always a sub-root function, which is more convenient to use in Lemma 3 and does not require the trick of introducing an additional star-hull.

###### Theorem 5.

Under the same conditions as Theorem 4, except that the entropy condition is replaced by Eq. (3.7), the following inequality holds with probability at least 1−δ:

 E(^hn)−E(h∗)≤c(n−2/(p+2)(logn)(4−p)/(p+2)+n−1log(1/δ)),

where c is a constant independent of n and δ.

###### Remark 6.

Since the local Rademacher complexity bound given in Eq. (3.9) is not sub-root, applying it to study the generalization performance also requires the star-hull trick. Indeed, with this trick one can show that the bound (3.9) yields the following generalization guarantee with probability at least 1−δ:

 E(^hn)−E(h∗)≤c(n−2/(p+2)(logn)4/(p+2)+n−1log(1/δ)),

which is slightly worse than the bound given in Theorem 5.

## 5 Proofs

### 5.1 Proofs on general local Rademacher complexity bounds

###### Proof of Lemma 1.

For a temporarily fixed ϵ>0, let F△ be a minimal proper ϵ-cover of the class {f∈F:Pnf2≤r} with respect to the metric ∥⋅∥L2(Pn). According to the definition of covering numbers, we know that |F△|=N(ϵ,{f∈F:Pnf2≤r},∥⋅∥L2(Pn)). Furthermore, Lemma A.3 shows that N(ϵ,{f∈F:Pnf2≤r},∥⋅∥L2(Pn))≤N(ϵ/2,F,∥⋅∥L2(Pn)). For any f∈F with Pnf2≤r, let f△ be an element of F△ satisfying ∥f−f△∥L2(Pn)≤ϵ. Then, we have

 Rn{f∈F:Pnf2≤r}=sup{f∈F:Pnf2≤r}[1nn∑i=1σif(Xi)−1nn∑i=1σif△(Xi)+1nn∑i=1σif△(Xi)]≤sup{f∈F:Pnf2≤r}1nn∑i=1σi[f(Xi)−f△(Xi)]+sup{f∈F:Pnf2≤r}1nn∑i=1σif△(Xi)≤sup{f∈F:Pnf2≤r}1nn∑i=1σi[f(Xi)−f△(Xi)]+sup{f∈F△:Pnf2≤r}1nn∑i=1σif(Xi), (5.1)

where the last inequality is due to the inclusion relationship F△⊆{f∈F:Pnf2≤r}.

Taking g:=f−f△, the definition of ˜F and the fact f,f△∈F guarantee that g∈˜F. Moreover, the construction of F△ implies that

 Png2=1nn∑i=1(f−f△)2(Xi)≤ϵ2.

Consequently, we have

 sup{f∈F:Pnf2≤r}1nn∑i=1σi[f(Xi)−f△(Xi)]≤sup{g∈˜F:Png2≤ϵ2}1nn∑i=1σig(Xi)=Rn{f∈˜F:Pnf2≤ϵ2}.

Plugging the above inequality into Eq. (5.1) gives

 Rn{f∈F:Pnf2≤r}≤Rn{f∈˜F:Pnf2≤ϵ2}+Rn{f∈F△:Pnf2≤r}. (5.2)

Taking conditional expectations on both sides of Eq. (5.2) and using Lemma A.1 to bound EσRn{f∈F△:Pnf2≤r}, we derive that

 EσRn{f∈F:Pnf2≤r} ≤EσRn{f∈˜F:Pnf2≤ϵ2}+√2rlogN(ϵ/2,F,∥⋅∥L2(Pn))n.

Since the above inequality holds for any ϵ>0, the desired inequality follows immediately. ∎

###### Proof of Theorem 2.

For any r>0, we first fix the sample X1,…,Xn. For any f∈F with Pf2≤r, there holds that

 Pnf2≤sup{f∈F:Pf2≤r}(Pnf2−Pf2)+Pf2≤sup{f∈F:Pf2≤r}(Pnf2−Pf2)+r.

Consequently, the following result holds almost surely

 {f∈F:Pf2≤r}⊆{f∈F:Pnf2≤sup{f∈F:Pf2≤r}(Pnf2−Pf2)+r}. (5.3)

Using the inclusion relationship (5.3), one can control local Rademacher complexities as follows:

 ERn{f∈F:Pf2≤r}=EEσRn{f∈F:Pf2≤r}≤EEσRn{f∈F:Pnf2≤r+sup{f∈F:Pf2≤r}(Pnf2−Pf2)}≤ERn{f∈˜F:Pnf2≤ϵ2}+√2nE√((r+sup{f∈F:Pf2≤r}(Pnf2−Pf2))logN(ϵ/2,F,∥⋅∥L2(Pn))≤ERn{f∈˜F:Pnf2≤ϵ2}+√2logN(ϵ/2,F,∥⋅∥2)nE√r+sup{f∈F:Pf2≤r}(Pnf2−Pf2), (5.4)

where the second inequality is a direct corollary of Lemma 1 and the last inequality follows from Eq. (2.3).

The concavity of the function t↦√t, coupled with Jensen's inequality, implies that

 E√r+sup{f∈F:Pf2≤r}(Pnf2−Pf2)≤√r+Esup{f∈F:Pf2≤r}(Pnf2−Pf2)≤√r+2ERn{f2:f∈F,Pf2≤r}≤√r+4bERn{f∈F:Pf2≤r}, (5.5)

where the second inequality follows from the standard symmetrization inequality for Rademacher averages [2, e.g., Lemma A.5] and the third inequality comes from a direct application of Lemma A.4 with ϕ(t)=t2, which is Lipschitz with constant 2b on [−b,b].

Combining Eqs. (5.4), (5.5) together, it follows directly that

 ERn{f∈F:Pf2≤r}≤ERn{f∈˜F:Pnf2≤ϵ2}+√2logN(ϵ/2,F,∥⋅∥2)n√r+4bERn{f∈F:Pf2≤r}.

Solving the above inequality (a quadratic inequality with respect to the square root of ERn{f∈F:Pf2≤r}) gives that

 ERn{f∈F:Pf2≤r}≤2ERn{f∈˜F:Pnf2≤ϵ2}+8blogN(ϵ/2,F,∥⋅∥2)n+√2rlogN(ϵ/2,F,∥⋅∥2)n.

The proof is complete if we take the infimum over all ϵ>0. ∎

### 5.2 Proofs on explicit local Rademacher complexity bounds

###### Proof of Corollary 1.

It follows directly from Theorem 2 that

 ERn{f∈F:Pf2≤r}≤inf0<ϵ≤2γ⎡⎣2ERn{f∈˜F:Pnf2≤ϵ2}+8bdlogp(2γ/ϵ)n+√2rdlogp(2γ/ϵ)n⎤⎦, (5.6)

where ˜F is defined by Eq. (2.1). Lemma A.2 and the condition on covering numbers imply that

 logN(ϵ,˜F,∥⋅∥2)≤2logN(ϵ/2,F,∥⋅∥2)≤2dlogp(2γ/ϵ),for any 0<ϵ≤2γ. (5.7)

Now one can resort to Lemma A.5 to address the term ERn{f∈˜F:Pnf2≤ϵ2}. Indeed, applying Lemma A.5 with the assignment ϵk:=2−kϵ and using the inequality

 N(ϵk,{f∈˜F:Pnf2≤ϵ2},∥⋅∥L2(Pn))≤N(ϵk/2,˜F,∥⋅∥L2(Pn)),

the following inequality holds for any N∈ℕ:

 ERn{f∈˜F:Pnf2≤ϵ2}=EEσRn{f∈˜F:Pnf2≤ϵ2}≤4EN∑k=1ϵk−1√logN(ϵk/2,˜F,∥⋅∥L2(Pn))n+ϵN≤27/2√dnϵN∑k=12−klogp/2(2k+2γϵ−1)+ϵN(by Eq. (5.7))≤2(7+p)/2√dnϵN∑k=12−k[((k+1)log2)p/2+logp/2(2γϵ−1)]+ϵN≤2(7+p)/2√dnϵ[c(p)+logp/2(2γ/ϵ)]+ϵN, (5.8)

where the third inequality follows from the elementary inequality (a+b)p/2≤2p/2(ap/2+bp/2) for a,b≥0, and the last inequality is due to the fact that ∑∞k=12−k((k+1)log2)p/2≤c(p)<∞.

Letting N→∞ in Eq. (5.8) and noticing Eq. (5.6), one derives that

 ERn{f∈F:Pf2≤r}≤inf0<ϵ≤2γ⎡⎣2(9+p)/2√dnϵ(c(p)+logp/2(2γ/ϵ))+8bdlogp(2γ/ϵ)n+