Linear predictors form a rich class of hypotheses used in a variety of learning algorithms, including SVM (Cortes and Vapnik, 1995), logistic regression or conditional maximum entropy models (Berger et al., 1996), ridge regression (Hoerl and Kennard, 1970), and Lasso (Tibshirani, 1996).
Different regularizations or $\ell_p$-norm conditions are used to constrain the family of linear predictors. This short note gives a sharp analysis of the generalization properties of linear predictors for arbitrary $\ell_p$-norm upper bound constraints, with $p \geq 1$. To do so, we give tight upper bounds on the empirical Rademacher complexity of these hypothesis sets, which we show are matched by lower bounds, modulo some constants.
The notion of Rademacher complexity is a general complexity measure used to derive sharp data-dependent learning guarantees for different hypothesis sets, including margin bounds, which are key in the analysis of generalization for classification (Koltchinskii and Panchenko, 2002; Bartlett and Mendelson, 2002; Mohri et al., 2018). There are known upper bounds on the Rademacher complexity of linear hypothesis sets for some values of $p$, including $p = 1$ or $p = 2$ (Bartlett and Mendelson, 2002; Mohri et al., 2018), as well as $1 < p \leq 2$ (Kakade et al., 2008). Our upper bounds on the empirical Rademacher complexity are tighter than those known for $1 \leq p < 2$ and match the existing one for $p = 2$. We further give upper bounds on the Rademacher complexity for other values of $p$ ($p > 2$). Our upper bounds are expressed in terms of $\|X^\top\|_{2,q}$, where $X$ is the $d \times m$ matrix whose columns are the sample points and where $q$ is the conjugate number associated to $p$. We give matching lower bounds in terms of the same quantity for all values of $p$, which suggests the key role played by this quantity in the analysis of complexity.
Most of the results presented here already appeared in (Awasthi et al., 2020), in the context of the analysis of adversarial Rademacher complexity. Here, we present a more self-contained and detailed analysis, including the statement and proof of lower bounds. In Section 2, we introduce some preliminary definitions and notation. We present our new upper and lower bounds on the Rademacher complexity of linear hypothesis sets in Section 3 (Theorem 1 and Theorem 2). The proof of the upper bounds is given in Appendix A and that of the lower bounds in Appendix B. Lastly, in Appendix D, we give a detailed analysis of how our bounds improve upon existing ones.
We will denote vectors by lowercase bold letters (e.g., $\mathbf{x}$) and matrices by uppercase bold letters (e.g., $\mathbf{X}$). The all-ones vector is denoted by $\mathbf{1}$. The Hölder conjugate of $p \geq 1$ is denoted by $q$, that is, $\frac{1}{p} + \frac{1}{q} = 1$. For a matrix $A = [a_1 \cdots a_m]$, the $(r, s)$-group norm is defined as the $s$-norm of the $r$-norms of the columns of $A$, that is, $\|A\|_{r,s} = \big\|(\|a_1\|_r, \ldots, \|a_m\|_r)\big\|_s$, where the $a_i$s are the columns of $A$.
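As a concrete illustration of this definition, the group norm is straightforward to compute numerically. The following sketch uses NumPy, with a function name of our choosing:

```python
import numpy as np

def group_norm(A, r, s):
    """(r, s)-group norm of A: the s-norm of the vector of r-norms of A's columns."""
    col_norms = np.linalg.norm(A, ord=r, axis=0)  # r-norm of each column
    return np.linalg.norm(col_norms, ord=s)       # s-norm across the columns

# Example: every column of the 2x2 identity has unit 2-norm, so the
# (2, 2)-group norm is the 2-norm of (1, 1), i.e. sqrt(2) (the Frobenius norm).
print(group_norm(np.eye(2), 2, 2))
```

Note that the $(2,2)$-group norm coincides with the Frobenius norm, a fact used later when comparing the bounds at $p = 2$.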
Let $F$ be a family of functions mapping from $\mathcal{X}$ to $\mathbb{R}$. Then, the empirical Rademacher complexity of $F$ for a sample $S = (x_1, \ldots, x_m)$ is defined by
$$\widehat{\mathfrak{R}}_S(F) = \mathbb{E}_{\sigma}\Big[\sup_{f \in F} \frac{1}{m} \sum_{i=1}^m \sigma_i f(x_i)\Big],$$
where $\sigma = (\sigma_1, \ldots, \sigma_m)$ is a vector of i.i.d. Rademacher variables, that is, independent uniform random variables taking values in $\{-1, +1\}$. The Rademacher complexity of $F$, $\mathfrak{R}_m(F)$, is defined as the expectation of this quantity: $\mathfrak{R}_m(F) = \mathbb{E}_{S \sim \mathcal{D}^m}[\widehat{\mathfrak{R}}_S(F)]$, where $\mathcal{D}$ is a distribution over the input space $\mathcal{X}$. The empirical Rademacher complexity is a key data-dependent complexity measure. For a family of functions $F$ taking values in $[0, 1]$, the following learning guarantee holds: for any $\delta > 0$, with probability at least $1 - \delta$ over the draw of a sample $S \sim \mathcal{D}^m$, the following inequality holds for all $f \in F$ (Mohri et al., 2018):
$$\mathbb{E}[f] \leq \widehat{\mathbb{E}}_S[f] + 2\,\widehat{\mathfrak{R}}_S(F) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2m}},$$
where we denote by $\widehat{\mathbb{E}}_S[f]$ the empirical average of $f$, that is, $\widehat{\mathbb{E}}_S[f] = \frac{1}{m}\sum_{i=1}^m f(x_i)$. A similar inequality holds for the average Rademacher complexity $\mathfrak{R}_m(F)$:
$$\mathbb{E}[f] \leq \widehat{\mathbb{E}}_S[f] + 2\,\mathfrak{R}_m(F) + \sqrt{\frac{\log\frac{1}{\delta}}{2m}}.$$
An important application of these bounds is the derivation of margin bounds, which are crucial in the analysis of classification. Fix $\rho > 0$. Then, for any $\delta > 0$, with probability at least $1 - \delta$ over the draw of a sample $S \sim \mathcal{D}^m$, the following inequality holds for all $h \in F$ (Koltchinskii and Panchenko, 2002; Mohri et al., 2018):
$$R(h) \leq \widehat{R}_{S,\rho}(h) + \frac{2}{\rho}\,\widehat{\mathfrak{R}}_S(F) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2m}},$$
where $R(h)$ denotes the generalization error of $h$ and $\widehat{R}_{S,\rho}(h)$ its empirical $\rho$-margin loss on $S$.
Finer margin guarantees were recently presented by Cortes et al. (2020) in terms of the Rademacher complexity and other complexity measures. Furthermore, the Rademacher complexity of a hypothesis set also appears as a lower bound in generalization. As an example, for a symmetric family $F$ of functions taking values in $[-1, +1]$, the following desymmetrization inequality holds (van der Vaart and Wellner, 1996):
$$\mathbb{E}_S\Big[\sup_{f \in F}\big|\widehat{\mathbb{E}}_S[f] - \mathbb{E}[f]\big|\Big] \geq \frac{1}{2}\,\mathfrak{R}_m(F) - \frac{1}{2\sqrt{m}}.$$
The hypothesis set we will analyze in this paper is that of linear predictors whose weight vector is bounded in $\ell_p$-norm, for some $p \geq 1$:
$$F = \{x \mapsto w \cdot x \colon \|w\|_p \leq W\}.$$
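For this hypothesis set, the supremum over the weight ball can be taken in closed form via the dual-norm identity $\sup_{\|w\|_p \leq W} w \cdot v = W\|v\|_q$, so the empirical Rademacher complexity can be estimated by Monte Carlo sampling over $\sigma$. A small sketch (the function name and its defaults are ours):

```python
import numpy as np

def empirical_rademacher_linear(X, W=1.0, p=2.0, n_samples=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    {x -> w.x : ||w||_p <= W} on the sample whose points are the columns of X.
    By the dual-norm identity, sup_{||w||_p <= W} w.(X sigma) = W ||X sigma||_q,
    with 1/p + 1/q = 1."""
    d, m = X.shape
    q = np.inf if p == 1.0 else p / (p - 1.0)
    rng = np.random.default_rng(seed)
    sigmas = rng.choice([-1.0, 1.0], size=(n_samples, m))
    vals = np.linalg.norm(X @ sigmas.T, ord=q, axis=0)  # ||X sigma||_q per draw
    return W * vals.mean() / m
```

With a single sample point ($m = 1$), every sign draw yields $\|{\pm}x\|_q = \|x\|_q$, so the estimate equals $W\|x\|_q$ exactly, which gives a simple correctness check.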
3 Empirical Rademacher Complexity of Linear Hypothesis Sets
The main results of this note are the following upper and lower bounds on the empirical Rademacher complexity of linear hypothesis sets.

Theorem 1. Let $F = \{x \mapsto w \cdot x \colon \|w\|_p \leq W\}$ be a family of linear functions defined over $\mathbb{R}^d$ with weight bounded in $\ell_p$-norm. Then, the empirical Rademacher complexity of $F$ for a sample $S = (x_1, \ldots, x_m)$ admits the following upper bounds:
$$
\widehat{\mathfrak{R}}_S(F) \leq
\begin{cases}
\frac{W \sqrt{2 \log(2d)}}{m}\, \|X^\top\|_{2,\infty} & \text{if } p = 1, \\
\frac{\sqrt{2}\, W}{m} \Big(\frac{\Gamma(\frac{q+1}{2})}{\sqrt{\pi}}\Big)^{\frac{1}{q}} \|X^\top\|_{2,q} & \text{if } 1 < p \leq 2, \\
\frac{W}{m}\, \|X^\top\|_{2,q} & \text{if } p \geq 2,
\end{cases}
$$
where $X$ is the $d \times m$ matrix with the $x_i$s as columns, $X = [x_1 \cdots x_m]$, and $q$ is the conjugate of $p$. Furthermore, the constant factor $\gamma_q = \sqrt{2}\big(\Gamma(\frac{q+1}{2})/\sqrt{\pi}\big)^{\frac{1}{q}}$ in the inequality for the case $1 < p \leq 2$ can be bounded as follows:
$$\sqrt{\tfrac{q}{e}} \;\leq\; \gamma_q \;\leq\; \sqrt{q}.$$
The proof is given in Appendix A. Both the statement of the theorem and its proof first appeared in (Awasthi et al., 2020), in the context of the analysis of adversarial Rademacher complexity. We present a self-contained analysis in this note to make the results more easily accessible, as we believe they are of wider interest. The next theorem is new and provides a lower bound which, modulo a constant factor, matches the upper bounds stated above.
Theorem 2. Let $F = \{x \mapsto w \cdot x \colon \|w\|_p \leq W\}$ be a family of linear functions defined over $\mathbb{R}^d$ with weight bounded in $\ell_p$-norm. Then, the empirical Rademacher complexity of $F$ for a sample $S = (x_1, \ldots, x_m)$ admits the following lower bound, where $q$ is the conjugate of $p$:
This lower bound is tight in terms of the dependence on the sample size $m$ and the dimension $d$. The proof is given in Appendix B. The following corollary presents somewhat looser upper bounds that may be more convenient in various contexts, such as that of kernel-based hypothesis sets. The corollary can be derived directly by combining Theorem 1 and Proposition 1 (see Section 3.2).
Corollary 1. Let $F = \{x \mapsto w \cdot x \colon \|w\|_p \leq W\}$ be a family of linear functions defined over $\mathbb{R}^d$ with weight bounded in $\ell_p$-norm. Then, the empirical Rademacher complexity of $F$ for a sample $S = (x_1, \ldots, x_m)$ admits the following upper bounds, where $q$ is the conjugate of $p$:
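To illustrate these bounds numerically, the case $p = 2$ is convenient: there the upper bound reduces to the classical bound $\frac{W}{m}\|X\|_F$ of Bartlett and Mendelson (2002), and for a small sample the expectation over $\sigma$ can be computed exactly by enumeration. A sketch with arbitrary sample values:

```python
import itertools
import numpy as np

# Exact empirical Rademacher complexity of {x -> w.x : ||w||_2 <= W} for a small
# sample, obtained by enumerating all 2^m sign vectors, compared against the
# p = 2 bound (W/m) * ||X||_F.
rng = np.random.default_rng(0)
d, m, W = 3, 8, 1.0
X = rng.standard_normal((d, m))            # columns are the sample points

vals = [np.linalg.norm(X @ np.array(s), 2)
        for s in itertools.product((-1, 1), repeat=m)]
exact = W * np.mean(vals) / m              # (W/m) E_sigma ||X sigma||_2
bound = W * np.linalg.norm(X, "fro") / m   # (W/m) ||X||_F

# The bound holds since E||X sigma||_2 <= sqrt(E||X sigma||_2^2) = ||X||_F.
print(exact, bound)
```

The gap between the exact value and the bound reflects the slack of Jensen's inequality in the $p = 2$ analysis.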
We now make a few remarks about Theorem 1, whose proof is presented in Appendix A. The theorem states that, for any data set, $\widehat{\mathfrak{R}}_S(F)$ is at most a constant times $\frac{W}{m}\|X^\top\|_{2,q}$. This is in contrast with the quantity $\frac{W}{m}\|X\|_{q,2}$ that appears in the existing analyses available in the literature for linear hypothesis sets (Kakade et al., 2008). However, as we will see in Theorem 3, using $\|X^\top\|_{2,q}$ always leads to a better upper bound.
Another interesting aspect of the upper bound is the dimension dependence of the constant in front of $\frac{W}{m}\|X^\top\|_{2,q}$. This constant is independent of the dimension only for $p > 1$. For $p = 1$, the dependence on the dimension is tight, which can be seen from the corresponding tightness of the maximal inequality and thus that of Massart's inequality (Boucheron et al., 2013). We also provide a simple example further illustrating this dependence in Appendix E. This observation also explains why the constant for $1 < p \leq 2$ approaches infinity as $p \to 1$: if the constant for $p > 1$ were dimension-independent and admitted a finite limit as $p \to 1$, then by continuity the constant for $p = 1$ would be finite and dimension-independent as well. Since we just showed that the constant for $p = 1$ must depend on the dimension, the constant for $p > 1$ must diverge as $p \to 1$. This observation suggests that finding a dimension-dependent constant for $1 < p \leq 2$ could greatly improve the upper bound of Theorem 1. However, in our example where the dimension dependence was tight for $p = 1$, we had $m = 2^d$, which is unrealistic for most applications. It is possible that, under some reasonable assumption on the relationship between $m$ and $d$, one could find a far better constant for $p = 1$.
3.2 Comparison with Previous Work
We are not aware of any existing bound on the empirical Rademacher complexity of linear hypothesis sets for $p > 2$ prior to this work. For other values of $p$, the best existing upper bounds were given by Kakade et al. (2008) for $1 < p \leq 2$ and by Bartlett and Mendelson (2001) (see also (Mohri et al., 2018)) for $p = 2$:
Our new upper bound coincides with (4) when $p = 2$ and is strictly tighter otherwise. Readers familiar with Rademacher complexity bounds for linear hypothesis sets will notice that our bound in this case depends on the norm $\|X^\top\|_{2,q}$. In contrast, the previously known bounds depend on $\|X\|_{q,2}$. In fact, one can show that $\|X^\top\|_{2,q}$ is always smaller than or equal to $\|X\|_{q,2}$ for $q \geq 2$, that is, $\|X^\top\|_{2,q} \leq \|X\|_{q,2}$, as shown by the last inequality of (5) in the following proposition.
Proposition 1. Let $A$ be a $d \times m$ matrix and let $q$ be the conjugate of $p$. If $q \geq 2$, then
$$\|A^\top\|_{2,q} \leq \|A\|_{q,2}. \qquad (5)$$
If $q \leq 2$, then
$$\|A\|_{q,2} \leq \|A^\top\|_{2,q}. \qquad (6)$$
For convenience, in the discussion below, we write $\gamma_q = \sqrt{2}\big(\Gamma(\frac{q+1}{2})/\sqrt{\pi}\big)^{\frac{1}{q}}$ for the constant appearing in our bound and $\sqrt{q-1}$ for the one appearing in (4). Regarding the growth of the constant in our bound, Theorem 1 implies that, as $q \to \infty$, $\gamma_q$ grows asymptotically like $\sqrt{q/e}$. Furthermore, $\gamma_q \leq \sqrt{q-1}$ in the relevant region $q \geq 2$ (see Appendix A.3). In Figure 2, we plot these constants and the bounds on $\gamma_q$ to illustrate their growth rate with $q$.
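The growth of the constant can be checked directly. The sketch below computes $\gamma_q = \sqrt{2}\big(\Gamma(\frac{q+1}{2})/\sqrt{\pi}\big)^{1/q}$ (the form of the constant assumed throughout this note) via `lgamma` and compares it with $\sqrt{q/e}$:

```python
import math

def gamma_q(q):
    """The constant sqrt(2) * (Gamma((q+1)/2) / sqrt(pi))**(1/q), computed via
    lgamma to avoid overflow of Gamma for large q."""
    return math.exp(0.5 * math.log(2.0)
                    + (math.lgamma((q + 1.0) / 2.0) - 0.5 * math.log(math.pi)) / q)

print(gamma_q(2.0))  # approximately 1, since Gamma(3/2) = sqrt(pi)/2
for q in (4.0, 16.0, 256.0, 4096.0):
    print(q, gamma_q(q) / math.sqrt(q / math.e))  # ratio approaches 1 as q grows
```

Working in log space with `lgamma` keeps the computation stable for values of $q$ where `math.gamma` would overflow.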
Proposition 1 and the inequality $\gamma_q \leq \sqrt{q-1}$, proven in Appendix D, imply the following result:
We presented tight bounds on the empirical Rademacher complexity of linear hypothesis sets constrained by an $\ell_p$-norm bound on the weight vector. These bounds can be used to derive sharp generalization guarantees for these hypothesis sets in a variety of contexts, by plugging them into existing Rademacher complexity learning bounds. Our proofs and guarantees suggest an extension beyond $\ell_p$-norm constrained hypothesis sets, which we will discuss elsewhere.
- Alzer (1997) Horst Alzer. On some inequalities for the Gamma and Psi functions. Math. Comput., 66(217):373–389, 1997.
- Awasthi et al. (2020) Pranjal Awasthi, Natalie Frank, and Mehryar Mohri. Adversarial learning guarantees for linear hypotheses and neural networks. In Proceedings of ICML, 2020.
- Bartlett and Mendelson (2001) Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. In Proceedings of COLT, 2001.
- Bartlett and Mendelson (2002) Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3, 2002.
- Berger et al. (1996) Adam L. Berger, Stephen Della Pietra, and Vincent J. Della Pietra. A maximum entropy approach to natural language processing. Comp. Linguistics, 22(1), 1996.
- Boucheron et al. (2013) Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities - A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
- Cortes and Vapnik (1995) Corinna Cortes and Vladimir Vapnik. Support-vector networks. Mach. Learn., 20(3):273–297, 1995.
- Cortes et al. (2020) Corinna Cortes, Mehryar Mohri, and Ananda Theertha Suresh. Relative deviation margin bounds. CoRR, abs/2006.14950, 2020.
- Haagerup (1981) Uffe Haagerup. The best constants in the Khintchine inequality. Studia Mathematica, 70:231–283, 1981.
- Hoerl and Kennard (1970) Arthur E. Hoerl and Robert W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970.
- Kakade et al. (2008) Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Proceedings of NIPS, pages 793–800, 2008.
- Koltchinskii and Panchenko (2002) Vladimir Koltchinskii and Dmitry Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30, 2002.
- Massart (2000) Pascal Massart. Some applications of concentration inequalities to statistics. Annales de la Faculté des Sciences de Toulouse, IX:245–303, 2000.
- Mohri et al. (2018) Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. The MIT Press, second edition, 2018.
- Olver et al. (2010) Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert, and Charles W. Clark. The NIST Handbook of Mathematical Functions. Cambridge Univ. Press, 2010.
- Tibshirani (1996) Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B, 58(1):267–288, 1996.
- van der Vaart and Wellner (1996) Aad W. van der Vaart and Jon A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.
Appendix A Proof of Theorem 1
In this section, we present the proof of Theorem 1. The proof proceeds in several steps. First, in Appendix A.1, we upper bound the Rademacher complexity for the case $p = 1$. Next, in Appendix A.2, we establish the upper bound for the case $p > 1$. Lastly, in Appendix A.3, we prove the inequalities for the constant terms in the case $1 < p < 2$.
A.1 Proof of the upper bound, case $p = 1$
The bound on the Rademacher complexity for $p = 1$ was previously known, but we reproduce its proof for completeness. We closely follow the proof given in (Mohri et al., 2018). For any $i \in [m]$ and $j \in [d]$, $x_{ij}$ denotes the $j$th component of $x_i$. We can write:
$$
\begin{aligned}
\widehat{\mathfrak{R}}_S(F)
&= \frac{1}{m} \mathbb{E}_\sigma\Big[\sup_{\|w\|_1 \leq W} \sum_{i=1}^m \sigma_i \, w \cdot x_i\Big] \\
&= \frac{W}{m} \mathbb{E}_\sigma\Big[\Big\|\sum_{i=1}^m \sigma_i x_i\Big\|_\infty\Big] && \text{(by definition of the dual norm)} \\
&= \frac{W}{m} \mathbb{E}_\sigma\Big[\max_{j \in [d]} \Big|\sum_{i=1}^m \sigma_i x_{ij}\Big|\Big] && \text{(by definition of $\|\cdot\|_\infty$)} \\
&\leq \frac{W \sqrt{2\log(2d)}}{m} \max_{j \in [d]} \Big(\sum_{i=1}^m x_{ij}^2\Big)^{\frac{1}{2}} && \text{(by Massart's maximal inequality)} \\
&= \frac{W \sqrt{2\log(2d)}}{m}\, \|X^\top\|_{2,\infty} && \text{(by definition of the group norm)},
\end{aligned}
$$
which concludes the proof.
A.2 Proof of the upper bound, case $p > 1$
Here again, we use the shorthand $X$ for the $d \times m$ matrix $[x_1 \cdots x_m]$. By definition of the dual norm, we can write:
$$\widehat{\mathfrak{R}}_S(F) = \frac{W}{m}\,\mathbb{E}_\sigma\Big[\Big\|\sum_{i=1}^m \sigma_i x_i\Big\|_q\Big] = \frac{W}{m}\,\mathbb{E}_\sigma\big[\|X\sigma\|_q\big].$$
Next, denoting by $r_1, \ldots, r_d$ the rows of $X$, by Khintchine's inequality (Haagerup, 1981), the following holds for each $j \in [d]$:
$$\mathbb{E}_\sigma\big[|r_j \cdot \sigma|^q\big] \leq \gamma_q^q \, \|r_j\|_2^q,$$
where $\gamma_q = 1$ for $q \leq 2$ and $\gamma_q = \sqrt{2}\big(\Gamma(\frac{q+1}{2})/\sqrt{\pi}\big)^{\frac{1}{q}}$ for $q > 2$. Combined with Jensen's inequality, this yields the following bound on the Rademacher complexity:
$$\widehat{\mathfrak{R}}_S(F) \leq \frac{W}{m}\Big(\sum_{j=1}^d \mathbb{E}_\sigma\big[|r_j \cdot \sigma|^q\big]\Big)^{\frac{1}{q}} \leq \frac{W \gamma_q}{m}\Big(\sum_{j=1}^d \|r_j\|_2^q\Big)^{\frac{1}{q}} = \frac{W \gamma_q}{m}\, \|X^\top\|_{2,q}.$$
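The Khintchine step can be verified exactly for a small sample size by enumerating all sign vectors; the constants below follow Haagerup's form for $q \geq 2$, as assumed above:

```python
import itertools
import math
import numpy as np

# Exact check of the Khintchine upper bound E|sigma . a|^q <= c_q^q ||a||_2^q,
# with c_q = sqrt(2) * (Gamma((q+1)/2)/sqrt(pi))**(1/q) for q >= 2, by
# enumerating all 2^m sign vectors (a sketch; c_q follows Haagerup's constants).
rng = np.random.default_rng(1)
m = 10
a = rng.standard_normal(m)
for q in (2.0, 3.0, 5.0):
    c_q = math.sqrt(2.0) * (math.gamma((q + 1.0) / 2.0) / math.sqrt(math.pi)) ** (1.0 / q)
    moment = np.mean([abs(np.dot(s, a)) ** q
                      for s in itertools.product((-1, 1), repeat=m)])
    assert moment <= (c_q * np.linalg.norm(a, 2)) ** q + 1e-9
print("Khintchine bound verified for q in {2, 3, 5}")
```

At $q = 2$ the check holds with equality, since $\mathbb{E}[(\sigma \cdot a)^2] = \|a\|_2^2$ exactly and $c_2 = 1$.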
A.3 Bounding the Constant
For convenience, set $\gamma_q = \sqrt{2}\big(\Gamma(\frac{q+1}{2})/\sqrt{\pi}\big)^{\frac{1}{q}}$. We establish upper and lower bounds on $\gamma_q$. Let $q \geq 2$. Then, the following inequalities hold:
$$\sqrt{\tfrac{q}{e}} \leq \gamma_q \leq \sqrt{q}.$$
For convenience, we set $a = \frac{q+1}{2}$. Next, we recall a useful inequality (Olver et al., 2010) bounding the gamma function: for all $x > 0$,
$$\sqrt{2\pi}\, x^{x - \frac{1}{2}} e^{-x} \;\leq\; \Gamma(x) \;\leq\; \sqrt{2\pi}\, x^{x - \frac{1}{2}} e^{-x + \frac{1}{12x}}. \qquad (7)$$
We start with the upper bound. Applying the right-hand side inequality of (7) to $\Gamma(\frac{q+1}{2})$, we get the following bound on $\gamma_q$:
It is easy to verify that,
Furthermore, the expression decreases with increasing $q$. At $q = 2$, it is negative, which implies that (9) is less than 1 for $q \geq 2$. Hence,
Next, we prove the lower bound. Applying the lower bound of (7) to $\Gamma(\frac{q+1}{2})$ results in
We will establish that , which will complete the proof of the lower bound. We prove this statement by showing that
By applying some elementary inequalities
The last inequality follows since the expression increases with $q$ and is positive at $q = 2$.
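The gamma-function bounds used in this section can also be checked numerically; the sketch below verifies the standard Stirling-type form $\sqrt{2\pi}\, x^{x-1/2} e^{-x} \leq \Gamma(x) \leq \sqrt{2\pi}\, x^{x-1/2} e^{-x + \frac{1}{12x}}$, which is the form of (7) assumed here:

```python
import math

# Numerical check of the Stirling-type bounds on the gamma function:
# sqrt(2 pi) x^(x - 1/2) e^(-x)  <=  Gamma(x)  <=  sqrt(2 pi) x^(x - 1/2) e^(-x + 1/(12 x)).
for x in (0.5, 1.0, 2.5, 10.0, 100.0):
    lower = math.sqrt(2.0 * math.pi) * x ** (x - 0.5) * math.exp(-x)
    upper = lower * math.exp(1.0 / (12.0 * x))
    g = math.gamma(x)
    assert lower <= g <= upper, (x, lower, g, upper)
print("gamma-function bounds hold on the test grid")
```

The two sides are remarkably tight for moderate $x$; e.g., at $x = 10$ they bracket $\Gamma(10) = 362880$ to within a relative error of about $10^{-5}$.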
Appendix B Proof of Theorem 2
In this section, we prove the lower bound of Theorem 2. For any vector $z$, let $|z|$ denote the vector derived from $z$ by taking the absolute value of each of its components. Starting as in the proof of Theorem 1, using the dual norm property, we can write:
$$\widehat{\mathfrak{R}}_S(F) = \frac{W}{m}\,\mathbb{E}_\sigma\big[\|X\sigma\|_q\big].$$
Appendix C Proof of Proposition 1
In this section, we prove Proposition 1. This result implies that, for $1 < p \leq 2$, the group norm $\|X^\top\|_{2,q}$ is always a lower bound on the term $\|X\|_{q,2}$ that appears in existing upper bounds. We first present a simple lemma helpful for the proof.
Let $0 < a \leq b$ and let $d$ be the dimension. Then, for all $v \in \mathbb{R}^d$,
$$\|v\|_b \leq \|v\|_a \leq d^{\frac{1}{a} - \frac{1}{b}}\, \|v\|_b.$$
We prove the second inequality first. If $a \leq b$, by Hölder's generalized inequality applied with the conjugate exponents $\frac{b}{a}$ and $\frac{b}{b-a}$,
$$\|v\|_a^a = \sum_{j=1}^d |v_j|^a \cdot 1 \leq \Big(\sum_{j=1}^d |v_j|^b\Big)^{\frac{a}{b}} d^{1 - \frac{a}{b}} = \|v\|_b^a \, d^{1 - \frac{a}{b}}.$$
Note that equality holds at the all-ones vector $v = \mathbf{1}$, and this implies that the inequality above is tight. Now, for $a \leq b$, each component of $u = \frac{v}{\|v\|_a}$ has absolute value at most $1$, so $\|u\|_b^b = \sum_j |u_j|^b \leq \sum_j |u_j|^a = 1$, implying that $\|v\|_b \leq \|v\|_a$. Here, equality is achieved at a unit vector $v = e_1$.
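The lemma admits a quick numerical check, including its two equality cases (the statement is taken here in the form $\|v\|_b \leq \|v\|_a \leq d^{1/a - 1/b}\|v\|_b$ for $a \leq b$):

```python
import numpy as np

# Numerical check: for a <= b and v in R^d,
#     ||v||_b <= ||v||_a <= d**(1/a - 1/b) * ||v||_b,
# with the right inequality tight at the all-ones vector and the left one tight
# at a standard basis vector.
rng = np.random.default_rng(2)
d, a, b = 5, 1.5, 4.0
for v in (rng.standard_normal(d), np.ones(d), np.eye(d)[0]):
    na, nb = np.linalg.norm(v, a), np.linalg.norm(v, b)
    assert nb <= na + 1e-12
    assert na <= d ** (1.0 / a - 1.0 / b) * nb + 1e-12

ones = np.ones(d)
assert np.isclose(np.linalg.norm(ones, a),
                  d ** (1.0 / a - 1.0 / b) * np.linalg.norm(ones, b))
print("norm comparison verified")
```

The all-ones equality case is exactly the one invoked in the proof above.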
We now present the proof of Proposition 1.
which implies that
However, $a$ and $b$ are now swapped in comparison to (6). After swapping them back, for $q \geq 2$,
The rest of this proof will be devoted to showing (5).
Next, if $q = 2$, then (5) holds with equality, since both sides then equal the Frobenius norm of $X$. For the rest of the proof, we will assume that $q > 2$, which allows us to consider fractions like $\frac{q}{q-2}$.
We will show that, for $q > 2$, the following inequality holds: $\|X^\top\|_{2,q} \leq \|X\|_{q,2}$, or equivalently, $\|X^\top\|_{2,q}^2 \leq \|X\|_{q,2}^2$. We will use the shorthand $r_1, \ldots, r_d$ for the rows of $X$. By definition of the group norm, and using the notation $x_1, \ldots, x_m$ for the columns of $X$, we can write:
To show that this inequality is tight, note that equality holds for the all-ones matrix. Next, we prove the inequality
for $q \leq 2$. Applying Lemma C twice gives
Again, applying Lemma C twice gives
Appendix D Proof of Theorem 3
Both Theorem 1 and equation (4) present upper bounds on $\widehat{\mathfrak{R}}_S(F)$ for $1 < p \leq 2$. Both of these bounds are of the form of a constant times a matrix norm of $X$. In Appendix C, we compared the two matrix norms and proved the inequality $\|X^\top\|_{2,q} \leq \|X\|_{q,2}$ in the relevant region (Lemma C). Here, we compare the two constants and show that the constant associated with Theorem 1 is smaller than the one appearing in (4) (Lemma D). These lemmas combined directly prove Theorem 3.
In this section, we study the constants in the two known bounds on the Rademacher complexity of linear classes for $1 < p \leq 2$: the constant $\gamma_q = \sqrt{2}\big(\Gamma(\frac{q+1}{2})/\sqrt{\pi}\big)^{\frac{1}{q}}$ of Theorem 1 and the constant $\sqrt{q-1}$ of (4).
Here, we establish our main claim that $\gamma_q \leq \sqrt{q-1}$ for all $q \geq 2$. First, note that the two constants are equal at $q = 2$, where both equal $1$. We claim that the gap between the two constants is nondecreasing for $q \geq 2$, and this implies that $\gamma_q \leq \sqrt{q-1}$ for all $q \geq 2$.
The rest of this proof is devoted to showing this claim. To do so, we state a useful inequality (see Alzer (1997)) bounding the digamma function $\psi$. Recall that the digamma function is the logarithmic derivative of the gamma function, $\psi(x) = \frac{\Gamma'(x)}{\Gamma(x)}$. Now we differentiate:
(by the left-hand inequality in (7))
The last line follows since we only consider $q \geq 2$, and the expression is positive in this range. Finally, this implies the statement of the lemma.
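The constant comparison of this appendix, taken here in the form $\gamma_q \leq \sqrt{q-1}$ for $q \geq 2$, can be checked on a grid:

```python
import math

def khintchine_constant(q):
    """sqrt(2) * (Gamma((q+1)/2) / sqrt(pi))**(1/q), computed via lgamma."""
    return math.exp(0.5 * math.log(2.0)
                    + (math.lgamma((q + 1.0) / 2.0) - 0.5 * math.log(math.pi)) / q)

# Grid check of the comparison gamma_q <= sqrt(q - 1) for q >= 2,
# with equality at q = 2, where both constants equal 1.
assert abs(khintchine_constant(2.0) - 1.0) < 1e-12
q = 2.0
while q <= 50.0:
    assert khintchine_constant(q) <= math.sqrt(q - 1.0) + 1e-12, q
    q += 0.25
print("gamma_q <= sqrt(q - 1) verified on [2, 50]")
```

The gap widens quickly: asymptotically $\gamma_q$ grows like $\sqrt{q/e} \approx 0.61\sqrt{q}$, well below $\sqrt{q-1}$.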
Appendix E The Tightness of the Factor $\sqrt{2\log(2d)}$ for $p = 1$
Here, we provide an example showing that the dimension dependence of the factor $\sqrt{2\log(2d)}$ in our upper bound on the Rademacher complexity of linear functions bounded in $\ell_1$-norm is tight.
Consider a data set with $m = 2^d$ points. Then, the data matrix $X^\top$ has $2^d$ rows. We pick the data so that the rows of $X^\top$ are exactly the set $\{-1, +1\}^d$. This means that $\|X^\top\|_{2,\infty} = \sqrt{m}$, and we can compute the Rademacher complexity as
$$\widehat{\mathfrak{R}}_S(F) = \frac{W}{m}\,\mathbb{E}_\sigma\big[\|X\sigma\|_\infty\big] \qquad \text{(definition of the dual norm).}$$
Since the feature rows of $X$ are orthogonal, the $d$ coordinates of $X\sigma$ are uncorrelated, and the expectation of their maximum absolute value grows as $\sqrt{m \log d}$ up to a constant factor, which matches the upper bound $\frac{W\sqrt{2\log(2d)}}{m}\|X^\top\|_{2,\infty}$ of Theorem 1 up to a constant.
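The example can be reproduced numerically for a small dimension; the sketch below computes $\mathbb{E}_\sigma[\|X\sigma\|_\infty]$ exactly for $d = 3$ and compares it with the Massart-type quantity $\sqrt{2\log(2d)}\,\sqrt{m}$:

```python
import itertools
import numpy as np

# The example above for p = 1, sketched numerically for d = 3: the sample is all
# of {-1, +1}^d, so m = 2^d and every feature row of X has 2-norm sqrt(m). We
# compute E_sigma ||X sigma||_inf exactly and compare it with the
# sqrt(2 log(2d)) * sqrt(m) bound coming from Massart's inequality.
d = 3
points = np.array(list(itertools.product((-1.0, 1.0), repeat=d)))  # m x d
X = points.T                                                       # d x m
m = X.shape[1]

vals = [np.max(np.abs(X @ np.array(s)))
        for s in itertools.product((-1.0, 1.0), repeat=m)]
exact = float(np.mean(vals))                        # E_sigma ||X sigma||_inf
massart = np.sqrt(2.0 * np.log(2.0 * d)) * np.sqrt(m)

print(exact, massart, exact <= massart)
```

Even at $d = 3$, the exact value is a constant fraction of the Massart bound, consistent with the tightness of the $\sqrt{\log}$-type dimension dependence claimed above.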