Despite excellent practical performance, state-of-the-art machine learning (ML) methods often require huge computational resources, motivating the search for more efficient solutions. This has led to a number of new results in optimization [22, 41], as well as the development of approaches mixing linear algebra and randomized algorithms [30, 18, 56, 11]. While these techniques are applied to empirical objectives, in the context of learning it is natural to study how different numerical solutions affect statistical accuracy. Interestingly, it is now clear that there is a whole set of problems and approaches where computational savings do not lead to any degradation in learning performance [39, 4, 7, 50, 28, 40, 12].
Here, we follow this line of research and study an instance of regularized empirical risk minimization where, given a fixed high- (possibly infinite-) dimensional hypothesis space, the search for a solution is restricted to a smaller (possibly random) subspace. This is equivalent to considering sketching operators, or equivalently regularization with random projections. For infinite-dimensional hypothesis spaces, it includes Nyström methods used for kernel methods and Gaussian processes. Recent work in supervised statistical learning has focused on smooth loss functions [39, 3, 31], whereas here we consider convex, Lipschitz, but possibly non-smooth losses. The lack of smoothness requires different technical tools for the analysis: while for smooth losses linear algebra and matrix concentration can be used, here we need machinery from empirical processes. In particular, fast rates require considering localized complexity measures [48, 5, 26].
Our main interest is characterizing the relation between computational efficiency and statistical accuracy. We do so by studying the interplay between regularization, subspace size, and different parameters describing how hard or easy the considered problem is. Indeed, our analysis starts from basic assumptions, which we first strengthen to get faster rates, and then weaken to consider more general scenarios. Our results show that also for convex, Lipschitz losses there are settings in which the best known statistical bounds can be obtained while substantially reducing computational requirements. Interestingly, these effects are relevant but less marked than for smooth losses. In particular, some form of adaptive sampling seems needed to ensure no loss of accuracy and achieve sharp learning bounds, whereas uniform sampling suffices to achieve similar results for smooth loss functions. It is an open question whether this is a byproduct of our analysis or a fundamental limitation. Our theoretical findings are complemented with preliminary numerical experiments on benchmark datasets.
2 Statistical learning with ERM
Let be random variables in , with distribution satisfying the following conditions.
The space is a real separable Hilbert space with scalar product , is a Polish space, and there exists such that almost surely.
Since is bounded, the covariance operator given by can be shown to be self-adjoint, positive and trace class, with . We can think of and as input and output spaces, respectively; some relevant examples follow.
We denote by the loss function. Given a function on with values in , we view as the error made predicting by . We make the following assumption.
Assumption 2 (Lipschitz loss).
The loss function is convex and Lipschitz in its second argument, namely there exists such that for all and ,
Example 2 (Hinge loss & other loss functions).
The main example we have in mind is the hinge loss with , which is convex but not differentiable, and for which and . Another example is the logistic loss , for which and .
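As an illustration, both losses can be written in a few lines; this is a minimal sketch (function names and the numerical Lipschitz check are ours, not from the paper):

```python
import numpy as np

def hinge_loss(y, t):
    # Hinge loss ell(y, t) = max(0, 1 - y*t): convex, 1-Lipschitz in t,
    # not differentiable at y*t = 1.
    return np.maximum(0.0, 1.0 - y * t)

def logistic_loss(y, t):
    # Logistic loss ell(y, t) = log(1 + exp(-y*t)): convex, smooth,
    # and also 1-Lipschitz in its second argument.
    return np.log1p(np.exp(-y * t))
```

The Lipschitz property from Assumption 2 can be verified numerically on a grid: consecutive loss values never differ by more than the corresponding difference in predictions.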
Given a loss, the corresponding expected risk is for all
and can be easily shown to be convex and Lipschitz continuous.
In this setting, we are interested in the problem of solving
when the distribution is known only through a training set of independent samples . Since we only have access to the data, we cannot solve the problem exactly; given an empirical approximate solution , a natural error measure is the excess risk , which is a random variable through its dependence on , and hence on the data. In the following we are interested in characterizing its distribution for finite sample sizes. Next, we discuss how approximate solutions can be obtained from data.
2.1 Empirical risk minimization (ERM)
A natural approach to derive approximate solutions is based on replacing the expected risk with the empirical risk defined for all as
We consider regularized empirical risk minimization (ERM) based on the solution of the problem,
The expression of the coefficient depends on the considered loss function. Next, we comment on different approaches to obtain a solution when is the hinge loss. We add one remark first.
Remark 1 (Constrained ERM).
A related approach is based on considering the problem
Minimizing (3) can be seen as a Lagrange multiplier formulation of the above problem. While these problems are equivalent, the exact correspondence is implicit; as a consequence, their statistical analyses differ. We primarily discuss Problem (3), but also analyze Problem (5) in Appendix H.
2.2 Computations with the hinge loss
Problem (3) can be solved in many ways, and we provide some basic considerations. If is finite dimensional, gradient methods can be used. For example, the subgradient method applied to (3) is given, for a suitable initialization and step-size sequence , by
where is the subgradient of the map evaluated at , see also . The corresponding iteration cost is in time and memory. Clearly, other variants can be considered, for example adding a momentum term , using stochastic gradients and minibatching , or other approaches based on coordinate descent . When is infinite dimensional, a different approach is possible, provided can be computed for all . For example, it is easy to prove by induction that the iteration in (6) satisfies , where
and where is the canonical basis in . The cost of the above iteration is for computing , where is the cost of evaluating one inner product. Also in this case, a number of other approaches can be considered, see e.g. [48, Chap. 11] and references therein. We illustrate the above ideas for the hinge loss.
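To make the kernelized iteration concrete, here is a minimal sketch of the subgradient method for regularized hinge-loss ERM, maintaining the coefficients of the kernel expansion above (function name, toy step-size schedule, and variable names are our choices, not from the paper):

```python
import numpy as np

def kernel_subgradient_hinge(K, y, lam, n_iter=200, step=1.0):
    """Subgradient method for regularized hinge-loss ERM in kernel form.

    K   : (n, n) Gram matrix, K[i, j] = k(x_i, x_j)
    y   : labels in {-1, +1}
    lam : regularization parameter
    Maintains coefficients c such that f = sum_i c[i] * k(x_i, .).
    """
    n = len(y)
    c = np.zeros(n)
    for t in range(1, n_iter + 1):
        f = K @ c                               # predictions on the training set
        # Subgradient of the empirical hinge loss: -y_i where the margin is violated.
        g = np.where(y * f < 1.0, -y, 0.0) / n
        gamma = step / t                        # decaying step size (illustrative choice)
        c = (1.0 - gamma * lam) * c - gamma * g
    return c
```

On a small separable toy problem with a linear kernel, a few iterations already yield a predictor with the correct signs on the training points.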
3 ERM on random subspaces
In this paper, we consider a variant of ERM based on considering a subspace and the corresponding regularized ERM problem,
with a subset of the input points. A basic idea we consider is to sample the points uniformly at random. Another, more refined, choice is to sample exactly or approximately (see Definition 1 in the Appendix) according to the leverage scores
While exact leverage score computation is costly, approximate leverage scores (ALS) can be computed efficiently, see and references therein.
Following , other choices are possible.
Indeed for any and we could consider and derive a formulation as in (11) replacing with the matrix with rows .
We leave this discussion for future work.
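The leverage score sampling discussed above can be sketched in a few lines. Below is a minimal illustration computing exact ridge leverage scores (the paper uses approximate scores for efficiency; the dense O(n^3) computation here is illustrative only, and the helper names are ours):

```python
import numpy as np

def leverage_scores(K, lam):
    # Ridge leverage scores: l_i(lam) = (K (K + lam * n * I)^{-1})_{ii}.
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + lam * n * np.eye(n)))

def sample_by_leverage(K, lam, m, seed=0):
    # Draw m point indices with probability proportional to the leverage scores.
    rng = np.random.default_rng(seed)
    l = leverage_scores(K, lam)
    return rng.choice(K.shape[0], size=m, replace=True, p=l / l.sum())
```

Note that the leverage scores sum to the effective dimension of the regularized spectrum, which is why their distribution concentrates samples on the directions that matter at scale lam.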
Here, we focus on the computational benefits of considering ERM on random subspaces and analyze the corresponding statistical accuracy.
The choice of as in (9) makes it possible to improve computations with respect to (4). Indeed, is equivalent to the existence of s.t. , so that we can replace (8) with the problem
Further, since is symmetric and positive semi-definite, we can derive a formulation close to that in (3), considering the reparameterization which leads to,
where for all , we defined the embedding . Note that this latter operation only involves the inner product in and hence can be computed in time. The subgradient method for (11) has a cost per iteration. In summary, the cost of ERM on subspaces is , to be compared with the cost of solving (7), which is . The corresponding costs to predict new points are and , while the memory requirements are and , respectively. Clearly, memory requirements can be reduced by recomputing quantities on the fly. As is clear from the above discussion, the computational savings can be drastic as long as , and the question arises of how this affects the corresponding statistical accuracy. The next section is devoted to this question.
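The embedding underlying this reparameterization can be sketched as follows; a minimal implementation for a generic kernel, with hypothetical variable names (K_nm holds kernel evaluations between the n data points and the m selected points, K_mm is the kernel matrix of the selected points):

```python
import numpy as np

def nystrom_embedding(K_nm, K_mm, eps=1e-10):
    """Map n points into R^m using m selected (landmark) points.

    Returns the (n, m) embedded data E such that
    E @ E.T approximates K_nm @ pinv(K_mm) @ K_nm.T.
    """
    # Pseudo-inverse square root of K_mm via eigendecomposition;
    # eigenvalues below eps are treated as zero.
    w, V = np.linalg.eigh(K_mm)
    inv_sqrt = np.where(w > eps, 1.0 / np.sqrt(np.clip(w, eps, None)), 0.0)
    return K_nm @ (V * inv_sqrt) @ V.T
```

After this step, any linear method (here, the subgradient method for the hinge loss) can be run on the m-dimensional embedded data at the reduced cost discussed above.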
4 Statistical analysis of ERM on random subspaces
We divide the presentation of the results in three parts. First, we consider a setting where we make basic assumptions. Then, we discuss improved results considering more benign assumptions. Finally, we describe general results covering also less favorable conditions. In all cases, we provide simplified statements for the results, omitting numerical constants, logarithmic and higher order terms, for ease of presentation. The complete statements and the proofs are provided in the appendices.
4.1 Basic setting
In this section, we only assume that a minimizer of the risk over the model exists.
There exists such that .
We first provide some benchmark results for regularized ERM under this assumption.
Theorem 1 (Regularized ERM).
The proof of the above result is given in Appendix A. It shows that the excess risk bound for regularized ERM arises from a tradeoff between an estimation and an approximation term. While this result can be derived by specializing more refined analyses, see e.g. or later sections, we provide a novel self-contained proof which is of interest in its own right. Similar bounds in high probability for ERM constrained to the ball of radius can be obtained through a uniform convergence argument over such balls, see [6, 33, 24]. In order to apply this to regularized ERM, one could in principle use the fact that, by Assumption 2, (see Appendix), but this yields a suboptimal dependence in . address this issue by considering ERM with both a constraint over a ball and regularization, and derive a high-probability bound. Finally, a similar rate for , though only in expectation, can be derived through a stability argument [9, 43]. The proof of our high-probability bound builds on uniform convergence, but circumvents using a crude bound on by instead exploiting its definition as a regularized minimizer and bounding . We use this bound as a reference and to derive results for regularized ERM on subspaces. We first consider any fixed subspace and then specialize to suitable random subspaces. Given , let be the minimizer of (8), the orthogonal projection on , and
Theorem 2 (Regularized ERM on subspaces).
Compared to Theorem 1, the above result shows that there is an extra approximation error term due to considering a subspace. The coefficient appears in the analysis also for other loss functions, see e.g. [39, 31]. Roughly speaking, it captures how well the subspace is adapted to the problem. We next develop this reasoning, specializing the above result to a random subspace as in (9). Note that, if is random, then is a random variable through its dependence on and on . We denote by the unique minimizer of on and by the corresponding projection. Further, it is also useful to introduce the so-called effective dimensions [57, 13, 39]. We denote by the distribution of , with its support (namely, the smallest closed subset of full measure, well-defined since is a Polish space), and define for
Then, is finite since is trace class, and is finite since is bounded.
Further, we denote by the strictly positive eigenvalues of , with eigenvalues counted with respect to their multiplicity and ordered in a non-increasing way. We borrow the following results from .
Proposition 1 (Uniform and leverage scores sampling).
Fix and . With probability at least
provided that or for uniform and ALS sampling, respectively.
Moreover, if the spectrum of has a polynomial decay, i.e. for some
then (13) holds if or for uniform and ALS sampling, respectively.
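The effective dimension introduced above, d(lam) = sum_j sigma_j / (sigma_j + lam), drives the required subspace size. A minimal numerical sketch under an assumed polynomial spectrum (the decay exponent below is purely illustrative, not taken from the paper):

```python
import numpy as np

def effective_dimension(eigvals, lam):
    # Effective dimension d(lam) = sum_j sigma_j / (sigma_j + lam)
    # for a spectrum (sigma_j) of the covariance operator.
    return float(np.sum(eigvals / (eigvals + lam)))

# Illustrative polynomial decay sigma_j = j^{-2}: d(lam) grows like lam^{-1/2},
# so the subspace size required by leverage score sampling grows slowly as lam shrinks.
sigma = 1.0 / np.arange(1, 100001) ** 2.0
```

With this decay, dividing lam by 100 multiplies d(lam) by roughly 10, consistent with the lam^{-1/2} scaling and much smaller than the ambient dimension.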
Combining the above proposition with Theorem 2 we have the following.
Theorem 3 (Uniform and leverage scores sampling under eigen-decay).
The above results show that it is possible to achieve the same rate as standard regularized ERM, but to do so uniform sampling does not seem to provide a computational benefit. As is clear from the proof, seeking computational gains through a smaller subspace dimension would lead to worse rates. This behavior is worse than that allowed by smooth loss functions [39, 31].
These results can be recovered with our approach. Indeed, for both least squares and self-concordant losses, the bound in Theorem 2 can be easily improved to have a linear dependence on , leading to straightforward improvements.
We will detail this derivation in a longer version of the paper.
Due to space constraints, here we focus on non-smooth losses, since these results, and not only their proof, are new.
For this class of loss functions, Theorem 3 shows that leverage scores sampling can lead to better results depending on the spectral properties of the covariance operator.
Indeed, if there is a fast eigendecay, then using leverage scores and a subspace dimension one can achieve the same rates as exact ERM.
For fast eigendecay ( small), the subspace dimension can decrease dramatically; for example, for , then suffices. Note that other decays, e.g. exponential, could also be considered. These observations are consistent with recent results for random features [4, 28, 50], while they seem new for ERM on subspaces. Compared to random features, the proof techniques have similarities but also differences, due to the fact that in general random features do not define subspaces. Finding a unifying analysis would be interesting, but it is left for future work. We also note that uniform sampling can have the same properties as leverage scores sampling, if . This happens under strong assumptions on the eigenvectors of the covariance operator, but can also happen in kernel methods with kernels corresponding to Sobolev spaces. With these comments in mind, here we focus on subspaces defined through leverage scores, noting that the assumption on the eigendecay not only allows for smaller subspace dimensions, but can also lead to faster learning rates. We study this next.
4.2 Fast rates
To exploit the eigendecay assumption and derive fast rates, we begin by considering further conditions on the problem. We relax these assumptions in the next section. First, we let for -almost all
where is the conditional distribution of given (the conditional distribution always exists since is separable and is a Polish space), and make the following assumption.
There exists such that, almost surely,
In our context, this is the same as requiring the model to be well specified. Second, following , we consider a loss that can be clipped at , that is, such that for all ,
where denotes the clipped value of at , that is
If , denotes the non-linear function . This assumption holds for hinge loss with , and for bounded regression. Finally, we make the following assumption on the loss.
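Before stating it, the clipping operation can be made concrete; a minimal sketch (the helper names are ours), which also verifies numerically that clipping never increases the hinge loss:

```python
import numpy as np

def clip_at(t, M):
    # Clipped value of the prediction t at level M: project onto [-M, M].
    return np.clip(t, -M, M)

def hinge(y, t):
    # Hinge loss, which can be clipped at M = 1.
    return np.maximum(0.0, 1.0 - y * t)
```

For the hinge loss with M = 1, one can check on a grid of prediction values that the loss of the clipped prediction is never larger than the loss of the original prediction, which is exactly the clipping property required above.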
Assumption 5 (Simplified Bernstein condition).
There are constants , such that for all ,
This is a standard assumption to derive fast rates for ERM [48, 5]. In classification with the hinge loss, it is implied by standard margin conditions characterizing classification noise, and in particular by hard margin assumptions on the data distribution [2, 52, 32, 48]. As discussed before, we next focus on subspaces defined by leverage scores and derive fast rates under the above assumptions.
The above result is a special case of the analysis in the next section, but it is easier to interpret. Compared to Theorem 3, the assumption on the spectrum also leads to an improved estimation error bound and hence improved learning rates. In this sense, these are the correct estimates, since the decay of the eigenvalues is used both for the subspace approximation error and for the estimation error. As is clear from (20), for fast eigendecay the obtained rate goes from to . Taking again leads to a rate which is better than the one in Theorem 3. In this case, the subspace defined by leverage scores needs to be of dimension at least . Note again that the subspace dimension is even smaller for faster eigendecay. Next, we extend these results considering weaker, more general assumptions.
4.3 General analysis
Last, we give a general analysis relaxing the above assumptions. We replace Assumption 4 by
and introduce the approximation error,
Condition (21) may be relaxed at the cost of an additional approximation term, but the analysis is lengthier and is postponed. It has a natural interpretation in the context of kernel methods, see Example 1, where it is satisfied by universal kernels . Regarding the approximation error, note that, if exists then , and we can recover the results in Section 4.1. More generally, the approximation error decreases with and learning rates can be derived assuming a suitable decay. Further, we consider a more general form of the Bernstein condition.
Assumption 6 (Bernstein condition).
There exist constants , and , such that for all , the following inequalities hold almost surely:
Again, in classification the above condition is implied by margin conditions, and the parameter characterizes how easy or hard the classification problem is. The strongest assumption corresponds to choosing , with which we recover the result in the previous section. Then, we have the following result.
The proof of the above bound follows by combining Proposition 1 with results used to analyze the learning properties of regularized ERM with kernels . While general, the obtained bound is harder to parse. For , the bound becomes vacuous, as there are not enough assumptions to derive a bound. Taking gives the best bound, recovering the result in the previous section when . Note that large values of are prevented, indicating a saturation effect (see [53, 34]). As before, the bound improves when there is fast eigendecay. Taking we recover the previous bounds, whereas smaller leads to worse bounds. Since, for any admissible choice of and , the quantity takes values in , the best rate, which differently from before can also be slower than , can always be achieved by choosing (up to logarithmic terms).
We report simple experiments in the context of kernel methods, considering the Nyström method. In particular, we choose the hinge loss, hence SVM, for classification.
Nyström-Pegasos. Classic SVM implementations with the hinge loss are based on a dual formulation and a quadratic programming problem . This is the case, for example, for the LibSVM library available in Scikit-learn . We use this implementation for comparison, but find it convenient to combine the Nyström method with a primal solver akin to (6) (see [29, 20] for the dual formulation). More precisely, we use Pegasos , which is based on a simple and easy-to-use stochastic subgradient iteration (Python implementation from https://github.com/ejlb/pegasos). We consider a procedure in two steps. First, we compute the embedding discussed in Section 3; with kernels it takes the form where with . Second, we run Pegasos on the embedded data. As discussed in Section 3, the total cost is in time (here , i.e. one epoch equals steps of stochastic subgradient) and in memory (needed to compute the pseudo-inverse and to embed the data in batches of size ).
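The second step of the pipeline can be sketched as follows; this is a minimal, simplified Pegasos-style solver written by us for illustration on a toy problem, not the cited package:

```python
import numpy as np

def pegasos(X, y, lam, n_epochs=5, rng=None):
    """Pegasos-style stochastic subgradient solver for the linear SVM primal.

    One step: draw a random example, shrink w by (1 - eta * lam),
    and add eta * y_i * x_i if the example violates the margin.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # standard Pegasos step size
            margin = y[i] * (X[i] @ w)
            w *= (1.0 - eta * lam)         # regularization shrinkage
            if margin < 1.0:               # margin violation: subgradient step
                w += eta * y[i] * X[i]
    return w
```

In the full pipeline, X would be the Nyström-embedded data from the first step, so the per-iteration cost depends on the subspace dimension m rather than on n.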
Datasets & setup (see Appendix I). We consider five datasets (available from the LIBSVM website http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ and from http://manikvarma.org/code/LDKL/download.html#Jose13) of size , challenging for standard SVM implementations. We use a Gaussian kernel, tuning the width and the regularization parameter as explained in the appendix. We report the classification error and, for data sets with no fixed test set, we set apart part of the data for testing.
Results. We compare with linear and kernel SVM; see Table 1. For all the data sets, the Nyström-Pegasos approach provides comparable performance with much better runtimes (except for the small-size Usps). Note that K-SVM cannot be run on millions of points, whereas Nyström-Pegasos is still fast and provides much better results than linear SVM. Finally, in Figure 1 we illustrate the interplay between and for Nyström-Pegasos on the SUSY data set.
Table 1: For each dataset, classification error (c-err) for linear SVM, and classification error, training time (t train) and prediction time (t pred) for K-SVM and Nyström-Pegasos. Results are reported as mean and standard deviation over 5 independent runs of the algorithm.
This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216, and the Italian Institute of Technology. We gratefully acknowledge the support of NVIDIA Corporation for the donation of the Titan Xp GPUs and the Tesla k40 GPU used for this research.
Part of this work has been carried out at the Machine Learning Genoa (MaLGa) center, Università di Genova (IT).
L. R. acknowledges the financial support of the European Research Council (grant SLING 819789), the AFOSR projects FA9550-17-1-0390 and BAA-AFRL-AFOSR-2016-0007 (European Office of Aerospace Research and Development), and the EU H2020-MSCA-RISE project NoMADS - DLV-777826.
-  A. Alaoui and M. W. Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In Advances in Neural Information Processing Systems, pages 775–783, 2015.
-  J.-Y. Audibert and A. B. Tsybakov. Fast learning rates for plug-in classifiers. The Annals of Statistics, 35(2):608–633, 2007.
-  F. Bach. Sharp analysis of low-rank kernel matrix approximations. In Conference on Learning Theory, pages 185–209, 2013.
-  F. Bach. On the equivalence between kernel quadrature rules and random feature expansions. The Journal of Machine Learning Research, 18(1):714–751, 2017.
-  P. L. Bartlett, O. Bousquet, S. Mendelson, et al. Local rademacher complexities. The Annals of Statistics, 33(4):1497–1537, 2005.
-  P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
-  L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in neural information processing systems, pages 161–168, 2008.
-  S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, Oxford, 2013.
-  O. Bousquet and A. Elisseeff. Stability and generalization. Journal of machine learning research, 2(Mar):499–526, 2002.
-  S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
-  D. Calandriello, A. Lazaric, and M. Valko. Distributed adaptive sampling for kernel matrix approximation. arXiv preprint arXiv:1803.10172, 2018.
-  D. Calandriello and L. Rosasco. Statistical and computational trade-offs in kernel k-means. In Advances in Neural Information Processing Systems, pages 9357–9367, 2018.
-  A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
-  C.-C. Chang and C.-J. Lin. Libsvm: A library for support vector machines. ACM transactions on intelligent systems and technology (TIST), 2(3):1–27, 2011.
-  M. B. Cohen, Y. T. Lee, C. Musco, C. Musco, R. Peng, and A. Sidford. Uniform sampling for matrix approximation. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 181–190, 2015.
-  L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31. Springer Science & Business Media, 2013.
-  P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13(Dec):3475–3506, 2012.
-  P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153–2175, 2005.
-  E. Giné and J. Zinn. Some limit theorems for empirical processes. The Annals of Probability, pages 929–989, 1984.
-  C.-J. Hsieh, S. Si, and I. S. Dhillon. Fast prediction for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 3689–3697, 2014.
-  T. Joachims. Making large-scale svm learning practical. Technical report, Technical Report, 1998.
-  R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
-  C. Jose, P. Goyal, P. Aggrwal, and M. Varma. Local deep kernel learning for efficient non-linear svm prediction. In International conference on machine learning, pages 486–494, 2013.
-  S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems 21, pages 793–800, 2009.
-  V. Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems, volume 2033 of École d’Été de Probabilités de Saint-Flour. Springer-Verlag Berlin Heidelberg, 2011.
-  V. Koltchinskii et al. Local rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006.
-  S. Kpotufe and B. K. Sriperumbudur. Kernel sketching yields kernel jl. arXiv preprint arXiv:1908.05818, 2019.
-  Z. Li, J.-F. Ton, D. Oglic, and D. Sejdinovic. Towards a unified analysis of random fourier features. arXiv preprint arXiv:1806.09178, 2018.
-  Z. Li, T. Yang, L. Zhang, and R. Jin. Fast and accurate refined Nyström-based kernel SVM. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
-  M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends® in Machine Learning, 3(2):123–224, 2011.
-  U. Marteau-Ferey, D. Ostrovskii, F. Bach, and A. Rudi. Beyond least-squares: Fast rates for regularized empirical risk minimization through self-concordance. arXiv preprint arXiv:1902.03046, 2019.
-  P. Massart, É. Nédélec, et al. Risk bounds for statistical learning. The Annals of Statistics, 34(5):2326–2366, 2006.
-  R. Meir and T. Zhang. Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4(Oct):839–860, 2003.
-  N. Mücke, G. Neu, and L. Rosasco. Beating sgd saturation with tail-averaging and minibatching. In Advances in Neural Information Processing Systems, pages 12568–12577, 2019.
-  Y. Nesterov. Lectures on convex optimization, volume 137. Springer, 2018.
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
-  R. T. Rockafellar. Convex analysis. Number 28. Princeton university press, 1970.
-  A. Rudi, D. Calandriello, L. Carratino, and L. Rosasco. On fast leverage score sampling and optimal learning. In Advances in Neural Information Processing Systems, pages 5672–5682, 2018.
-  A. Rudi, R. Camoriano, and L. Rosasco. Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems, pages 1657–1665, 2015.
-  A. Rudi and L. Rosasco. Generalization properties of learning with random features. In Advances in Neural Information Processing Systems 30, pages 3215–3225, 2017.
-  M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 162(1-2):83–112, 2017.
-  B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In International Conference on Computational Learning Theory, pages 416–426. Springer, 2001.
-  S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. Journal of Machine Learning Research, 11(Oct):2635–2670, 2010.
-  S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for svm. Mathematical programming, 127(1):3–30, 2011.
-  S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14(Feb):567–599, 2013.
-  A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In International Conference on Machine Learning, 2000.
-  K. Sridharan, S. Shalev-Shwartz, and N. Srebro. Fast rates for regularized objectives. In Advances in Neural Information Processing Systems 21, pages 1545–1552, 2009.
-  I. Steinwart and A. Christmann. Support vector machines. Springer Science & Business Media, 2008.
-  I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), pages 79–93, 2009.
-  Y. Sun, A. Gilbert, and A. Tewari. But how does it work in theory? linear svm with random features. In Advances in Neural Information Processing Systems, pages 3379–3388, 2018.
-  J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of computational mathematics, 12(4):389–434, 2012.
-  A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.
-  E. D. Vito, L. Rosasco, A. Caponnetto, U. D. Giovannini, and F. Odone. Learning from examples as an inverse problem. Journal of Machine Learning Research, 6(May):883–904, 2005.
-  G. Wahba. Spline models for observational data, volume 59. Siam, 1990.
-  C. K. Williams and M. Seeger. Using the nyström method to speed up kernel machines. In Advances in neural information processing systems, pages 682–688, 2001.
-  D. P. Woodruff. Sketching as a tool for numerical linear algebra. arXiv preprint arXiv:1411.4357, 2014.
-  T. Zhang. Learning bounds for kernel regression using effective data dimensionality. Neural Computation, 17(9):2077–2098, 2005.
Appendix A Proof of Theorem 1
This section is devoted to the proof of Theorem 1. In the following we restrict to linear functions, i.e., for some , and with a slight abuse of notation we set
With this notation . The Lipschitz assumption implies that is almost surely Lipschitz in its argument, with Lipschitz constant .
Specifically, we will show the following:
The proof starts with the following bound on the generalization gap uniformly over balls. While this result is well-known and follows from standard arguments (see, e.g., [6, 25]), we include a short proof for completeness.
Proof of Lemma 1.
where we used that , and that and have the same distribution, as well as and . The last term corresponds to the Rademacher complexity of the class of functions [6, 25]. Now, using that for , where is -Lipschitz by Assumption 2, the Ledoux-Talagrand contraction inequality for Rademacher averages gives
where we used that for by independence, and that almost surely (Assumption 1). Hence,
To write the analogous bound in high probability we apply McDiarmid’s inequality . We know that given , and defining we have
using Assumption 1 (boundedness of the input). Hence, by McDiarmid's inequality:
taking so that , we obtain the desired bound (27). ∎
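As a side note, the Rademacher complexity appearing in the argument above can be estimated by Monte Carlo for a finite class of functions evaluated on the sample; a minimal sketch (the helper and the toy class below are ours, for illustration only):

```python
import numpy as np

def empirical_rademacher(F_values, n_draws=2000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity.

    F_values : (n_functions, n_points) matrix; row j holds f_j(x_1), ..., f_j(x_n)
    Estimates E_sigma [ sup_f (1/n) sum_i sigma_i f(x_i) ] over random signs sigma.
    """
    rng = np.random.default_rng(rng)
    n = F_values.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        total += np.max(F_values @ sigma) / n    # sup over the (finite) class
    return total / n_draws
```

For the symmetric two-element class {f, -f} on a single point with f(x) = 1, every sign draw attains supremum 1, so the estimate is exactly 1, matching the closed-form value E|sigma| = 1.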