Gradient Descent in RKHS with Importance Labeling

Labeling is often expensive, and its cost is a fundamental limitation of supervised learning. In this paper, we study the importance labeling problem, in which we are given many unlabeled data points and select a limited number of them to be labeled; a learning algorithm is then run on the selected subset. We propose a new importance labeling scheme and analyse the generalization error of gradient descent combined with our labeling scheme for least squares regression in Reproducing Kernel Hilbert Spaces (RKHS). We show that the proposed importance labeling leads to much better generalization ability than uniform labeling under near interpolation settings. Numerical experiments verify our theoretical findings.


1 Introduction

One of the most popular tasks in machine learning is supervised learning, in which we estimate a function that maps an input to its label based on finitely many labeled examples called training data. The goodness of the learned function is measured by its generalization ability, that is, roughly, its accuracy on previously unseen data. Statistical learning theory gives a powerful framework for analysing the generalization errors of learning algorithms (Vapnik and Vapnik, 1998). Numerous learning algorithms have been proposed and their generalization abilities have been analysed in various settings.

In spite of the great success of supervised learning, it has a fundamental limitation: the expensive cost of creating training examples. In particular, it is often the case that collecting input data is cheap while obtaining their labels is limited or expensive, which is one of the main bottlenecks in supervised learning (Roh et al., 2019). The dilemma is that more labeled data guarantees better generalization ability but incurs a higher labeling cost.

In this limited situation, the importance labeling problem naturally arises, which is a special case of active learning (Settles, 2009). In the importance labeling setting, we first collect many unlabeled examples. We then choose a limited number of examples to be labeled from the unlabeled ones. The most naive selection of labeled examples is uniform subsampling from the unlabeled data. What we expect here is that if we choose the labeled samples effectively, then better generalization ability may be acquired.

Despite the significance of the problem, little is known about the theoretical aspects of importance labeling. The essential question is which importance labeling schemes surpass the standard uniform labeling, and in which settings.

In this paper, we consider this quite general question in the context of least squares regression in Reproducing Kernel Hilbert Spaces (RKHS). Kernel methods are a classical and powerful approach for learning nonlinear functions (Schölkopf et al., 2002). In kernel methods, input data are mapped to a (potentially) infinite dimensional feature space and a linear predictor on that feature space is learned. The feature space is determined by a user-defined kernel function, and numerous kernel functions are known, e.g., the classical Gaussian kernel and the more modern neural tangent kernel (NTK) (Jacot et al., 2018). Least squares regression in RKHS has a long history and its generalization ability has been thoroughly studied in supervised learning settings (Caponnetto and De Vito, 2007; Steinwart et al., 2009; Rosasco and Villa, 2015; Dieuleveut et al., 2016; Rudi and Rosasco, 2017).

Main Contributions

  • We propose CRED, a new importance labeling scheme based on the contribution ratios of unlabeled data to the effective dimension.

  • The generalization error of gradient descent with CRED for least squares regression in RKHS is theoretically analysed, and the superiority of the algorithm over existing methods is shown under low label noise (i.e., near interpolation) settings.

  • The algorithm and the theoretical results are extended to random features settings and the potential computational intractability of CRED is resolved.

A comparison of the theoretical generalization errors of our proposed algorithms with the most relevant existing methods is summarised in Table 1.

Method Generalization Error Additional Assumptions
(S)GD (Pillaud-Vivien et al., 2018) a.e.
KTR (Jun et al., 2019) None
SSSL (Ji et al., 2012) , sufficiently large
CRED-GD sufficiently large
RF-KRLS (Rudi and Rosasco, 2017) , sufficiently large
RF-CRED-GD sufficiently large
Table 1: Comparison of theoretical generalization errors between our proposed algorithms and the most relevant existing methods. Here, is the number of labeled data, is the variance of the label noise, and is the uniform upper bound on the labels. In the column "Additional Assumptions," denotes the number of random features and denotes the number of unlabeled data. Please refer to Section 2 for the definitions of these parameters. Extra log factors are hidden for simplicity, where is the confidence parameter for the high probability bounds.

Related Work

Here, we briefly overview the research areas and methods most relevant to our work.

Supervised Learning   Supervised least squares regression in RKHS has been thoroughly studied (Yao et al., 2007; Caponnetto and De Vito, 2007; Steinwart et al., 2009; Rosasco and Villa, 2015; Dieuleveut et al., 2016; Rudi and Rosasco, 2017; Lin and Rosasco, 2017; Carratino et al., 2018; Pillaud-Vivien et al., 2018; Jun et al., 2019). In Caponnetto and De Vito (2007); Steinwart et al. (2009), the generalization error of kernel ridge regression has been studied and it has been shown that the minimax optimal rate is attained under suitable assumptions. In (Yao et al., 2007; Rosasco and Villa, 2015), gradient descent for kernel ridgeless regression has been considered and the effect of early stopping as implicit regularization has been theoretically justified. The analysis has been further improved under an additional assumption on the eigenvalue decay (Lin and Rosasco, 2017). Online stochastic gradient descent (SGD) has been studied in (Dieuleveut et al., 2016) and a minimax optimal rate has been established when the true function is (nearly) attainable. Recently, the authors of (Pillaud-Vivien et al., 2018) have considered multi-pass SGD and shown its optimality without attainability of the true function, under an additional assumption on the capacity of the feature space in terms of the infinity norm. The random features technique (Rahimi and Recht, 2008) can be applied to kernel regression and reduces the computational time. The generalization ability of kernel regression with random features has been studied in Rudi and Rosasco (2017); Carratino et al. (2018), where it has been shown that the random features technique does not hurt the generalization ability when the number of random features is sufficiently large and the true function is attainable. More recently, in (Jun et al., 2019), low label noise cases have been particularly discussed, and the proposed Kernel Truncated Randomized Ridge Regression (KTR) achieves an improved rate when the label noise is low.

Semi-Supervised Learning   Semi-supervised learning has a close relation to importance labeling. In semi-supervised learning, we are given many unlabeled data and a small number of labeled data. Typically the labeled data is uniformly selected from the unlabeled data. Semi-supervised learning aims to obtain better generalization ability through the effective use of unlabeled examples, typically under the so-called cluster assumption (Balcan and Blum, 2005; Rigollet, 2007; Ben-David et al., 2008; Wasserman and Lafferty, 2008). In contrast, the importance labeling scheme in this paper aims to obtain better generalization ability through the effective choice of labeled examples, without this assumption. In (Ji et al., 2012), a simple semi-supervised kernel regression algorithm called SSSR has been proposed, and the authors have shown that its generalization ability surpasses that of supervised learning when the true function is attainable and deterministic. Roughly speaking, the algorithm first computes the eigen-system of the covariance operator in the feature space using unlabeled data. Then, linear regression is executed using the principal eigen-functions as features. The theory of SSSR does not require the cluster assumption and is set in the standard theoretical framework of kernel regression.

Active Learning   Active learning is also closely related to importance labeling. In active learning, we are given a model learned on a small amount of labeled data and then select new data to be labeled from the unlabeled pool by utilizing the information of the learned model. In some sense, active learning is a generalization of importance labeling. However, in active learning, how to select the initially labeled data is out of scope and is typically assumed to be uniform selection. Numerous active learning strategies have been proposed (Brinker, 2003; Dasgupta, 2005; Yu et al., 2006; Kapoor et al., 2007; Guo and Schuurmans, 2008; Wei et al., 2015; Gal et al., 2017; Sener and Savarese, 2017) (see (Settles, 2009) for an extensive survey) and their performances have been studied empirically, but their theoretical aspects are little known, at least in our kernel regression setting.

Importance Sampling   Importance sampling is a general technique to reduce the variance of estimators and is typically used in Monte Carlo methods and stochastic optimization (Needell et al., 2014; Zhao and Zhang, 2015; Alain et al., 2015; Csiba and Richtárik, 2018; Chen et al., 2019). The idea behind it is that if the realizations that potentially cause large variance are sampled more frequently, the variance of a bias-corrected estimator can be reduced. However, the definition of importance is strongly problem-dependent and, to the best of our knowledge, no algorithm for the importance labeling problem has been proposed so far.
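As a minimal, self-contained illustration of this bias-correction idea on a toy mean-estimation problem (not an algorithm from this paper; the choice of importance weights below is our own), inverse-probability weighting keeps the estimator unbiased, while oversampling the high-variance items reduces its variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of "losses": a few items contribute most of the variance.
values = np.concatenate([rng.normal(0.0, 0.1, size=990), rng.normal(0.0, 10.0, size=10)])
true_mean = values.mean()

def estimate_mean(values, probs, m, rng):
    """Inverse-probability-weighted (bias-corrected) mean from m sampled items."""
    idx = rng.choice(len(values), size=m, p=probs)
    n = len(values)
    # Each item i is reweighted by 1 / (n * probs[i]), so the estimator is unbiased.
    return np.mean(values[idx] / (n * probs[idx]))

uniform = np.full(len(values), 1.0 / len(values))
# Illustrative importance distribution: proportional to |value|, plus a floor for stability.
importance = np.abs(values) + np.abs(values).mean()
importance /= importance.sum()

est_unif = [estimate_mean(values, uniform, 50, rng) for _ in range(2000)]
est_imp = [estimate_mean(values, importance, 50, rng) for _ in range(2000)]
print("uniform:    bias %.4f, std %.4f" % (np.mean(est_unif) - true_mean, np.std(est_unif)))
print("importance: bias %.4f, std %.4f" % (np.mean(est_imp) - true_mean, np.std(est_imp)))
```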

2 Problem Settings and Assumptions

In this section, we describe the problem setting considered in this paper and the theoretical assumptions used in our analysis.

2.1 Kernel Least Squares Regression with Importance Labeling

Let be i.i.d. samples from some distribution , where , and , . We denote by the marginal distribution of on and by the conditional distribution of with respect to . We subsample () from according to a user-defined distribution on , and we denote , .

The objective of this paper is to minimize the excess risk using only the information of the labeled observations , where and is some Reproducing Kernel Hilbert Space (RKHS) with inner product and kernel .
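To fix the interface concretely, a minimal sketch of this labeling protocol is given below; the function and argument names are ours, and query_label stands for the expensive labeling oracle (a hypothetical callable, not defined in the paper).

```python
import numpy as np

def importance_labeling(X_pool, q, n_label, query_label, rng):
    """Select n_label points from the unlabeled pool according to the
    user-defined distribution q and query their labels.

    `query_label` is a hypothetical labeling oracle; q is a probability
    vector over the pool.
    """
    idx = rng.choice(len(X_pool), size=n_label, replace=True, p=q)
    y = np.array([query_label(X_pool[i]) for i in idx])
    # Return the selected inputs, their labels, and the probabilities used,
    # which are needed later for inverse-probability weighting.
    return X_pool[idx], y, q[idx]
```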

Notation

We denote by the norm induced by and by the Euclidean norm. Let and , where the operator is the natural embedding from to and is the adjoint operator of . We define as for an operator . For a natural number , we denote by .

2.2 Theoretical Assumptions

Assumption 1 (Boundedness of kernel).

for some .

Assumption 2 (Smoothness of true function).

There exists such that for some with (). Here .

Assumption 2 quantifies the complexity of in terms of the eigen-system of . When , becomes a subset of , and in particular when , it exactly matches . As , roughly .
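The displayed form of this assumption did not survive extraction. For orientation only, a commonly used source condition in the kernel least squares literature takes the following shape; the notation and the admissible range of the exponent are our assumptions and need not match the paper's exact statement.

```latex
% Indicative source condition (notation ours): the target is a power of the
% covariance operator applied to a square-integrable function.
\exists\, r > 0,\ \exists\, h \in L^2(\rho_X)\ \text{with}\ \|h\|_{L^2(\rho_X)} \le R
\quad \text{such that} \quad f^{\ast} = \Sigma^{r} h .
% For r \ge 1/2 the target lies in the RKHS \mathcal{H}; larger r means a
% smoother (easier) target, smaller r a harder one.
```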

Assumption 3 (Polynomial decay of eigenvalues).

There exists such that .

The parameter characterizes the complexity of the feature space .
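Again for orientation only, a typical way to formalize such polynomial eigenvalue decay is shown below; the exponent convention and the constants are our assumptions, not necessarily those of the paper.

```latex
% Indicative capacity condition: polynomial decay of the eigenvalues
% \mu_1 \ge \mu_2 \ge \dots of the covariance operator \Sigma.
\exists\, c > 0,\ \alpha > 1 \quad \text{such that} \quad
\mu_i \le c\, i^{-\alpha} \quad \text{for all } i \ge 1 .
% Under such decay the effective dimension satisfies
% N(\lambda) = \operatorname{tr}\big(\Sigma(\Sigma+\lambda I)^{-1}\big) \lesssim \lambda^{-1/\alpha}.
```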

Assumption 4 (Bounded variance and uniform boundedness of labels).

There exists and such that and almost surely.

Generally label noise , but we are particularly interested in the case .

3 Proposed Algorithm

In this section, we first describe our proposed algorithm and then briefly discuss its computational aspects.

Our proposed algorithm is presented in Algorithm 1. The algorithm consists of two blocks: importance labeling and optimization by gradient descent.

Importance Labeling   Our proposed importance labeling is based on the contribution ratios to the effective dimension (Zhang, 2005). First recall the definition of the effective dimension, that is, the trace of the operator Σ(Σ + λI)^(-1) for the covariance operator Σ and a regularization parameter λ > 0. The essential intuition of our scheme is that an input with a large contribution to the effective dimension is more important than other inputs. To realize this intuition, we construct an importance sampling distribution proportional to on the unlabeled data. For stability of sampling, we add the mean of the contribution ratios over the unlabeled data to it. Finally, since the covariance operator is unknown, we replace it by the empirical covariance operator computed from the unlabeled data.
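A minimal sketch of how such a distribution could be computed when each input is represented by a finite-dimensional feature vector is given below. It assumes that the per-point score is the quadratic form of the feature vector with the regularized inverse of the empirical covariance, whose average over the pool equals the empirical effective dimension; the exact score used by CRED may differ, and all names are ours.

```python
import numpy as np

def cred_distribution(Phi, lam):
    """Sketch of an importance distribution over an unlabeled pool.

    Phi: (N, d) matrix of feature vectors phi(x_1), ..., phi(x_N).
    lam: regularization parameter lambda > 0.

    The assumed score of point i is its contribution to the empirical
    effective dimension, s_i = phi(x_i)^T (Sigma_hat + lam I)^{-1} phi(x_i) / N,
    where Sigma_hat = Phi^T Phi / N.  The mean score is added for stability,
    as described in the text, and the result is normalized to a distribution.
    """
    N, d = Phi.shape
    Sigma_hat = Phi.T @ Phi / N
    A = Sigma_hat + lam * np.eye(d)
    # s_i = phi_i^T A^{-1} phi_i / N, computed for all i at once.
    scores = np.einsum("ij,ij->i", Phi, np.linalg.solve(A, Phi.T).T) / N
    scores = scores + scores.mean()          # stabilization by the mean contribution
    return scores / scores.sum()
```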

Optimization by Gradient Descent   The optimization process is similar to standard gradient descent on the labeled data, but each loss is weighted by the inverse labeling probability to guarantee the unbiasedness of the risk, and the gradient of this bias-corrected risk is then used to update the solution.

Computational Tractability   Gradient descent in an RKHS can be executed efficiently even in infinite dimensional feature spaces thanks to the kernel trick. However, the computation of the contribution ratios to the effective dimension is generally intractable because the kernel trick is not applicable (Schölkopf et al., 2002). This computational problem can be avoided by introducing the random features technique. For details, see Section 6.

1:  Set for .
2:  Sample independently according to and get their labels .
3:  Set .
4:  for  to  do
5:     .
6:  end for
7:  return  .
Algorithm 1 CRED-GD(, , )
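Putting the two blocks of Algorithm 1 together, the following runnable sketch reuses cred_distribution from above in the same finite-dimensional setting. The step size, iteration count, squared-loss form, and the exact shape of the inverse-probability weights are our assumptions, not the paper's specification; query_label stands for a hypothetical labeling oracle indexed by pool position.

```python
import numpy as np

def cred_gd(Phi_pool, query_label, lam, n_label, n_iters, step, rng):
    """Sketch of CRED-GD in a finite-dimensional feature space.

    1) Importance labeling: sample n_label points from the pool according to
       the CRED-style distribution (see cred_distribution above) and query labels.
    2) Gradient descent on the inverse-probability-weighted squared loss,
       which keeps the empirical risk (approximately) unbiased for the
       uniform-average risk over the pool.
    """
    N, d = Phi_pool.shape
    q = cred_distribution(Phi_pool, lam)
    idx = rng.choice(N, size=n_label, replace=True, p=q)
    Phi = Phi_pool[idx]
    y = np.array([query_label(i) for i in idx])   # hypothetical oracle: label of pool point i
    w = 1.0 / (N * q[idx])                        # inverse-probability weights

    theta = np.zeros(d)
    for _ in range(n_iters):
        residual = Phi @ theta - y
        grad = Phi.T @ (w * residual) / n_label   # gradient of the bias-corrected risk
        theta = theta - step * grad
    return theta
```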

4 Generalization Error Analysis

Here, we give the main theoretical results for CRED-GD (Algorithm 1). The proofs can be found in Section B of the supplementary material. We use and notation to hide extra factors for simplicity of the statements, where is a confidence parameter for the high probability bounds.

Our analysis starts from the bias-variance decomposition , where is the GD path on the excess risk, i.e., with . The first term is called the bias and can be bounded by the following lemma:
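The displayed decomposition itself did not survive extraction; the following is a generic form of such a bias-variance split, with notation chosen by us (a GD iterate on the data is compared with the GD path on the population excess risk and with the target), not necessarily the paper's exact display.

```latex
% Generic bias-variance split for a GD iterate \hat g_t (notation ours):
\mathbb{E}\,\bigl\| \hat g_t - f^{\ast} \bigr\|_{L^2(\rho_X)}^2
\;\le\; 2\,\underbrace{\bigl\| \bar g_t - f^{\ast} \bigr\|_{L^2(\rho_X)}^2}_{\text{bias}}
\;+\; 2\,\underbrace{\mathbb{E}\,\bigl\| \hat g_t - \bar g_t \bigr\|_{L^2(\rho_X)}^2}_{\text{variance}},
% where \bar g_t denotes the GD path run on the population (excess) risk.
```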

Lemma 4.1 (Bias bound, simplified version of Lemma a.1).

Suppose that Assumptions 1 and 2 hold. Let be sufficiently small. Then, for any ,

Next, the second term, called the variance, can be bounded as follows:

Proposition 4.2 (Variance bound, simplified version of Proposition b.1).

Suppose that is sufficiently small. Let , , and and . Then there exists an event with such that

where .

Remark.

Proposition 4.2 is the main novelty of our analysis. In (Pillaud-Vivien et al., 2018), the variance bound of the standard GD is roughly in our settings. In contrast, our bound is roughly for and sufficiently large . Since holds, CRED-GD improves the variance bound of the standard GD when is small. Later, we discuss the case (see Lemma 4.3 and Section 5).

Lemma 4.3.

Suppose that Assumption 1 holds. For any , . Additionally, under Assumption 3, for any , .

Remark.

In (Pillaud-Vivien et al., 2018), under Assumption 1 and the additional assumption for some and , the authors have shown that (Lemma 13 in (Pillaud-Vivien et al., 2018)), which is a better bound than ours in Lemma 4.3 when . However, in the worst case their bound matches ours in Lemma 4.3. For an example of this case, see Section 5.

To balance the bias and variance terms, we introduce the notion of an optimal number of iterations:

Definition 4.1 (Optimal number of iterations).

The optimal number of iterations for CRED-GD is defined by , where is defined as

where .

Lemma 4.3 and Proposition 4.2 with yield the following main theorem:

Theorem 4.4 (Generalization Error of CRED-GD).

Suppose that Assumptions 1, 2, 3 and 4 hold. Let be sufficiently small and . Then, setting , , there exists an event with such that CRED-GD satisfies , where is defined in Definition 4.1.

Wider Optimality in General Noise Settings   When , the generalization error of CRED-GD with sufficiently many unlabeled data becomes the optimal rate . The same rate is also achieved by supervised GD or SGD, but only under a restrictive condition in our theoretical settings (Dieuleveut et al., 2016; Pillaud-Vivien et al., 2018), which is not necessary for CRED-GD.

Low Noise Acceleration   When , the rate of CRED-GD with sufficiently many unlabeled data becomes . In contrast, supervised GD or SGD only achieves in our theoretical settings when , and thus CRED-GD significantly improves the generalization ability of supervised methods. The semi-supervised method SSSL (Ji et al., 2012) only achieves when and , which is worse than our bound.

Equivalence to Kernel Ridge Regression with Importance Labeling   Using arguments very similar to our analysis, it can be shown that the analytical kernel ridge regression solution also achieves the generalization error bound in Theorem 4.4 (see Section C of the supplementary material). When is extremely small, the analytical solution is computationally cheaper than gradient descent and is sometimes useful.

5 Sufficient Condition for

In this section, we give a sufficient condition for and a simple example of it. The proofs can be found in Section D of the supplementary material.

Proposition 5.1.

Let () be the eigen-system of in , where . Assume that and for any for some and . Moreover if , we additionally assume for any for some . Then Assumption 1 is satisfied and for any ,

Example.

Let and , that is, the product measure of truncated normal distributions with mean and scale parameter , i.e., independent normal distributions with mean and variance conditioned on . Let . We denote by the variance of for . Note that for sufficiently small , we have for any . We then consider in particular the linear kernel , and thus . Since the covariance matrix is , the eigen-system of in is , where for . Suppose that polynomial decay of holds: . Then from Lemma 4.3, . On the other hand, from Proposition 5.1 with , we have .

6 Extension to Random Features Settings

In this section, we discuss the application of the random features technique to Algorithm 1 for computational tractability. We then theoretically analyse the generalization error of the resulting algorithm. The proofs are given in Section E of the supplementary material.

Suppose that the kernel has an integral representation for for some . The random features , where are drawn independently, are used to approximate by . Here, the number of random features is a user-defined parameter and characterizes the goodness of the approximation.
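As one concrete instance of such an integral representation (the paper does not commit to a particular kernel here), the following sketch builds standard random Fourier features for the Gaussian kernel in the sense of Rahimi and Recht (2008); the bandwidth parameter gamma and all names are ours.

```python
import numpy as np

def rff_features(X, M, gamma, rng):
    """Random Fourier features approximating the Gaussian kernel
    k(x, x') = exp(-gamma * ||x - x'||^2) (Rahimi and Recht, 2008).

    Returns an (n, M) matrix Phi with k(x, x') ~= Phi[i] @ Phi[j].
    """
    n, d = X.shape
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, M))   # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)                # random phases
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

# Quick check of the approximation quality on a few points.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Phi = rff_features(X, M=5000, gamma=0.5, rng=rng)
exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(np.max(np.abs(Phi @ Phi.T - exact)))   # small approximation error
```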

Algorithm

The random features version of CRED-GD is presented in Algorithm 2. The only difference from Algorithm 1 is the replacement of by the random features . Note that, thanks to the random features technique, we can actually compute the importance labeling distribution using standard SVD solvers.

1:  Sample independently.
2:  Set for .
3:  Set for , where .
4:  Sample independently according to and get their labels .
5:  Set .
6:  for  to  do
7:     .
8:  end for
9:  return  .
Algorithm 2 RF-CRED-GD(, , , )
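The following sketch illustrates the point about SVD solvers: with an N x M random-feature matrix for the unlabeled pool, CRED-style scores reduce to a thin SVD of that matrix, so no operator-valued computation is needed. The exact score is the same assumption as in the sketch of Section 3, and the stabilization step mirrors the description of Algorithm 1.

```python
import numpy as np

def cred_distribution_rf(Phi, lam):
    """CRED-style sampling distribution computed from random features.

    Phi: (N, M) random-feature matrix of the unlabeled pool (N >= M assumed).
    Using a thin SVD Phi = U diag(s) V^T, the assumed contribution of point i
    to the empirical effective dimension is
        score_i = sum_j (s_j^2 / N) / (s_j^2 / N + lam) * U[i, j]^2,
    so only a thin SVD of Phi is needed.
    """
    N, M = Phi.shape
    U, s, _ = np.linalg.svd(Phi, full_matrices=False)
    ev = s ** 2 / N                                  # eigenvalues of the empirical covariance
    scores = (U ** 2) @ (ev / (ev + lam))
    scores = scores + scores.mean()                  # stabilization, as in Algorithm 1
    return scores / scores.sum()
```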

We need the following additional assumption for theoretical analysis:

Assumption 5.

.

We define by and by the adjoint of . Then we denote and .

Generalization Error Analysis

We consider the generalization error . We decompose the generalization error into bias and variance:

where is the path of GD with random features on the excess risk, i.e., with . The bias term can be bounded similarly to Lemma 4.1:

Lemma 6.1 (Bias bound for RF setting, simplified version of Lemma e.1).

Suppose that Assumptions 2 and 5 hold. Let be sufficiently small and such that . Then for any , with probability at least ,

Remark.

Compared to Lemma 4.1, the additional condition is assumed. This implies that an appropriately large number of random features is required to make the bias small.

The variance conditioned on the random features can be bounded in exactly the same manner as in the proof of Proposition 4.2, replacing by and by . The latter is trivially bounded by . The key lemma for bounding is the following:

Lemma 6.2 (Proposition 10 in (Rudi and Rosasco, 2017)).

Suppose that Assumption 5 holds. We denote for . For any and sufficiently small , if , with probability at least it holds that

Combining the bias and variance bounds with Lemma 6.2 yields the following theorem:

Theorem 6.3 (Generalization error of CRED-GD with RF, simplified version of Theorem e.3).

Suppose that Assumptions 2, 3, 4 and 5 hold. Let be sufficiently small, and . For any , if , there exists an event with such that RF-CRED-GD satisfies the same generalization error bounds as CRED-GD in Theorem 4.4.

7 Numerical Experiments

In this section, numerical results are provided to empirically verify our theoretical findings.

Experimental Settings   In our experiments, the input data of the public datasets MNIST and Fashion MNIST (Xiao et al., 2017) were used. First, we randomly split each dataset into train () and test () sets and normalized the input data by dividing by . We conducted both linear regression (LR) and nonlinear regression (NLR) tasks. For the linear tasks, we used the original inputs with a bias term as features. For the nonlinear tasks, we used a randomly initialized fully connected ReLU network with three hidden layers of width , without the output layer, as the feature map; the random weights were drawn from i.i.d. standard normal distributions. We then randomly generated true linear functions on the feature spaces [1] and generated noisy labels based on them, where the noises were drawn from i.i.d. normal distributions with mean and variance . We compared our proposed method [2] with KRR (Kernel Ridge Regression), KTR (Jun et al., 2019) and SSSR (Ji et al., 2012). The hyper-parameters were determined fairly and reasonably. [3] The train data was used as unlabeled data and the labeled data was selected from it. The number of labeled data ranged over . We ran each experiment five times independently and recorded the median of the test RMSE for each setting.

[1] Each true function was generated by , where and was the eigen-system of the covariance matrix in the corresponding feature space.

[2] As mentioned before, we used very small synthetic label noise in some experiments, and the convergence of gradient descent was then sometimes quite slow. Hence the optimization methods were replaced with their analytical counterparts. As pointed out at the end of Section 4, the same generalization error bound is guaranteed for the analytical solution.

[3] CRED has a hyper-parameter , and selecting the best one requires additional labeling. In our experiments, we recorded the best test error obtained by trying in . This potentially violates a fair comparison with the other methods because CRED implicitly uses ten patterns of labeled data. Hence the other methods were each run ten times with independent uniform labeling, and the best test error was recorded as one experimental trial.
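For concreteness, a sketch of the nonlinear feature map described above (a randomly initialized fully connected ReLU network with three hidden layers and no output layer, i.i.d. standard normal weights) is given below; the width value is a placeholder, since the width used in the paper is not recoverable from this text.

```python
import numpy as np

def random_relu_features(X, width=512, n_hidden=3, seed=0):
    """Features from a randomly initialized fully connected ReLU network
    without an output layer, as used for the nonlinear regression tasks.
    The width is a placeholder; weights are i.i.d. standard normal.
    """
    rng = np.random.default_rng(seed)
    H = X
    for _ in range(n_hidden):
        W = rng.standard_normal((H.shape[1], width))
        H = np.maximum(H @ W, 0.0)               # ReLU activation
    return H
```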

[Figure 1: twelve panels, (a)-(l), showing test RMSE for LR and NLR on MNIST and Fashion MNIST.]
Figure 1: Comparison of the test RMSE of our method with existing methods on linear and nonlinear regression tasks. The first column shows the results of linear regression on MNIST, the second column linear regression on Fashion MNIST, the third column nonlinear regression on MNIST, and the last column nonlinear regression on Fashion MNIST. From top to bottom, the number of labeled data increases from to .

Results   Figure 1 shows the comparison of the test RMSE of our proposed method with previous methods. In all cases, our method consistently outperformed the other methods. In particular, when the label noise is small, our method achieves a much smaller test RMSE than the other methods.

Conclusion and Future Work

In this paper, we proposed a new importance labeling scheme called CRED. The generalization error of GD with CRED was theoretically analysed, and a much better bound than those of previous methods was derived when the label noise is small. Further, the algorithm and the analysis were extended to random features settings, which resolves the potential computational intractability of CRED. Finally, we provided numerical comparisons with existing methods. The numerical results showed the empirical superiority of our method over the other methods and verified our theoretical findings.

One direction of future work would be the application of our importance labeling idea to deep learning. Since the feature space of a deep neural network is updated during training, our importance labeling scheme can be naturally extended to active learning settings. The theoretical and empirical study of applying our importance labeling idea to active learning of deep neural networks is a promising direction.

Acknowledgement

TS was partially supported by JSPS KAKENHI (18K19793, 18H03201, and 20H00576), Japan DigitalDesign, and JST CREST.

References

  • G. Alain, A. Lamb, C. Sankar, A. Courville, and Y. Bengio (2015) Variance reduction in SGD by distributed importance sampling. arXiv preprint arXiv:1511.06481.
  • M. Balcan and A. Blum (2005) A PAC-style model for learning from labeled and unlabeled data. In International Conference on Computational Learning Theory, pp. 111–126.
  • S. Ben-David, T. Lu, and D. Pál (2008) Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In COLT, pp. 33–44.
  • K. Brinker (2003) Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 59–66.
  • A. Caponnetto and E. De Vito (2007) Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics 7 (3), pp. 331–368.
  • L. Carratino, A. Rudi, and L. Rosasco (2018) Learning with SGD and random features. In Advances in Neural Information Processing Systems, pp. 10192–10203.
  • B. Chen, Y. Xu, and A. Shrivastava (2019) Fast and accurate stochastic gradient estimation. In Advances in Neural Information Processing Systems, pp. 12339–12349.
  • D. Csiba and P. Richtárik (2018) Importance sampling for minibatches. The Journal of Machine Learning Research 19 (1), pp. 962–982.
  • S. Dasgupta (2005) Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems, pp. 337–344.
  • A. Dieuleveut, F. Bach, et al. (2016) Nonparametric stochastic approximation with large step-sizes. The Annals of Statistics 44 (4), pp. 1363–1399.
  • Y. Gal, R. Islam, and Z. Ghahramani (2017) Deep Bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 1183–1192.
  • Y. Guo and D. Schuurmans (2008) Discriminative batch mode active learning. In Advances in Neural Information Processing Systems, pp. 593–600.
  • A. Jacot, F. Gabriel, and C. Hongler (2018) Neural tangent kernel: convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, pp. 8571–8580.
  • M. Ji, T. Yang, B. Lin, R. Jin, and J. Han (2012) A simple algorithm for semi-supervised learning with improved generalization error bound. arXiv preprint arXiv:1206.6412.
  • K. Jun, A. Cutkosky, and F. Orabona (2019) Kernel truncated randomized ridge regression: optimal rates and low noise acceleration. In Advances in Neural Information Processing Systems, pp. 15332–15341.
  • A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell (2007) Active learning with Gaussian processes for object categorization. In 2007 IEEE 11th International Conference on Computer Vision, pp. 1–8.
  • J. Lin and L. Rosasco (2017) Optimal rates for multi-pass stochastic gradient methods. The Journal of Machine Learning Research 18 (1), pp. 3375–3421.
  • D. Needell, R. Ward, and N. Srebro (2014) Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. In Advances in Neural Information Processing Systems, pp. 1017–1025.
  • L. Pillaud-Vivien, A. Rudi, and F. Bach (2018) Statistical optimality of stochastic gradient descent on hard learning problems through multiple passes. In Advances in Neural Information Processing Systems, pp. 8114–8124.
  • A. Rahimi and B. Recht (2008) Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pp. 1177–1184.
  • P. Rigollet (2007) Generalization error bounds in semi-supervised classification under the cluster assumption. Journal of Machine Learning Research 8 (Jul), pp. 1369–1392.
  • Y. Roh, G. Heo, and S. E. Whang (2019) A survey on data collection for machine learning: a big data-AI integration perspective. IEEE Transactions on Knowledge and Data Engineering.
  • L. Rosasco and S. Villa (2015) Learning with incremental iterative regularization. In Advances in Neural Information Processing Systems, pp. 1630–1638.
  • A. Rudi and L. Rosasco (2017) Generalization properties of learning with random features. In Advances in Neural Information Processing Systems, pp. 3215–3225.
  • B. Schölkopf, A. J. Smola, F. Bach, et al. (2002) Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press.
  • O. Sener and S. Savarese (2017) Active learning for convolutional neural networks: a core-set approach. arXiv preprint arXiv:1708.00489.
  • B. Settles (2009) Active learning literature survey. Technical report, University of Wisconsin-Madison, Department of Computer Sciences.
  • I. Steinwart, D. R. Hush, C. Scovel, et al. (2009) Optimal rates for regularized least squares regression. In COLT, pp. 79–93.
  • V. Vapnik and V. Vapnik (1998) Statistical learning theory. Wiley, New York.
  • L. Wasserman and J. D. Lafferty (2008) Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems, pp. 801–808.
  • K. Wei, R. Iyer, and J. Bilmes (2015) Submodularity in data subset selection and active learning. In International Conference on Machine Learning, pp. 1954–1963.
  • H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
  • Y. Yao, L. Rosasco, and A. Caponnetto (2007) On early stopping in gradient descent learning. Constructive Approximation 26 (2), pp. 289–315.
  • K. Yu, J. Bi, and V. Tresp (2006) Active learning via transductive experimental design. In Proceedings of the 23rd International Conference on Machine Learning, pp. 1081–1088.
  • T. Zhang (2005) Learning bounds for kernel regression using effective data dimensionality. Neural Computation 17 (9), pp. 2077–2098.
  • P. Zhao and T. Zhang (2015) Stochastic optimization with importance sampling for regularized loss minimization. In International Conference on Machine Learning, pp. 1–9.

Appendix A Auxiliary Results

First we introduce the GD path on the excess risk:

with for .

Lemma A.1 (Proposition 2 and Extension of Lemma 16 in [Lin and Rosasco, 2017]).

Suppose that Assumptions 1 and 2 hold. Let be sufficiently small. Then, for any ,

Moreover for any and

Lemma A.2.

Suppose that Assumptions 1 and 3 hold. For any ,

Proof.

. Observe that . This finishes the proof. ∎

Lemma A.3.

Suppose that Assumption 1 holds. For any ,

Proof.

From Assumption 1, we immediately obtain the claim. ∎

Lemma A.4 (Spectral filters).

Let for and . Also we define and . Then the following inequalities hold:

for any and

for any and .

Proof.

When , the inequalities always hold, so we assume . Note that . The first inequality is trivial because . We show the second inequality. Note that . Observe that, by elementary calculus, the function for is maximized at and has maximum value . This finishes the proof. ∎

Recall that for . ( will be defined later; see Definition 4.1.) We then define , where is chosen uniformly at random from .

Lemma A.5.

Suppose that Assumption 1 holds. Let and . Suppose that . When , with probability at least