References

[1] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–15854, 2019.

[2] F. Vallet, J. G. Cailton, and Ph. Refregier. Linear and nonlinear extension of the pseudo-inverse solution for learning Boolean functions. Europhysics Letters, 9(4):315, 1989.

[3] Roger Penrose. On best approximate solutions of linear matrix equations. Mathematical Proceedings of the Cambridge Philosophical Society, 52(1):17–19, 1956.

[4] M. Opper, W. Kinzel, J. Kleinz, and R. Nehl. On the ability of the optimal perceptron to generalise. Journal of Physics A: Mathematical and General, 23(11):L581, 1990.

[5] Timothy L. H. Watkin, Albrecht Rau, and Michael Biehl. The statistical mechanics of learning a rule. Reviews of Modern Physics, 65(2):499, 1993.

[6] Robert P. W. Duin. Classifiers in almost empty spaces. In Proceedings of the 15th International Conference on Pattern Recognition, volume 2, pages 1–7. IEEE, 2000.

[7] Marina Skurichina and R. P. W. Duin. Regularization by adding redundant features. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pages 564–572. Springer, 1998.

[8] Jesse H. Krijthe and Marco Loog. The peaking phenomenon in semi-supervised learning. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pages 299–309. Springer, 2016.

[9] Š. Raudys and R. P. W. Duin. Expected classification error of the Fisher linear classifier with pseudo-inverse covariance matrix. Pattern Recognition Letters, 19(5–6):385–392, 1998.

[10] Marco Loog, Tom Viering, and Alexander Mey. Minimizers of the empirical risk and risk monotonicity. In Advances in Neural Information Processing Systems, pages 7476–7485, 2019.