Kolmogorov Width Decay and Poor Approximators in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels

05/21/2020
by Weinan E, et al.

We establish a scale separation of Kolmogorov width type between subspaces of a given Banach space under the condition that a sequence of linear maps converges much faster on one of the subspaces. The general technique is then applied to show that reproducing kernel Hilbert spaces are poor L^2-approximators for the class of two-layer neural networks in high dimension, and that two-layer networks with small path norm are poor approximators for certain Lipschitz functions, also in the L^2-topology.
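For reference, the separation result is phrased in terms of the classical Kolmogorov n-width. The standard definition below is included only for context; it is not a verbatim statement from the paper. For a set K in a Banach space X,

$$ d_n(K; X) \;=\; \inf_{\substack{V \subseteq X \\ \dim V \le n}} \; \sup_{f \in K} \; \inf_{g \in V} \, \| f - g \|_X, $$

i.e. the worst-case error incurred when K is approximated by the best possible n-dimensional linear subspace of X. A slow decay of this quantity in n is what is meant by one function class being a "poor approximator" for another.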
