Kolmogorov Width Decay and Poor Approximators in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels

05/21/2020 ∙ by Weinan E, et al. ∙ Princeton University

We establish a scale separation of Kolmogorov width type between subspaces of a given Banach space under the condition that a sequence of linear maps converges much faster on one of the subspaces. The general technique is then applied to show that reproducing kernel Hilbert spaces are poor L^2-approximators for the class of two-layer neural networks in high dimension, and that two-layer networks with small path norm are poor approximators for certain Lipschitz functions, also in the L^2-topology.




1. Introduction

It has been known since the early 1990s that two-layer neural networks with sigmoidal or ReLU activation can approximate arbitrary continuous functions on compact sets in the uniform topology [Cyb89, Hor91]. In fact, when approximating a suitable (infinite-dimensional) class of functions in the $L^2$-topology of any compactly supported Radon probability measure, two-layer networks can evade the curse of dimensionality [Bar93]. In this article, we show that

  1. infinitely wide random feature functions with norm bounds are much worse approximators in high dimension compared to two-layer neural networks.

  2. infinitely wide neural networks are subject to the curse of dimensionality when approximating general Lipschitz functions in high dimension.

In both cases, we consider approximation in the $L^2$-topology. Both statements apply more generally: in the first point, we can consider more general kernel methods instead of random features (including certain neural tangent kernels), and the second claim also holds true for deep ResNets of bounded width. We conjecture that the Lipschitz functions in the second statement could be replaced by function classes of higher but fixed regularity. Precise statements of the results are given in Corollary 3.4 and Example 4.3.

To prove these results, we show more generally that if $Y$ and $Z$ are subspaces of a Banach space $X$ and a sequence of linear maps converges quickly to a limit on $Y$, but not on $Z$, then there must be a Kolmogorov width-type separation between $Y$ and $Z$. The classical notion of Kolmogorov width is considered in Lemma 2.1 and later extended to a stronger notion of separation in Lemma 2.3.

We apply the abstract result to the pairs Barron space (for two-layer networks)/Lipschitz space and RKHS/Barron space. In the first case, the sequence of linear maps is given by a type of Monte-Carlo integration, in the second case by projection onto the eigenspaces of the RKHS kernel.

This article is structured as follows. In Section 2, we prove the abstract result which we apply to Barron and Lipschitz space in Section 3 and to RKHS and Barron space in Section 4. We conclude by discussing our results and some open questions in Section 5. In appendices A and B, we review the natural function spaces for shallow neural networks and kernel methods respectively. In Appendix B, we specifically focus on kernels arising from random feature models and neural tangent kernels for two-layer neural networks.

1.1. Notation

We denote the closed ball of radius $R$ around the origin in a Banach space $X$ by $B_R^X$ and the unit ball by $B^X$. The space of continuous linear maps between Banach spaces $X$ and $W$ is denoted by $L(X;W)$, and the continuous dual space of $X$ by $X^*$.

2. An Abstract Lemma

2.1. Kolmogorov Width Version

The Kolmogorov width of a function class $F$ in another function class $H$, with respect to a metric $d$ on the union of both classes, is defined as the largest distance of an element of $F$ from the class $H$:
\[
W(F, H) \;=\; \sup_{f \in F}\, \inf_{h \in H}\, d(f, h).
\]
In this article, we consider the case where $F$ is the unit ball in a Banach space $Z$, $H$ is the ball of radius $R$ in a Banach space $Y$, and $d$ is induced by the norm on a Banach space $X$ into which both $Y$ and $Z$ embed densely. As $R$ increases, points in $B^Z$ are approximated to higher degrees of accuracy by elements of $B_R^Y$. The rate of decay of
\[
W(R) \;:=\; \sup_{\|f\|_Z \le 1}\; \inf_{\|h\|_Y \le R}\, \|f - h\|_X
\]
provides a quantitative measure of the density of $Y$ in $Z$ with respect to the topology of $X$. For a different point of view on width, focusing on approximation by finite-dimensional spaces, see [Lor66, Chapter 9].

In the following lemma, we show that if there exists a sequence of linear operators which behaves sufficiently differently on $Y$ and $Z$, then $W(R)$ must decay slowly as $R \to \infty$.

Lemma 2.1.

Let $X, Y, Z$ be Banach spaces such that $Y, Z \hookrightarrow X$ continuously, and let $W$ be a further Banach space. Assume that $A_n \in L(X; W)$ are continuous linear operators such that
\[
\|A_n f\|_W \le C_1\, n^{-\alpha}\, \|f\|_Y, \qquad \sup_{\|g\|_Z \le 1} \|A_n g\|_W \ge C_2\, n^{-\beta}, \qquad \|A_n\|_{L(X;W)} \le C_3
\]
for all $n \in \mathbb{N}$, $f \in Y$ and constants $C_1, C_2, C_3 > 0$, $0 \le \beta < \alpha$. Then
\[
W(R) \;=\; \sup_{\|f\|_Z \le 1}\; \inf_{\|h\|_Y \le R}\, \|f - h\|_X \;\ge\; c\, R^{-\frac{\beta}{\alpha - \beta}}
\]
for some $c > 0$ and all sufficiently large $R$.

Proof. Choose a sequence $f_n \in Z$ such that $\|f_n\|_Z \le 1$ and such that
\[
\|A_n f_n\|_W \;\ge\; C_2\, n^{-\beta}
\]
(see Remark 2.2). Then, for any $h \in B_R^Y$,
\[
C_2\, n^{-\beta} \;\le\; \|A_n f_n\|_W \;\le\; \|A_n (f_n - h)\|_W + \|A_n h\|_W \;\le\; C_3\, \|f_n - h\|_X + C_1\, n^{-\alpha} R.
\]
We therefore have
\[
W(R) \;\ge\; \inf_{\|h\|_Y \le R} \|f_n - h\|_X \;\ge\; \frac{C_2\, n^{-\beta} - C_1\, n^{-\alpha} R}{C_3}.
\]
Set $R_n = \frac{C_2}{2 C_1}\, n^{\alpha - \beta}$. Clearly $W(R_n) \ge \frac{C_2}{2 C_3}\, n^{-\beta}$ since $C_1 n^{-\alpha} R_n = \frac{C_2}{2}\, n^{-\beta}$. For general $R$, take $n = n(R)$ minimal such that $R_n \ge R$. Then
\[
W(R) \;\ge\; W(R_n) \;\ge\; \frac{C_2}{2 C_3}\, n^{-\beta} \;\ge\; c \left(\frac{n-1}{n}\right)^{\beta} R^{-\frac{\beta}{\alpha - \beta}}.
\]
As $R \to \infty$, so does $n(R)$, and the $n$-dependent term converges to $1$. ∎
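For orientation, the exponent produced by this type of scale separation can be computed directly for the rates that appear later in Section 3, where the linear maps converge at a Monte-Carlo rate $n^{-1/2}$ on the small space (i.e. $\alpha = 1/2$) while the lower bound on the Lipschitz ball decays only like $n^{-1/d}$ (i.e. $\beta = 1/d$); the identification of the rates is made precise there:

```latex
\[
\frac{\beta}{\alpha - \beta} \;=\; \frac{1/d}{\tfrac12 - \tfrac1d} \;=\; \frac{2}{d-2},
\qquad\text{hence}\qquad
W(R) \;\gtrsim\; R^{-\frac{2}{d-2}} \quad (d > 2),
\]
```

so the width lower bound decays arbitrarily slowly in $R$ as the dimension $d$ grows.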

Remark 2.2.

In general, elements like $f_n$ may not exist if the extrema are not attained. In that case, we choose $f_n$ such that $\|A_n f_n\|_W$ is sufficiently close to its supremum and $h$ such that $\|f_n - h\|_X$ is sufficiently close to its infimum. To simplify our presentation, we assume that the supremum and infimum are attained.

The choice of $h$ as a minimizer is valid if

  1. $Y$ embeds into $X$ compactly, so the minimum of the continuous function $h \mapsto \|f - h\|_X$ is attained on the compact set $B_R^Y$, or

  2. the embedding maps closed bounded sets to closed sets and $X$ admits continuous projections onto closed convex sets (for example, if $X$ is uniformly convex).

In the applications below, the first condition will be met.
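Numerically, the trade-off in the argument (pushing $n$ up until the fast-decaying term on the small ball catches up with $R$ times the slow one) is easy to sanity-check. The constants and rates below are arbitrary illustrative values, not tied to any function class:

```python
# Sanity check of the scale-matching step: maximize the width lower bound
# (C2 * n**-beta - C1 * R * n**-alpha) / C3 over n, and compare with the
# predicted decay R**(-beta/(alpha-beta)). Constants and rates are arbitrary.

def width_lower_bound(R, alpha=0.5, beta=0.25, C1=1.0, C2=1.0, C3=1.0):
    # calculus: the maximizer of C2*n**-beta - C1*R*n**-alpha over n > 0 is
    # n* = (alpha*C1*R / (beta*C2)) ** (1/(alpha - beta))
    n_star = (alpha * C1 * R / (beta * C2)) ** (1.0 / (alpha - beta))
    return (C2 * n_star ** (-beta) - C1 * R * n_star ** (-alpha)) / C3

for R in (10.0, 100.0, 1000.0):
    w = width_lower_bound(R)
    # here beta/(alpha-beta) = 1, so w * R should be constant (= 1/4)
    print(R, w, w * R)
```

With $\alpha = 1/2$, $\beta = 1/4$ the predicted exponent is $\beta/(\alpha-\beta) = 1$, and the product $w \cdot R$ indeed stays constant across scales.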

2.2. Improved Estimate

In the previous section, we have shown by elementary means that the estimate
\[
\sup_{\|f\|_Z \le 1}\; \inf_{\|h\|_Y \le R}\, \|f - h\|_X \;\ge\; c\, R^{-\frac{\beta}{\alpha - \beta}}
\]
holds for suitable $c > 0$ if a sequence of linear maps between $X$ and another Banach space behaves very differently on the subspaces $Y$ and $Z$ of $X$. So intuitively, on each scale $R$ there exists an element $f_R \in B^Z$ such that $f_R$ is poorly approximable by elements of $B_R^Y$ on this scale. In this section, we establish that there exists a single point $f \in B^Z$ which is poorly approximable across infinitely many scales. This statement has applications to Wasserstein gradient flows for machine learning, which we discuss in a companion article.


Lemma 2.3.

Let $X, Y, Z$ be Banach spaces such that $Y, Z \hookrightarrow X$ continuously, and let $W$ be a further Banach space. Assume that $A_n \in L(X; W)$ are operators such that
\[
\|A_n f\|_W \le C_1\, n^{-\alpha}\, \|f\|_Y, \qquad \sup_{\|g\|_Z \le 1} \|A_n g\|_W \ge C_2\, n^{-\beta}, \qquad \|A_n\|_{L(X;W)} \le C_3
\]
for all $n \in \mathbb{N}$, $f \in Y$ and constants $C_1, C_2, C_3 > 0$, $0 \le \beta < \alpha$. Then there exist $f \in B^Z$, $c > 0$ and a sequence of scales $R_k \to \infty$ such that for every $k \in \mathbb{N}$ we have
\[
\inf_{\|h\|_Y \le R_k}\, \|f - h\|_X \;\ge\; c\, R_k^{-\frac{\beta}{\alpha - \beta}}.
\]

The result is stronger than the previous one in that it fixes a single point $f$ which is poorly approximable on infinitely many scales $R_k$. While on each scale there exists a point which is poorly approximable, we only show that $f$ is poorly approximable on infinitely many scales, not on all scales.

Proof of Lemma 2.3.

Since $Z \hookrightarrow X$, there exists a constant $C > 0$ such that $\|g\|_X \le C\, \|g\|_Z$ for all $g \in Z$.

Definition of $f$. Choose sequences $(f_n)_{n \in \mathbb{N}} \subset B^Z$ and $(\varepsilon_k)_{k \in \mathbb{N}} \subset (0,1)$ such that

Consider two sequences $(n_k)_{k \in \mathbb{N}}$ and $(m_k)_{k \in \mathbb{N}}$ of strictly increasing integers such that

We will impose further conditions below. Set
\[
f \;=\; \sum_{k=1}^{\infty} \sigma_k\, \varepsilon_k\, f_{n_k},
\]
where the signs $\sigma_k \in \{\pm 1\}$ are chosen inductively such that

To shorten notation, define $g_k = \sigma_k\, \varepsilon_k\, f_{n_k}$ and note that the estimates for $f_{n_k}$ transfer to $g_k$ up to a factor $\varepsilon_k$. If $l \le k$, we have

and similarly, if $l > k$, we obtain

Slow approximation rate. Choose


Since was chosen precisely such that

we obtain that


For this lower bound to be meaningful, the first term in the bracket has to dominate the second term. We specify the scaling relationship between and as

In this definition, is not typically an integer unless is an integer (or, to hold for a subsequence, rational). In the general case, we choose the integer closest to . To simplify the presentation, we proceed with the non-integer and note that the results are insensitive to perturbations of order .

We obtain

In particular, note that as . In order for

to be small, we need to grow super-exponentially. Note that since . We specify and compute

for large enough . Thus we can neglect the negative term on the left hand side of (2.3) at the price of a slightly smaller constant. Thus

Finally, we conclude that for all we have

3. Approximating Lipschitz Functions by Functions of Low Complexity

In this section, we apply Lemma 2.3 to the situation where general Lipschitz functions are approximated by functions in a space of much lower complexity. Examples include function spaces for infinitely wide neural networks with a single hidden layer and spaces for deep ResNets of bounded width. For simplicity, we first consider uniform approximation and then modify the ideas to also cover $L^2$-approximation.

3.1. Approximation in $C^0$

Consider the case where

  1. $X = C^0([0,1]^d)$ is the space of continuous functions on the unit cube with the norm $\|f\|_{C^0} = \sup_{x \in [0,1]^d} |f(x)|$,

  2. $Z$ is the space of Lipschitz-continuous functions with the norm $\|f\|_Z = \|f\|_{C^0} + [f]_{\mathrm{Lip}}$, where $[f]_{\mathrm{Lip}}$ denotes the Lipschitz constant of $f$,

  3. $Y$ is a Banach space of functions such that

    • $Y$ embeds continuously into $C^0([0,1]^d)$,

    • the Monte-Carlo estimate
\[
\mathbb{E}\, \sup_{\|f\|_Y \le 1} \left| \int_{[0,1]^d} f\, dx \;-\; \frac{1}{n} \sum_{i=1}^n f(x_i) \right| \;\le\; C\, n^{-1/2}
\]
holds for iid sample points $x_1, \dots, x_n$ drawn from the uniform distribution.
Examples of admissible spaces for $Y$ are Barron space for two-layer ReLU networks and the compositional function space for deep ReLU ResNets of finite width; see [EMW19a, EMW18, EMW19b]. A brief review of Barron space is provided in Appendix A. The Monte-Carlo estimate is proved by estimating the Rademacher complexity of the unit ball in the respective function space; for Barron space and for the compositional function space, see [EMW19b, Theorems 6 and 12].
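The $n^{-1/2}$ Monte-Carlo rate is easy to observe empirically. The sketch below uses an arbitrary smooth integrand on the cube rather than a unit-ball supremum, so it illustrates only the rate, not the Rademacher-complexity bound itself:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10

def f(x):
    # arbitrary smooth test integrand on [0,1]^d
    return np.cos(x.sum(axis=-1))

# exact integral factorizes: Re prod_j int_0^1 e^{i x_j} dx_j = Re ((e^i - 1)/i)^d
exact = (((np.exp(1j) - 1.0) / 1j) ** d).real

def rmse(n, trials=200):
    # root-mean-square Monte-Carlo error over independent trials
    errs = []
    for _ in range(trials):
        x = rng.random((n, d))
        errs.append(f(x).mean() - exact)
    return np.sqrt(np.mean(np.square(errs)))

# quadrupling n should roughly halve the root-mean-square error
print(rmse(250), rmse(1000), rmse(4000))
```

The observed errors shrink by about a factor of two per quadrupling of $n$, independently of the dimension $d$.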

We observe the following: if $(x_1, \dots, x_n)$ is a vector of iid random variables sampled from the uniform distribution on $[0,1]^d$, then
\[
\sup_{[f]_{\mathrm{Lip}} \le 1} \left| \int_{[0,1]^d} f\, dx \;-\; \frac{1}{n} \sum_{i=1}^n f(x_i) \right| \;=\; W_1\!\left( \mathcal{L}^d\big|_{[0,1]^d},\ \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \right)
\]
is the $1$-Wasserstein distance between $d$-dimensional Lebesgue measure on the cube and the empirical measure generated by the random points; see [Vil08, Chapter 5] for further details on Wasserstein distances and the link between Lipschitz functions and optimal transport theory. The distance on $\mathbb{R}^d$ for which the Wasserstein transportation cost is computed is the same one for which $f$ is $1$-Lipschitz.

Empirical measures converge to the underlying distribution slowly in high dimension [FG15], by which we mean that
\[
\mathbb{E}\, W_1\!\left( \mathcal{L}^d\big|_{[0,1]^d},\ \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \right) \;\ge\; c_d\, n^{-1/d}
\]
for some dimension-dependent constant $c_d > 0$. Observe that also
\[
\sup_{\|f\|_Z \le 1} \left| \int_{[0,1]^d} f\, dx \;-\; \frac{1}{n} \sum_{i=1}^n f(x_i) \right| \;\ge\; \frac{1}{1 + D}\; W_1\!\left( \mathcal{L}^d\big|_{[0,1]^d},\ \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \right),
\]
where $\|f\|_Z = \|f\|_{C^0} + [f]_{\mathrm{Lip}}$ denotes the full Lipschitz norm and $D$ is the diameter of the $d$-dimensional unit cube with respect to the norm for which $f$ is $1$-Lipschitz. Here we used that replacing $f$ by $f - c$ for $c \in \mathbb{R}$ does not change the difference of the two expectations, and that on the space of functions with $\min_{[0,1]^d} f = 0$, the inequality $\|f\|_{C^0} \le D\, [f]_{\mathrm{Lip}}$ holds. By $\omega_d$ we denote the Lebesgue measure of the unit ball in $\mathbb{R}^d$ with respect to this norm.

Lemma 3.1.

For every $n \in \mathbb{N}$ we can choose points $x_1, \dots, x_n$ in $[0,1]^d$ such that
\[
c_d\, n^{-1/d} \;\le\; W_1\!\left( \mathcal{L}^d\big|_{[0,1]^d},\ \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \right) \;\le\; C_d\, n^{-1/d}
\]
for dimension-dependent constants $0 < c_d \le C_d$.

First, we prove the following. Claim: Let $x_1, \dots, x_n$ be any collection of points in $[0,1]^d$. Then
\[
W_1\!\left( \mathcal{L}^d\big|_{[0,1]^d},\ \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \right) \;\ge\; c_d\, n^{-1/d}.
\]
Proof of claim: Choose $r > 0$ and consider the set
\[
S_r \;=\; \left\{ x \in [0,1]^d \,:\, |x - x_i| \ge r \ \text{for all } i \right\}.
\]
We observe that
\[
\mathcal{L}^d(S_r) \;\ge\; 1 - n\, \omega_d\, r^d.
\]
So any transport plan between $\mathcal{L}^d$ and the empirical measure needs to transport a mass of at least $1 - n\, \omega_d\, r^d$ by a distance of at least $r$. We conclude that
\[
W_1\!\left( \mathcal{L}^d\big|_{[0,1]^d},\ \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \right) \;\ge\; \sup_{r > 0}\; r \left( 1 - n\, \omega_d\, r^d \right).
\]
The supremum is attained when
\[
r \;=\; \big( (d+1)\, n\, \omega_d \big)^{-1/d},
\]
which yields the claim with $c_d = \frac{d}{d+1}\, \big( (d+1)\, \omega_d \big)^{-1/d}$. This concludes the proof of the claim.

Proof of the Lemma: Using the claim, any points such that
\[
W_1\!\left( \mathcal{L}^d\big|_{[0,1]^d},\ \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \right) \;\le\; C_d\, n^{-1/d}
\]
(for instance, points on a regular grid) satisfy the conditions. ∎
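The one-dimensional optimization at the end of the claim can be double-checked numerically; the values of $d$ and $n$ and the Euclidean choice of $\omega_d$ below are arbitrary:

```python
import numpy as np

d, n = 3, 1000
omega_d = 4.0 / 3.0 * np.pi  # volume of the Euclidean unit ball in R^3

# maximize r * (1 - n * omega_d * r**d) on a fine grid
r = np.linspace(1e-6, 0.2, 200001)
vals = r * (1.0 - n * omega_d * r ** d)
r_numeric = r[vals.argmax()]

# calculus predicts the maximizer ((d+1) * n * omega_d)**(-1/d)
r_star = ((d + 1) * n * omega_d) ** (-1.0 / d)
print(r_numeric, r_star)
```

The grid maximizer agrees with the closed-form critical point to within the grid resolution.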

For any $n$, we fix such a collection of points and define
\[
A_n f \;=\; \int_{[0,1]^d} f\, dx \;-\; \frac{1}{n} \sum_{i=1}^n f(x_i).
\]
Thus we can apply Lemma 2.3 with $\alpha = 1/2$ and $\beta = 1/d$.
Corollary 3.2.

There exists a $1$-Lipschitz function $f$ on $[0,1]^d$ such that
\[
\inf_{\|h\|_Y \le R_k}\, \|f - h\|_{C^0} \;\ge\; c\, R_k^{-\frac{2}{d-2}}
\]
for all $k$, along a sequence of scales $R_k \to \infty$.

3.2. Approximation in $L^2$

Point evaluation functionals are no longer well defined if we choose $X = L^2([0,1]^d)$. We therefore need to replace $A_n$ by functionals of the type
\[
A_n f \;=\; \int_{[0,1]^d} f\, dx \;-\; \frac{1}{n} \sum_{i=1}^n \frac{1}{\mathcal{L}^d\big(B_{r}(x_i)\big)} \int_{B_{r}(x_i)} f\, dx
\]
for sample points $x_i$ and radii $r > 0$, and find a balance between the radii shrinking too fast (causing the norms of the functionals on $L^2$ to blow up) and shrinking too slowly (leading to better approximation properties on Lipschitz functions).

We interpret $[0,1]^d$ as the unit cube when considering function spaces, but as a $d$-dimensional flat torus when considering balls. Namely, the ball $B_r(x)$ in $[0,1]^d$ is to be understood as the projection of the ball of radius $r$ around $x$ in $\mathbb{R}^d$ onto the torus $\mathbb{R}^d / \mathbb{Z}^d$. This allows us to avoid boundary effects.

Lemma 3.3.

For every $n \in \mathbb{N}$ we can choose points $x_1, \dots, x_n$ in $[0,1]^d$ such that the estimates

hold, where the $c_i$ are dimension-dependent constants, and

for a dimension-dependent constant.

Proof of Lemma 3.3.

$L^2$-estimate. In all of the following, we rely on the interpretation of balls as periodic to avoid boundary effects. For a sample $(x_1, \dots, x_n)$, denote

Observe that

We compute


It is easy to see that



is a dimension-dependent constant. Thus combining (3.1) and (3.2) we find that

This allows us to estimate