It has been known since the early 1990s that two-layer neural networks with sigmoidal or ReLU activation can approximate arbitrary continuous functions on compact sets in the uniform topology [Cyb89, Hor91]. In fact, when approximating a suitable (infinite-dimensional) class of functions in the $L^2(\mathbb P)$-topology of any compactly supported Radon probability measure $\mathbb P$, two-layer networks can evade the curse of dimensionality [Bar93]. In this article, we show that
(1) infinitely wide random feature functions with norm bounds are much worse approximators in high dimension compared to two-layer neural networks, and
(2) infinitely wide neural networks are subject to the curse of dimensionality when approximating general Lipschitz functions in high dimension.
In both cases, we consider approximation in the $L^2$-topology. Both statements apply more generally. In the first point, we can consider more general kernel methods instead of random features (including certain neural tangent kernels), and the second claim also holds true for deep ResNets of bounded width. We conjecture that the Lipschitz functions in the second statement could be replaced with $C^k$-functions for fixed $k \in \mathbb N$. Precise statements of the results are given in Corollary 3.4 and Example 4.3.
To prove these results, we show more generally that if $X$ and $Y$ are subspaces of a Banach space $Z$ and a sequence of linear maps converges quickly to a limit on $Y$, but not on $X$, then there must be a Kolmogorov width-type separation between $Y$ and $X$. The classical notion of Kolmogorov width is considered in Lemma 2.1 and later extended to a stronger notion of separation in Lemma 2.3.
We apply the abstract result to the pairs Barron space (for two-layer networks) / Lipschitz space and RKHS / Barron space. In the first case, the sequence of linear maps is given by a type of Monte-Carlo integration; in the second case, by projection onto the eigenspaces of the RKHS kernel.
This article is structured as follows. In Section 2, we prove the abstract result, which we apply to Barron and Lipschitz space in Section 3 and to RKHS and Barron space in Section 4. We conclude by discussing our results and some open questions in Section 5. In Appendices A and B, we review the natural function spaces for shallow neural networks and kernel methods, respectively. In Appendix B, we specifically focus on kernels arising from random feature models and neural tangent kernels for two-layer neural networks.
We denote the closed ball of radius $r > 0$ around the origin in a Banach space $X$ by $B_r^X$ and the unit ball by $B^X = B_1^X$. The space of continuous linear maps between Banach spaces $X$ and $Y$ is denoted by $L(X, Y)$ and the continuous dual space of $X$ by $X' = L(X, \mathbb R)$.
2. An Abstract Lemma
2.1. Kolmogorov Width Version
The Kolmogorov width of a function class $F$ in another function class $G$, with respect to a metric $d$ on the union of both classes, is defined as the biggest distance of an element in $F$ from the class $G$:
\[ W(F, G) = \sup_{f \in F}\, \inf_{g \in G}\, d(f, g). \]
In this article, we consider the case where $F = B_1^X$ is the unit ball in a Banach space $X$, $G = B_R^Y$ is the ball of radius $R$ in a Banach space $Y$, and $d$ is induced by the norm on a Banach space $Z$ into which both $X$ and $Y$ embed densely. As $R$ increases, points in $B_1^X$ are approximated to higher degrees of accuracy by elements of $B_R^Y$. The rate of decay of
\[ W(R) := \sup_{x \in B_1^X}\, \inf_{y \in B_R^Y}\, \|x - y\|_Z \]
provides a quantitative measure of density of $Y$ in $X$ with respect to the topology of $Z$. For a different point of view on widths, focusing on approximation by finite-dimensional spaces, see [Lor66, Chapter 9].
In the following lemma, we show that if there exists a sequence of linear operators which behaves sufficiently differently on $X$ and $Y$, then $W(R)$ must decay slowly as $R \to \infty$.
Let $X, Y, Z$ be Banach spaces such that $X$ and $Y$ embed continuously into $Z$. Assume that $T_n \in L(Z, W)$ are continuous linear operators with values in a Banach space $W$ such that
\[ \|T_n\|_{L(Z, W)} \le C_0, \qquad \|T_n y\|_W \le C_1\, n^{-\alpha}\, \|y\|_Y \quad \forall\, y \in Y, \qquad \sup_{x \in B_1^X} \|T_n x\|_W \ge c\, n^{-\beta} \]
for all $n \in \mathbb N$ and constants $C_0, C_1, c > 0$ and $\alpha > \beta > 0$. Then
\[ W(R) = \sup_{x \in B_1^X}\, \inf_{y \in B_R^Y}\, \|x - y\|_Z \ge \bar c\, R^{-\frac{\beta}{\alpha - \beta}} \]
for some $\bar c > 0$ and all sufficiently large $R$.
Choose a sequence $x_n \in B_1^X$ such that
\[ \|T_n x_n\|_W \ge c\, n^{-\beta} \]
(see Remark 2.2). Then, for any $y \in B_R^Y$,
\[ c\, n^{-\beta} \le \|T_n x_n\|_W \le \|T_n (x_n - y)\|_W + \|T_n y\|_W \le C_0\, \|x_n - y\|_Z + C_1\, n^{-\alpha}\, R. \]
We therefore have
\[ \|x_n - y\|_Z \ge \frac{c\, n^{-\beta} - C_1\, R\, n^{-\alpha}}{C_0}. \]
Clearly the right-hand side is positive for fixed $R$ and large $n$ since $\alpha > \beta$. For general $R$, take $n = n(R) = \big\lceil (2\, C_1\, R / c)^{\frac{1}{\alpha - \beta}} \big\rceil$. Then
\[ \|x_n - y\|_Z \ge \frac{c\, n^{-\beta}}{C_0}\, \Big( 1 - \frac{C_1\, R}{c}\, n^{-(\alpha - \beta)} \Big) \ge \frac{c}{2\, C_0}\, n(R)^{-\beta}. \]
As $R \to \infty$, so does $n(R)$, and the $R$-dependent term $n(R)^{-\beta}\, R^{\frac{\beta}{\alpha - \beta}}$ converges to $(2\, C_1 / c)^{-\frac{\beta}{\alpha - \beta}}$. ∎
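To make the rate concrete, one can instantiate the lemma with the exponents that appear in Section 3 below, a Monte-Carlo rate $\alpha = 1/2$ and a Wasserstein rate $\beta = 1/d$ (this worked instance is ours, for illustration):

```latex
% Width lower bound with \alpha = 1/2 and \beta = 1/d (for d > 2):
W(R) \;\gtrsim\; R^{-\frac{\beta}{\alpha - \beta}}
     \;=\; R^{-\frac{1/d}{\,1/2 - 1/d\,}}
     \;=\; R^{-\frac{2}{d-2}},
```

so reaching accuracy $\varepsilon$ requires a norm bound $R \gtrsim \varepsilon^{-(d-2)/2}$, which grows at a dimension-dependent rate.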
Generally, elements like $x_n$ may not exist if the extremum is not attained. In that case, we can choose elements such that $\|x_n - y\|_Z$ is sufficiently close to its infimum and $\|T_n x_n\|_W$ is sufficiently close to its supremum. To simplify our presentation, we assume that the supremum and infimum are attained.
The choice of $y \in B_R^Y$ as a minimizer is valid if
(1) $Y$ embeds into $Z$ compactly, so the minimum of the continuous function $y \mapsto \|x_n - y\|_Z$ is attained on the compact set $B_R^Y$, or
(2) the embedding maps closed bounded sets to closed sets and $Z$ admits continuous projections onto closed convex sets (for example, if $Z$ is uniformly convex).
In the applications below, the first condition will be met.
2.2. Improved Estimate
In the previous section, we have shown by elementary means that the estimate
\[ W(R) \ge \bar c\, R^{-\frac{\beta}{\alpha - \beta}} \]
holds for suitable $\bar c > 0$ if a sequence of linear maps between $Z$ and another Banach space $W$ behaves very differently on subspaces $X$ and $Y$ of $Z$. So intuitively, on each scale $R$ there exists an element $x_R \in B_1^X$ such that $x_R$ is poorly approximable by elements in $B_R^Y$ on this scale. In this section, we establish that there exists a single point $x \in B_1^X$ which is poorly approximable across infinitely many scales. This statement has applications in Wasserstein gradient flows for machine learning which we discuss in a companion article [WE20].
Let $X, Y, Z$ be Banach spaces such that $X$ and $Y$ embed continuously into $Z$. Assume that $T_n \in L(Z, W)$ are operators such that
\[ \|T_n\|_{L(Z, W)} \le C_0, \qquad \|T_n y\|_W \le C_1\, n^{-\alpha}\, \|y\|_Y \quad \forall\, y \in Y, \qquad \sup_{x \in B_1^X} \|T_n x\|_W \ge c\, n^{-\beta} \]
for all $n \in \mathbb N$ and constants $C_0, C_1, c > 0$ and $\alpha > \beta > 0$. Then there exists $x \in B_1^X$ such that for every $\gamma > \frac{\beta}{\alpha - \beta}$ we have
\[ \limsup_{R \to \infty}\, R^{\gamma}\, \inf_{y \in B_R^Y} \|x - y\|_Z = \infty. \]
The result is stronger than the previous one in that it fixes a single point $x$ which is poorly approximable on infinitely many scales $R$. While on each scale there exists a point which is poorly approximable, we only show that $x$ is poorly approximable on infinitely many scales, not in all scales.
Proof of Lemma 2.3.
Since $X$ and $Y$ embed continuously into $Z$, there exists a constant $C_Z > 0$ such that $\|x\|_Z \le C_Z\, \|x\|_X$ for all $x \in X$.
Definition of $x$. Choose a sequence $x_n \in B_1^X$ such that
\[ \|T_n x_n\|_W \ge c\, n^{-\beta} \]
(see Remark 2.2). Consider two sequences of strictly increasing integers $(n_k)_{k \in \mathbb N}$ and $(m_k)_{k \in \mathbb N}$ such that $\sum_{k=1}^\infty m_k^{-1} \le 1$. We will impose further conditions below. Set
\[ x = \sum_{k=1}^\infty \frac{\sigma_k}{m_k}\, x_{n_k} \in B_1^X, \]
where the signs $\sigma_k \in \{-1, +1\}$ are chosen inductively such that
\[ \Big\| T_{n_j} \Big( \sum_{k=1}^{j} \frac{\sigma_k}{m_k}\, x_{n_k} \Big) \Big\|_W \ge \frac{1}{m_j}\, \big\| T_{n_j} x_{n_j} \big\|_W \qquad \text{for all } j \in \mathbb N. \]
To shorten notation, define $x^j = \sum_{k=1}^j \frac{\sigma_k}{m_k}\, x_{n_k}$ and note that the estimates for $x$ transfer to $x^j$. If we have
\[ \Big\| T_{n_j} x^{j-1} + \frac{1}{m_j}\, T_{n_j} x_{n_j} \Big\|_W \ge \frac{1}{m_j}\, \big\| T_{n_j} x_{n_j} \big\|_W, \]
we set $\sigma_j = +1$, and similarly if the opposite inequality holds we obtain, using $\|a - b\|_W \ge 2\, \|b\|_W - \|a + b\|_W$, that
\[ \Big\| T_{n_j} x^{j-1} - \frac{1}{m_j}\, T_{n_j} x_{n_j} \Big\|_W \ge \frac{1}{m_j}\, \big\| T_{n_j} x_{n_j} \big\|_W, \]
so we set $\sigma_j = -1$.
Slow approximation rate. Choose
\[ R_j = \frac{c}{2\, C_1}\, \frac{n_j^{\alpha - \beta}}{m_j}. \]
Since $R_j$ was chosen precisely such that
\[ C_1\, n_j^{-\alpha}\, R_j = \frac{c}{2}\, \frac{n_j^{-\beta}}{m_j}, \]
we obtain for every $y \in B_{R_j}^Y$ that
\[ C_0\, \|x - y\|_Z \ge \|T_{n_j}(x - y)\|_W \ge \|T_{n_j} x^j\|_W - \sum_{k > j} \frac{\|T_{n_j} x_{n_k}\|_W}{m_k} - \|T_{n_j} y\|_W \ge \frac{c}{2}\, \frac{n_j^{-\beta}}{m_j} - C_0\, C_Z \sum_{k > j} \frac{1}{m_k}. \tag{2.3} \]
For this lower bound to be meaningful, the first term on the right-hand side has to dominate the second term. We specify the scaling relationship between $m_k$ and $n_k$ inductively as
\[ m_1 = 2, \qquad m_{k+1} = \frac{8\, C_0\, C_Z}{c}\, m_k\, n_k^{\beta}. \]
In this definition, $m_{k+1}$ is not typically an integer unless $\beta$ is an integer (or, to hold for a subsequence, rational). In the general case, we choose the integer closest to the right-hand side. To simplify the presentation, we proceed with the non-integer value and note that the results are insensitive to perturbations of order $1$.
In particular, note that $m_k \to \infty$ as $k \to \infty$. In order for
\[ \frac{\log m_j}{\log n_j} \]
to be small, we need $n_k$ to grow super-exponentially. Note that $\sum_{k > j} m_k^{-1} \le 2\, m_{j+1}^{-1}$ since $m_{k+1} \ge 8\, m_k$, which follows from $C_0\, C_Z \ge c$. We specify $n_k = 2^{(k!)}$ and compute
\[ C_0\, C_Z \sum_{k > j} \frac{1}{m_k} \le \frac{2\, C_0\, C_Z}{m_{j+1}} = \frac{c}{4}\, \frac{n_j^{-\beta}}{m_j}, \qquad \frac{\log m_{j+1}}{\log n_{j+1}} \le \frac{\beta\, j! + O(1)}{(j+1)!} \to 0 \]
for large enough $j$. Thus we can neglect the negative term on the right-hand side of (2.3) at the price of a slightly smaller constant. Thus
\[ \inf_{y \in B_{R_j}^Y} \|x - y\|_Z \ge \frac{c}{4\, C_0}\, \frac{n_j^{-\beta}}{m_j}. \]
Finally, we conclude that for all $\gamma > \frac{\beta}{\alpha - \beta}$ we have
\[ \limsup_{R \to \infty}\, R^{\gamma} \inf_{y \in B_R^Y} \|x - y\|_Z \ge \lim_{j \to \infty} \Big( \frac{c}{2\, C_1} \Big)^{\gamma}\, \frac{c}{4\, C_0}\; n_j^{\gamma(\alpha - \beta) - \beta}\; m_j^{-(1 + \gamma)} = \infty, \]
since $\gamma(\alpha - \beta) - \beta > 0$ and $\log m_j / \log n_j \to 0$. ∎
3. Approximating Lipschitz Functions by Functions of Low Complexity
In this section, we apply Lemma 2.3 to the situation where general Lipschitz functions are approximated by functions in a space with much lower complexity. Examples include function spaces for infinitely wide neural networks with a single hidden layer and spaces for deep ResNets of bounded width. For simplicity, we first consider uniform approximation and then modify the ideas to also cover $L^2$-approximation.
3.1. Approximation in $C^0$
Consider the case where
(1) $Z = C^0([0,1]^d)$ is the space of continuous functions on the unit cube with the norm
\[ \|f\|_{C^0} = \sup_{x \in [0,1]^d} |f(x)|, \]
(2) $X = \mathrm{Lip}([0,1]^d)$ is the space of Lipschitz-continuous functions with the norm
\[ \|f\|_{\mathrm{Lip}} = \|f\|_{C^0} + \sup_{x \ne x'} \frac{|f(x) - f(x')|}{|x - x'|}, \]
(3) $Y$ is a Banach space of functions such that
(a) $Y$ embeds continuously into $C^0([0,1]^d)$, and
(b) the Monte-Carlo estimate
\[ \mathbb E_{x_1, \dots, x_n \sim \mathrm U([0,1]^d)}\; \sup_{\|f\|_Y \le 1} \Big| \int_{[0,1]^d} f\, dx - \frac{1}{n} \sum_{i=1}^n f(x_i) \Big| \le \frac{C}{\sqrt n} \]
holds.
Examples of admissible spaces for $Y$ are Barron space for two-layer ReLU networks and the compositional function space for deep ReLU ResNets of finite width, see [EMW19a, EMW18, EMW19b]. A brief review of Barron space is provided in Appendix A. The Monte-Carlo estimate is proved by estimating the Rademacher complexity of the unit ball in the respective function space. For Barron space, the constant $C$ grows at most like $\sqrt{\log d}$, and a similar bound holds for the compositional function space, see [EMW19b, Theorems 6 and 12].
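As a sanity check on the $n^{-1/2}$ Monte-Carlo rate, the following sketch (our illustration; the single-neuron integrand and all constants are chosen ad hoc) compares the mean integration error for a fixed ReLU ridge function at two sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w = rng.standard_normal(d)
w /= np.abs(w).sum()  # normalize the weight vector in the l1 norm
b = 0.1

def f(x):
    # a single ReLU neuron x -> max(w.x + b, 0), a prototypical Barron function
    return np.maximum(x @ w + b, 0.0)

# high-accuracy reference value for the integral of f over the unit cube
ref = f(rng.random((500_000, d))).mean()

def mc_error(n, trials=200):
    # mean absolute Monte-Carlo integration error with n uniform sample points
    return np.mean([abs(f(rng.random((n, d))).mean() - ref)
                    for _ in range(trials)])

ratio = mc_error(1_000) / mc_error(16_000)  # roughly sqrt(16) = 4
```

Increasing the sample size by a factor of $16$ shrinks the error by roughly $4$, independently of $d$, in line with the Rademacher-complexity bound quoted above.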
We observe the following: if $x_1, \dots, x_n \in [0,1]^d$, then
\[ \sup_{[f]_{\mathrm{Lip}} \le 1} \Big| \int_{[0,1]^d} f\, dx - \frac{1}{n} \sum_{i=1}^n f(x_i) \Big| = W_1\Big( \mathcal L^d,\; \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \Big) \]
is the $1$-Wasserstein distance between $d$-dimensional Lebesgue measure on the cube and the empirical measure generated by the random points – see [Vil08, Chapter 5] for further details on Wasserstein distances and the link between Lipschitz functions and optimal transport theory. The distance on $[0,1]^d$ for which the Wasserstein transportation cost is computed is the same for which $f$ is $1$-Lipschitz.
Empirical measures converge to the underlying distribution slowly in high dimension [FG15], by which we mean that
\[ \mathbb E\; W_1\Big( \mathcal L^d,\; \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \Big) \ge \frac{c_d}{n^{1/d}} \]
for some dimension-dependent constant $c_d > 0$. Observe that also
\[ \sup_{\|f\|_{\mathrm{Lip}} \le 1} \Big| \int_{[0,1]^d} f\, dx - \frac{1}{n} \sum_{i=1}^n f(x_i) \Big| \ge \frac{1}{1 + \mathrm{diam}}\; W_1\Big( \mathcal L^d,\; \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \Big), \]
where $\mathrm{diam}$ is the diameter of the $d$-dimensional unit cube with respect to the norm for which $f$ is $1$-Lipschitz. Here we used that replacing $f$ by $f + c$ for $c \in \mathbb R$ does not change the difference of the two expectations, and that on the space of functions vanishing at a fixed point the equivalence
\[ \|f\|_{C^0} \le \mathrm{diam} \cdot [f]_{\mathrm{Lip}} \]
holds. By $\omega_d$ we denote the Lebesgue measure of the unit ball in $\mathbb R^d$ with respect to the correct norm.
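The slow $n^{-1/d}$ rate can be observed directly: the mean distance from a uniform point to the nearest sample point lower-bounds $W_1$ between Lebesgue measure and the empirical measure, since every transported unit of mass must travel at least that far. A small numerical sketch (our illustration; the sizes are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, m = 5, 1000, 2000
pts = rng.random((n, d))      # atoms of the empirical measure
queries = rng.random((m, d))  # Monte-Carlo sample of Lebesgue measure

# distance from each query point to its nearest atom (chunked to limit memory)
nn_dist = np.concatenate([
    np.sqrt(((chunk[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    for chunk in np.array_split(queries, 10)
])
# Monte-Carlo estimate of E[dist to nearest atom], which lower-bounds W_1
mean_nn = nn_dist.mean()
```

For $d = 5$ and $n = 1000$ this mean distance is of the order $n^{-1/d} \approx 0.25$, far above the Monte-Carlo scale $n^{-1/2} \approx 0.03$.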
For every $n \in \mathbb N$ we can choose points $x_1, \dots, x_n$ in $[0,1]^d$ such that
\[ \sup_{\|f\|_Y \le 1} \Big| \int_{[0,1]^d} f\, dx - \frac{1}{n} \sum_{i=1}^n f(x_i) \Big| \le \frac{C}{\sqrt n} \qquad \text{and} \qquad W_1\Big( \mathcal L^d,\; \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \Big) \ge \frac{\bar c_d}{n^{1/d}}. \]
First, we prove the following. Claim: Let $x_1, \dots, x_n$ be any collection of points in $[0,1]^d$. Then
\[ W_1\Big( \mathcal L^d,\; \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \Big) \ge \frac{\bar c_d}{n^{1/d}}. \]
Proof of claim: Choose $r > 0$ and consider the set
\[ A_r = \bigcup_{i=1}^n B_r(x_i). \]
We observe that
\[ \mathcal L^d(A_r) \le n\, \omega_d\, r^d. \]
So any transport plan between $\mathcal L^d$ and the empirical measure needs to transport mass at least $1 - n\, \omega_d\, r^d$ by a distance of at least $r$. We conclude that
\[ W_1\Big( \mathcal L^d,\; \frac{1}{n} \sum_{i=1}^n \delta_{x_i} \Big) \ge \sup_{r > 0}\; r\, \big( 1 - n\, \omega_d\, r^d \big). \]
The supremum is attained when
\[ r = \big( (d+1)\, n\, \omega_d \big)^{-1/d}, \qquad \text{yielding } \bar c_d = \frac{d}{d+1}\, \big( (d+1)\, \omega_d \big)^{-1/d}. \]
This concludes the proof of the claim.
Proof of the Lemma: Since the Monte-Carlo estimate holds in expectation over random points, there exist points $x_1, \dots, x_n$ such that
\[ \sup_{\|f\|_Y \le 1} \Big| \int_{[0,1]^d} f\, dx - \frac{1}{n} \sum_{i=1}^n f(x_i) \Big| \le \frac{C}{\sqrt n}. \]
Using the claim, any such points satisfy the conditions. ∎
For any $n$, we fix such a collection of points and define
\[ T_n : C^0([0,1]^d) \to \mathbb R, \qquad T_n f = \int_{[0,1]^d} f\, dx - \frac{1}{n} \sum_{i=1}^n f(x_i). \]
Thus we can apply Lemma 2.3 with
\[ Z = C^0([0,1]^d), \quad W = \mathbb R, \quad X = \mathrm{Lip}([0,1]^d), \quad \alpha = \frac{1}{2}, \quad \beta = \frac{1}{d}. \]
There exists a $1$-Lipschitz function $f$ on $[0,1]^d$ such that
\[ \limsup_{R \to \infty}\, R^{\gamma} \inf_{\|g\|_Y \le R} \|f - g\|_{C^0} = \infty \]
for all $\gamma > \frac{2}{d - 2}$.
3.2. Approximation in $L^2$
Point evaluation functionals are no longer well defined if we choose $Z = L^2([0,1]^d)$. We therefore need to replace the point evaluations $f \mapsto f(x_i)$ by functionals of the type
\[ f \mapsto \frac{1}{|B_r|} \int_{B_r(x_i)} f(y)\, dy \]
for sample points $x_i$ and find a balance between the radii $r$ shrinking too fast (causing the norms of the functionals to blow up) and shrinking too slowly (leading to better approximation properties on Lipschitz functions).
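The first half of this trade-off can be quantified by Cauchy–Schwarz: the averaged evaluation is bounded on $L^2$, with a norm that blows up as $r \to 0$ (a standard computation, recorded here for convenience, with $\omega_d = |B_1|$ the volume of the unit ball):

```latex
\Bigl| \frac{1}{|B_r|} \int_{B_r(x_i)} f(y)\, dy \Bigr|
  \;\le\; \frac{|B_r|^{1/2}}{|B_r|}\, \|f\|_{L^2(B_r(x_i))}
  \;\le\; |B_r|^{-1/2}\, \|f\|_{L^2([0,1]^d)}
  \;=\; \omega_d^{-1/2}\, r^{-d/2}\, \|f\|_{L^2([0,1]^d)},
```

so the functionals remain uniformly bounded only if the radii do not shrink faster than this operator-norm budget allows, which is the balance described above.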
We interpret $[0,1]^d$ as the unit cube when considering function spaces, but as a $d$-dimensional flat torus when considering balls. Namely, the ball $B_r(x)$ in $[0,1]^d$ is to be understood as the projection of the ball of radius $r$ around $x$ in $\mathbb R^d$ onto $[0,1]^d$. This allows us to avoid boundary effects.
For every $n \in \mathbb N$ we can choose points $x_1, \dots, x_n$ in $[0,1]^d$ such that the estimates
\[ \sup_{\|f\|_Y \le 1} |T_n f| \le \frac{C_d}{\sqrt n} \qquad \text{and} \qquad \sup_{\|f\|_{\mathrm{Lip}} \le 1} |T_n f| \ge \frac{c_d}{n^{1/d}} \]
hold, where
\[ T_n f = \int_{[0,1]^d} f\, dx - \frac{1}{n} \sum_{i=1}^n \frac{1}{|B_{r_n}|} \int_{B_{r_n}(x_i)} f(y)\, dy. \]
$c_d$ and $C_d$ are dimension-dependent constants and
\[ r_n = \gamma\, n^{-1/d} \]
for a dimension-dependent constant $\gamma > 0$.
Proof of Lemma 3.3.
$L^2$-estimate. In all of the following, we rely on the interpretation of balls as periodic to avoid boundary effects. For a sample $x_1, \dots, x_n$ denote
\[ f_r(x) = \frac{1}{|B_r|} \int_{B_r(x)} f(y)\, dy. \]
It is easy to see that $f_r = f * \frac{\chi_{B_r}}{|B_r|}$, so
\[ \|f_r\|_{L^2} \le \|f\|_{L^2} \qquad \text{and} \qquad [f_r]_{\mathrm{Lip}} \le [f]_{\mathrm{Lip}}. \]
This allows us to estimate