1 Introduction
Since Barron's seminal article barron1993universal , artificial neural networks have been celebrated as a tool to beat the curse of dimensionality. Barron proved that two-layer neural networks with $m$ neurons and suitable non-linear activation can approximate a large (infinite-dimensional) class of functions $\mathcal B$ to within an error of order $m^{-1/2}$ in $L^2(\pi)$ for any Radon probability measure $\pi$ on $[0,1]^d$, independently of the dimension $d$, while any sequence of linear function spaces $V_m$ with $\dim V_m = m$ suffers from the curse of dimensionality if the data distribution is truly high-dimensional. More specifically,
$$\sup_{\|f\|_{\mathcal B}\le 1}\ \inf_{g\in V_m}\ \|f-g\|_{L^2(\pi)}\ \ge\ \frac{\kappa}{d}\, m^{-1/d}$$
for a universal constant $\kappa>0$ if $\pi$ is the uniform measure on $[0,1]^d$, where $\mathcal B$ describes the same function class that is approximated well by neural networks with $m$ parameters and $\|\cdot\|_{\mathcal B}$ denotes its natural norm. Thus, from the perspective of approximation theory, neural networks leave linear approximation in the dust in high dimensions.
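The dimension-independent rate for networks can be understood through a Monte Carlo heuristic; the following sketch is a standard argument and not a verbatim quotation of Barron's proof. If $f$ admits an integral representation over neurons, then sampling $m$ parameter vectors independently from the representing measure $\rho$ gives
$$f(x) \;=\; \int a\,\sigma(w^Tx+b)\,\rho\bigl(\mathrm d(a,w,b)\bigr), \qquad f_m(x) \;=\; \frac1m\sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i),$$
$$\mathbb E\,\|f-f_m\|_{L^2(\pi)}^2 \;=\; \frac1m\int \mathrm{Var}_{(a,w,b)\sim\rho}\bigl[a\,\sigma(w^Tx+b)\bigr]\,\pi(\mathrm dx) \;\le\; \frac Cm,$$
provided the variance is integrable, so that some realization of the $m$ sampled neurons achieves the rate $m^{-1/2}$ regardless of $d$.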
The perspective of approximation theory only establishes the existence of neural networks which approximate a given target function well in some sense, while in applications, it is important to find optimal (or at least reasonably good) parameter values for the network. The most common approach is to initialize the parameters randomly and optimize them by a gradient-descent based method. We focus on the case where the goal is to approximate a target function $f^*$ in $L^2(\pi)$ for some Radon probability measure $\pi$ on $\mathbb R^d$. To optimize the parameters $\theta = (a_i,w_i,b_i)_{i=1}^m$ of a two-layer network
$$f(x;\theta) \;=\; \sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i),$$
we therefore let $\theta$ evolve by the gradient flow of the risk functional
$$\mathcal R(\theta) \;=\; \frac12\int_{\mathbb R^d}\bigl(f(x;\theta)-f^*(x)\bigr)^2\,\pi(\mathrm dx).$$
In practice, we only have access to data sampled from an unknown underlying distribution $\pi$. The approximation therefore takes place in $L^2(\pi_n)$ instead of $L^2(\pi)$, where $\pi_n = \frac1n\sum_{i=1}^n\delta_{x_i}$ is the empirical measure of the data samples $x_1,\dots,x_n$. If all data points are sampled independently, the empirical measures $\pi_n$ converge weakly to the underlying distribution $\pi$ as $n\to\infty$. In this article, we focus on estimates which are uniform in the number of data samples $n$, and on the population risk.
While the optimization problem is non-convex, gradient flow-based optimization works astonishingly well in applications. The mechanism behind this is not fully understood. In certain scaling regimes in the number of parameters $m$ and the number of data points $n$, the empirical risk has been shown to decay exponentially (with high probability over the initialization), even when the target function values are chosen randomly in a bounded interval du2018gradient ; weinan2019comparative .
Networks which easily fit random data can be expected to have questionable generalization properties. Even at initialization, the network parameters are often chosen so large that reasonable control of the path norm, which governs the generalization error, is lost. This allows the network to fit any data sample with minimal change in the parameters, so that it behaves much like its linearization around the initial configuration (an infinitely wide random feature model), see weinan2019comparative . This approach explains how very wide two-layer networks behave, but it does not explain why neural networks are more powerful in applications than random feature models.
On the opposite side of the spectrum lies the mean field regime chizat2018global ; mei2018mean ; rotskoff2018neural ; sirignano2018mean . Under mean field scaling, a two-layer network with $m$ neurons and parameters $\theta = (a_i,w_i,b_i)_{i=1}^m$ is given as
$$f_m(x;\theta) \;=\; \frac1m\sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i).$$
Both concepts of neural network are equivalent from the perspective of approximation theory (statics), but behave entirely differently under gradient descent training (dynamics), see e.g. chizat2018note . In the mean field regime, parameters may move a significant distance from their initialization, making use of the adaptive feature choice which distinguishes neural networks from random feature models. This regime thus has greater potential to establish the superiority of artificial neural networks over kernel methods.
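For orientation, the two parameterizations can be written side by side; the $1/\sqrt m$ normalization shown for the lazy regime is one common convention and is not taken from this article:
$$f_m^{\mathrm{mean\ field}}(x) \;=\; \frac1m\sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i), \qquad f_m^{\mathrm{lazy}}(x) \;=\; \frac1{\sqrt m}\sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i).$$
With the $1/m$ normalization, individual parameters must move an order-one distance before the represented function changes appreciably, whereas with the larger normalization small parameter displacements suffice, which is what pins the dynamics to the linearization around the initialization.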
Mean field gradient flows do not resemble their linearization at the initial condition. The convergence of gradient descent training to minimizers of the often highly non-convex loss functionals is therefore not obvious (and, for poorly chosen initial values, generally not true). Even if empirical and population risk decay to zero along the gradient flow, population risk may do so at very slow rates in high dimension. Our main result makes this precise.
Theorem 1.
Let $\sigma:\mathbb R\to\mathbb R$ be a Lipschitz-continuous activation function. Consider population and empirical risk expressed by the functionals
$$\mathcal R(\theta) \;=\; \frac12\int_{[0,1]^d}\bigl(f_m(x;\theta)-f^*(x)\bigr)^2\,\mathrm dx, \qquad \widehat{\mathcal R}_n(\theta) \;=\; \frac1{2n}\sum_{i=1}^n\bigl(f_m(x_i;\theta)-f^*(x_i)\bigr)^2,$$
where $f^*$ is a Lipschitz-continuous target function and the points $x_1,\dots,x_n$ are iid samples from the uniform distribution on $[0,1]^d$. There exists $f^*$ with Lipschitz constant and $L^\infty$-norm bounded by $1$ such that parameters $\theta_t$ evolving by the gradient flow of either $\widehat{\mathcal R}_n$ or $\mathcal R$ itself satisfy
$$\mathcal R(\theta_t)\ \ge\ c_\delta\,(1+t)^{-\frac{4}{d-2}-\delta}$$
for all $t\ge0$ and all $\delta>0$, with a constant $c_\delta>0$ depending on $\delta$ and the initialization. Intuitively, this means that the estimate $\mathcal R(\theta_t)\gtrsim t^{-\frac{4}{d-2}}$ is almost true. The result holds uniformly in the width $m$ and the number of data samples $n$, and even for infinitely wide networks. An infinitely wide mean field two-layer network (or Barron function) is a function
$$f_\rho(x) \;=\; \int_{\mathbb R^{d+2}} a\,\sigma(w^Tx+b)\,\rho\bigl(\mathrm d(a,w,b)\bigr),$$
where $\rho$ is a suitable Radon probability measure on $\mathbb R^{d+2}$. Networks of finite width are included in this definition by setting $\rho = \frac1m\sum_{i=1}^m\delta_{(a_i,w_i,b_i)}$. It has been observed (see e.g. (chizat2018global, Proposition B.1) ) that the vectors $(a_i,w_i,b_i)_{i=1}^m$ move by the usual gradient flow of the risk functional if and only if the associated measure $\rho$ evolves by the time-rescaled Wasserstein gradient flow of the same functional, viewed as a function of the parameter measure. We show the following more general result, which implies Theorem 1.
Theorem 2.
Let $\sigma$ be a Lipschitz-continuous activation function. Consider population and empirical risk expressed by the functionals
$$\mathcal R(\rho) \;=\; \frac12\int_{[0,1]^d}\bigl(f_\rho(x)-f^*(x)\bigr)^2\,\mathrm dx, \qquad \widehat{\mathcal R}_n(\rho) \;=\; \frac1{2n}\sum_{i=1}^n\bigl(f_\rho(x_i)-f^*(x_i)\bigr)^2,$$
where $f^*$ is a Lipschitz-continuous target function and the points $x_1,\dots,x_n$ are iid samples from the uniform distribution on $[0,1]^d$. There exists $f^*$ with Lipschitz constant and $L^\infty$-norm bounded by $1$ such that parameter measures $\rho_t$ evolving by the Wasserstein gradient flow of either $\widehat{\mathcal R}_n$ or $\mathcal R$ satisfy
$$\mathcal R(\rho_t)\ \ge\ c_\delta\,(1+t)^{-\frac{4}{d-2}-\delta}$$
for all $t\ge0$ and all $\delta>0$.
Theorem 2 provides a more general perspective than Theorem 1. The Wasserstein gradient flow of $\mathcal R$ is given by the continuity equation
$$\partial_t\rho_t \;=\; \nabla\cdot\Bigl(\rho_t\,\nabla\frac{\delta\mathcal R}{\delta\rho}(\rho_t,\cdot)\Bigr),$$
where $\frac{\delta\mathcal R}{\delta\rho}$ is the variational gradient of the risk functional. In particular, any other discretization of this PDE experiences the same curse of dimensionality phenomenon. Besides gradient descent training, this also captures stochastic gradient descent with large batch size and small time steps (to leading order). Viewing machine learning through the lens of classical numerical analysis may illuminate the large data and many parameter regime, see E:2019aa .
The article is structured as follows. In the remainder of the introduction, we discuss some previous works on related questions. In Section 2, we discuss Wasserstein gradient flows for mean-field two-layer neural networks and review a result from approximation theory. Next, we show in Section 3 that Wasserstein gradient flows for two-layer neural network training may experience a curse of dimensionality phenomenon. The analytical result is backed up by numerical evidence in Section 4. We conclude the article by discussing the significance of our result and related open problems in Section 5. In an appendix, we show that a similar phenomenon can be established when training an infinitely wide random feature model on a single neuron target function.
1.1 Previous Work
The study of mean field training for neural networks with a single hidden layer has been initiated independently in several works chizat2018global ; rotskoff2018neural ; sirignano2018mean ; mei2018mean . In chizat2018note , the authors compare mean field and classical training. chizat2018global ; Chizat:2020aa ; arbel2019maximum contain an analysis of whether gradient flows starting at a suitable initial condition converge to their global minimum. This analysis is extended to networks with ReLU activation in relutraining .
In hu2019mean , the authors consider a training algorithm where standard Gaussian noise is added to the parameter gradient of the risk functional. The evolution of network parameters is described by the Wasserstein gradient flow of an energy functional which combines the loss functional and an entropy regularization. In this case, the parameter distribution approaches the stationary measure of a Markov process as time approaches infinity, which is close to a minimizer of the mean field risk functional if the noise is small. Note, however, that these results do not describe the small batch stochastic gradient descent algorithm used in practice, for which the noise may be assumed to be Gaussian, but with a complicated parameter-dependent covariance structure hu2019diffusion ; li2015dynamics .
Some results in chizat2018global
also apply to deeper structures with more than one hidden layer. However, the imposition of a linear structure implies that each neuron in the outer layer has its own set of parameters for the deeper layers. A mean field training theory for more realistic deep networks has been developed heuristically in
nguyen2019mean and rigorously in araujo2019mean ; nguyen2020rigorous ; sirignano2019mean under the assumption that the parameters in different layers are initialized independently. The distribution of parameters remains a product measure for positive time, so that cross-interactions with infinitely many particles in the following layer (as the width approaches infinity) are replaced by ensemble averages. This 'propagation of chaos' is the key ingredient of the analysis.

In abbe2018provable , the authors take a different approach to establish limitations of neural network models in machine learning, see also shamir2018distribution ; raz2018fast . Our approach is different in that we allow networks of infinite width and infinite amounts of data.
2 Background
2.1 Why Wasserstein?
Let us quickly summarize the rationale behind studying Wasserstein gradient flows of risk functionals. This section only serves as a rough overview; see Chizat:2020aa for a more thorough introduction to Wasserstein gradient flows for machine learning and ambrosio2008gradient ; santambrogio2015optimal ; villani2008optimal for Wasserstein gradient flows and optimal transport in general.
Consider a general function class whose elements can be represented as normalized sums
$$f_m(x) \;=\; \frac1m\sum_{i=1}^m \phi(x;\theta_i)$$
of functions in a parameterized family $\{\phi(\cdot\,;\theta) : \theta\in\mathbb R^D\}$, or more generally as integrals
$$f_\rho(x) \;=\; \int_{\mathbb R^D}\phi(x;\theta)\,\rho(\mathrm d\theta)$$
against a parameter measure $\rho$ (finite sums correspond to $\rho = \frac1m\sum_i\delta_{\theta_i}$). In the case of two-layer networks, $\theta = (a,w,b)$ and $\phi(x;\theta) = a\,\sigma(w^Tx+b)$. If the activation function $\sigma$ is Lipschitz-continuous, then $|\phi(x;\theta)| \le C\,(1+|x|)\,(1+|\theta|^2)$ for all $x$ and $\theta$. Thus $f_\rho$ is well-defined if $\rho$ has finite second moments, i.e. $\rho$ lies in the Wasserstein space $\mathcal P_2(\mathbb R^D)$. We consider the risk functional
$$\mathcal R(\rho) \;=\; \frac12\int\bigl(f_\rho(x)-f^*(x)\bigr)^2\,\pi(\mathrm dx)$$
for some data distribution $\pi$ on $\mathbb R^d$. Note that $\inf_{\rho\in\mathcal P_2}\mathcal R(\rho) = 0$ if the support of $\pi$ is compact and the class has the uniform approximation property on compact sets (by which we mean that the class is dense in $C^0(K)$ for every compact set $K$). This is the case for two-layer networks with non-polynomial activation functions – see e.g. cybenko1989approximation ; hornik1991approximation for continuous sigmoidal activation functions. The same result holds for ReLU activation since $x\mapsto\sigma(x+1)-\sigma(x)$ is sigmoidal.
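The gradient flows considered below are driven by the first variation of this risk functional; for the quadratic loss above it takes the following explicit form (a standard computation under the conventions of this section):
$$\frac{\delta\mathcal R}{\delta\rho}(\rho,\theta) \;=\; \int\bigl(f_\rho(x)-f^*(x)\bigr)\,\phi(x;\theta)\,\pi(\mathrm dx), \qquad \nabla_\theta\frac{\delta\mathcal R}{\delta\rho}(\rho,\theta) \;=\; \int\bigl(f_\rho(x)-f^*(x)\bigr)\,\nabla_\theta\phi(x;\theta)\,\pi(\mathrm dx).$$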
Lemma 3.
(chizat2018global, Proposition B.1) The parameters $\theta_i = (a_i,w_i,b_i)$ evolve by the time-accelerated gradient flow
$$\dot\theta_i(t) \;=\; -\,m\,\nabla_{\theta_i}\,\mathcal R\Bigl(\tfrac1m\sum_{j=1}^m\delta_{\theta_j(t)}\Bigr)$$
of $\mathcal R$ if and only if their distribution $\rho_t = \frac1m\sum_{i=1}^m\delta_{\theta_i(t)}$ evolves by the Wasserstein gradient flow
$$\partial_t\rho_t \;=\; \nabla\cdot\Bigl(\rho_t\,\nabla\frac{\delta\mathcal R}{\delta\rho}(\rho_t,\cdot)\Bigr).$$
The continuity equation describing the gradient flow is understood in the sense of distributions. By the equivalence in Lemma 3, all results below apply to networks with finitely many neurons as well as to infinitely wide mean field networks. In this article, we do not concern ourselves with the existence of solutions to the gradient flow equations. More details can be found in chizat2018global for general activation functions with a higher degree of smoothness and in relutraining for ReLU activation.
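The equivalence of Lemma 3 can be made concrete in a few lines of code. The following Python/NumPy sketch runs forward-Euler gradient descent for a finite-width ReLU network, i.e. a particle discretization of the Wasserstein gradient flow above; the Gaussian data, the norm target and all hyperparameters are illustrative choices and not the setup used in Section 4.

import numpy as np

rng = np.random.default_rng(0)

d, m, n = 10, 256, 2048            # input dimension, width, sample size
lr, steps = 0.05, 500              # step size and number of gradient descent steps

# Illustrative data distribution and Lipschitz target (not taken from the article).
X = rng.standard_normal((n, d))
f_star = np.linalg.norm(X, axis=1)

# Mean field two-layer ReLU network: f(x) = (1/m) * sum_i a_i * relu(w_i . x + b_i).
a = rng.standard_normal(m)
W = rng.standard_normal((m, d)) / np.sqrt(d)
b = np.zeros(m)

for step in range(steps):
    pre = X @ W.T + b                          # (n, m) pre-activations
    act = np.maximum(pre, 0.0)                 # ReLU
    res = act @ a / m - f_star                 # residual f_m - f^*
    risk = 0.5 * np.mean(res ** 2)

    # Gradients of the empirical risk with respect to each particle (a_i, w_i, b_i).
    grad_a = act.T @ res / (n * m)
    grad_pre = res[:, None] * (pre > 0) * a[None, :] / m
    grad_W = grad_pre.T @ X / n
    grad_b = grad_pre.sum(axis=0) / n

    # Each particle moves with velocity -m * grad: forward Euler for the
    # time-accelerated flow, i.e. a particle discretization of the PDE above.
    a -= lr * m * grad_a
    W -= lr * m * grad_W
    b -= lr * m * grad_b

    if step % 100 == 0:
        print(f"step {step:4d}   empirical risk {risk:.4f}")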
2.2 Growth of Second Moments
Denote the second moment of $\rho$ by
$$N(\rho) \;=\; \int|\theta|^2\,\rho(\mathrm d\theta).$$
A direct calculation establishes that $\frac{\mathrm d}{\mathrm dt}\sqrt{N(\rho_t)} \le \sqrt{-\frac{\mathrm d}{\mathrm dt}\mathcal R(\rho_t)}$ along the Wasserstein gradient flow, which implies the following.
Lemma 4.
(relutraining, Lemma 3.3) If $\rho_t$ evolves by the Wasserstein gradient flow of $\mathcal R$, then
$$\sqrt{N(\rho_t)} \;\le\; \sqrt{N(\rho_0)} + \sqrt{t\,\mathcal R(\rho_0)} \qquad\text{for all } t\ge0.$$
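The direct calculation behind this bound can be sketched as follows, under the quadratic-risk conventions of Section 2.1 (a standard argument, recorded here for convenience). Along the flow $\partial_t\rho_t = \nabla\cdot(\rho_t\nabla\frac{\delta\mathcal R}{\delta\rho})$,
$$\frac{\mathrm d}{\mathrm dt}N(\rho_t) \;=\; -2\int\theta\cdot\nabla\frac{\delta\mathcal R}{\delta\rho}(\rho_t,\theta)\,\rho_t(\mathrm d\theta) \;\le\; 2\sqrt{N(\rho_t)}\Bigl(\int\Bigl|\nabla\frac{\delta\mathcal R}{\delta\rho}\Bigr|^2\mathrm d\rho_t\Bigr)^{1/2} \;=\; 2\sqrt{N(\rho_t)}\,\sqrt{-\tfrac{\mathrm d}{\mathrm dt}\mathcal R(\rho_t)},$$
using the Cauchy–Schwarz inequality and the energy dissipation identity $\frac{\mathrm d}{\mathrm dt}\mathcal R(\rho_t) = -\int|\nabla\frac{\delta\mathcal R}{\delta\rho}|^2\,\mathrm d\rho_t$. Hence $\frac{\mathrm d}{\mathrm dt}\sqrt{N(\rho_t)} \le \sqrt{-\frac{\mathrm d}{\mathrm dt}\mathcal R(\rho_t)}$, and integrating in time with one more application of Cauchy–Schwarz gives
$$\sqrt{N(\rho_t)}-\sqrt{N(\rho_0)} \;\le\; \int_0^t\sqrt{-\tfrac{\mathrm d}{\mathrm ds}\mathcal R(\rho_s)}\,\mathrm ds \;\le\; \sqrt{t\,\bigl(\mathcal R(\rho_0)-\mathcal R(\rho_t)\bigr)} \;\le\; \sqrt{t\,\mathcal R(\rho_0)}.$$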
Remark 5.
If $\mathcal R(\rho_t)$ is a priori known to decrease at a specific rate, a stronger result holds. Under the fairly restrictive assumption that the dissipation $-\frac{\mathrm d}{\mathrm dt}\mathcal R(\rho_t)$ decays like $t^{-2}$, a logarithmic rather than linear growth bound holds. In particular, if $\mathcal R(\rho_t)$ decays like $t^{-1}$ as in the convex case, the most natural decay assumption on the derivative is $-\frac{\mathrm d}{\mathrm dt}\mathcal R(\rho_t)\le C\,t^{-2}$, which corresponds to exactly this situation. Thus, in this case we expect the second moments of $\rho_t$ to blow up at most logarithmically, which agrees with the results of berlyand2018convergence .
2.3 Slow Approximation Results in High Dimension
In this section, we recall a result from high-dimensional approximation theory. An infinitely wide two-layer network is a function
$$f_\rho(x) \;=\; \int a\,\sigma(w^Tx+b)\,\rho\bigl(\mathrm d(a,w,b)\bigr).$$
The choice of the parameter distribution $\rho$ for $f_\rho$ is non-unique since $f_\rho \equiv 0$ for all measures $\rho$ which are invariant under the coordinate reflection $(a,w,b)\mapsto(-a,w,b)$. For ReLU activation, further non-uniqueness stems from the fact that
$$a\,\sigma(w^Tx+b) \;=\; \frac a\lambda\,\sigma\bigl(\lambda\,w^Tx+\lambda\,b\bigr) \qquad\text{for all }\lambda>0.$$
The path-norm or Barron norm of a function is the norm which measures the amount of distortion done to an input along any path which information takes through the network. Due to the non-uniqueness, it is defined as an infimum
$$\|f\|_{\mathcal B} \;=\; \inf\Bigl\{\int|a|\,\bigl(|w|+|b|\bigr)\,\rho\bigl(\mathrm d(a,w,b)\bigr)\ :\ f = f_\rho\Bigr\}.$$
The equality $f = f_\rho$ is understood in the almost everywhere sense for the data distribution $\pi$. A more thorough introduction can be found in weinan2019lei ; E:2018ab or bach2017breaking , where a special instance of the same space is referred to as $\mathcal F_1$. Every ReLU-Barron function is Lipschitz-continuous. In high dimensions, the opposite is far from true.
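As a concrete instance of this infimum (a standard observation under the conventions above, not a statement quoted from the references), a network of finite width corresponds to the empirical parameter measure and therefore satisfies
$$\Bigl\|\frac1m\sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i)\Bigr\|_{\mathcal B} \;\le\; \frac1m\sum_{i=1}^m|a_i|\,\bigl(|w_i|+|b_i|\bigr),$$
with strict inequality possible because different parameter measures can represent the same function.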
Theorem 6.
(approximationarticle) Let $\pi$ be the uniform distribution on $[0,1]^d$. There exists a function $f^*:[0,1]^d\to\mathbb R$ with Lipschitz constant and $L^\infty$-norm bounded by $1$ and a constant $c_d>0$ such that
$$\inf\bigl\{\|f^*-f_\rho\|_{L^2(\pi)}\ :\ \|f_\rho\|_{\mathcal B}\le Q\bigr\}\ \ge\ c_d\,Q^{-\frac{2}{d-2}} \qquad\text{for all }Q\ge1.$$
This means that the gap between Lipschitz functions and Barron functions of norm at most $Q$ persists on a sequence of scales $Q^{-\frac2{d-2}}$, i.e. in high dimension there are Lipschitz functions which are poorly approximated by Barron functions of low norm. The proof of Theorem 6 is built on the observation that Monte Carlo integration converges uniformly on Lipschitz functions and Barron functions with very different rates, suggesting a scale separation.
3 A Dynamic Curse of Dimensionality
Proof of Theorem 2.
Remark 7.
The result can be improved under additional assumptions. Like in Remark 5, assume that the difference quotients of the risk along the flow satisfy an algebraic decay bound. Then, by the argument behind Lemma 4, the second moment $N(\rho_t)$, and with it the Barron norm of $f_{\rho_t}$, grows noticeably slower than linearly in $t$. If $f^*$ is the target function of Theorem 6 and the risk decays to zero, this slower norm growth can be played off against the approximation lower bound, so that the admissible decay of the risk is constrained even more strongly than in Theorem 2.
4 Numerical Results
For parameters $\theta = (a_i,w_i,b_i)_{i=1}^m$, we consider the associated two-layer network with ReLU activation
$$f_m(x;\theta) \;=\; \frac1m\sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i), \qquad \sigma(z)=\max\{z,0\}.$$
As risk functional we choose
$$\mathcal R(\theta) \;=\; \frac12\,\mathbb E_{x\sim\pi}\Bigl[\bigl(f_m(x;\theta)-f^*(x)\bigr)^2\Bigr]$$
for the data distribution $\pi$.
The target function in our simulations is either a Barron function, which can be represented with a constant coefficient $a$ and weights $w$ distributed uniformly on a sphere but not with finitely many neurons, or a Lipschitz-continuous target function which does not lie in Barron space. In both cases, the target function is Lipschitz-continuous.
For a proof that the second target is not a Barron function, see barron_new . In the first case, the Barron norm of $f^*$ also grows with the dimension $d$. The offset from the origin is used to avoid spurious effects since the initial parameter distribution is symmetric around the origin. In the simulations, we considered moderately wide networks with $m$ neurons. The parameters were initialized iid according to Gaussians, with expectation and variance chosen separately for the coefficients $a_i$, the weights $w_i$ and the biases $b_i$. They were optimized by (non-stochastic) gradient descent for an empirical risk functional
$$\widehat{\mathcal R}_n(\theta) \;=\; \frac1{2n}\sum_{j=1}^n\bigl(f_m(x_j;\theta)-f^*(x_j)\bigr)^2$$
with $n$ independent samples $x_1,\dots,x_n$. Population risk was approximated by an empirical risk functional evaluated on a larger set of independent samples. On the data samples, the mean and variance of the target functions were estimated to lie in comparable ranges for all simulations.
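A minimal, self-contained sketch of this setup is given below in Python/NumPy. The width, sample sizes, step size, initialization scales, the uniform data distribution on $[0,1]^d$ and the Barron-type target $|x-x_0|$ are illustrative assumptions; they are not the values and targets used to produce the figures.

import numpy as np

rng = np.random.default_rng(1)

d, m = 30, 500                       # dimension and width (illustrative)
n_train, n_test = 2000, 10000        # empirical / approximate-population sample sizes
lr, steps = 0.05, 2000

x0 = np.full(d, 0.5)                 # offset away from the origin

def target(Z):
    return np.linalg.norm(Z - x0, axis=1)   # assumed Barron-type target |x - x0|

X_train = rng.uniform(0.0, 1.0, (n_train, d))
X_test = rng.uniform(0.0, 1.0, (n_test, d))
y_train, y_test = target(X_train), target(X_test)

a = rng.normal(0.0, 0.1, m)          # symmetric around zero, so f ~ 0 at initialization
W = rng.normal(0.0, 1.0 / np.sqrt(d), (m, d))
b = np.zeros(m)

def net(Z):
    return np.maximum(Z @ W.T + b, 0.0) @ a / m

def risk(Z, y):
    return 0.5 * np.mean((net(Z) - y) ** 2)

def path_norm():
    # Upper bound on the Barron norm realized by the current parameters.
    return np.mean(np.abs(a) * (np.linalg.norm(W, axis=1) + np.abs(b)))

for step in range(steps + 1):
    if step % 400 == 0:
        print(f"t={step:5d}  empirical={risk(X_train, y_train):.4f}  "
              f"population~={risk(X_test, y_test):.4f}  path norm~={path_norm():.3f}")

    pre = X_train @ W.T + b
    act = np.maximum(pre, 0.0)
    res = act @ a / m - y_train

    grad_a = act.T @ res / (n_train * m)
    grad_pre = res[:, None] * (pre > 0) * a[None, :] / m
    grad_W = grad_pre.T @ X_train / n_train
    grad_b = grad_pre.sum(axis=0) / n_train

    # Mean field time scale: accelerate the plain gradient by the width m.
    a -= lr * m * grad_a
    W -= lr * m * grad_W
    b -= lr * m * grad_b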
In Figure 1 we see that both empirical and population risk decay very similarly for Barron target functions in any dimension, while the decay of risk becomes significantly slower in high dimension for target functions which are not Barron. The empirically observed decay rate (the slope of the risk in a log-log plot over time) becomes smaller for fixed positive time and non-Barron target functions as $d$ increases, see Figure 3.
Training appears to proceed in two regimes for Barron target functions: an initial phase in which both the Barron norm and the risk change rapidly, and a longer phase in which the risk decays gradually and the Barron norm remains roughly constant. In the initial 'radial' phase, the vector $(a_i,w_i,b_i)$ is subject to a strong radial force driving the parameters towards the origin or away from the origin exponentially fast. Since ReLU is positively homogeneous, we observe that
$$\frac{\mathrm d}{\mathrm dt}(a_i,w_i,b_i) \;=\; -\,m\,\nabla_{(a_i,w_i,b_i)}\,\widehat{\mathcal R}_n(\theta)$$
with a positively one-homogeneous right-hand side. Thus, while $f_m(\cdot;\theta_t)$ is close to its initialization ($f_m\approx0$ due to symmetry in $a$), the vector $(a_i,w_i,b_i)$ moves towards the origin or away from the origin at an exponential rate, depending on the alignment of the corresponding neuron with the residual $f^*-f_m$. The exponential growth ceases as $f_m(\cdot;\theta_t)$ becomes sufficiently close to $f^*$ (in the weak topology).
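Writing out the two components of this gradient (a standard computation under the conventions of Section 2, with the population or empirical data average denoted by $\pi$) makes the one-homogeneity explicit:
$$\dot a_i \;=\; -\int\bigl(f_m(x;\theta)-f^*(x)\bigr)\,\sigma(w_i^Tx+b_i)\,\pi(\mathrm dx), \qquad (\dot w_i,\dot b_i) \;=\; -\,a_i\int\bigl(f_m(x;\theta)-f^*(x)\bigr)\,\sigma'(w_i^Tx+b_i)\,(x,1)\,\pi(\mathrm dx).$$
The first right-hand side is one-homogeneous in $(w_i,b_i)$ and independent of $a_i$, the second is linear in $a_i$ and zero-homogeneous in $(w_i,b_i)$, so as long as the residual is essentially frozen, a neuron whose sign pattern is favorably aligned with the residual grows exponentially in norm.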
After the initial strengthening of neurons which are generally aligned with the target function, we reach a more stable state. In the following 'angular' phase, the Barron norm remains roughly constant and directional adjustments to the parameters dominate over radial adjustments. Using Figures 1 and 2, we can easily spot the transition between the two training regimes at an early, well-defined time.
The gap between empirical risk and population risk increases in high dimensions. When training the same networks on the same problems for empirical risk with only 4,000 data points, the results are very similar in dimension 30, but in dimension 250 the empirical risk decays very quickly while the population risk increases rather than decreases for a non-Barron target function. This is to be expected since a) the Wasserstein distance between the Lebesgue measure and the empirical measure increases with the dimension and b) the number of trainable parameters increases with $d$, making it easier to fit point values. For Barron target functions, the risk decays approximately like a power law $t^{-\alpha}$ with an exponent that is larger in higher dimension. This is faster than one would expect for generic convex target functions.
5 Discussion
In this article, we have shown that in the mean field regime, training a two-layer neural network on empirical or population risk may not decrease the population risk faster than $t^{-\frac{4}{d-2}-\delta}$ for any $\delta>0$ when the data distribution is truly $d$-dimensional, we consider $L^2$-loss, and the target function is merely Lipschitz-continuous but does not lie in Barron space. The key ingredients of the result are the slow growth of path norms during gradient flow training and the observation that there exists a Lipschitz function which is badly approximable by Barron functions of low norm in high dimension.
It is straightforward to extend the main result to general least-squares minimization. All statements remain true if instead of 'risk decays to zero' we substitute 'risk decays to minimum Bayes risk'.
5.1 Interpretation
The curse of dimensionality phenomenon occurs when the target function is not in Barron space, i.e. a minimizer of the risk functional does not exist in the model class. In this situation, even gradient flows of smooth convex functions in one dimension may be slow. The gradient flow ODE
$$\dot x \;=\; -\,E'(x), \qquad E(x) = x^{-\alpha}\quad (x>0,\ \alpha>0),$$
is solved by
$$x(t) \;=\; \bigl(x_0^{\alpha+2} + \alpha(\alpha+2)\,t\bigr)^{\frac1{\alpha+2}}.$$
The energy decays as
$$E\bigl(x(t)\bigr) \;=\; \bigl(x_0^{\alpha+2} + \alpha(\alpha+2)\,t\bigr)^{-\frac\alpha{\alpha+2}} \;\sim\; t^{-\frac\alpha{\alpha+2}}.$$
If $\alpha\ll1$, the energy decay is extremely slow. Thus, it should be expected that curse of dimensionality phenomena can occur whenever the risk functional does not have a minimizer in the function space associated with the neural network model under consideration. The numerical evidence of Section 4 suggests that the slow decay phenomenon is visible also in empirical risk if the training sample is large enough (depending on the dimension).
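A few lines of Python, using forward-Euler time stepping and illustrative values of $\alpha$, make the slow decay visible:

def energy_after(alpha, x0=1.0, dt=1e-3, T=1e3):
    """Integrate dx/dt = -E'(x) for E(x) = x**(-alpha) on (0, infinity) by forward Euler."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * alpha * x ** (-alpha - 1.0)
    return x ** (-alpha)                      # E(x(T))

T = 1e3
for alpha in (2.0, 0.2):
    exact = (1.0 + alpha * (alpha + 2) * T) ** (-alpha / (alpha + 2))
    print(f"alpha={alpha}:  numerical E(x(T)) = {energy_after(alpha, T=T):.3e},  "
          f"exact {exact:.3e}  (rate t^(-{alpha / (alpha + 2):.2f}))")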
5.2 Implications for Machine Learning Theory
Understanding function spaces associated to neural network architectures is of great practical importance. When a minimization problem does not admit a solution in a given function space, gradient descent training may be very slow in high dimension. Unlike the theory of function spaces typically used in low-dimensional problems of elasticity theory, fluid mechanics etc., no comprehensive theory of Banach spaces of neural networks is available except for very special cases E:2019aa ; weinan2019lei . In the light of our result, a convergence proof for mean field gradient descent training of two-layer neural networks must satisfy one of two criteria: it must either assume the existence of a minimizer, or it must allow for slow convergence rates in high dimension.
References
 (AGS08) Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2008.
 (AKSG19) Michael Arbel, Anna Korba, Adil Salim, and Arthur Gretton. Maximum mean discrepancy gradient flow. In Advances in Neural Information Processing Systems, pages 6481–6491, 2019.
 (AOY19) Dyego Araújo, Roberto I Oliveira, and Daniel Yukimura. A mean-field limit for certain deep neural networks. arXiv:1906.00193 [math.ST], 2019.
 (AS18) Emmanuel Abbe and Colin Sandon. Provable limitations of deep learning. arXiv:1812.06369 [cs.LG], 2018.
 (Bac17) Francis Bach. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research, 18(1):629–681, 2017.

 (Bar93) Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.
 (BJ18) Leonid Berlyand and Pierre-Emmanuel Jabin. On the convergence of formally diverging neural net-based classifiers. Comptes Rendus Mathematique, 356(4):395–405, 2018.
 (CB18a) Lenaic Chizat and Francis Bach. A note on lazy training in supervised differentiable programming. arXiv:1812.07956 [math.OC], 2018.
 (CB18b) Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems, pages 3036–3046, 2018.
 (CB20) Lenaic Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. arXiv:2002.04486 [math.OC], 2020.
 (Cyb89) George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989.
 (DZPS18) Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv:1810.02054 [cs.LG], 2018.
 (EMW18) Weinan E, Chao Ma, and Lei Wu. A priori estimates of the population risk for two-layer neural networks. Comm. Math. Sci., 17(5):1407–1425 (2019), arXiv:1810.06397 [cs.LG] (2018).
 (EMW19a) Weinan E, Chao Ma, and Lei Wu. Barron spaces and the compositional function spaces for neural network models. arXiv:1906.08039 [cs.LG], 2019.
 (EMW19b) Weinan E, Chao Ma, and Lei Wu. Machine learning from a continuous viewpoint. arxiv:1912.12777 [math.NA], 2019.
 (EMW19c) Weinan E, Chao Ma, and Lei Wu. A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics. Sci. China Math., https://doi.org/10.1007/s1142501916285, arXiv:1904.04326 [cs.LG] (2019).
 (EW20a) Weinan E and Stephan Wojtowytsch. Barron functions and their representation. In preparation, 2020.
 (EW20b) Weinan E and Stephan Wojtowytsch. Kolmogorov width decay and poor approximators in machine learning: Shallow neural networks, random feature models and neural tangent kernels. In preparation, 2020.
 (HLLL19) Wenqing Hu, Chris Junchi Li, Lei Li, and Jian-Guo Liu. On the diffusion approximation of non-convex stochastic gradient descent. Annals of Mathematical Sciences and Applications, 4(1):3–32, 2019.
 (Hor91) Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
 (HRSS19) Kaitong Hu, Zhenjie Ren, David Siska, and Lukasz Szpruch. Mean-field Langevin dynamics and energy landscape of neural networks. arXiv:1905.07769 [math.PR], 2019.
 (LTE15) Qianxiao Li, Cheng Tai, and Weinan E. Dynamics of stochastic gradient algorithms. arXiv:1511.06251 [cs.LG], 2015.
 (MMN18) Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.
 (Ngu19) Phan-Minh Nguyen. Mean field limit of the learning dynamics of multilayer neural networks. arXiv:1902.02880 [cs.LG], 2019.
 (NP20) Phan-Minh Nguyen and Huy Tuan Pham. A rigorous framework for the mean field limit of multilayer neural networks. arXiv:2001.11443 [cs.LG], 2020.
 (Raz18) Ran Raz. Fast learning requires good memory: A time-space lower bound for parity learning. Journal of the ACM (JACM), 66(1):1–18, 2018.
 (RVE18) Grant M Rotskoff and Eric Vanden-Eijnden. Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error. arXiv:1805.00915 [stat.ML], 2018.
 (San15) Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, 2015.
 (Sha18) Ohad Shamir. Distributionspecific hardness of learning neural networks. The Journal of Machine Learning Research, 19(1):1135–1163, 2018.
 (SS19) Justin Sirignano and Konstantinos Spiliopoulos. Mean field analysis of deep neural networks. arXiv:1903.04440 [math.PR], 2019.

 (SS20) Justin Sirignano and Konstantinos Spiliopoulos. Mean field analysis of neural networks: A law of large numbers. SIAM J. Appl. Math., 80(2):725–752, 2020.
 (Vil08) Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
 (Woj20) Stephan Wojtowytsch. On the global convergence of gradient descent training for two-layer ReLU networks in the mean field regime. In preparation, 2020.
Appendix A Random Features and Shallow Neural Networks
Lemma 4 applies to general models with an underlying linear structure, in particular random feature models. Both two-layer neural networks and random feature models have the form
$$f(x) \;=\; \frac1m\sum_{i=1}^m a_i\,\sigma(w_i^Tx+b_i),$$
but in random feature models, $(w_i,b_i)$ is fixed at the (random) initialization and only the coefficients $a_i$ are trained. An infinitely wide random feature model is described by
$$f_a(x) \;=\; \int a(w,b)\,\sigma(w^Tx+b)\,\rho_0\bigl(\mathrm d(w,b)\bigr),$$
where $\rho_0$ is a fixed distribution (usually spherical or standard Gaussian), while an infinitely wide two-layer neural network is described by
$$f_\rho(x) \;=\; \int a\,\sigma(w^Tx+b)\,\rho\bigl(\mathrm d(a,w,b)\bigr).$$
(approximationarticle, Example 4.3) establishes a Kolmogorov-width type separation between random feature models and two-layer neural networks of a similar form as the separation between two-layer neural networks and Lipschitz functions. Thus a curse of dimensionality also affects the training of infinitely wide random feature models when the target function is a generic Barron function. If $\rho_0$ is a smooth omnidirectional distribution and the target $f^*(x) = \sigma(\bar w^Tx+\bar b)$ is a single neuron activation, then the coefficient function $a$ must concentrate a large amount of mass near $(\bar w,\bar b)$, forcing its norm to blow up. In higher dimension, the blow-up is more pronounced since small balls on the sphere around $\bar w$ have faster decaying volume.
We train a two-layer neural network and a random feature model with gradient descent to approximate a single neuron activation target. Both models have the same width. Empirical risk is calculated using independent data samples, and population risk is approximated using a larger number of independent data samples. Both networks are initialized according to a Gaussian distribution as above.
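A minimal sketch of this comparison in Python/NumPy is given below (dimension, width, sample size, target neuron, step size and the omission of biases are illustrative simplifications); the only difference between the two models is whether the inner weights receive gradient updates.

import numpy as np

d, m, n, lr, steps = 20, 256, 2048, 0.2, 1000

rng = np.random.default_rng(2)
w_bar = rng.standard_normal(d)
w_bar /= np.linalg.norm(w_bar)
X = rng.standard_normal((n, d))
y = np.maximum(X @ w_bar, 0.0)                  # single-neuron ReLU target

def train(update_inner):
    rng_init = np.random.default_rng(3)         # identical initialization for both models
    a = np.zeros(m)
    W = rng_init.standard_normal((m, d)) / np.sqrt(d)
    for _ in range(steps):
        pre = X @ W.T
        act = np.maximum(pre, 0.0)
        res = act @ a / m - y
        grad_a = act.T @ res / (n * m)
        grad_W = (res[:, None] * (pre > 0) * a[None, :] / m).T @ X / n
        a -= lr * m * grad_a
        if update_inner:                        # two-layer network; frozen for random features
            W -= lr * m * grad_W
    return 0.5 * np.mean((np.maximum(X @ W.T, 0.0) @ a / m - y) ** 2)

print("random feature model, final empirical risk:", train(update_inner=False))
print("two-layer network,    final empirical risk:", train(update_inner=True))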