1 Introduction
As a consequence of the universal approximation theorems, sufficiently wide single-layer neural networks are expressive enough to accurately represent a broad class of functions [Cyb89, Bar93, PS91]. The existence of a neural network function arbitrarily close to a given target function, however, is not a guarantee that any particular optimization procedure can identify the optimal parameters. Recently, using mathematical tools from optimal transport theory and interacting particle systems, it was shown that gradient descent [RVE18b, MMN18, SS18, CB18b] and stochastic gradient descent converge asymptotically to the target function in the large data limit.
This analysis relies on taking a “mean-field” limit in which the number of parameters tends to infinity. In this setting, the gradient descent optimization dynamics is described by a partial differential equation (PDE), corresponding to a Wasserstein gradient flow on a convex energy functional. While this PDE provides a powerful conceptual framework for analyzing the properties of neural networks evolving under gradient descent dynamics, the formulation confers few immediate practical advantages. Nevertheless, analysis of this Wasserstein gradient flow motivates the interesting possibility of altering the dynamics to accelerate convergence.
In this work, we propose a dynamical scheme involving a parameter birth-death process. It can be defined on systems of interacting particles (as arise, e.g., in neural network optimization) or non-interacting particles. We prove that the resulting modified transport equation converges to the global minimum of the loss in both the interacting and non-interacting regimes (under appropriate assumptions), and we provide an explicit rate of convergence in the latter case for the mean-field limit. Interestingly, and unlike the gradient flow, the only fixed point of the dynamics we introduce is the global minimum of the loss function. We study the fluctuations of the finite-particle dynamics around this mean-field convergent solution, showing that they remain of the same order throughout the dynamics and therefore providing algorithmic guarantees directly applicable to finite single-layer neural network optimization. Finally, we derive algorithms that converge to the birth-death PDEs and verify numerically that these schemes accelerate convergence even for finite numbers of parameters.
In summary, we describe:
Global convergence and monotonicity of the energy with birth-death dynamics — We propose in Section 3 two distinct modifications of the original gradient flow that can be interpreted as birth-death processes. In this sense, the processes we describe amount to nonlocal mass transport in the equation governing the parameter distribution. We prove that the schemes we introduce increase the rate of contraction of the energy at any fixed time, compared to gradient descent and stochastic gradient descent, and derive asymptotic rates of convergence (Section 4).
Analysis of fluctuations and quenching protocol — The birth-death dynamics introduces additional fluctuations that are not present in gradient descent dynamics. In Section 5 we calculate these fluctuations using tools for measure-valued Markov processes and propose a simple quenching procedure to control fluctuations and ensure long-time convergence.
Algorithms for realizing the birth-death schemes — In Section 6 we detail numerical schemes (and provide implementations in PyTorch) of the birth-death schemes described below. The computational cost of implementing our procedure is minimal because no additional gradient computations are required. We demonstrate the efficacy of these algorithms on simple, illustrative examples in Section 7.
2 Related Work
Nonlocal update rules appear in various areas of machine learning and optimization. Derivative-free optimization [RS13] offers a general framework for optimizing complex nonconvex functions using nonlocal search heuristics. Notable examples include Particle Swarm Optimization [Ken11] and Evolution Strategies, such as the Covariance Matrix Adaptation method [Han06]. These approaches have found renewed interest in the optimization of neural networks in the context of Reinforcement Learning [SHC17, SMC17] and hyperparameter optimization [JDO17]. Our setup with non-interacting potentials is closely related to the so-called Estimation of Distribution Algorithms [BC95, LL01], which define update rules for a probability distribution over a search space by querying the values of a given function to be optimized. In particular, Information Geometric Optimization Algorithms [OAAH17] study the dynamics of parametric densities using ordinary differential equations, focusing on invariance properties. In contrast, our focus is on the combination of transport (gradient-based) and birth-death dynamics.
Dropout [SHK14] is a regularization technique, popularized by the AlexNet CNN [KSH12], that is reminiscent of a birth-death process. We note, however, that its mechanism is very different: rather than killing a neuron and replacing it by a new one with some rate, Dropout momentarily masks neurons, which become active again at the same position. In other words, Dropout implements a purely local transport scheme, as opposed to our nonlocal dynamics.
Finally, closest to our motivation is [WLLM18], who, building on the recent body of work that leverages optimal-transport techniques to study optimization in the large parameter limit [RVE18a, CB18b, MMN18, SS18], proposed a modification of the dynamics that replaces traditional stochastic noise by a resampling of a fraction of neurons from a fixed base measure. Our model differs significantly from this scheme; in particular, we show that our dynamics preserves the same global minimizers and accelerates the rate of convergence.
3 Mean-Field PDE and Birth-Death Dynamics
3.1 Mean-Field Limit and Liouville Dynamics
Gradient descent propagates the parameters locally in proportion to the gradient of the objective function. In some cases, an optimization algorithm can benefit from nonlocal dynamics, for example, by allowing new parameters to appear at favorable values and existing parameters to be removed if they diminish the quality of the representation. In order to exploit a nonlocal dynamical scheme, it is useful to interpret the parameters as a system of $n$ particles $x_i \in \Omega$, a $d$-dimensional differentiable manifold, evolving on a landscape determined by the objective function. Here we will focus on situations where the objective function may involve interactions between pairs of parameters:
$$E(x_1,\dots,x_n) = \frac{1}{n}\sum_{i=1}^n F(x_i) + \frac{1}{2n^2}\sum_{i,j=1}^n K(x_i,x_j), \qquad (1)$$
where $F$ is a single-particle energy function and $K$ is a symmetric, positive semi-definite interaction kernel. Interestingly, optimizing neural networks with the mean-squared loss function fits precisely this framework [RVE18b, MMN18, CB18b]. Consider a supervised learning problem using a neural network with nonlinearity $\varphi$. If we write the neural network as
$$f(y) = \frac{1}{n}\sum_{i=1}^n c_i\,\varphi(y; z_i), \qquad x_i = (c_i, z_i), \qquad (2)$$
and expand the loss function,
$$\ell = \frac{1}{2}\,\mathbb{E}_y\,\big|f^*(y) - f(y)\big|^2, \qquad (3)$$
we see that, up to an irrelevant constant depending only on the data distribution, we arrive at (1) with
$$F(x) = -\,c\,\mathbb{E}_y\big[f^*(y)\,\varphi(y; z)\big], \qquad x = (c, z), \qquad (4)$$
and
$$K(x, x') = c\,c'\,\mathbb{E}_y\big[\varphi(y; z)\,\varphi(y; z')\big]. \qquad (5)$$
We also consider non-interacting objective functions, in which $K \equiv 0$ in (1). Optimization problems that fit this framework include resource allocation tasks in which, e.g., weak performers are eliminated, Evolution Strategies, and Information Geometric Optimization [OAAH17].
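To make the objective above concrete, here is a minimal Monte-Carlo sketch of the single-particle term and the interaction kernel for a shallow network with mean-squared loss. The nonlinearity `phi`, the target `f_star`, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def phi(y, z):
    # Illustrative nonlinearity: a single "neuron" y -> tanh(<y, z>)
    return np.tanh(y @ z)

def F_hat(c, z, ys, f_star):
    # Monte-Carlo estimate of the single-particle term F(x) = -c E_y[f*(y) phi(y; z)]
    return -c * np.mean(f_star(ys) * phi(ys, z))

def K_hat(c1, z1, c2, z2, ys):
    # Monte-Carlo estimate of the kernel K(x, x') = c c' E_y[phi(y; z) phi(y; z')]
    return c1 * c2 * np.mean(phi(ys, z1) * phi(ys, z2))
```

Note that the sketch makes the claimed structure of the kernel visible: `K_hat` is symmetric in its two particle arguments and non-negative on the diagonal.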
In the case of gradient descent dynamics, the evolution of the particles is governed for $t \ge 0$ by
$$\dot x_i(t) = -\nabla F(x_i(t)) - \frac{1}{n}\sum_{j=1}^n \nabla_{x_i} K(x_i(t), x_j(t)). \qquad (6)$$
To analyze the dynamics of this particle system, we consider the “mean-field” limit $n \to \infty$. As the number of particles becomes large, the empirical distribution of particles
$$\mu_t^{(n)} = \frac{1}{n}\sum_{i=1}^n \delta_{x_i(t)} \qquad (7)$$
leads to a deterministic partial differential equation at first order [RVE18b, MMN18, CB18b, SS18],
$$\partial_t \mu_t = \nabla\cdot\big(\mu_t \nabla V(\,\cdot\,; \mu_t)\big), \qquad \mu_{t=0} = \mu^0, \qquad (8)$$
where $\mu_t$ is the weak limit of $\mu_t^{(n)}$ and $\mu^0$ is some distribution from which the initial particle positions are drawn independently. The potential $V$ is specified by the objective function as
$$V(x; \mu) = F(x) + \int_\Omega K(x, x')\, d\mu(x'), \qquad (9)$$
and (8) should be interpreted in the weak sense in general:
$$\forall\,\phi \in C_c^\infty(\Omega): \quad \frac{d}{dt}\int_\Omega \phi\, d\mu_t = -\int_\Omega \nabla\phi \cdot \nabla V(x; \mu_t)\, d\mu_t(x), \qquad (10)$$
where $C_c^\infty(\Omega)$ denotes the space of smooth functions with compact support on $\Omega$.
Interestingly, $V$ is the gradient with respect to $\mu$ of an energy functional $E[\mu]$,
$$E[\mu] = \int_\Omega F(x)\, d\mu(x) + \frac{1}{2}\int_{\Omega\times\Omega} K(x, x')\, d\mu(x)\, d\mu(x'). \qquad (11)$$
As a result, the nonlinear Liouville equation (8) is the Wasserstein gradient flow with respect to the energy functional $E$. Local minima of $V$ (where $\nabla V = 0$) are clearly fixed points of this gradient flow, but these fixed points need not be minimizers of the energy when $\mu$ does not have full support. When the initial distribution of parameters has full support, neural networks evolving with gradient descent avoid these spurious fixed points under appropriate assumptions about their nonlinearity [CB18b, RVE18b, MMN18].
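As an illustration of the finite-$n$ gradient flow underlying this limit, the following sketch performs one forward-Euler step of $\dot x_i = -\nabla V(x_i; \mu^{(n)}_t)$, with $V(x;\mu) = F(x) + \int K(x,\cdot)\,d\mu$. The gradient callables and the step size are placeholder choices, not the paper's code.

```python
import numpy as np

def grad_V(x, xs, grad_F, grad_K_x):
    # grad V(x; mu_n) = grad F(x) + (1/n) sum_j grad_x K(x, x_j)
    return grad_F(x) + np.mean([grad_K_x(x, xj) for xj in xs], axis=0)

def gd_step(xs, grad_F, grad_K_x, dt=0.05):
    # One forward-Euler step of the coupled gradient flow for all particles
    return np.array([x - dt * grad_V(x, xs, grad_F, grad_K_x) for x in xs])
```

In the non-interacting special case (`grad_K_x` returning zero), each particle simply descends $F$ independently.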
3.2 Birth-Death Augmented Dynamics
Here we consider a more general dynamical scheme that involves nonlocal transport of particle mass; remarkably, this dynamics avoids spurious fixed points and local minima, and converges asymptotically to the global minimum. Consider the following modification of the Wasserstein gradient flow above:
$$\partial_t \mu_t = \nabla\cdot\big(\mu_t \nabla V(\,\cdot\,;\mu_t)\big) - \alpha\, V(\,\cdot\,;\mu_t)\,\mu_t, \qquad \alpha > 0. \qquad (12)$$
The additional term $-\alpha V \mu_t$ is a birth-death term that modifies the mass of $\mu_t$. Where $V$ is positive, this mass will decrease, corresponding to the removal or “death” of parameters. Where $V$ is negative, this mass will increase, which can be implemented as the duplication or “cloning” of parameters. For a finite number of parameters, this dynamics could lead to changes in the architecture of the network. In many applications it is preferable to fix the total population, achieved by simply adding a conservation term to the dynamics,
$$\partial_t \mu_t = \nabla\cdot\big(\mu_t \nabla V(\,\cdot\,;\mu_t)\big) - \alpha\,\big(V(\,\cdot\,;\mu_t) - \bar V_t\big)\,\mu_t, \qquad (13)$$
where $\bar V_t = \int_\Omega V(x; \mu_t)\, d\mu_t(x)$.
The dynamics defined by (13) are well-defined in the space of probability measures (i.e., they preserve the mass and the positivity of $\mu_t$), and the birth-death terms improve the rate of energy decay, as shown by the following proposition:
Proposition 3.1
Let $\mu_t$ be a solution of (13) for the initial condition $\mu_0 \in \mathcal{P}(\Omega)$, the space of probability measures on $\Omega$. Then $\mu_t \in \mathcal{P}(\Omega)$ for all $t \ge 0$, and $E[\mu_t]$ satisfies
$$\frac{d}{dt}E[\mu_t] = -\int_\Omega |\nabla V(x;\mu_t)|^2\, d\mu_t(x) - \alpha\int_\Omega \big(V(x;\mu_t) - \bar V_t\big)^2\, d\mu_t(x) \le 0. \qquad (14)$$
Proof: Both statements are direct consequences of the definition, and are obtained by using $1$ and $V(\,\cdot\,;\mu_t)$ respectively as test functions in (13), together with the fact that $V$ is the gradient of $E$ with respect to $\mu$.
The birth-death term thus contributes to increase the rate of decay of the energy at all times. A natural question is whether such improved energy decay can lead to global convergence of the dynamics to the global minimum of the energy. As it turns out, the answer is yes: the fixed points of the birth-death PDEs (12) and (13) are the global minimizers of the energy $E$, as we prove in Section 4. How to implement a particle dynamics consistent with (13) is discussed in Sections 5 and 6.
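The energy-decreasing effect of the mass-conserving birth-death term can be checked numerically on a fixed grid, ignoring transport: grid weights $w_k$ evolve as $\dot w_k = -\alpha\,(V_k - \bar V)\,w_k$ with $\bar V = \sum_k V_k w_k$. The forward-Euler discretization and the final renormalization below are implementation choices for this sketch, not prescriptions from the paper.

```python
import numpy as np

def birth_death_step(w, V, alpha=1.0, dt=0.01):
    # dw_k/dt = -alpha * (V_k - Vbar) * w_k with Vbar = sum_k V_k w_k;
    # the renormalization only absorbs floating-point drift, since the
    # update conserves total mass exactly at the continuous level.
    Vbar = np.dot(V, w)
    w = w * (1.0 - dt * alpha * (V - Vbar))
    return w / w.sum()
```

One step moves mass from high-potential to low-potential grid points, so the discrete energy $\sum_k V_k w_k$ decreases whenever $V$ is not constant on the support of $w$.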
While the birth-death dynamics described above ensures convergence in the mean-field limit, when $n$ is finite, particles can only be created in proportion to the empirical distribution (7). In particular, such a birth process corresponds to “cloning”, i.e., creating identical replicas of existing particles. In practice, there may be an advantage to exploring parameter space with a distribution distinct from the instantaneous empirical particle distribution (7). To enable this exploration we introduce a birth term proportional to a distribution $\nu$, which we will assume has full support on $\Omega$. In this case, the time evolution of the distribution is described by
(15)  
That is, we kill particles in proportion to $\mu_t$ in regions where the birth-death rate is positive, but create new particles from $\nu$ in regions where it is negative.
This new birth-death dynamics also satisfies the consistency conditions of Proposition 3.1:
Proposition 3.2
Let $\mu_t$ be a solution of (15), with $\mu_0 \in \mathcal{P}(\Omega)$. Then $\mu_t \in \mathcal{P}(\Omega)$ for all $t \ge 0$, and $E[\mu_t]$ satisfies
(16) 
4 Convergence of Transport Dynamics with Birth-Death
Here, we compare the solutions of the original PDE (8) with those of the PDE (13) with birth-death. We restrict ourselves to situations where $F$ and $K$ in (11) are such that $E$ is bounded from below. Our main technical contributions are results about convergence towards global energy minimizers, as well as convergence rates as the dynamics approaches these minimizers. We consider the non-interacting and the interacting cases separately.
Under gradient descent dynamics, global convergence can be established with appropriate assumptions on the initialization and architecture of the neural network. [MMN18] establishes global convergence and provides a rate for neural networks with bounded activation functions evolving under stochastic gradient descent. Similar results were obtained in [CB18b, RVE18b], in which it is proven that gradient descent converges to the globally optimal solution for neural networks with particular homogeneity conditions on the activation functions and regularizers. Closely related to the present work, [WLLM18] provides a convergence rate for a “perturbed” gradient flow in which uniform noise is added to the PDE (8). It should be emphasized that, unlike our formulation, the addition of uniform noise changes the fixed point of the PDE, so only convergence to an approximate global solution can be obtained in that setting.
4.1 Non-Interacting Case
We consider first the non-interacting case, with $K \equiv 0$ so that $V = F$, under
Assumption 4.1
$F$ is a Morse function, coercive, and with a single global minimum located at $x_*$.
With no loss of generality we set $F(x_*) = 0$, since adding an offset to $F$ in (13) does not affect the dynamics. We also denote by $H_*$ the Hessian of $F$ at $x_*$; recall that a Morse function is such that its Hessian is non-degenerate at all its critical points (where $\nabla F = 0$), and that $F$ is coercive if $F(x) \to \infty$ as $x$ leaves every compact set. Our main result is
Theorem 4.2 (Global Convergence and Rate: Non-Interacting Case)
Assume that the initial condition $\mu_0$ of the PDE (12) has a density positive everywhere in $\Omega$. Then, under Assumption 4.1, the solution $\mu_t$ of (12) satisfies
$$\lim_{t\to\infty} E[\mu_t] = 0. \qquad (17)$$
In addition, we can quantify the convergence rate: if $E[\mu_0] < \infty$, then for all $\varepsilon > 0$, the time $t_\varepsilon$ needed to reach $E[\mu_{t_\varepsilon}] \le \varepsilon$ satisfies
(18) 
Furthermore, the rate of convergence becomes exponential in time asymptotically: for all $\varepsilon > 0$ there exist $t_\varepsilon \ge 0$ and $C > 0$ such that, for $t \ge t_\varepsilon$,
(19) 
In fact we show that
(20) 
The theorem is proven in Appendix A and is based on the following lemma obtained by solving the PDE (12) by the method of characteristics:
Lemma 4.3
The proof of Theorem 4.2 shows that the additional birth-death terms in the PDE (12) allow the measure to concentrate rapidly in the vicinity of $x_*$; subsequently, the transport term takes over and leads to the exponential rate of energy decay in (19). The proof also shows that, if we remove the transportation term from the PDE (12), the energy only decays algebraically (as $1/t$) asymptotically. This means that the combination of the transportation and birth-death terms accelerates convergence. A similar theorem can be proven for the PDE (15).
4.2 Interacting Case
Let us now consider the interacting case, in which $V$ is given by (9) with $K \not\equiv 0$. We make
Assumption 4.4
The set $\Omega$ is a $d$-dimensional differentiable manifold which is either closed (i.e. compact, with no boundary), or open (i.e. non-compact, without boundary), or the Cartesian product of a closed and an open manifold.
As well as,
Assumption 4.5
The kernel $K$ is symmetric, positive semi-definite, and twice differentiable in its arguments. If $\Omega$ is not closed, $K$ is bounded on $\Omega \times \Omega$, and $F$ is bounded from below and coercive, in the sense that for any $c > 0$ there exists a compact set $C \subset \Omega$ such that $F(x) > c$ for all $x \notin C$.
Assumption 4.5 guarantees that the quadratic energy in (11) is convex, and as a result that it has a minimum value. Assumption 4.5 also guarantees that this minimum is achieved; this is a technical condition that could perhaps be obtained under weaker assumptions. It should be noted that the technical assumptions on $F$ in the non-closed case prohibit some nonlinearities, such as ReLU. However, if the nonlinearity is homogeneous of degree one, then the proof can be adapted to the case where the parameters are restricted to a compact set. These global minimizers may not be unique, and satisfy:
(23) 
where the constant appearing in these conditions is a Lagrange multiplier enforcing unit mass. These equations are well known [Ser15], and it is also known that any solution to them has compact support (which may be all of $\Omega$ if $\Omega$ is closed). We recall the derivation of these conditions in Appendix B.
A well-known issue with the PDE (8) is that it potentially has many more fixed points than $E$ has minimizers: indeed, rather than (23), these fixed points only need to satisfy
(24) 
It is therefore remarkable that, if we pick an initial condition for the birth-death PDE (13) that has full support, the solution to this equation converges to a global minimizer of $E$:
Theorem 4.6 (Global Convergence to Global Minimizers: Interacting Case)
This theorem is proven in Appendix C. One aspect of the argument is based on the evolution equation (14) for $E[\mu_t]$. Since $E[\mu_t]$ is non-increasing by (14) and bounded from below by Assumption 4.5, the decay of the energy must eventually stop. This happens when both integrals in (14) are zero, i.e. when the first equation in (23) as well as (24) are satisfied. What remains to be shown is that the second equation in (23) must also be satisfied, which is done in Appendix C.
Regarding the rate of convergence, we have the following result:
Theorem 4.7 (Asymptotic Convergence Rate: Interacting Case)
Let $\mu_t$ denote the solution of (13) for an initial condition $\mu_0$ with $E[\mu_0] < \infty$ and with a density positive everywhere on $\Omega$. Assume that, as $t \to \infty$, $\mu_t$ converges to a minimizer $\mu_\infty$ with a positive density. Then there exist constants such that, for all sufficiently large $t$,
(26) 
The proof of this theorem is given in Appendix D where we prove
(27) 
In the remaining cases covered by our assumptions, a proof following the same steps carries through, and an analogous rate of convergence holds.
5 From Mean-Field to Particle Dynamics with Birth-Death
Here we show how to design a particle dynamics consistent with (13) at the mean-field level ($n \to \infty$) and analyze the scale of the fluctuations above that limit at finite $n$. For simplicity we consider the non-interacting case, in which $V = F$; $n$-particle dynamics consistent with (13) in the general case are considered below in Sec. 6. We proceed formally, and leave the details of a rigorous derivation of these results to a future publication.
The birth-death dynamics is represented as a continuous-time Markov process in which the particles have an exponential survival/duplication probability. To realize this process, we equip each particle evolving by the GD flow (6) with an exponential clock whose rate is set by the birth-death term, such that, with the corresponding probability per unit time, the particle is duplicated during a small time interval, and a particle chosen at random in the stack is killed to preserve the population size. If we focus only on this part of the dynamics, it is easy to write the infinitesimal generator of the measure-valued Markov process [Daw06] it induces at the level of the particle distribution defined in (7): the action of this generator on a functional evaluated on the empirical measure reads
(28)  
The operator in the second equality is obtained using the properties of the Dirac delta in the first equality, and this operator is now defined on any probability measure. If we formally take the limit as $n \to \infty$, we deduce that
(29)  
This is the generator of the deterministic evolution given by the PDE
(30) 
with $\bar V_t = \int_\Omega V\, d\mu_t$. This equation is nothing but the PDE (13) with $K \equiv 0$ and the transport term neglected. That is, we have formally established that this PDE arises in the mean-field limit as a consequence of the Law of Large Numbers (LLN) for the particle process described above.
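The jump process just described can be sketched as follows for a non-interacting potential: each particle carries a clock whose rate is the absolute centered potential, and a firing clock triggers a kill or a clone paired with a randomly chosen partner, so the population size is preserved. The time discretization of the clocks and the partner selection rule are illustrative choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def birth_death_jump(xs, V, alpha=1.0, dt=0.01):
    # One time slice [t, t+dt): clock i fires with probability
    # ~ alpha * |V(x_i) - Vbar| * dt.  A firing particle with above-average
    # potential dies (a random particle is copied over it); one with
    # below-average potential is duplicated onto a random victim.
    n = len(xs)
    Vx = V(xs)
    Vbar = Vx.mean()
    fire = rng.random(n) < alpha * np.abs(Vx - Vbar) * dt
    for i in np.flatnonzero(fire):
        j = rng.integers(n)
        if Vx[i] > Vbar:
            xs[i] = xs[j]   # kill i, replace by a copy of j
        else:
            xs[j] = xs[i]   # clone i over j
    return xs
```

Iterating this map concentrates the population in low-potential regions, in line with the mean-field picture above (here without the transport step).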
Quantifying the fluctuations above the LLN essentially amounts to going to the next order in the expansion in $1/n$. Proceeding similarly as we did to derive (29), we deduce that, as $n \to \infty$,
(31)  
This is a second-order operator, which indicates that at next order we should formally turn (30) into a stochastic differential equation by adding to this equation a Gaussian white-noise term of order $n^{-1/2}$ with a covariance structure consistent with (31). Since this noise is small when $n$ is large, it is simpler and more precise to phrase this result in terms of a Central Limit Theorem (CLT). Specifically, if $\mu_t^{(n)}$ denotes the empirical distribution and $\mu_t$ is the solution of (30), then, as $n \to \infty$,
(32) 
where the limiting object is a Gaussian random distribution whose evolution equation can be obtained by linearizing the aforementioned stochastic equation around the solution of (30). Formally,
(33) 
where the forcing is a white-noise term with covariance consistent with (31):
(34)  
Since the fluctuation field is Gaussian with zero mean, all its information is contained in its covariance, for which we can derive an equation from (33):
(35)  
It is easy to see that the fluctuations are mass-conserving, in the sense that the total mass of the fluctuation field vanishes for all $t$, since this is true initially and it is preserved by the dynamics. If we add the transport (i.e. the particles evolve by GD alongside the birth-death process described above), we get an additional term on the right-hand side of (35). A similar equation also holds for the interacting case, which essentially amounts to replacing $F$ by $V(\,\cdot\,;\mu_t)$; the particle process with transport and birth-death that arises in this case is described in Sec. 6.
The main conclusion to draw from the developments above is that implementing a birth-death process on top of the GD dynamics is simple, and that it leads to the PDE (13) in the mean-field limit, with small Gaussian fluctuations above that limit that scale as $n^{-1/2}$. These fluctuations are controlled on finite time intervals, and can be made to disappear at long time scales, when $\mu_t$ is close to converged, by decreasing the birth-death rate $\alpha$ to zero according to some schedule. Note also that the argument above explains why $\alpha$ should be kept of order one in $n$ to control the fluctuations, even though sending $\alpha$ to infinity would help at the mean-field level: in practice, $n$ remains finite, and the limits $n \to \infty$ and $\alpha \to \infty$ do not commute, which requires keeping $\alpha$ independent of the size of the network.
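One simple realization of such a quenching schedule is a power-law decay of the rate; the functional form, the name `alpha_schedule`, and the constants below are illustrative assumptions, not prescriptions from the paper.

```python
def alpha_schedule(t, alpha0=1.0, t0=100.0, p=1.0):
    # Power-law quench: alpha(t) -> 0 as t -> infinity, so the
    # birth-death fluctuations are suppressed at long times while the
    # rate stays near alpha0 during the early, transport-limited phase.
    return alpha0 / (1.0 + t / t0) ** p
```

The plateau time `t0` and exponent `p` would be tuned so that the quench kicks in only once the energy has stopped decreasing rapidly.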
6 Algorithms
Numerical schemes that converge to the PDEs presented in Sec. 3 are both straightforward to interpret and easy to implement. We denote by $(x_1, \dots, x_n)$ a configuration of particles in the interacting potential in (1), and by $x_i(t)$ the solution of the GD flow in this potential,
(36) 
with initial conditions drawn independently from $\mu^0$. The parameters evolve according to the procedure defined in Algorithm 1.
The algorithm implements gradient descent with birth-death for the particles. After each time interval, the contribution of each particle to the empirical potential is computed. The continuous-time Markov process described in Sec. 5 is then realized by removing or duplicating particles depending on the sign of the birth-death rate. A positive rate corresponds to decreasing mass in the PDE (12); as such, those particles survive with an exponentially decaying probability. On the other hand, a negative rate means that the mass is locally increasing at the mean-field level, so the particles duplicate at an exponential rate.
For some optimization problems, e.g. neural networks, calculating the potential directly is not possible, as it requires an integral over the data distribution. However, in this case, stochastic gradient descent provides an unbiased estimator for the potential and its gradient at no additional computational cost. We define the procedures clone and kill so that the particle system asymptotically converges to the birth-death dynamics (12). An algorithm for the reinjection dynamics (15) is similar and is given explicitly in Appendix E.
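A minimal sketch of a kill/clone step applied every `tau` units of time, assuming the per-particle rates `beta` (centered potential values) have already been computed. The final resampling used to restore the population size, and the function name itself, are convenient choices for this sketch, not necessarily the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def kill_clone(xs, beta, alpha, tau):
    # beta_i > 0: losing mass -> survive with probability exp(-alpha*tau*beta_i).
    # beta_i <= 0: gaining mass -> clone with probability 1 - exp(alpha*tau*beta_i).
    # Resampling at the end restores the population to exactly n particles.
    keep = []
    for x, b in zip(xs, beta):
        if b > 0:
            if rng.random() < np.exp(-alpha * tau * b):
                keep.append(x)
        else:
            keep.append(x)
            if rng.random() < 1.0 - np.exp(alpha * tau * b):
                keep.append(x)
    keep = np.asarray(keep)
    return keep[rng.integers(len(keep), size=len(xs))]
```

Because the rates are centered, at least one particle always survives before resampling, so the procedure is well-defined for any input.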
If the prior $\nu$ is used in the dynamics, the kill and clone procedures must be implemented differently. Particles with a negative birth-death rate can be duplicated in precisely the same fashion as in the first scheme. When the rate is positive, however, the kill procedure amounts to replacing the particle with a new one sampled from $\nu$. In our instantiation of the reinjection scheme, we use
(37) 
where sampling is understood componentwise, and we have assumed homogeneity of degree one of the nonlinearity in the first component. With this choice, the corresponding terms in (15) disappear, since the prior places no mass there.
7 Numerical Experiments
7.1 Mixture of Gaussians
We take as an illustrative example a mixture of Gaussians,
(38) 
which we approximate as a neural network with Gaussian nonlinearities with fixed standard deviation $\sigma$,
(39) 
denoting the parameters $x_i = (c_i, z_i)$. This is a useful test of our results because we can do exact gradient descent dynamics on the mean-squared loss function:
(40) 
Because all the integrals are Gaussian, this loss can be computed analytically, and so can the potential and its gradient.
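The claim that the loss is analytic rests on Gaussian integrals having closed forms; for instance, the overlap of two Gaussian bumps of common width satisfies $\int_{\mathbb{R}} e^{-(y-a)^2/2\sigma^2}\, e^{-(y-b)^2/2\sigma^2}\, dy = \sigma\sqrt{\pi}\, e^{-(a-b)^2/4\sigma^2}$. A quick numerical check of this identity (the function names are ours):

```python
import numpy as np

def bump(y, a, s):
    # Unnormalized Gaussian "neuron" centered at a with width s
    return np.exp(-(y - a) ** 2 / (2.0 * s ** 2))

def overlap_exact(a, b, s):
    # Closed form for the full-line integral of bump(., a, s) * bump(., b, s)
    return s * np.sqrt(np.pi) * np.exp(-(a - b) ** 2 / (4.0 * s ** 2))
```

The same completing-the-square manipulation gives closed forms for every term in the loss, which is why the gradient flow can be integrated exactly in this example.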
In Fig. 1, we show convergence to the energy minimizer for a mixture of three Gaussians (details and source code are provided in the SM). The nonlocal mass transport dynamics dramatically accelerates convergence towards the minimizer. While gradient descent eventually converges in this setting (there is no metastability), the dynamics is particularly slow as the mass concentrates near the minimum and maxima of the target function. With the birth-death dynamics, however, this mass readily appears at those locations. The advantage of the birth-death dynamics with a reinjection distribution is highlighted by choosing an unfavorable initialization in which the particle mass is concentrated around $y = 2$. In this case, both GD and GD with birth-death (12) do not converge on the timescale of the dynamics. With the reinjection distribution, new mass is created near the minimizer and convergence is achieved.
Figure 1. Top left: Convergence of the gradient descent dynamics without birth-death, with birth-death, and using a reinjection distribution. Top right: For appropriate initialization, the three dynamical schemes all converge to the target function. Bottom left: For bad initialization (narrow Gaussian distributed around y=2), GD and GD+birth-death do not converge on this timescale. Interestingly, with the reinjection via the distribution $\nu$, convergence to the global minimum is rapidly achieved. Bottom right: The configuration of the particles. Only with the reinjection distribution does mass appear at the required locations.
7.2 Student-Teacher ReLU Network
In many optimization problems, it is not possible to evaluate the potential exactly. Instead, it is typically estimated as a sample mean over a batch of data. We consider a student-teacher setup similar to [CB18a], in which we use single hidden layer ReLU networks to approximate a network of the same type with fewer neurons. We use as the target function a ReLU network with 50 input and 10 hidden units. We approximate the teacher with neural networks with a larger number of neurons (see SM). The networks are trained with stochastic gradient descent (SGD), and the minibatch estimate of the gradient of the output layer, which is computed at each step of SGD, is used to compute the rate that determines the birth-death process. In experiments with the reinjection distribution, we use (37) with a Gaussian prior.
As shown in Fig. 2, we find that the birth-death dynamics accelerates convergence to the teacher network. We emphasize that, because the birth-death dynamics is stochastic at finite particle numbers, the fluctuations associated with the process could be unfavorable in some cases. In such situations, it is useful to reduce the rate $\alpha$ as a function of time. On the other hand, in some cases we have observed much more dramatic accelerations from the birth-death dynamics.
8 Conclusions
The success of an optimization algorithm based on gradient descent requires good coverage of the parameter space so that local updates can reach the minima of the loss function quickly. Our approach liberates the parameters from a purely local dynamics and allows rapid reallocation to values at which they can best reduce the approximation error. Importantly, we have constructed the nonlocal birth-death dynamics so that it converges to the minimizers of the loss function. For a very general class of minimization problems, covering both interacting and non-interacting potentials, we have established convergence to energy minimizers under the dynamics described by the mean-field PDE with birth-death. Remarkably, for interacting systems with sufficiently short-ranged interaction terms we can guarantee global convergence. We have also computed the asymptotic rate of convergence with birth-death dynamics.
These theoretical results translate to dramatic reductions in convergence time for our illustrative examples. It is worth emphasizing that the schemes we have described are straightforward to implement and come with little computational overhead. Extending this type of dynamics to deep neural network architectures could accelerate the slow dynamics at the initial layers often observed in practice. Hyperparameter selection strategies based on evolutionary algorithms [SMC17] provide another interesting potential application of our approach.
While we have characterized the basic behavior of optimization under the birth-death dynamics, many theoretical questions remain. More explicit calculations of global convergence rates for the interacting case and tighter rates for the non-interacting case would be exciting additions. The proper choice of the reinjection distribution $\nu$ is another question worth exploring because, as highlighted in our simple example, favorable reinjection distributions can rapidly overcome slow dynamics. Finally, a mean-field perspective on deep neural networks would enable us to translate some of the guarantees here to deep architectures.
References

[Bar93] A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, May 1993.
[BC95] Shumeet Baluja and Rich Caruana. Removing the genetics from the standard genetic algorithm. In Machine Learning Proceedings 1995, pages 38–46. Elsevier, 1995.
[CB18a] Lénaïc Chizat and Francis Bach. A note on lazy training in supervised differentiable programming. Working paper or preprint, December 2018.
[CB18b] Lénaïc Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems 31, pages 3040–3050. Curran Associates, Inc., 2018.
[Cyb89] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, December 1989.
[Daw06] Donald Dawson. Measure-valued Markov processes. In École d’Été de Probabilités de Saint-Flour XXI—1991, pages 1–260. Springer, Berlin, Heidelberg, 2006.
[Han06] Nikolaus Hansen. The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation, pages 75–102. Springer, 2006.
[JDO17] Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
[Ken11] James Kennedy. Particle swarm optimization. In Encyclopedia of Machine Learning, pages 760–766. Springer, 2011.
[KSH12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[LL01] Pedro Larrañaga and Jose A. Lozano. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, volume 2. Springer Science & Business Media, 2001.
[MMN18] Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, August 2018.
[OAAH17] Yann Ollivier, Ludovic Arnold, Anne Auger, and Nikolaus Hansen. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, 18(18):1–65, 2017.
[PS91] J. Park and I. W. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3(2):246–257, June 1991.
[RS13] Luis Miguel Rios and Nikolaos V. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247–1293, 2013.
[RVE18a] Grant Rotskoff and Eric Vanden-Eijnden. Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in Neural Information Processing Systems 31, pages 7146–7155. Curran Associates, Inc., 2018.
[RVE18b] Grant M. Rotskoff and Eric Vanden-Eijnden. Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error. arXiv preprint arXiv:1805.00915, May 2018.
[Ser15] Sylvia Serfaty. Coulomb Gases and Ginzburg–Landau Vortices. European Mathematical Society Publishing House, Zürich, 2015.
[SHC17] Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
[SHK14] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[SMC17] Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Deep neuroevolution: genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.
[SS18] Justin Sirignano and Konstantinos Spiliopoulos. Mean field analysis of neural networks. arXiv preprint arXiv:1805.01053, May 2018.
[WLLM18] Colin Wei, Jason D. Lee, Qiang Liu, and Tengyu Ma. On the margin theory of feedforward neural networks. arXiv preprint arXiv:1810.05369, October 2018.
Appendix A Convergence and Rates in the Non-Interacting Case
A.1 Non-Interacting Case without the Transportation Term
Let us look first at the PDE satisfied by the measure $\mu_t$ in the non-interacting case, i.e. with $V = F$ satisfying Assumption 4.1, and without the transportation term:
$$\partial_t \mu_t = -\alpha\,(F - \bar F_t)\,\mu_t, \qquad (41)$$
where $\bar F_t = \int_\Omega F\, d\mu_t$. This equation can be solved exactly. Assuming that $\mu_0$ has a density $\rho_0$ everywhere positive on $\Omega$, $\mu_t$ has a density $\rho_t$ given by
$$\rho_t(x) = \rho_0(x)\, e^{-\alpha t F(x) + \alpha\int_0^t \bar F_s\, ds}. \qquad (42)$$
The normalization condition $\int_\Omega \rho_t(x)\, dx = 1$ leads to
$$e^{\alpha\int_0^t \bar F_s\, ds} = \Big(\int_\Omega \rho_0(x)\, e^{-\alpha t F(x)}\, dx\Big)^{-1}.$$
Therefore, by plugging this last expression into equation (42), we obtain the explicit expression
$$\rho_t(x) = \frac{\rho_0(x)\, e^{-\alpha t F(x)}}{\int_\Omega \rho_0(x')\, e^{-\alpha t F(x')}\, dx'}. \qquad (43)$$
We can use this equation to express the energy $E[\mu_t] = \int_\Omega F\,\rho_t\, dx$:
$$E[\mu_t] = -\frac{1}{\alpha}\frac{d}{dt}\log M(t), \qquad (44)$$
where $M$ is the function defined as
$$M(t) = \int_\Omega \rho_0(x)\, e^{-\alpha t F(x)}\, dx. \qquad (45)$$
At late times, the factor $e^{-\alpha t F(x)}$ focuses all the mass in the vicinity of the global minimum of $F$. Therefore, we can neglect the influence of the density $\rho_0$ in this integral. More precisely, a calculation using the Laplace method indicates that
$$M(t) \simeq \rho_0(x_*)\,\Big(\frac{2\pi}{\alpha t}\Big)^{d/2} (\det H_*)^{-1/2}, \qquad (46)$$
where $H_*$ is the Hessian of $F$ at the global minimum located at $x_*$, and $\simeq$ indicates that the ratio of both sides of the equation tends to 1 as $t \to \infty$. This shows that
$$E[\mu_t] \simeq \frac{d}{2\alpha t}. \qquad (47)$$
A.2 Non-Interacting Case with Transportation and Birth-Death
A.2.1 Proof of Theorem 4.2
We first prove the following intermediate result
Lemma A.1
Let $\varepsilon > 0$ be arbitrary, and define
Then
(48) 
Proof: By slightly abusing notation, we define
We consider the following Lyapunov function:
(49) 
Its time derivative is
(50) 
By definition, we have
(51) 
We also have