I Introduction
Understanding the effect of depth and width on inference is among the central goals of the modern theory of neural networks. Recent theoretical advances have elucidated the behavior of networks in the infinite-width limit, in which the complexity introduced by depth washes out and inference is described by Gaussian process regression [24, 32, 18, 23, 34, 15, 17]. However, inference at finite widths, where hidden layers retain the flexibility to learn task-relevant representations, remains incompletely understood [2, 37, 36, 17, 26]. In the setting of gradient-based maximum likelihood optimization, some insights have been gained through the study of finite overparameterized deep linear neural networks [11, 28, 35, 4]. In the fully Bayesian setting, the behavior of this simple class of models has been characterized in several limiting cases, including asymptotically at large but finite width [24, 2, 36, 19, 26]. However, a unifying perspective on these results is lacking, and our understanding of inference in deep linear Bayesian neural networks (henceforth BNNs) at finite width remains incomplete.
Here, we make the following contributions toward a more comprehensive understanding of BNN inference:

We express the moment generating function of the posterior predictive of a finite, overparameterized deep BNN as a data-dependent continuous scale mixture of Gaussian process (GP) generating functions. This scale average induces coupling across output channels, and complements previous interpretations of deep BNNs in terms of mixing over an adaptive kernel distribution [25, 1]. This observation is mathematically straightforward, but yields some useful insights into inference in finite BNNs. We extend this argument to compute the posterior mean feature kernel of the network’s first layer, allowing us to study the representations learned by finite BNNs.
We study the asymptotic behavior of these scale mixtures in several limits, allowing us to connect our results to previous work on the asymptotics of BNNs [36, 2, 26, 19]. We identify several interesting areas for future investigation, and point to challenges for precise characterization of how BNNs behave in certain asymptotic regimes.
II Setup
We begin by defining our setup and our notation, which is mostly standard [14, 30, 21, 31]. Depending on context, ‖·‖ will denote the Euclidean norm on vectors or the Frobenius norm on matrices. We will use the shorthand that integrals without specified domains are taken over all real matrices of the implied dimension. We use the standard Loewner order on real symmetric matrices, such that A ⪰ 0 (respectively A ≻ 0) means that the matrix A is positive semidefinite, or PSD (respectively positive-definite, or PD). For a matrix A, we let vec(A) be its row-major vectorization. Then, denoting the Kronecker product by ⊗, we have vec(AXB) = (A ⊗ Bᵀ) vec(X) for conformable matrices A, X, and B.

For a set of compatibly-sized weight matrices, we define a deep BNN as the linear map
(1) 
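The row-major vectorization identity quoted above can be checked numerically; a minimal sketch with arbitrary names and shapes (not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))
X = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))

def vec(M):
    """Row-major vectorization: concatenate the rows of M into one vector."""
    return M.reshape(-1)

# With row-major vec, vec(A X B) = (A kron B^T) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(A, B.T) @ vec(X)
assert np.allclose(lhs, rhs)
```

Note that NumPy's default C ordering is row-major, so `reshape(-1)` implements exactly this convention; the column-major convention would instead give the familiar (Bᵀ ⊗ A) factor.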
We will assume that the “hidden layer widths” are all greater than or equal to the output dimension, such that the rank of the end-to-end weight matrix is not constrained by an intermediate bottleneck. We make the standard choice of isotropic Gaussian priors over the weight matrices:
(2) 
with variances chosen such that the prior variances of the activations at any layer do not diverge with increasing width [20, 24, 32, 18, 23, 34, 15]. One could allow general layer-dependent variances, but for linear BNNs the additional factors can always be absorbed into the definition of the input so long as they are finite and nonzero. Thus, for the sake of notational clarity, we make the simplest choice of prior variances.

For a training dataset of finitely many examples, we choose an isotropic Gaussian likelihood
(3) 
We will refer to the inverse variance as the inverse temperature, in analogy with statistical mechanics. The Bayes posterior over the weight matrices is then given, up to normalization, by the product of the likelihood and the prior.
We collect the training inputs and targets into data matrices. We will sometimes find it useful to consider a differentiated test dataset with corresponding data matrices. For these data, we define the associated normalized Gram matrices between the training and test inputs and targets. Our assumptions on the data will be given purely in terms of conditions on these Gram matrices. In particular, we will assume that the training input Gram matrix is invertible; other conditions will be introduced as needed. We note that this invertibility condition, combined with our assumption that the hidden layer widths are wide enough such that the end-to-end weight matrix is not rank-constrained, means that the BNNs we consider can linearly interpolate their training data, and are thus overparameterized.
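To make the setup concrete, the following sketch samples from the prior of a deep linear network with the 1/fan-in weight variances described above, and checks that the per-component output variance neither grows nor shrinks with depth. All dimensions and names here are hypothetical choices of ours, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, widths, n_out = 10, [30, 30], 2   # hypothetical sizes; widths >= n_out

def sample_output(x):
    """One draw of f(x) from the prior: a product of Gaussian weight
    matrices with per-entry variance 1/fan_in, applied to x."""
    h = x
    for fan_out in widths + [n_out]:
        fan_in = h.shape[0]
        W = rng.normal(scale=fan_in ** -0.5, size=(fan_out, fan_in))
        h = W @ h
    return h

x = rng.normal(size=d)
draws = np.stack([sample_output(x) for _ in range(5000)])
# Each output component has prior variance |x|^2 / d, independent of depth.
print(draws.var(axis=0), x @ x / d)
```

The 1/fan-in scaling is what keeps the empirical variances pinned near |x|²/d at every depth; with width-independent variances they would grow or shrink geometrically with depth.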
III Scale-averaging in deep BNNs
III-A The function-space prior as a scale mixture
We begin with the nearly trivial observation that, for a given input data matrix, the induced prior over network outputs can be expressed as a continuous scale mixture of matrix Gaussians. This expression will prove useful in our subsequent study of the posterior predictive by allowing us to compute integrals over network outputs rather than over network weights. This simplification is enabled by the fact that the likelihood (3) models the targets as being independent of the parameters given the network outputs.
Recall from §II that the prior distribution of the first layer’s weight matrix is a matrix Gaussian:
(4) 
Then, for fixed weights in the deeper layers, the distribution of the network outputs induced by the prior over the first-layer weights can be read off using the properties of the matrix Gaussian under linear transformations [30]:
(5) 
where we have recognized the normalized input Gram matrix and defined the scale matrix
(6) 
For this to make sense, both the Gram matrix and the scale matrix must be of full rank. As stated in §II, we assume the dataset to be such that the input Gram matrix is invertible. Moreover, denoting by its induced prior the distribution of the scale matrix under the priors over the deeper-layer weights, the stated assumption that the hidden layer widths are at least the output dimension implies that the scale matrix is invertible almost surely [14, 30].
Using the law of total expectation, we conclude that the prior over outputs for any fixed set of inputs with invertible Gram matrix is a continuous scale mixture of matrix Gaussians, with the prior density explicitly given as
(7) 
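To illustrate the scale-mixture structure in the simplest nontrivial case, consider a two-layer network with scalar output evaluated at a single input. Integrating out the first layer leaves the output Gaussian given a random scale, here the squared norm of the readout vector. The following simulation (our own notation and hypothetical sizes, not the paper's) checks that direct weight-space sampling and the mixture representation give matching second and fourth moments:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n1, N = 5, 8, 100_000
x = rng.normal(size=d)
s2 = x @ x / d                      # per-unit variance after the first layer

# Direct sampling: f(x) = w2 . (W1 x), per-entry prior variances 1/d and 1/n1.
W1 = rng.normal(scale=d ** -0.5, size=(N, n1, d))
w2 = rng.normal(scale=n1 ** -0.5, size=(N, n1))
f_direct = np.einsum('ki,kij,j->k', w2, W1, x)

# Mixture sampling: draw q = |w2|^2 ~ chi2(n1)/n1, then f | q ~ N(0, q |x|^2/d).
q = rng.chisquare(n1, size=N) / n1
f_mix = rng.normal(size=N) * np.sqrt(q * s2)

# Both should match E[f^2] = s2 and E[f^4] = 3 (1 + 2/n1) s2^2.
print(np.mean(f_direct ** 2), np.mean(f_mix ** 2))
print(np.mean(f_direct ** 4), np.mean(f_mix ** 4))
```

The heavy-tailed fourth moment, exceeding the Gaussian value 3 s2² by the factor (1 + 2/n1), is the signature of the scale mixing: it vanishes only as the width n1 grows.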
For a depth-two network, the scale matrix follows a Wishart distribution, which simplifies to a scalar Gamma-distributed random variable when the output is one-dimensional [30]. These results allow one to easily write down the density of the scale with respect to Lebesgue measure in the two-layer case. We note that the density of the scale is expressible for deeper networks in terms of the Meijer G-function [9]; we will not further pursue this line of analysis in the present work. One could also integrate out weight matrices beyond the first, but this would yield more complicated formulas for the prior density, which do not permit easy analysis of the posterior predictive [36, 37]. In particular, it is unclear how one might obtain an exact expression for the joint function-space prior density for all examples [37].
III-B The cumulant generating function of the posterior predictive
We now exploit the observations of the previous section to study the posterior predictives of finite
BNNs. To do so, we will consider the moment generating function
(8) 
of the posterior predictive for some test data. To leverage the mixture-of-Gaussians interpretation of the prior, we express the generating function as an integral over the function outputs on the training and test sets, yielding
(9) 
in terms of the joint prior over training and test outputs, where the implied constant of proportionality ensures that the generating function equals unity at zero source. Here, the joint prior is given by substituting the combined dataset into (7) under the temporary assumption that the Gram matrix of the combined dataset is invertible.
We now exchange integration over the function outputs with expectation over the scale matrix, which allows us to evaluate the Gaussian integrals exactly. This calculation is easily performed using row-major vectorization [21]; we defer a detailed sketch to the Appendix and merely summarize the result here. We define the symmetric matrix
(10) 
the symmetric matrix
(11) 
and the vector
(12) 
We let the effective scale distribution be a probability measure over positive semidefinite matrices, defined by its density
(13) 
with respect to the prior scale measure; the implied constant of proportionality ensures proper normalization. Then,
(14) 
From this moment generating function, we can immediately read off that the mean and covariance of the posterior predictive are
(15) 
and
(16) 
respectively, where we define a corresponding auxiliary matrix. We remark that all of these results extend to the training-set predictor with the obvious replacement of the test data by the training data.
We recognize (14) as a scale-average of the GP generating function of a single-layer BNN, for which the scale is fixed [24, 32, 18, 23, 34, 15, 31]. Similarly, the mean predictor is a scale-average of GP mean predictors, while the predictor covariance includes an additional term beyond the average of the GP covariance, as per the law of total covariance [30]. We emphasize that the scale distribution is data-dependent: depth allows the BNN to adaptively couple its output channels in a way that a single-layer network cannot. We finally remark that, unlike in studies of gradient-based maximum likelihood estimation in deep linear networks [11, 28, 35, 4], no exceptional assumptions on the weight distribution or data are required to obtain this intuitive picture.

The above results are rendered somewhat complicated by the need to average over PSD matrices. In the scalar-output case, the situation simplifies substantially, as the scale variable is now a scalar and the Kronecker products can be eliminated. Concretely, for scalar outputs, we define
(17)  
(18)  
(19) 
and let the effective scale distribution be a probability measure on the positive half-line, defined by its density
(20) 
with respect to the prior scale measure; the implied constant of proportionality ensures proper normalization. Then, the cumulant generating function of the posterior predictive of a deep BNN with scalar output can be expressed as
(21) 
From this, we obtain correspondingly simplified expressions for the predictor mean and covariance, which reduce to
(22) 
and
(23) 
respectively. Again, these results represent scale-averages of shallow GP predictors, but they are of a simpler form thanks to the lack of mixing between outputs. Even in this simplified setting, and even if one makes a further restriction to the case in which there is only a single training example, the averages defy exact analysis for general values of the hyperparameters due to the scale-dependent terms in the exponent [9].
III-C The zero-temperature limit
Though analysis of the scale distribution is challenging for general values of the likelihood variance, the situation simplifies somewhat in the zero-temperature limit of vanishing likelihood variance. In this limit, the likelihood tends to a collection of Dirac masses that enforce the constraint that the BNN interpolates its training set. For this interpretation to be sensible at the level of the posterior predictive, the training dataset must be linearly interpolatable, i.e., there must exist some weight matrix that maps the training inputs exactly to the targets. We will focus on this case, and operate under the assumption that the training input Gram matrix is invertible. Then, we expect all expectations over scales to be sufficiently regular that we can interchange the zero-temperature limit with the integrals, which should allow us to compute them using the pointwise limit of the density. We note that, though this limit is convenient for theoretical analysis, it is somewhat unnatural from a Bayesian perspective, as it models the targets as a deterministic function of the outputs [20, 33].
Under these regularity assumptions, we expect the relevant quantities to have well-defined almost-sure low-temperature limits, which yield the almost-sure limiting behavior
(24)  
(25) 
Then, the limiting mean predictor simplifies to
(26) 
This precisely corresponds to the least-norm pseudoinverse solution to the linear interpolation constraints, which is intuitively sensible. Moreover, we have the limiting covariance
(27) 
which is precisely the GP posterior sample–sample covariance, multiplied by a coupling between output channels.
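The identification of the limiting mean predictor with the least-norm interpolant can be checked directly: for an overparameterized linear problem, the pseudoinverse (minimum-norm) solution gives test predictions expressible purely through normalized Gram matrices. A sketch in our own notation, with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, p_test, n_out = 6, 20, 4, 3     # p <= d: overparameterized
X = rng.normal(size=(p, d))           # training inputs (rows)
Xs = rng.normal(size=(p_test, d))     # test inputs
Y = rng.normal(size=(n_out, p))       # training targets

# Least-norm solution of W X^T = Y via the pseudoinverse.
W = Y @ np.linalg.pinv(X.T)
assert np.allclose(W @ X.T, Y)        # interpolates the training data

# Equivalent Gram-matrix form of the test predictions: Y K_xx^{-1} K_{x,x*}.
K_xx = X @ X.T / d
K_xxs = X @ Xs.T / d
assert np.allclose(W @ Xs.T, Y @ np.linalg.solve(K_xx, K_xxs))
```

The normalization by the input dimension cancels between the two Gram matrices, so the prediction depends only on their ratio, exactly as in the limiting mean predictor above.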
This argument also yields an approximate density
(28) 
In the two-layer case, where the scale follows a Wishart distribution, the limiting density with respect to Lebesgue measure on PSD matrices should then be given as
(29) 
This implies that the scale follows a matrix generalized inverse Gaussian (MGIG) distribution at low temperatures [8, 10]:
(30) 
This observation yields several insights. First, it implies that the moment generating functions of the scale and its inverse are given in terms of Bessel functions of matrix argument of the second kind [8, 10, 7, 13, 9]. Second, neither the mean nor the reciprocal mean of the MGIG is known in closed form for general values of the parameters [8, 10]. We will therefore resort to studying the behavior of these expectations in various asymptotic limits in §V. However, reasonably efficient algorithms for sampling from the MGIG are available; the situation is of course particularly simple in the scalar case [10]. Therefore, this formulation could allow faster numerical studies of two-layer BNNs at low temperatures than is possible through naïve sampling of the weights, as the dimensionality of the search space is reduced from the total number of weights to that of the scale matrix.
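In the scalar case the MGIG reduces to the ordinary generalized inverse Gaussian (GIG) distribution, for which samplers are readily available, e.g. `scipy.stats.geninvgauss`, whose density is proportional to x^(p-1) exp(-b (x + 1/x) / 2). The parameters below are arbitrary placeholders for the width- and data-dependent quantities, chosen only to illustrate the sampling-based route to the mean and reciprocal mean:

```python
import numpy as np
from scipy.stats import geninvgauss
from scipy.special import kv   # modified Bessel function of the second kind

p_param, b_param = 2.5, 1.5    # placeholder GIG parameters, not the paper's
samples = geninvgauss.rvs(p_param, b_param, size=200_000, random_state=0)

# Monte Carlo estimates of the mean scale and mean inverse scale, which
# control the zero-temperature predictor statistics in the scalar case.
mc_mean, mc_inv_mean = samples.mean(), (1.0 / samples).mean()

# Exact moments of this symmetric GIG: E[x^h] = K_{p+h}(b) / K_p(b).
exact_mean = kv(p_param + 1, b_param) / kv(p_param, b_param)
exact_inv_mean = kv(p_param - 1, b_param) / kv(p_param, b_param)
print(mc_mean, exact_mean, mc_inv_mean, exact_inv_mean)
```

This also illustrates the Bessel-function structure of the moments mentioned above, here in the scalar specialization.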
IV Average first-layer feature kernels in BNNs
We now use the methods of §III to study the average feature kernels of deep BNNs. For technical convenience, we restrict our attention to the kernel of the first hidden layer evaluated on the training set:
(31) 
Then, we can proceed as before to integrate the first-layer weights out of the posterior moment generating function of the kernel:
(32) 
As discussed in the Appendix, the required computation is straightforward, as all integrals are Gaussian. Whereas for the predictor we considered the full generating function, here we focus only on the posterior mean of the kernel. Defining the scale-dependent matrix
(33) 
the posterior-averaged feature kernel can be expressed as
(34) 
As the corresponding shallow GP result is simply the input Gram matrix, this yields a natural interpretation of the mean kernel of a finite-width deep BNN as the GP kernel plus some correction. Though the complexity of the correction matrix renders this result somewhat less than fully transparent, the situation again simplifies for the case of scalar output, for which we have
(35) 
We observe that the first term in this result is the outer product of the non-scale-averaged mean training-set predictor with itself. Strikingly, the scale-dependent matrix defined above, when evaluated at unit scale, is precisely the matrix that appears as the asymptotic correction to the average kernel computed in our previous work [36].
V Asymptotic behavior of BNNs
We now consider the asymptotic behavior of BNNs in various limits, allowing us to connect our results to those of previous works. So as to make contact with as many previous works as possible [24, 32, 18, 23, 34, 15, 36, 26, 19, 2], we will largely focus on the behavior of the average kernel. In all cases, we will assume that the hidden layer widths are of a comparable scale, such that their ratios remain fixed as the width is taken to be large. As in the rest of the paper, we will assume that the training input Gram matrix is invertible, which requires that the dataset size not exceed the input dimension. Thus, in limits in which the dataset size is taken to be large, we implicitly also take the input dimension to be large. We will only consider limits in which the depth is held fixed and finite, or, at least, tends to infinity far more slowly than the hidden layer width, such that the depth-to-width ratio is perturbatively small [36, 26, 12]. For the sake of analytical tractability, we will often restrict our attention to two-layer networks in the zero-temperature limit. For notational brevity, we define the ratios of dataset size and of output dimension to hidden layer width, which, under our assumptions, remain bounded.
V-A Infinite width with fixed input dimension, output dimension, and dataset size
We first consider the regime in which the hidden layer widths tend to infinity with fixed input dimension, output dimension, training dataset size, and depth. This is the most commonly considered asymptotic regime for BNNs [24, 32, 18, 23, 34, 15, 36, 26]. In this limit, a simple saddle-point argument shows that the data-dependence of the scale distribution can be neglected, and that the expectations over scales should be dominated by the mode of the prior scale distribution, which is the identity [29]. Applying this result to evaluate the expectations in the posterior predictive (14), we recover the expected correspondence between infinitely wide BNNs and Gaussian processes [24, 32, 18, 23, 34, 15, 36, 26].
Moreover, we can use this simple argument to recover the leading asymptotic correction to the average hidden layer kernel computed in our previous work [36]. As the expectation in (34) carries an overall width-dependent prefactor, the leading correction is simply given by evaluating the correction matrix at the saddle-point value of the scale; corrections to the saddle point at large but finite widths will lead to subleading corrections to the kernel [29]. After some algebraic simplification, this yields
(37) 
This matches the result of [36], and is consistent with the observation in §IV that the finite-width kernel in the scalar-output setting is simply the average of the asymptotic correction over scales. More generally, one could treat the deviation of the scale from the identity as a small perturbation, and use perturbative methods similar to those of our previous work to recover the results on corrections to predictor statistics given there [36].
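The saddle-point concentration at unit scale can be visualized in the two-layer scalar-output case, where the prior scale is a normalized chi-square variable that concentrates at 1 as the width grows (a sketch in our own notation):

```python
import numpy as np

rng = np.random.default_rng(0)
for n1 in (10, 100, 1000):
    # Prior scale in a two-layer scalar-output network: the squared norm
    # of n1 readout weights, each of variance 1/n1, i.e. chi2(n1) / n1.
    q = rng.chisquare(n1, size=200_000) / n1
    print(n1, q.mean(), q.var())   # mean near 1, variance 2/n1 shrinking to 0
```

The shrinking fluctuations around unit scale are what make the data-dependence of the scale distribution negligible at infinite width, collapsing the mixture onto a single GP.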
V-B Large dataset size with fixed width and output dimension
We next consider the regime in which the dataset size is taken to be large relative to the hidden layer width and output dimension. This regime is of interest because one expects posterior concentration to occur in the large-dataset regime [20, 33]. Focusing on the zero-temperature limit, we make the simple approximation of neglecting all terms in the density (29) that do not scale with the dataset size. We then evaluate the integral over the scale by a saddle-point approximation. This yields an average kernel of
(38) 
under the reasonable assumption that the relevant Gram matrix is of full rank in this regime. Notably, the correction to the GP kernel need not be vanishingly small.
V-C Large width and dataset size with fixed output dimension
We now consider the limit in which the hidden layer width and training dataset size tend to infinity for fixed depth and output dimension, as previously studied by Li & Sompolinsky [19]. We focus, as those authors did, on the zero-temperature limit, and restrict our attention to the two-layer case for the sake of analytical tractability. Then, exploiting the results of §III-C, we expect the expectations over scales to be dominated by the mode of the MGIG. Concretely, we neglect terms that are subleading in this joint limit, while keeping those that we expect to remain of order one. Then, the mode of the scale distribution is determined by a continuous-time algebraic Riccati equation (CARE) [7, 10]:
(39) 
Using the fact that the solutions of this equation commute with its coefficient matrix [7], this is identical to the defining equation for the “renormalization matrix” of [19] in the depth-two case. In particular, using the results of §III-C and §IV, this immediately implies that we recover their results for the predictor statistics and zero-temperature kernel.
After some simplification, the solution to this CARE yields
(40) 
but there may be corrections to the saddle point at a nonvanishing ratio of dataset size to width (in particular, if the dataset size grows too rapidly with the width [29]). To leading order in this ratio, we have
(41) 
which is easily seen to agree with the result in the fixed-dataset-size regime upon expanding at small dataset-size-to-width ratio.
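CAREs of this type can be solved numerically with standard control-theory tools. Since the coefficient matrices in (39) depend on quantities not reproduced here, the sketch below only demonstrates the mechanism on arbitrary placeholder coefficients, verifying that SciPy's solver returns a solution of A^T X + X A - X B R^{-1} B^T X + Q = 0:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Arbitrary placeholder coefficients (not the paper's): Q PSD, R PD.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = solve_continuous_are(A, B, Q, R)
residual = A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T) @ X + Q
print(np.abs(residual).max())   # essentially zero: X solves the CARE
```

In practice one would substitute the data-dependent coefficient matrices of (39) for these placeholders to obtain the mode of the scale distribution numerically.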
V-D Large width and output dimension with fixed dataset size
Our analysis in the preceding sections was facilitated by the fact that the dimensionality of the scale integral remained finite. However, regimes in which the number of outputs tends to infinity with the hidden layer width can also be of interest. In particular, this limit is relevant to the study of autoencoding in high dimensions, and potentially also to classification tasks with many classes (e.g., ImageNet [27]). Though the same techniques that permit easy asymptotic analysis of other limits cannot be directly applied [29], the problem of computing kernel statistics can be reformulated as an integral over the kernel matrices themselves, as noted by Aitchison [2]. Then, provided that the dataset size is held fixed, the kernel can be computed using a saddle-point approximation. As in the case above, it is easiest to make analytical progress in two-layer networks at zero temperature. There, one finds that the limiting kernel is determined by the solution to the CARE [2, 3]
(42) 
The solution to this CARE yields
(43) 
In particular, in the appropriate special case we recover Aitchison’s [2] result for the limiting kernel. More generally, we observe that this result is suggestively similar to the kernel in the case of large width and dataset size at finite output dimension. In particular, this result can be recovered by making what is in principle an unjustified naïve Laplace approximation to the integral over scales as in the preceding section, keeping the appropriate terms and ignoring possible corrections to the saddle point from the high-dimensional measure. Further exploration of this correspondence will be an interesting subject for future investigation.
V-E Jointly large width, output dimension, and dataset size
Finally, one might consider the regime in which the hidden layer width, output dimension, and dataset size tend jointly to infinity. This regime is more challenging to study than those discussed previously, as there is not a clear way to reduce the problem to a finite-dimensional integral. The natural setup for this joint asymptotic limit is a random-design teacher–student setting, in which the input examples are independent and identically distributed samples from some distribution and the targets are generated by a linear model with a random coefficient matrix. Then, the BNN problem is closely related to the random-design linear-rank matrix inference task, which is known to be challenging to analyze [6, 5, 22]. We direct the interested reader to recent works by Barbier and Macris [5] and by Maillard et al. [22] on this problem, and defer more detailed analysis to future work.
VI Discussion and conclusions
In this short paper, we have studied some aspects of inference in finite overparameterized BNNs. We presented a simple argument that leads to a clear conceptual picture of the effect of depth, and exploited those methods to connect the results of previous studies. However, we note that our approach is specialized to linear networks, and would not extend easily to nonlinear BNNs. Taken together, our results provide some insight into finitewidth effects in a model where depth does not affect the hypothesis class, but does affect inference.
The output-mixing scale-average interpretation studied in this work complements previous interpretations of deep BNNs as mixtures of GPs across a data-adaptive distribution of the kernel that measures similarities between input examples. This interpretation has been pursued in a series of recent works by Aitchison and colleagues [1, 2, 3], starting with the above-mentioned work on kernel statistics in deep BNNs [2]. Those authors have also studied a class of models that generalizes this interpretation of deep BNNs by explicitly fixing prior distributions over data-adaptive kernels [2, 3]. This adaptive-kernel description has also recently been considered by Pleiss and Cunningham [25], who showed that the mean predictor of a two-layer deep GP can be interpreted as a data-dependent mixture of function bases. Their result covers a much broader model class than just BNNs (the class of all possible BNNs, linear or nonlinear, is a degenerate subclass of the set of deep GPs), but does not capture higher moments of the posterior predictive.
For deep BNNs, the adaptive-kernel interpretation arises naturally if one integrates the readout weight matrix out of the prior, rather than integrating out the first-layer weight matrix as we did here [1, 2]. Other than in the limit of large output dimension, integrating out the first layer rather than the readout affords some advantages, some merely aesthetic and others technical, if one has the specific objective of analytically characterizing inference in BNNs. Both approaches allow for the study of the limit of large width and fixed dataset size, but the need to average over the dataset-size-dimensional kernel matrix makes the large-dataset limit harder to study in the adaptive-kernel interpretation. If one studies the posterior predictive generating function (8) using the adaptive-kernel interpretation, one must contend with the need to average over the kernel matrix for the combined train–test set. The blocks of this combined kernel matrix do not appear on equal footing in the generating function because the likelihood only involves the training set; this results in conceptually more complex expressions. Finally, the approach taken here has the advantage of simplifying dramatically for single-output networks; such a simplification is not as obvious in the adaptive-kernel interpretation.
In concurrent work, Lee et al. [16] have proposed to manually introduce scale mixing to wide BNNs by fixing priors over the prior variances of the last layer’s weights. With this setup, taking the limit of infinite hidden layer width results in a scale mixture of GP predictors. Here, we observe that such scale-averaging arises naturally as an effect of depth in finite-width BNNs, hence their setup could be interpreted as manually compensating for the effective loss of depth in an infinite BNN. Based on numerical experiments, they claim that this method can in some cases improve generalization performance relative to that of the fixed-scale GP predictor corresponding to the infinite-width limit of a BNN with fixed prior weight variances. However, importantly, their setup does not consider coupling across multiple output channels. Comprehensive investigation of when data-adaptive scale mixing yields better generalization performance than a fixed-scale GP will be an interesting subject for future investigation.
To conclude, the results of this work illustrate several important conceptual points. Notably, the behavior of networks with many outputs is qualitatively distinct from that of networks with scalar outputs, as there are interactions between output channels which are apparent neither in the scalar-output case nor in the limit of infinite width and fixed output dimension. These interactions render both finite-size and asymptotic analyses more challenging. This issue is not merely one of abstract theoretical interest. Rather, it is potentially relevant to attempts to explain empirical results in deep learning. As modern image recognition tasks often include thousands of classes, the ratio of depth to output dimension of realistic networks may be non-negligible [27]. Thus, we believe that careful analysis of representation learning in joint limits of infinite hidden layer width, output dimension, depth, and dataset size will be an important subject for future work.

Acknowledgments
We thank A. Atanasov and B. Bordelon for useful conversations and helpful comments on our manuscript.
References

[1] (2021) Deep kernel processes. In Proceedings of the 38th International Conference on Machine Learning, M. Meila and T. Zhang (Eds.), Proceedings of Machine Learning Research, Vol. 139, pp. 130–140.
[2] (2020) Why bigger is not always better: on finite and infinite neural networks. In Proceedings of the 37th International Conference on Machine Learning, H. Daumé III and A. Singh (Eds.), Proceedings of Machine Learning Research, Vol. 119, pp. 156–164.
[3] (2021) Deep kernel machines and fast solvers for deep kernel machines. arXiv preprint arXiv:2108.13097.
[4] (2021) Neural networks as kernel learners: the silent alignment effect. arXiv preprint arXiv:2111.00034.
[5] (2021) Statistical limits of dictionary learning: random matrix theory and the spectral replica method. arXiv preprint arXiv:2109.06610.
[6] (2016) Rotational invariant estimator for general noisy matrices. IEEE Transactions on Information Theory 62 (12), pp. 7475–7490.
[7] (2003) Laplace approximation for Bessel functions of matrix argument. Journal of Computational and Applied Mathematics 155 (2), pp. 359–382.
[8] (1998) Generalized inverse Gaussian distributions and their Wishart connections. Scandinavian Journal of Statistics 25 (1), pp. 69–75.
[9] (2021) NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release 1.1.1 of 2021-03-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
[10] (2016) The matrix generalized inverse Gaussian distribution: properties and applications. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 648–664.
[11] (1998) Effect of batch learning in multilayer neural networks. In Proceedings of the 5th International Conference on Neural Information Processing, pp. 67–70.
[12] (2021) Random neural networks in the infinite width limit as Gaussian processes. arXiv preprint arXiv:2107.01562.
[13] (1955) Bessel functions of matrix argument. Annals of Mathematics, pp. 474–523.
[14] (2012) Matrix analysis. Cambridge University Press.
[15] (2020) Exact posterior distributions of wide Bayesian neural networks. arXiv preprint arXiv:2006.10541.
[16] (2021) Scale mixtures of neural network Gaussian processes. arXiv preprint arXiv:2107.01408.
[17] (2020) Finite versus infinite neural networks: an empirical study. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33, pp. 15156–15172.
[18] (2018) Deep neural networks as Gaussian processes. In International Conference on Learning Representations.
[19] (2021) Statistical mechanics of deep linear neural networks: the backpropagating kernel renormalization. Physical Review X 11, 031059.
[20] (1992) A practical Bayesian framework for backpropagation networks. Neural Computation 4 (3), pp. 448–472.
[21] (2019) Matrix differential calculus with applications in statistics and econometrics. John Wiley & Sons.
[22] (2021) Perturbative construction of mean-field equations in extensive-rank matrix factorization and denoising. arXiv preprint arXiv:2110.08775.
[23] (2018) Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations.
[24] (1996) Priors for infinite networks. In Bayesian Learning for Neural Networks, pp. 29–53.
[25] (2021) The limitations of large width in neural networks: a deep Gaussian process perspective. In Advances in Neural Information Processing Systems, Vol. 34.
[26] (2021) The principles of deep learning theory. arXiv preprint arXiv:2106.10165.
[27] (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252.
[28] (2013) Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.
[29] (1995) Laplace approximation of high dimensional integrals. Journal of the Royal Statistical Society: Series B (Methodological) 57 (4), pp. 749–760.
[30] (2018) High-dimensional probability: an introduction with applications in data science. Vol. 47, Cambridge University Press.
[31] (2006) Gaussian processes for machine learning. MIT Press, Cambridge, MA.
[32] (1997) Computing with infinite networks. In Advances in Neural Information Processing Systems, pp. 295–301.
[33] (2020) Bayesian deep learning and a probabilistic perspective of generalization. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33, pp. 4697–4708.
[34] (2019) Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760.
[35] (2021) A unifying view on implicit bias in training linear neural networks. In International Conference on Learning Representations.
[36] (2021) Asymptotics of representation learning in finite Bayesian neural networks. In Advances in Neural Information Processing Systems, Vol. 34.
[37] (2021) Exact marginal prior distributions of finite Bayesian neural networks. In Advances in Neural Information Processing Systems, Vol. 34.
Appendix A Derivation of the posterior predictive generating function
In this short appendix, we sketch the derivations of our results for the moment generating function of the posterior predictive (reported in §III) and the posterior average kernel (reported in §IV). Following the setup in §III, computation of the moment generating function of the posterior predictive requires only the evaluation of a single Gaussian integral, hence we will omit many intermediate steps for brevity. We proceed under the assumption that the combined Gram matrix
(44) 
is invertible; the result extends to the general case by continuity [30, 14].
We start by using the representation of the generating function as an integral over predictions (9) and the expression for the function-space prior density as a continuous scale mixture (7). Then, assuming that we can apply Fubini’s theorem to interchange the integrals over the function outputs with the expectation over the scale matrix, our first task is to evaluate the matrix Gaussian integral
(45) 
This integral is easiest to evaluate using row-major vectorization. Then, defining the matrix
(46) 
and the vector
(47) 
the integral of interest can be expressed as
(48) 
up to an overall normalizing factor. Using properties of the Kronecker product, we find after a bit of algebra that [14]
(49) 
and
where we have defined the matrices and the vector as in (10), (11), and (12) of the main text, respectively. We then conclude the desired result upon grouping the scale-dependent terms that do not depend on the source into the density (13).
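The key step above is the standard multivariate Gaussian integral ∫ exp(-z^T M z / 2 + b^T z) dz = (2π)^{k/2} det(M)^{-1/2} exp(b^T M^{-1} b / 2) for positive-definite M, whose completion-of-squares form is what produces the source-dependent factors. A quick numerical check with arbitrary M and b:

```python
import numpy as np
from scipy import integrate

M = np.array([[2.0, 0.5], [0.5, 1.0]])   # arbitrary positive-definite matrix
b = np.array([0.3, -0.7])

def integrand(y, x):
    z = np.array([x, y])
    return np.exp(-0.5 * z @ M @ z + b @ z)

# The Gaussian tails are negligible well before |z| = 10.
numeric, _ = integrate.dblquad(integrand, -10, 10, -10, 10)
closed = (2 * np.pi) / np.sqrt(np.linalg.det(M)) * np.exp(0.5 * b @ np.linalg.solve(M, b))
print(numeric, closed)   # the two agree
```

In the derivation, M and b play the roles of the vectorized precision matrix and source-shifted linear term, and the exp(b^T M^{-1} b / 2) factor is the origin of the quadratic source dependence in (14).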
The kernel statistics may be derived through an analogous procedure. We start with the posterior moment generating function
(50) 
where the source term is defined with a convenient prefactor. As before, the first-layer weight matrix can be integrated out, yielding
(51) 
This is again a Gaussian integral, hence it can be evaluated by direct computation using the vectorization method discussed above. After differentiating the result with respect to the source, one obtains the formula (34) reported in the main text.