1 Introduction
Neural networks are ubiquitous model classes in machine learning. Their appealing computational properties, intuitive compositional structure and universal function approximation capacity make them a straightforward choice for tasks requiring complex function mappings.
In such tasks, there is an inevitable battle between flexibility and generalization. Two very different sorts of weapon are popular. One is to incorporate neural networks into the canon of Bayesian inference, using prior distributions over weights to regularize the mapping. However, historically, very simple prior distributions over the weights have been used, based more on computational convenience than appropriate model specification. Concomitantly, simple approximations to posterior distributions are employed, for instance failing to capture the correlations between weights. These frequently fail to cope with the richness of the underlying inference problem. The second weapon is to reduce the number of effective parameters, typically by sharing weights (as in convolutional neural nets; CNNs). In CNNs, units are endowed with (topographic) locations in a geometric space of inputs, and the weights between units are made to depend systematically (though heterogeneously) on these locations. However, this solution is minutely specific to particular sorts of mapping and task.
Here, we combine these two arms by proposing a higher-level abstraction in which the units of the network themselves are probabilistically embedded into a shared, structured, meta-representational space (generalizing topographic location), with weights and biases being derived conditional on these embeddings. Rich structural patterns can thus be induced into the weight distributions. Our model captures uncertainty on three levels: meta-representational uncertainty, function uncertainty given the embeddings, and observation (or irreducible output) uncertainty. This hierarchical decomposition is flexible and broadly applicable to modeling task-appropriate weight priors, weight correlations, and weight uncertainty. It is also useful in various modern applications that call for structured weight manipulations performed online.
2 Probabilistic Neural Networks
Let $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^N$ be a dataset of tuples where $x_n$ are inputs and $y_n$ are targets for supervised learning. Take a neural network (NN) with $L$ layers, $M_l$ units in layer $l$ (we drop $l$ where this is clear), an overall collection $\mathbf{w}$ of weights and biases, and fixed nonlinear activation functions. In Bayesian terms, the NN realizes the likelihood $p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})$; together with a prior $p(\mathbf{w})$ over $\mathbf{w}$, this generates the conditional distribution (also known as the marginal likelihood) $p(\mathbf{y} \mid \mathbf{x}) = \int p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})\, p(\mathbf{w})\, d\mathbf{w}$, where $\mathbf{x}$ denotes $\{x_n\}$ and $\mathbf{y}$ denotes $\{y_n\}$. One common assumption for the prior is that the weights are drawn iid from a zero-mean common-variance normal distribution, leading to a prior which factorizes across layers and units:
$$p(\mathbf{w}) = \prod_{l=1}^{L} \prod_{i,j} \mathcal{N}(w^l_{ij};\, 0,\, \sigma_w^2),$$
where $i$ and $j$ index units in adjacent layers of the network, and $\sigma_w^2$ is the prior weight variance. Maximum-a-posteriori inference with this prior famously results in an objective identical to $L_2$ weight regularization with regularization constant $\lambda \propto 1/\sigma_w^2$. Other suggestions for priors have been made, such as Neal (1996)'s proposal that the precision of the prior should be scaled according to the number of hidden units in a layer, yielding a contribution to the prior of $\mathcal{N}(w^l_{ij};\, 0,\, \sigma_w^2 / M_l)$.
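The equivalence between a Gaussian weight prior and $L_2$ regularization can be checked numerically. The sketch below (plain NumPy, with an illustrative $\sigma_w$) verifies that the negative log of the iid Gaussian prior equals the $L_2$ penalty with $\lambda = 1/(2\sigma_w^2)$, up to a constant that does not depend on the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_w = 0.5                      # illustrative prior standard deviation
w = rng.normal(size=20)            # a flattened collection of weights

# Negative log of an iid zero-mean Gaussian prior over the weights.
neg_log_prior = 0.5 * np.sum(w**2) / sigma_w**2 \
    + 0.5 * len(w) * np.log(2 * np.pi * sigma_w**2)

# L2 penalty with lambda = 1 / (2 sigma_w^2); it differs from the
# negative log-prior only by a w-independent constant.
lam = 1.0 / (2 * sigma_w**2)
penalty = lam * np.sum(w**2)

const = 0.5 * len(w) * np.log(2 * np.pi * sigma_w**2)
assert np.isclose(neg_log_prior, penalty + const)
```

Since the constant vanishes under differentiation with respect to $\mathbf{w}$, MAP estimation and $L_2$-regularized training share the same optima.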
Bayesian learning requires us to infer the posterior distribution of the weights $\mathbf{w}$ given the data $\mathcal{D}$: $p(\mathbf{w} \mid \mathcal{D}) = p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})\, p(\mathbf{w}) / p(\mathbf{y} \mid \mathbf{x})$. Unfortunately, the marginal likelihood and posterior are intractable, as they involve integrating over the high-dimensional space defined by the weight priors. A common step is therefore to perform approximate inference, for instance by varying the parameters of an approximating distribution to make it close to the true posterior. For instance, in Mean Field Variational Inference (MFVI), we consider the factorized posterior:
$$q(\mathbf{w}) = \prod_{l=1}^{L} \prod_{i,j} q(w^l_{ij}).$$
Commonly, $q$ is Gaussian for each weight, $q(w^l_{ij}) = \mathcal{N}(w^l_{ij};\, \mu^l_{ij},\, (\sigma^l_{ij})^2)$, with variational parameters $\mu^l_{ij}, \sigma^l_{ij}$. The parameters are adjusted to maximize a lower bound to the marginal likelihood given by the Evidence Lower Bound (ELBO) (a relative of the free energy):
$$\mathcal{L}(q) = \mathbb{E}_{q(\mathbf{w})}\left[\log p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})\right] - \mathrm{KL}\left(q(\mathbf{w}) \,\|\, p(\mathbf{w})\right). \quad (1)$$
The mean field factorization assumption renders maximizing the ELBO tractable for many models. The predictive distribution of a Bayesian Neural Network can be approximated using a mixture distribution over sampled instances of weights $\mathbf{w}^{(s)} \sim q(\mathbf{w})$.
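The predictive mixture can be sketched concretely. Below, a hypothetical one-hidden-layer regression network is given mean-field Gaussian variational parameters (all values illustrative, not learned); the predictive mean is the Monte Carlo average of the network's output over weight samples drawn from $q(\mathbf{w})$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical variational parameters for a 1-hidden-layer regression net.
D_in, H = 1, 8
mu_w1, sig_w1 = rng.normal(size=(D_in, H)), 0.1 * np.ones((D_in, H))
mu_w2, sig_w2 = rng.normal(size=(H, 1)), 0.1 * np.ones((H, 1))

def forward(x, w1, w2):
    return np.tanh(x @ w1) @ w2

def predictive_mean(x, n_samples=200):
    # Predictive distribution approximated by a mixture over weight
    # samples: each draw w^(s) ~ q(w) gives one network; we average
    # their outputs.
    outs = []
    for _ in range(n_samples):
        w1 = mu_w1 + sig_w1 * rng.normal(size=mu_w1.shape)
        w2 = mu_w2 + sig_w2 * rng.normal(size=mu_w2.shape)
        outs.append(forward(x, w1, w2))
    return np.mean(outs, axis=0)

x = np.linspace(-1, 1, 5).reshape(-1, 1)
y_hat = predictive_mean(x)
assert y_hat.shape == (5, 1)
```

The same sampled networks can also be used to estimate predictive variance, which is how uncertainty plots for mean-field BNNs are typically produced.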
3 Meta-Representations of Units
We suggest abandoning direct characterizations of weights or distributions over weights, in which weights are individually and independently tunable. Instead, we couple weights using meta-representations (so called because they determine the parameters of the underlying NN that themselves govern the input-output function represented by the NN). These treat the units as the primary objects of interest and embed them into a shared space, deriving weights as secondary structures.
Consider a code $z^l_i$ that uniquely describes each unit $i$ (visible or hidden) in layer $l$ of the network. Such codes could for example be one-hot codes or Euclidean embeddings of units in a real space $\mathbb{R}^D$. A generalization is to use an inferred latent representation $z^l_i$ which embeds units in a $D$-dimensional vector space. Note that this code encodes the unit itself, not its activation.
Weights linking two units can then be recast in terms of those units' codes, for instance by concatenation: $c^l_{ij} = (z^l_i, z^{l+1}_j)$. We call $\mathbf{c}$ the collection of all such weight codes (which can be deterministically derived from the collection $\mathbf{z}$ of unit codes). Biases can be constructed similarly, for instance using a fixed code in place of the second unit's code; thus we do not distinguish them below. Weight codes then parameterize a conditional prior distribution $p(w^l_{ij} \mid c^l_{ij})$, realized by a function $f$ shared across the entire network. Function $f$, which may itself have parameters $\theta$, acts as a conditional hyperprior that gives rise to a prior over the weights of the original network:
$$p(\mathbf{w} \mid \mathbf{z}, \theta) = \prod_{l} \prod_{i,j} p(w^l_{ij} \mid c^l_{ij}, \theta).$$
There remain various choices: we commonly use either Gaussian or implicit observation models for the weight prior and neural networks as hyperpriors (though clustering and Gaussian Processes merit exploration). Further, the weight code can be augmented with a global state variable $z_g$ (making $c^l_{ij} = (z^l_i, z^{l+1}_j, z_g)$), which can coordinate all the weights or add conditioning knowledge.
An example observation model for weights is a Gaussian model:
$$p(w^l_{ij} \mid c^l_{ij}, \theta) = \mathcal{N}\left(w^l_{ij};\, \mu_\theta(c^l_{ij}),\, \sigma^2_\theta(c^l_{ij})\right),$$
and similarly an implicit model:
$$w^l_{ij} = f_\theta(c^l_{ij}, \epsilon), \quad \epsilon \sim \mathcal{N}(0, 1),$$
which produces arbitrary output distributions.
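A minimal sketch of this construction follows, assuming 2-d unit codes, concatenation for weight codes, and a small random MLP standing in for the shared hypernetwork $f_\theta$ (all names and sizes are illustrative): every weight's value is read off from the pair of unit embeddings it connects.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 2                               # dimensionality of a unit code z
n_in, n_out = 3, 4                  # units in two adjacent layers

# One code per unit; here sampled from a standard-normal prior p(z).
z_in = rng.normal(size=(n_in, D))
z_out = rng.normal(size=(n_out, D))

# Hypothetical hypernetwork f_theta: a tiny MLP mapping a weight code
# (the concatenated pair of unit codes) to a scalar weight.
W1 = rng.normal(size=(2 * D, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

def f_theta(code):
    h = np.tanh(code @ W1 + b1)
    return (h @ W2 + b2).item()

# Derive the full weight matrix from the unit codes: the weight linking
# unit i to unit j depends only on (z_i, z_j) and the shared f_theta.
W = np.array([[f_theta(np.concatenate([z_in[i], z_out[j]]))
               for j in range(n_out)] for i in range(n_in)])
assert W.shape == (n_in, n_out)
```

A Gaussian observation model would have `f_theta` emit a mean and variance per weight instead of a point value; the structure of the construction is unchanged.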
4 MetaPrior: A Generative Meta-Model of Neural Network Weights
The meta-representation can be used as a prior for Bayesian NNs. Consider a case with latent codes $\mathbf{z}$ for units being sampled from a prior $p(\mathbf{z})$, and the weights of the underlying NN being sampled according to the conditional distribution $p(\mathbf{w} \mid \mathbf{z}, \theta)$. Put together, this yields the model:
$$p(\mathbf{y}, \mathbf{w}, \mathbf{z} \mid \mathbf{x}, \theta) = p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})\, p(\mathbf{w} \mid \mathbf{z}, \theta)\, p(\mathbf{z}).$$
Crucially, the conditional distribution over each weight depends on more than one unit representation. This can be seen as a structural form of weight-sharing or a function prior, and is made more explicit using the plate notation in Fig. 1. Conditioned on a set of sampled variables $\mathbf{z}$, our model defines a particular space of functions. We can recast the predictive distribution given training data $\mathcal{D}$ as:
$$p(y^* \mid x^*, \mathcal{D}) = \int p(y^* \mid x^*, \mathbf{w})\, p(\mathbf{w} \mid \mathbf{z}, \theta)\, p(\mathbf{z} \mid \mathcal{D})\, d\mathbf{w}\, d\mathbf{z}.$$
Uncertainty about the unit embeddings affords the model the flexibility to represent diverse functions by coupling weight distributions with meta-variables.
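This coupling can be seen directly in a small sketch (NumPy, with a random MLP standing in for the hypernetwork $f_\theta$ and 2-d codes; all sizes illustrative): resampling a single unit's code changes every weight attached to that unit at once, while leaving the rest of the weight matrix untouched.

```python
import numpy as np

rng = np.random.default_rng(3)
D, n_in, n_out = 2, 3, 4
z_in, z_out = rng.normal(size=(n_in, D)), rng.normal(size=(n_out, D))
A = rng.normal(size=(2 * D, 16)); B = rng.normal(size=(16, 1))

def f_theta(code):                      # hypothetical shared hypernetwork
    return (np.tanh(code @ A) @ B).item()

def weights(z_in, z_out):
    return np.array([[f_theta(np.concatenate([z_in[i], z_out[j]]))
                      for j in range(n_out)] for i in range(n_in)])

W0 = weights(z_in, z_out)
z_in2 = z_in.copy()
z_in2[0] = rng.normal(size=D)           # resample one unit's code

W1 = weights(z_in2, z_out)

# All weights attached to unit 0 move together; the others are untouched.
assert not np.allclose(W0[0], W1[0])
assert np.allclose(W0[1:], W1[1:])
```

This is the structural weight-sharing referred to above: a single latent variable governs a whole row of the weight matrix rather than a single scalar.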
The learning task for training MetaPriors on a particular dataset consists of inferring the posterior distribution $p(\mathbf{z} \mid \mathcal{D})$ of the latent variables. Compared with the typical training loop in Bayesian Neural Networks, which involves learning a posterior distribution over weights, the posterior distribution that needs to be inferred for MetaPriors is an approximate distribution $q(\mathbf{z})$ over the collection of unit variables, as the model builds meta-models of neural networks. We train by maximizing the evidence lower bound (ELBO):
$$\mathcal{L} = \mathbb{E}_{q(\mathbf{w}, \mathbf{z})}\left[\log p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})\right] - \mathrm{KL}\left(q(\mathbf{z}) \,\|\, p(\mathbf{z})\right), \quad (2)$$
with $q(\mathbf{w}, \mathbf{z}) = p(\mathbf{w} \mid \mathbf{z}, \theta)\, q(\mathbf{z})$ and $q(\mathbf{z})$ a factorized Gaussian over unit codes.
In practice, we apply the reparametrization trick (Kingma and Welling, 2014; Rezende et al., 2014; Titsias and Lázaro-Gredilla, 2015) and its variants (Kingma et al., 2015) and subsample the data to maximize the objective:
$$\mathcal{L} \approx \frac{N}{|\mathcal{B}|} \sum_{n \in \mathcal{B}} \frac{1}{S} \sum_{s=1}^{S} \log p(y_n \mid x_n, \mathbf{w}^{(s)}) - \mathrm{KL}\left(q(\mathbf{z}) \,\|\, p(\mathbf{z})\right), \quad (3)$$
where $\mathcal{B}$ is a minibatch and $\mathbf{w}^{(s)} \sim p(\mathbf{w} \mid \mathbf{z}^{(s)}, \theta)$ with $\mathbf{z}^{(s)} \sim q(\mathbf{z})$ drawn via reparametrization.
However, pure posterior inference over $\mathbf{z}$, without gradient steps to update $\theta$, results in learning meta-representations which best explain the data given the current hypernetwork with parameters $\theta$. We call this process of inferring representations illation. Intuitively, inferring the meta-representations for a dataset induces a functional alignment to its input-output pairs and thus reduces variance in the marginal representations for a particular collection of data points.
We can also recast this as a two-stage learning procedure, similar to Expectation Maximization (EM), if we want to update the function parameters $\theta$ and the meta-variables $\mathbf{z}$ independently. First, we approximate $p(\mathbf{z} \mid \mathcal{D})$ by $q(\mathbf{z})$ by maximizing the ELBO with respect to $q$. Then, we maximize the ELBO with respect to $\theta$ to update the hyperprior function. In practice, we find that Eq. 3 performs well with a small number of samples for learning, but using EM can help reduce gradient variance in the small-data setting. We highlight a particularly appealing property of this ELBO: the weight observation term appears in both the model and the variational approximation, and as such analytically cancels out. We exploit this property in that we can use implicit weight models without having to resort to a discriminator, as is typically the case in GAN-style inference.
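To make illation concrete, here is a toy sketch (NumPy; a fixed random MLP stands in for the hypernetwork $f_\theta$, finite-difference gradients replace backpropagation for brevity, and all names and sizes are illustrative): the hypernetwork parameters stay frozen while only the unit codes are adapted to a small dataset.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 2
# Toy "network": one input unit, one output unit; their single linking
# weight is produced by a fixed hypothetical hypernetwork from the
# concatenated pair of unit codes.
A = rng.normal(size=(2 * D, 8)); B = rng.normal(size=(8, 1))

def f_theta(z_pair):
    return (np.tanh(z_pair @ A) @ B).item()

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                         # data generated by a weight of 2.0

def loss(z):
    w = f_theta(z)
    return np.mean((y - w * x) ** 2)

# Illation: keep theta (here A, B) fixed and adapt only the unit codes z,
# using simple finite-difference gradient descent on the data fit.
z = np.zeros(2 * D)
for _ in range(500):
    g = np.zeros_like(z)
    for k in range(len(z)):
        e = np.zeros_like(z); e[k] = 1e-4
        g[k] = (loss(z + e) - loss(z - e)) / 2e-4
    z -= 1e-3 * g

assert loss(z) < loss(np.zeros(2 * D))  # codes moved to explain the data
```

The same pattern with $\theta$-updates interleaved gives the EM-style procedure described above; a real implementation would of course use automatic differentiation and stochastic variational codes rather than point estimates.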
5 Experiments
We illustrate the properties of MetaPriors with a series of tasks of increasing complexity, starting with simple regression and classification, and then graduating to few-shot learning.
5.1 Toy Example: Regression
Here, we consider the toy regression task popularized by Hernández-Lobato and Adams (2015) ($y = x^3 + \epsilon$ with $\epsilon \sim \mathcal{N}(0, 9)$). We use neural networks with a fixed observation noise given by the model, and seek to learn suitably uncertain functions. For all networks, we use 100 hidden units, 2-dimensional latent codes, and 32 hidden units in the hyperprior network $f_\theta$.
In Fig. 2 we show two function fits to this example: our model and a mean field network. We observe that both models increase uncertainty away from the data, as would be expected. We also illustrate function draws, which show how each model uses its respective weight representation differently to encode functions and uncertainty. We sample functions in both cases by clamping latent variables to their posterior means and allowing a single latent variable to be sampled from its prior. Our model, the MetaPrior-based Bayesian NN, learns a global form of function uncertainty and exhibits global diversity over sampled functions even when sampling just a single latent variable. It uses the composition of these functions through the meta-representational uncertainty to model the function space. This can be attributed to the strong complexity control enacted by the pull of the posterior fitting mechanism on the meta-variables. Maximum a posteriori fits of this model yielded just the mean line directly fitting the data. The mean field example shows dramatically less diversity in function samples: we were forced to sample a large number of weights to elicit the diversity we obtained, as single weight samples only induced small local changes. This suggests that the MetaPrior may be capturing interesting properties of the weights beyond what the mean field approximation does, as single-variable perturbations have much more impact on the function space.
5.2 Toy Example: Classification
We illustrate the model's function fit to the half-moon two-class classification task in Fig. 3, also visualizing the learned weight correlations by sampling from the representation. The model reaches 95.5% accuracy, on par with a mean field BNN and an MLP. Interestingly, meta-representations induce intra- and inter-layer correlations of weights, amounting to a form of soft weight-sharing with long-range correlations. This visualizes the mechanism by which complex function draws, as observed in Sec. 5.1, are feasible with only a single variable changing. The model captures structured weight correlations which enable global weight changes subject to a low-dimensional parametrization. This is a conceptual difference from networks with individually tunable weights.
5.3 MNIST-50k Classification
We use NNs with one hidden layer and 100 hidden units to test the simplest model possible for MNIST classification. We train and compare the deterministic one-hot embeddings (Gaussian-OH) with the latent variable embeddings (Gaussian-LV) used elsewhere (with 10 latent dimensions), along with mean field NNs with a unit Gaussian prior (Gaussian-ML). We visualize the learned $\mathbf{z}$'s (Fig. 4) by producing a t-SNE embedding of their means, which reveals relationships among the units in the network. The figure shows that the model infers semantic structure in the input units, as it compresses boundary units to a similar value. This is representationally efficient, as no capacity is wasted on modeling empty units repeatedly.
5.4 Few-Shot and Multi-Task Learning
The model allows task-specific latent variables to be used to learn representations for separate, but related, tasks, effectively adapting the model to the current input data. In particular, the hierarchical structure of the model facilitates few-shot learning on a new task by inferring $q(\mathbf{z})$, starting from the $q(\mathbf{z})$ produced by the original training, and keeping the mapping $f_\theta$ fixed. We tested this ability using the MNIST network. After learning the previous model for clean MNIST, we evaluate the model's performance on the MNIST test data in both a clean and a permuted version. We sample random permutations for input pixels and classes and apply them to the test data, as visualized in Fig. 5. This yields three datasets, with permutations of input, output, or both. We then proceed to illate on progressively growing numbers of scrambled observations (shots). For each such attempt, we reset the model's representations to the original ones from clean training. As a baseline, we train a mean field neural network. We also track performance on the clean dataset, as reported in Fig. 5. We examine a related setting of generalization in the Appendix.
6 Structured Surgery
In the multi-task experiments in Sec. 5.4, we studied the model's ability to update representations holistically and generalize to unseen variations in the training data. Here, we manipulate the meta-variables in a structured and targeted way (see Fig. 6). Since $\mathbf{z}$ factorizes over individual units, we can elect to perform illation only on a subset of variables. Instead of learning a new set of task-dependent variables, we only update input or output variables per task to demonstrate how the model can disentangle the representations it models and generalize in highly structured fashion. When updating only the input variables, the model reasons about pixel transformations, as it can move the representation of each input pixel around its latent feature space. The model appears to solve for an input permutation by searching in representation space for a program approximating its inverse. This search is ambiguous, given little data and the sparsity underlying MNIST. This process demonstrates the alignment the model automatically performs on its inputs alone in order to change the weight distribution it applies to datasets, while keeping the rest of the features intact. Similarly, we observe that updating only the class-label units leads to rapid and effective generalization for class shifts or, in this case, class permutation, since only 10 variables need to be updated. The model could also smoothly generalize to new subclasses existing between current classes. These results demonstrate the ability of the model to react in differentiated ways to shifts, adapting to changes in input geometry, target semantics, or actual features in a targeted way while keeping the rest of the representations constant.
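The mechanism behind input-side surgery can be sketched exactly (NumPy; a random MLP stands in for the hypernetwork, codes and sizes are illustrative): because weights are derived from unit codes, permuting only the input-unit codes reproduces the correspondingly permuted weight matrix, with the hidden/output codes and the hypernetwork untouched.

```python
import numpy as np

rng = np.random.default_rng(5)
D, n_in, n_out = 2, 4, 3
z_in, z_out = rng.normal(size=(n_in, D)), rng.normal(size=(n_out, D))
A = rng.normal(size=(2 * D, 8)); B = rng.normal(size=(8, 1))

def f_theta(code):                     # hypothetical shared hypernetwork
    return (np.tanh(code @ A) @ B).item()

def weights(z_in, z_out):
    return np.array([[f_theta(np.concatenate([z_in[i], z_out[j]]))
                      for j in range(n_out)] for i in range(n_in)])

perm = rng.permutation(n_in)           # a pixel permutation of the inputs
W = weights(z_in, z_out)

# Structured surgery: move only the input-unit codes (here, permute them
# to match the pixel permutation). The resulting weight matrix is exactly
# the row-permuted original, so the network undoes the input scramble.
W_surgery = weights(z_in[perm], z_out)
assert np.allclose(W_surgery, W[perm])
```

In the paper's setting this rearrangement is not given but inferred, by illating only the input-unit codes on scrambled data; the sketch shows why a solution expressible purely in code space exists.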
7 Related Work
Our work brings together two themes in the literature. One is the probabilistic interpretation of weights and activations of neural networks, which has been a common approach to regularization and complexity control (MacKay, 1992a, b, 1995; Hinton and Van Camp, 1993; Dayan et al., 1995; Hinton et al., 1995; Neal, 2012; Blundell et al., 2015; Hernández-Lobato and Adams, 2015). The second theme is to consider the structure and weights of neural networks as arising from embeddings in other spaces. This idea has been explored in evolutionary computation (Stanley, 2007; Gauci and Stanley, 2010; Risi and Stanley, 2012; Clune et al., 2011) and beyond, and applied to recurrent and convolutional NNs and more. Our learned hierarchical probabilistic representation of units, which we call a meta-representation because of the way it generates the weights, is inspired by this work. It can thus also be considered as a richly structured hierarchical Bayesian Neural Network (Finkel and Manning, 2009; Joshi et al., 2016). In important recent work, training ensembles of neural networks was proposed (Lakshminarayanan et al., 2017). This captures uncertainty well; but ensembles are a departure from a single, self-contained model. Our work is most closely related to two sets of recent studies. One considers reweighting activation patterns to improve posterior inference (Krueger et al., 2017; Pawlowski et al., 2017; Louizos and Welling, 2017). The use of parametric weights and normalizing flows (Rezende and Mohamed, 2015; Papamakarios et al., 2017; Dinh et al., 2016) to model scalar changes to those weights offers a probabilistic patina around forms of batch normalization. However, our work is not aimed at capturing posterior uncertainty for given weight priors, but rather proposes a novel weight prior in its own right. Our method is also representationally more flexible, as it provides embeddings for the weights as a whole.
Equally, our meta-representations of units are reminiscent of the inducing points that are used to simplify Gaussian Process (GP) inference (Quiñonero-Candela and Rasmussen, 2005; Snelson and Ghahramani, 2006; Titsias, 2009), and that are key components in GP latent variable models (Lawrence, 2004; Titsias and Lawrence, 2010). Rather like inducing points, our units control the modeled function and regularize its complexity. However, unlike inducing points, the latent variables we use do not even occupy the same space as the input data, and so offer the blessing of extra abstraction. The meta-representational aspects of the model can be related to Deep GPs, as proposed by Damianou and Lawrence (2013).
The most characteristic difference between our work and other approaches is the locality of the latent variables to units in the network. This abstracts a neural network into sub-functions which are embedded in a shared structural space, while still capturing dependencies between weights and allowing a compact hypernetwork that does not have to generate all weights of a network at once. The locality of the unit latent variables facilitates modeling structural dependencies in the network, which leads to the ability to model weights in a more distributed fashion. This enables modeling adaptive subnetworks in a generative fashion, breaks down the immense dimensionality of weight tensors, and renders the learning problem of correlated weights more tractable.
Finally, as is clearest in the permuted MNIST example, the hypernetwork can be cast as an interpreter, turning one representation of a program (the unit embeddings) into a realized method for mapping inputs to outputs (the NN). Thus, our method can be seen in terms of program induction, a field of recent interest in various fields, including concept learning (Liang et al., 2010; Perov and Wood, 2014; Lake et al., 2015, 2018).
8 Discussion
We proposed a meta-representation of neural networks. This is based on the idea of characterizing neurons in terms of predetermined or learned latent variables, and using a shared hypernetwork to generate weights and biases from conditional distributions based on those variables. We used this meta-representation as a function prior, and showed its advantageous properties as a learned, adaptive weight regularizer that can perform complexity control in function space. We also showed the complex correlation structures in the input and output weights of hidden units that arise from this meta-representation, and demonstrated how the combination of hypernetwork and network can adapt to out-of-task generalization settings and distribution shift by realigning the networks to the new data. Our model handles a variety of tasks without requiring task-dependent manually-imposed structure, as it benefits from the blessing of abstraction (Goodman et al., 2011), which arises when rich structured representations emerge from hierarchical modeling. We believe this type of modeling, jointly capturing representational uncertainty, function uncertainty, and observation uncertainty, can be beneficially applied to many different neural network architectures and generalized further with more interestingly structured meta-representations.
References
 Blundell et al. [2015] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
 Clune et al. [2011] J. Clune, K. O. Stanley, R. T. Pennock, and C. Ofria. On the performance of indirect encoding across the continuum of regularity. IEEE Transactions on Evolutionary Computation, 15(3):346–367, 2011.
 Damianou and Lawrence [2013] A. Damianou and N. Lawrence. Deep gaussian processes. In Artificial Intelligence and Statistics, pages 207–215, 2013.
 Dayan et al. [1995] P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel. The helmholtz machine. Neural computation, 7(5):889–904, 1995.
 Dinh et al. [2016] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
 Finkel and Manning [2009] J. R. Finkel and C. D. Manning. Hierarchical bayesian domain adaptation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 602–610. Association for Computational Linguistics, 2009.
 Gauci and Stanley [2010] J. Gauci and K. O. Stanley. Autonomous evolution of topographic regularities in artificial neural networks. Neural computation, 22(7):1860–1898, 2010.
 Goodman et al. [2011] N. D. Goodman, T. D. Ullman, and J. B. Tenenbaum. Learning a theory of causality. Psychological review, 118(1):110, 2011.

 Hernández-Lobato and Adams [2015] J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
 Hinton and Van Camp [1993] G. E. Hinton and D. Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pages 5–13. ACM, 1993.
 Hinton et al. [1995] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158–1161, 1995.

 Joshi et al. [2016] A. Joshi, S. Ghosh, M. Betke, and H. Pfister. Hierarchical bayesian neural networks for personalized classification. In Neural Information Processing Systems Workshop on Bayesian Deep Learning, 2016.
 Kingma and Welling [2014] D. P. Kingma and M. Welling. Stochastic gradient vb and the variational autoencoder. In Second International Conference on Learning Representations, ICLR, 2014.
 Kingma et al. [2015] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575–2583, 2015.
 Krueger et al. [2017] D. Krueger, C.-W. Huang, R. Islam, R. Turner, A. Lacoste, and A. Courville. Bayesian hypernetworks. arXiv preprint arXiv:1710.04759, 2017.
 Lake et al. [2015] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Humanlevel concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
 Lake et al. [2018] B. M. Lake, N. D. Lawrence, and J. B. Tenenbaum. The emergence of organizing structure in conceptual representation. Cognitive science, 2018.
 Lakshminarayanan et al. [2017] B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pages 6405–6416, 2017.

 Lawrence [2004] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in neural information processing systems, pages 329–336, 2004.
 Liang et al. [2010] P. Liang, M. I. Jordan, and D. Klein. Learning programs: A hierarchical bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 639–646, 2010.
 Louizos and Welling [2017] C. Louizos and M. Welling. Multiplicative normalizing flows for variational bayesian neural networks. arXiv preprint arXiv:1703.01961, 2017.

 MacKay [1992a] D. J. MacKay. Bayesian interpolation. Neural computation, 4(3):415–447, 1992a.
 MacKay [1992b] D. J. MacKay. A practical bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992b.
 MacKay [1995] D. J. MacKay. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 354(1):73–80, 1995.
 Neal [1996] R. M. Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks, pages 29–53. Springer, 1996.
 Neal [2012] R. M. Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.
 Papamakarios et al. [2017] G. Papamakarios, I. Murray, and T. Pavlakou. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pages 2335–2344, 2017.
 Pawlowski et al. [2017] N. Pawlowski, M. Rajchl, and B. Glocker. Implicit weight uncertainty in neural networks. arXiv preprint arXiv:1711.01297, 2017.
 Perov and Wood [2014] Y. N. Perov and F. D. Wood. Learning probabilistic programs. arXiv preprint arXiv:1407.2646, 2014.
 Quiñonero-Candela and Rasmussen [2005] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate gaussian process regression. Journal of Machine Learning Research, 6(Dec):1939–1959, 2005.
 Rezende and Mohamed [2015] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
 Rezende et al. [2014] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning, volume 2, 2014.
 Risi and Stanley [2012] S. Risi and K. O. Stanley. An enhanced hypercubebased encoding for evolving the placement, density, and connectivity of neurons. Artificial life, 18(4):331–363, 2012.
 Snelson and Ghahramani [2006] E. Snelson and Z. Ghahramani. Sparse gaussian processes using pseudoinputs. In Advances in neural information processing systems, pages 1257–1264, 2006.
 Stanley [2007] K. O. Stanley. Compositional pattern producing networks: A novel abstraction of development. Genetic programming and evolvable machines, 8(2):131–162, 2007.
 Titsias [2009] M. Titsias. Variational learning of inducing variables in sparse gaussian processes. In Artificial Intelligence and Statistics, pages 567–574, 2009.
 Titsias and Lawrence [2010] M. Titsias and N. D. Lawrence. Bayesian gaussian process latent variable model. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 844–851, 2010.
 Titsias and Lázaro-Gredilla [2015] M. K. Titsias and M. Lázaro-Gredilla. Local expectation gradients for black box variational inference. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, pages 2638–2646. MIT Press, 2015.