Implicit distribution (ID) representation has lately been a subject of much interest. IDs govern variables that result from linear or non-linear transformations applied to another set of (standard) random variates. Through successive transformations, IDs can be equipped with enough capacity to represent functions of arbitrary complexity (i.e., high dimensionality, multimodality, etc.). This has seen IDs become mainstream in approximate probabilistic inference in deep learning, where they have been extensively applied to approximate intractable posterior distributions over latent variables and model parameters conditioned on data (Kingma and Welling, 2013; Rezende and Mohamed, 2015; Liu and Wang, 2016; Wang and Liu, 2016; Ranganath et al., 2016; Mescheder et al., 2017; Tran et al., 2017; Huszár, 2017, among many others). Although the probability density function of an ID may not be accessible (e.g., due to non-invertible mappings), the reparameterization trick (Kingma and Welling, 2013) still provides an efficient and unbiased means of sampling from it. This makes IDs amenable to (gradient-based) stochastic optimization regimes.
In this work we employ IDs as hypernetworks. The idea behind a hypernetwork (Figure 1) is to have a neural network that outputs the weights of another main
neural network. The main network is then applied to standard tasks such as regression or classification. We optimize the parameters of the hypernetwork using stochastic gradient descent (SGD; e.g., Kingma and Ba (2014)) to maximize the (marginalized) conditional likelihood of targets given inputs. In empirical analyses on various regression and classification benchmarks, we find our approach to be generally better than standard maximum likelihood learning, while it also performs competitively with a number of approximate Bayesian deep learning methodologies.
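As a minimal sketch of the hypernetwork idea (the sizes and names below are illustrative, not those used in our experiments), a hypernetwork maps a noise vector to the full weight vector of a small main network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative sizes: the main network is 1 -> 8 -> 1,
# so it has 1*8 + 8 + 8*1 + 1 = 25 parameters in total.
N_MAIN_PARAMS = 25
NOISE_DIM = 4

# Hypernetwork parameters (these are what gets optimized).
H = rng.normal(scale=0.1, size=(NOISE_DIM, 32))
W_out = rng.normal(scale=0.1, size=(32, N_MAIN_PARAMS))

def hypernet(z):
    """Map latent noise z to a flat parameter vector theta."""
    return relu(z @ H) @ W_out

def main_net(x, theta):
    """Main 1-8-1 regression network using weights produced by the hypernet."""
    W1 = theta[:8].reshape(1, 8)
    b1 = theta[8:16]
    W2 = theta[16:24].reshape(8, 1)
    b2 = theta[24]
    return relu(x @ W1 + b1) @ W2 + b2

z = rng.standard_normal(NOISE_DIM)   # one sample from the noise distribution
theta = hypernet(z)                  # one sample of main-network weights
y_hat = main_net(np.array([[0.5]]), theta)
```

Each fresh draw of the noise yields a different weight vector for the main network, so the hypernetwork implicitly defines a distribution over main-network parameters.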
2 Related Work
Our optimization framework is not fully Bayesian; however, instead of maintaining point parameter estimates as in standard maximum likelihood optimization of neural networks, our approach optimizes an ID that governs the parameters of a neural network. In contrast to Bayesian Neural Networks (BNNs) (Neal, 1996; MacKay, 1995; Welling and Teh, 2011; Graves, 2011; Hernandez-Lobato and Adams, 2015; Li and Gal, 2017, among others), where the target is to approximate the posterior distribution of neural network parameters (e.g., using variational inference; Graves (2011)), we deploy and tune the parameter distribution while directly optimizing for the objective of the main neural network. Hence our approach is akin to gradient-based evolutionary optimization (Sun et al., 2012).
2.1 Hypernetworks
The idea of one network producing the weights of another dates back to the fast-weight memories of Schmidhuber (1992) and was recently revisited by Ha et al. (2016), where the complete model was trained via backpropagation, like in this paper, and efficiently compressed weights were used to reduce the total size of the network.
2.2 Bayesian Neural Networks
BNNs can be classified into two main families of methods: those based on MCMC (Welling and Teh, 2011) and those learning an approximate posterior using stochastic variational inference (VI) (Graves, 2011), expectation propagation (Hernandez-Lobato and Adams, 2015), or α-divergence minimization (Li and Gal, 2017). In the VI setting, dropout (Gal and Ghahramani, 2015) can be interpreted as a VI method that allows cheap posterior samples but results in a unimodal approximate posterior. Bayes by Backprop (Blundell et al., 2015) can also be viewed as a simple Bayesian hypernet, where the hypernetwork only performs an element-wise shift and scale of the noise. To address the issue of scaling, Louizos and Welling (2017) propose a hypernet that generates scaling factors on the means of a factorial Gaussian distribution, allowing for a highly flexible approximate posterior but requiring an auxiliary inference network for the entropy term of the variational lower bound. Finally, Krueger et al. (2017) is closely related to this work: a hypernetwork learns to transform a noise distribution into a distribution over the weights of a target network, and is trained via VI.
3 Method

Given training data $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, we consider the task of maximizing the marginal conditional likelihood of targets $y$ given inputs $x$:

$$\log p(y \mid x) = \log \int p(y \mid x, \theta)\, q(\theta)\, \mathrm{d}\theta \approx \log \frac{1}{S} \sum_{s=1}^{S} p(y \mid x, \theta_s), \qquad \theta_s \sim q(\theta), \tag{1}$$
where the approximation allows us to estimate the intractable integral by Monte Carlo sampling. While the logarithm of $p(y \mid x, \theta)$ in (1) can be taken to be proportional to a loss function $\ell(y, f(x;\theta))$ defined between the ground truth $y$ and the output of a deep neural network $f(x;\theta)$, the distribution $q(\theta)$ (with parameters $\phi$) can be chosen to have any parametric form, such as fully factorized Gaussian distributions across each dimension of the parameter vector $\theta$. Such non-structured distributions would however be undesirable, as they do not allow dependencies among the dimensions of $\theta$. They would also be inefficient from a sampling point of view, in particular in high-dimensional cases. We therefore define an implicit distribution $q_\phi(\theta)$ that has the capacity to model arbitrary dependency structures among the dimensions of $\theta$, while, by means of the reparameterization trick (Kingma and Welling, 2013), it also allows for cheap sampling by constructing $\theta$ as:

$$\theta = G_\phi(z), \qquad z \sim p(z), \tag{2}$$
where $G_\phi$ is a hypernetwork that gets activated by a latent noise variable $z$ drawn from a standard distribution $p(z)$, such as a Gaussian or uniform distribution. Given (2) and the main network $f$, we can write down the following function as an approximation to the logarithm of (1):

$$\mathcal{L}(\phi) = \sum_{i=1}^{N} \log p\big(y_i \mid x_i, G_\phi(z_i)\big), \qquad z_i \sim p(z). \tag{3}$$
Since we draw one parameter sample per data point, we have dropped the summation over $s$ in (3). Given the data, we can optimize (3) w.r.t. $\phi$ by applying gradient-based optimization schemes such as SGD. To predict $y^*$ for an input $x^*$ we have:

$$p(y^* \mid x^*) \approx \frac{1}{S} \sum_{s=1}^{S} p\big(y^* \mid x^*, G_\phi(z_s)\big), \qquad z_s \sim p(z). \tag{4}$$
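As a concrete sketch of the stochastic objective in (3) and the Monte Carlo prediction above (with placeholder stand-ins for the hypernetwork and main network, and a Gaussian observation model assumed for the regression case):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins for a hypernetwork G_phi and main network f;
# any differentiable parameterization could be substituted here.
def hypernet(z):
    return np.tanh(z)

def main_net(x, theta):
    return x * theta.sum()

def stochastic_objective(xs, ys, noise_dim=4, sigma=1.0):
    """One-noise-sample-per-datapoint estimate of the log conditional
    likelihood, assuming a Gaussian observation model with scale sigma."""
    total = 0.0
    for x, y in zip(xs, ys):
        z = rng.standard_normal(noise_dim)   # fresh z_i ~ p(z) per point
        theta = hypernet(z)                  # theta_i = G_phi(z_i)
        total += -0.5 * ((y - main_net(x, theta)) / sigma) ** 2
    return total  # log-likelihood up to an additive constant

def predict(x, n_samples=100, noise_dim=4):
    """Monte Carlo approximation of the predictive distribution:
    average main-network outputs over sampled weight vectors."""
    preds = np.stack([
        main_net(x, hypernet(rng.standard_normal(noise_dim)))
        for _ in range(n_samples)
    ])
    return preds.mean(axis=0), preds.std(axis=0)
```

In a real implementation the objective would be maximized w.r.t. the hypernetwork parameters with an automatic-differentiation framework; the spread of the predictive samples also provides a rough uncertainty estimate at test time.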
The objective function (3) is a stochastic relaxation of standard maximum likelihood optimization. It encourages the hypernetwork to mainly reproduce outputs for which the conditional likelihood was found to be maximal. This implies that the implicit distribution may eventually either converge to a delta peak at a (local) optimum or have its mass distributed across equally (sub)optimal regions, which in principle can coincide with the solution found by plain maximum likelihood (i.e., standard gradient descent techniques). In practice, however, we observe that our approach finds solutions that are generally better than standard gradient descent and on par with more sophisticated Bayesian deep learning methodologies (see Sec. 4 for more details). This may be due to the fact that we start with a large population of solutions that shrinks gradually as the implicit distribution concentrates its mass on the region(s) of the most promising solutions seen so far. This can be seen as akin to gradient-based efficient evolution strategies (Sun et al., 2012).
4 Experiments

We validate our approach, in accordance with the relevant literature, on standard regression tasks on UCI datasets as well as MNIST classification. We use ReLU non-linearities for the hidden layers of both the hyper and main networks in all experiments, and a dropout probability of 0.5 for the hidden units of the hypernetworks during training. We use a softplus layer to scale the input to the hypernetwork in (2). For both our approach and the standard SGD results reported below, we add Gaussian noise to the input in (3), with its scale tuned during end-to-end training as the output of a softplus unit.
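The learned input-noise scale described above can be kept positive by parameterizing it through a softplus of a free scalar (a sketch; the variable names are ours):

```python
import numpy as np

def softplus(rho):
    """Numerically simple softplus: log(1 + exp(rho)), always positive."""
    return np.log1p(np.exp(rho))

rng = np.random.default_rng(2)
rho = 0.0                      # free parameter, trained jointly with the hypernet
x = rng.standard_normal((5, 3))

scale = softplus(rho)          # positive noise scale, here log(2) ~ 0.69
x_noisy = x + scale * rng.standard_normal(x.shape)
```

Because the softplus output is strictly positive for any real `rho`, the noise scale can be optimized by unconstrained gradient descent alongside the other parameters.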
For the regression experiments we follow the protocol of Hernandez-Lobato and Adams (2015): we randomly hold out 10% of the data for testing and use the rest for training. Following earlier work (Hernandez-Lobato and Adams, 2015; Louizos and Welling, 2016), the main network architecture consists of a single hidden layer with 50 units (or 100 for Protein and Year). For datasets with more than 5,000 data points (Kin8nm, Naval, Power, Protein and Year) we switch from mini-batches of size 1 to size 512 (not fine-tuned). We run a total of 2,000 epochs per repetition. As this number is not fine-tuned, it could cause overfitting on smaller datasets such as Boston; early stopping in such cases might improve the results.
We compare the results of our approach with those of other state-of-the-art methods, as well as with plain SGD trained on the same target network without employing a hypernetwork. For each task we compute the mean RMSE over the last 20 epochs, repeat the task 20 times, and report the mean of these RMSEs and their standard errors in Table 1 for our method and SGD. In most cases the results are competitive, and they often exceed the state of the art. In Figure 2 we further plot how the distribution over weights represented by the hypernetwork evolves over training epochs for three independent repetitions of three of the regression benchmarks. While we observe consistent diffusion behavior across multiple repetitions on a given dataset, the diffusion trajectories leading to convergence (to a delta peak) can vary notably across datasets. In the plots we also overlay the evolution of the average test RMSE for comparison.
Table 1: Average test RMSE and standard errors on the UCI regression benchmarks.

| Dataset | N | d | VI | PBP | Dropout | VMG | SGD | Ours |
|---|---|---|---|---|---|---|---|---|
| Boston | 506 | 13 | 4.32±0.29 | 3.01±0.18 | 2.97±0.85 | 2.70±0.13 | 3.73±0.67 | 3.72±1.05 |
| Concrete | 1,030 | 8 | 7.19±0.12 | 5.67±0.09 | 5.23±0.53 | 4.89±0.12 | 5.29±0.87 | 4.74±0.64 |
| Energy | 768 | 8 | 2.65±0.08 | 1.80±0.05 | 1.66±0.19 | 0.54±0.02 | 0.95±0.13 | 0.87±0.10 |
| Kin8nm | 8,192 | 8 | 0.10±0.00 | 0.10±0.00 | 0.10±0.00 | 0.08±0.00 | 0.08±0.00 | 0.08±0.00 |
| Naval | 11,934 | 16 | 0.01±0.00 | 0.01±0.00 | 0.01±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 |
| Power Plant | 9,568 | 4 | 4.33±0.04 | 4.12±0.03 | 4.02±0.18 | 4.04±0.04 | 4.06±0.25 | 4.02±0.18 |
| Protein | 45,730 | 9 | 4.84±0.03 | 4.73±0.01 | 4.36±0.04 | 4.13±0.02 | 4.37±0.03 | 4.65±0.19 |
| Wine | 1,599 | 11 | 0.65±0.01 | 0.64±0.01 | 0.62±0.04 | 0.63±0.01 | 0.80±0.05 | 0.62±0.04 |
| Yacht | 308 | 6 | 6.89±0.67 | 1.02±0.05 | 1.11±0.38 | 0.71±0.05 | 0.77±0.25 | 0.57±0.21 |
| Year | 515,345 | 90 | 9.03±NA | 8.87±NA | 8.84±NA | 8.78±NA | 8.74±0.03 | 8.74±0.03 |
For the classification task, we trained on MNIST with varying numbers of layers and hidden units per layer, as listed in Table 2, for 2,000 epochs with a mini-batch size of 256. The hypernetwork for the 2×400 and 2×800 architectures consists of 2 hidden layers of size 100 and 50, and for 3×150 of 2 hidden layers of size 100 and 20. The reported error rate is the mean of the test errors over the last 20 epochs for our method and plain SGD.
Table 2: Test error rates on MNIST for different architectures.

| Arch. | Max. Likel. | DropConnect | Bayes B. SM | Var. Dropout | VMG | SGD | Ours |
5 Conclusion

We have presented a simple stochastic optimization approach for deep neural networks. The method employs implicit distributions as hypernetworks to model arbitrary dependencies among the parameters of the main network. While not fully Bayesian, our approach models a distribution over the parameters of a neural network, as opposed to maintaining point weight estimates as in standard maximum likelihood optimization. Empirically, we generally outperform standard gradient descent optimization and demonstrate on-par performance in a broader comparison with state-of-the-art Bayesian methodologies on regression and classification tasks.
In the future we would like to focus on the scalability of our approach (e.g., through layer coupling Krueger et al. (2017)) as well as on a fully Bayesian extension of our optimization procedure.
References
Blundell, C., J. Cornebise, K. Kavukcuoglu, and D. Wierstra
2015. Weight uncertainty in neural networks. In
Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, Pp. 1613–1622. JMLR.org.
Gal, Y. and Z. Ghahramani
2015. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ArXiv e-prints.
Graves, A.
2011. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, eds., Pp. 2348–2356. Curran Associates, Inc.
Ha, D., A. Dai, and Q. V. Le
2016. HyperNetworks. ArXiv e-prints.
Hernandez-Lobato, J. M. and R. Adams
2015. Probabilistic backpropagation for scalable learning of bayesian neural networks. In Proceedings of the 32nd International Conference on Machine Learning, F. Bach and D. Blei, eds., volume 37 of Proceedings of Machine Learning Research, Pp. 1861–1869, Lille, France. PMLR.
Huszár, F.
2017. Variational inference using implicit distributions. CoRR, abs/1702.08235.
Kingma, D. P. and J. Ba
2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Kingma, D. P., T. Salimans, and M. Welling
2015. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds., Pp. 2575–2583. Curran Associates, Inc.
Kingma, D. P. and M. Welling
2013. Auto-encoding variational bayes. CoRR, abs/1312.6114.
Krueger, D., C.-W. Huang, R. Islam, R. Turner, A. Lacoste, and A. Courville
2017. Bayesian Hypernetworks. ArXiv e-prints.
Li, Y. and Y. Gal
2017. Dropout inference in Bayesian neural networks with alpha-divergences. In Proceedings of the 34th International Conference on Machine Learning, D. Precup and Y. W. Teh, eds., volume 70 of Proceedings of Machine Learning Research, Pp. 2052–2061, International Convention Centre, Sydney, Australia. PMLR.
Liu, Q. and D. Wang
2016. Stein variational gradient descent: A general purpose bayesian inference algorithm. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, Pp. 2370–2378.
Louizos, C. and M. Welling
2016. Structured and efficient variational deep learning with matrix gaussian posteriors. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, Pp. 1708–1716. JMLR.org.
Louizos, C. and M. Welling
2017. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. ArXiv e-prints.
MacKay, D. J.
1995. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 354(1):73 – 80. Proceedings of the Third Workshop on Neutron Scattering Data Analysis.
Mescheder, L., S. Nowozin, and A. Geiger
2017. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In International Conference on Machine Learning (ICML) 2017.
Neal, R. M.
1996. Bayesian Learning for Neural Networks. Secaucus, NJ, USA: Springer-Verlag New York, Inc.
Ranganath, R., D. Tran, J. Altosaar, and D. M. Blei
2016. Operator variational inference. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, Pp. 496–504.
Rezende, D. J. and S. Mohamed
2015. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, Pp. 1530–1538.
Schmidhuber, J.
1992. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Comput., 4(1):131–139.
Simard, P. Y., D. Steinkraus, and J. C. Platt
2003. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition - Volume 2, ICDAR '03, Pp. 958–, Washington, DC, USA. IEEE Computer Society.
Sun, Y., D. Wierstra, T. Schaul, and J. Schmidhuber
2012. Efficient natural evolution strategies. CoRR, abs/1209.5853.
Tran, D., R. Ranganath, and D. M. Blei
2017. Deep and hierarchical implicit models. CoRR, abs/1702.08896.
Wan, L., M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus
2013. Regularization of neural networks using dropconnect. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), S. Dasgupta and D. Mcallester, eds., volume 28, Pp. 1058–1066. JMLR Workshop and Conference Proceedings.
Wang, D. and Q. Liu
2016. Learning to draw samples: With application to amortized MLE for generative adversarial learning. CoRR, abs/1611.01722.
Welling, M. and Y. W. Teh
2011. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In Proceedings of the 28th International Conference on Machine Learning, Pp. 681–688. Omnipress.