1 Introduction
Probabilistic approaches to machine learning involve modeling the probability distributions over large collections of variables. The number of parameters required to describe a general discrete distribution grows exponentially in its dimensionality, so some structure or regularity must be imposed, often through graphical models [e.g. 1]. Graphical models are also used to describe probability densities over collections of real-valued variables.

Often parts of a task-specific probabilistic model are hard to specify, and are learned from data using generic models. For example, the natural probabilistic approach to image restoration tasks (such as denoising, deblurring, inpainting) requires a multivariate distribution over uncorrupted patches of pixels. It has long been appreciated that large classes of densities can be estimated consistently by kernel density estimation [2], and a large mixture of Gaussians can closely represent any density. In practice, a parametric mixture of Gaussians seems to fit the distribution over patches of pixels well and obtains state-of-the-art restorations [3]. It may not be possible to fit small image patches significantly better, but alternative models could further test this claim. Moreover, competitive alternatives to mixture models might improve performance in other applications that have insufficient training data to fit mixture models well.

Restricted Boltzmann Machines (RBMs), which are undirected graphical models, fit samples of binary vectors from a range of sources better than mixture models [4, 5]. One explanation is that RBMs form a distributed representation: many hidden units are active when explaining an observation, which is a better match to most real data than a single mixture component. Another explanation is that RBMs are mixture models, but with a number of components that is exponential in the number of hidden units. Parameter tying among components allows these more flexible models to generalize better from small numbers of examples. There are two practical difficulties with RBMs: the likelihood of the model must be approximated, and samples can only be drawn from the model approximately, by Gibbs sampling. The Neural Autoregressive Distribution Estimator (NADE) overcomes these difficulties [5]. NADE is a directed graphical model, or feed-forward neural network, initially derived as an approximation to an RBM, but then fitted as a model in its own right.
In this work we introduce the Real-valued Neural Autoregressive Density Estimator (RNADE), an extension of NADE. An autoregressive model expresses the density of a vector as an ordered product of one-dimensional distributions, each conditioned on the values of the previous dimensions in a (perhaps arbitrary) ordering. We use the parameter sharing previously introduced by NADE, combined with mixture density networks [6], an existing flexible approach to modeling real-valued distributions with neural networks. By construction, the density of a test point under RNADE is cheap to compute, unlike under RBM-based models. The neural network structure provides a flexible way to alter the mean and variance of a mixture component depending on context, potentially modeling non-linear or heteroscedastic data with fewer components than an unconstrained mixture model.
2 Background: Autoregressive models
Both NADE [5] and our RNADE model are based on the chain rule (or product rule), which factorizes any distribution over a vector of variables into a product of terms:
\[ p(\mathbf{x}) = \prod_{d=1}^{D} p(x_d \mid \mathbf{x}_{<d}), \]
where \(\mathbf{x}_{<d}\) denotes all attributes preceding \(x_d\) in a fixed arbitrary ordering of the attributes. This factorization corresponds to a Bayesian network where every variable is a parent of all variables after it. As this model assumes no conditional independences, it says nothing about the distribution in itself. However, the (perhaps arbitrary) ordering we choose will matter if the form of the conditionals is constrained. If we assume tractable parametric forms for each of the conditional distributions, then the joint distribution can be computed for any vector, and the parameters of the model can be locally fitted to a penalized maximum likelihood objective using any gradient-based optimizer.
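The factorization itself imposes no modeling assumptions; the modeling choices sit entirely in the form chosen for the one-dimensional conditionals. As a toy illustration (not from the paper), the following sketch evaluates such a product of conditionals, assuming each conditional is Gaussian with parameters given by an arbitrary function of the preceding dimensions:

```python
import numpy as np
from scipy.stats import norm

def autoregressive_log_density(x, conditional_params):
    """Evaluate log p(x) = sum_d log p(x_d | x_{<d}) for a toy model in which
    each one-dimensional conditional is Gaussian, with mean and standard
    deviation given by a user-supplied function of the preceding values."""
    log_p = 0.0
    for d in range(len(x)):
        mu, sigma = conditional_params(x[:d])  # depends only on x_{<d}
        log_p += norm.logpdf(x[d], loc=mu, scale=sigma)
    return log_p

# Toy conditionals: each mean is the previous value (0 for the first dimension).
toy = lambda prefix: (prefix[-1] if len(prefix) else 0.0, 1.0)
print(autoregressive_log_density(np.array([0.1, -0.3, 0.5]), toy))
```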
For binary data, each conditional distribution can be modeled with logistic regression, which is called a fully visible sigmoid belief network (FVSBN) [7]. Neural networks can also be used for each binary prediction task [8]. The neural autoregressive distribution estimator (NADE) also uses neural networks for each conditional, but with parameter sharing inspired by a mean-field approximation to Restricted Boltzmann Machines [5]. In detail, each conditional is given by a feed-forward neural network with one hidden layer, \(\mathbf{h}_d\):
\[ p(x_d = 1 \mid \mathbf{x}_{<d}) = \mathrm{sigm}\!\left(\mathbf{V}_{d,\cdot}\,\mathbf{h}_d + b_d\right), \qquad (1) \]
where \(\mathbf{V}_{d,\cdot}\), \(b_d\), \(\mathbf{W}_{\cdot,<d}\), and \(\mathbf{c}\) are neural network parameters, and sigm represents the logistic sigmoid function \(1/(1+e^{-x})\). The weights between the inputs and the hidden units for each neural network are tied: \(\mathbf{W}_{\cdot,<d}\) is the first \(d{-}1\) columns of a shared weight matrix \(\mathbf{W}\). This parameter sharing reduces the total number of parameters from quadratic in the number of input dimensions to linear, lessening the need for regularisation. Computing the probability of a datapoint can also be done in time linear in the dimensionality, \(O(DH)\) for \(H\) hidden units, by sharing the computation when calculating the hidden activation of each neural network (\(\mathbf{h}_d\)):
\[ \mathbf{h}_d = \mathrm{sigm}(\mathbf{a}_d), \qquad \mathbf{a}_d = \mathbf{W}_{\cdot,<d}\,\mathbf{x}_{<d} + \mathbf{c}, \qquad (2) \]
where the activations can be updated incrementally, \(\mathbf{a}_{d+1} = \mathbf{a}_d + x_d \mathbf{W}_{\cdot,d}\) with \(\mathbf{a}_1 = \mathbf{c}\).
When approximating Restricted Boltzmann Machines, the output weights \(\mathbf{V}\) in (1) were originally tied to the input weights \(\mathbf{W}\). Untying these weights gave better statistical performance on a range of tasks, with negligible extra computational cost [5].
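A minimal sketch of computing (1)-(2) for binary data with the shared weights, assuming W is H x D, V is D x H, c is the hidden bias vector and b the output biases; the running activation is updated with one column of W per dimension, so the whole datapoint costs O(DH). Variable names are ours, not the reference implementation's.

```python
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def nade_log_prob(x, W, V, b, c):
    """Log-probability of a binary vector x under a NADE-style model.
    W: (H, D) shared input-to-hidden weights, c: (H,) hidden biases,
    V: (D, H) output weights, b: (D,) output biases."""
    a = c.copy()                              # a_1 = c
    log_p = 0.0
    for d in range(len(x)):
        h = sigm(a)                           # h_d = sigm(W_{.,<d} x_{<d} + c)
        p = sigm(V[d] @ h + b[d])             # p(x_d = 1 | x_{<d})
        log_p += x[d] * np.log(p) + (1 - x[d]) * np.log(1 - p)
        a += x[d] * W[:, d]                   # a_{d+1} = a_d + x_d W_{.,d}
    return log_p
```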
NADE has recently been extended to count data [9]. The possibility of extending generic neural autoregressive models to continuous data has been mentioned [8, 10], but to our knowledge has not been previously explored. An autoregressive mixture of experts with scale mixture model experts has been developed as part of a sophisticated multi-resolution model specifically for natural images [11]. In more general work, Gaussian processes have been used to model the conditional distributions of a fully visible Bayesian network [12]. However, these 'Gaussian process networks' cannot deal with multimodal conditional distributions or with large datasets (larger datasets would currently require further approximation). In the next section we propose a more flexible and scalable approach.
3 Real-valued neural autoregressive density estimators
The original derivation of NADE suggests deriving a real-valued version from a mean-field approximation to the conditionals of a Gaussian-RBM. However, we discarded this approach because the limitations of the Gaussian-RBM are well documented [13, 14]: its isotropic conditional noise model does not give competitive density estimates. Approximating a more capable RBM model, such as the mean-covariance RBM [15] or the spike-and-slab RBM [16], might be a fruitful future direction.
The main characteristic of NADE is the tying of its input-to-hidden weights. The output layer was 'untied' from the approximation to the RBM to give the model greater flexibility. Taking this idea further, we add more parameters to NADE so that each one-dimensional conditional distribution is represented with a mixture of Gaussians instead of a Bernoulli distribution. That is, the outputs are mixture density networks [6] with a shared hidden layer, using the same parameter tying as NADE.

Thus, our Real-valued Neural Autoregressive Density-Estimator, or RNADE, model represents the probability density of a vector as:
\[ p(\mathbf{x}) = \prod_{d=1}^{D} p(x_d \mid \mathbf{x}_{<d}) \quad\text{with}\quad p(x_d \mid \mathbf{x}_{<d}) = p_{\mathcal{M}}(x_d \mid \theta_d), \qquad (3) \]
where \(p_{\mathcal{M}}\) is a mixture of Gaussians with parameters \(\theta_d\). The mixture model parameters are calculated by a neural network with all of the preceding dimensions, \(\mathbf{x}_{<d}\), as inputs. We now give the details.
RNADE computes the same hidden unit activations, \(\mathbf{a}_d\), as before using (2). As discussed by Bengio [10], as an RNADE (or a NADE) with sigmoidal units progresses across the input dimensions \(d = 1 \dots D\), its hidden units will tend to become more and more saturated, because their input is a weighted sum of an increasing number of inputs. Bengio proposed alleviating this effect by rescaling the hidden units' activation by a free factor \(\rho_d\) at each step, making the hidden unit values
\[ \mathbf{h}_d = \mathrm{sigm}\!\left(\rho_d\, \mathbf{a}_d\right). \qquad (4) \]
Learning these extra rescaling parameters worked slightly better, and all of our experiments use them.
Previous work on neural networks with real-valued outputs has found that rectified linear units can work better than sigmoidal non-linearities [17]. The corresponding hidden values for rectified linear units are:
\[ \mathbf{h}_d = \max\!\left(0,\; \rho_d\, \mathbf{a}_d\right), \qquad (5) \]
where the maximum is taken elementwise.
In preliminary experiments we found that these hidden units worked better than sigmoidal units in RNADE, and used them throughout (except for an example result with sigmoidal units in Table 2).
Finally, the mixture of Gaussians parameters for the \(d\)th conditional, \(\theta_d = \{\boldsymbol{\alpha}_d, \boldsymbol{\mu}_d, \boldsymbol{\sigma}_d\}\), are set by:
mixing fractions, \(\boldsymbol{\alpha}_d = \mathrm{softmax}\!\left(\mathbf{V}_d^{\alpha\,\top} \mathbf{h}_d + \mathbf{b}_d^{\alpha}\right)\)  (6)
component means, \(\boldsymbol{\mu}_d = \mathbf{V}_d^{\mu\,\top} \mathbf{h}_d + \mathbf{b}_d^{\mu}\)  (7)
component standard deviations, \(\boldsymbol{\sigma}_d = \exp\!\left(\mathbf{V}_d^{\sigma\,\top} \mathbf{h}_d + \mathbf{b}_d^{\sigma}\right)\)  (8)
where the free parameters \(\mathbf{V}_d^{\alpha}\), \(\mathbf{V}_d^{\mu}\), \(\mathbf{V}_d^{\sigma}\) are \(H \times K\) matrices, and \(\mathbf{b}_d^{\alpha}\), \(\mathbf{b}_d^{\mu}\), \(\mathbf{b}_d^{\sigma}\) are vectors of size \(K\), the number of mixture components. The softmax [18] ensures the mixing fractions are positive and sum to one, and the exponential ensures the standard deviations are positive.
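A minimal sketch of one step of the computation in (4)-(8), assuming H hidden units, K components per conditional, rectified-linear hidden units, and our own variable names:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conditional_params(a_d, rho_d, Va, ba, Vm, bm, Vs, bs):
    """Mixture-of-Gaussians parameters for one conditional, following (4)-(8).
    a_d: (H,) running activation, rho_d: scalar rescaling factor,
    Va/Vm/Vs: (H, K) output matrices, ba/bm/bs: (K,) output biases."""
    h_d = np.maximum(0.0, rho_d * a_d)     # rescaled rectified-linear units (5)
    alpha = softmax(Va.T @ h_d + ba)       # mixing fractions (6)
    mu = Vm.T @ h_d + bm                   # component means (7)
    sigma = np.exp(Vs.T @ h_d + bs)        # component standard deviations (8)
    return alpha, mu, sigma
```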
Fitting an RNADE can be done using gradient ascent on the model’s likelihood given a training set of examples. We used minibatch stochastic gradient ascent in all our experiments. In those RNADE models with MoG conditionals, we multiplied the gradient of each component mean by its standard deviation (for a Gaussian, Newton’s method multiplies the gradient by its variance, but empirically multiplying by the standard deviation worked better). This gradient scaling makes tight components move more slowly than broad ones, a heuristic that we found allows the use of higher learning rates.
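A minimal sketch of the gradient-scaling heuristic just described (names and values are illustrative): before the ascent step, the gradient of each component mean is multiplied by that component's standard deviation, so tight components move more slowly than broad ones.

```python
import numpy as np

def scaled_mean_step(mu, grad_mu, sigma, learning_rate):
    """Gradient-ascent step on component means with the heuristic scaling."""
    return mu + learning_rate * (grad_mu * sigma)

# Toy usage: the tight (sigma = 0.1) component takes a much smaller step.
mu = np.array([0.0, 1.0])
grad_mu = np.array([0.5, -0.2])
sigma = np.array([0.1, 2.0])
print(scaled_mean_step(mu, grad_mu, sigma, learning_rate=0.01))
```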
Variants:
Using a mixture of Gaussians to represent the conditional distributions in RNADE is an arbitrary parametric choice. Given several components, the mixture model can represent a rich set of skewed and multimodal distributions with different tail behaviors. However, other choices could be appropriate in particular circumstances. For example, work on natural images often uses scale mixtures, where components share a common mean. Conditional distributions of perceptual data are often assumed to be Laplacian
[e.g. 19]. We call our main variant with mixture of Gaussians conditionals RNADE-MoG, but also experiment with mixture of Laplace conditionals, RNADE-MoL.

4 Experiments
We compared RNADE to mixtures of Gaussians (MoG) and mixtures of factor analyzers (MFA), which are surprisingly strong baselines in some tasks [20, 21]. Given the known poor performance of discrete mixtures [4, 5], we limited our experiments to modeling continuous attributes. However, it would be easy to include both discrete and continuous variables in a NADE-like architecture.
4.1 Low-dimensional data
We first considered five UCI datasets [22], previously used to study the performance of other density estimators [23, 20]. These datasets have relatively low dimensionality, with between 10 and 32 attributes, but have hard thresholds and nonlinear dependencies that may make it difficult to fit mixtures of Gaussians or factor analyzers.
Dataset          dim   size   Gaussian   MFA   FVBN   RNADE-MoG   RNADE-MoL
Red wine          11   1599
White wine        11   4898
Parkinsons        15   5875
Ionosphere        32    351
Boston housing    10    506
Following Tang et al. [20], we eliminated discrete-valued attributes and removed one attribute from every pair with a Pearson correlation coefficient greater than 0.98. Each dimension of the data was normalized by subtracting its training-subset sample mean and dividing by its standard deviation. All results are reported on the normalized data.

As baselines we fitted full-covariance Gaussians and mixtures of factor analysers. To measure the performance of the different models, we calculated their log-likelihood on held-out test data. Because these datasets are small, we used 10 folds, with 90% of the data for training and 10% for testing.

We chose the hyperparameter values for each model by doing per-fold cross-validation, using a ninth of the training data as validation data. Once the hyperparameter values had been chosen, we trained each model using all of the training data (including the validation data) and measured its performance on the 10% of held-out test data. In order to avoid overfitting, we stopped this training after reaching a training likelihood higher than the one obtained at the best iteration (by validation likelihood) of the corresponding validation run. Early stopping is crucial to avoid overfitting the RNADE models. It also improves the results of the MFAs, but to a lesser degree.
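A sketch of the stopping protocol just described, with hypothetical training and evaluation callables: the validation run records the training log-likelihood reached at its best-validation epoch, and the final run on all of the training data stops once that threshold is exceeded.

```python
def train_with_threshold_stopping(init_model, fit_one_epoch, avg_log_lik,
                                  train_sub, valid, train_full, max_epochs=500):
    """Phase 1: train on train_sub while monitoring valid, recording the
    training log-likelihood at the best-validation epoch.
    Phase 2: retrain on train_full (train + validation) and stop as soon as
    the training log-likelihood exceeds that recorded threshold."""
    model, best_valid, threshold = init_model(), float("-inf"), float("-inf")
    for _ in range(max_epochs):
        model = fit_one_epoch(model, train_sub)
        valid_ll = avg_log_lik(model, valid)
        if valid_ll > best_valid:
            best_valid, threshold = valid_ll, avg_log_lik(model, train_sub)

    model = init_model()
    for _ in range(max_epochs):
        model = fit_one_epoch(model, train_full)
        if avg_log_lik(model, train_full) > threshold:
            break
    return model
```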
The MFA models were trained using the EM algorithm [24, 25]; the number of components and the number of factors were cross-validated. The number of factors was chosen among even values up to the data dimensionality (with the maximum number of factors, each component is effectively a full-covariance Gaussian). The number of components was also chosen among even values in a fixed range; cross-validation always selected fewer components than the largest value allowed.

RNADE-MoG and RNADE-MoL models were fitted using minibatch stochastic gradient descent, using minibatches of size 100, for 500 epochs, each epoch comprising 10 minibatches. For each experiment, the number of hidden units (50), the non-linear activation function of the hidden units (RLU), and the form of the conditionals were fixed. Three hyperparameters were cross-validated using grid-search: the number of components in each one-dimensional conditional, the weight decay (used only to regularize the input-to-hidden weights), and the learning rate, each chosen from a small grid of candidate values. Learning rates were decreased linearly to reach 0 after the last epoch.

We also trained fully visible Bayesian networks (FVBN), an autoregressive model where each one-dimensional conditional is modelled by a separate mixture density network with no parameter tying. The same cross-validation procedure and hyperparameters as for RNADE training were used. The best MDN according to validation likelihood was chosen for each one-dimensional conditional.
The results are shown in Table 1. Autoregressive methods obtained statistical performance superior to that of mixture models on all datasets. An RNADE with mixture of Gaussians conditionals was among the statistically-significant group of best models on every dataset. Unfortunately we could not reproduce the data folds used by previous work; however, our improvements are larger than those demonstrated by a deep mixture of factor analyzers over a standard MFA [20].
4.2 Natural image patches
We also measured the ability of RNADE to model small patches of natural images. Following the recent work of Zoran and Weiss [3], we use 8-by-8-pixel patches of monochrome natural images, obtained from the BSDS300 dataset [26] (Figure 1 gives examples).

Pixels in this dataset can take a finite number of brightness values, ranging from 0 to 255. Modeling discretized data using a real-valued distribution can lead to arbitrarily high density values, by placing a narrow high-density spike on each of the possible discrete values. In order to avoid this 'cheating' solution, we added noise uniformly distributed between 0 and 1 to the value of each pixel. We then divided by 256, making each pixel take a value in the range [0, 1].

In previous experiments, Zoran and Weiss [3] subtracted the mean pixel value from each patch, reducing the dimensionality of the data by one: the value of any pixel could be perfectly predicted as minus the sum of all other pixel values. However, the original study still used a mixture of full-covariance 64-dimensional Gaussians. Such a model could obtain arbitrarily high model likelihoods, so unfortunately the likelihoods reported in previous work on this dataset [3, 20] are difficult to interpret. In a preliminary experiment using RNADE, we observed that if we model the 64-dimensional data, the 64th pixel is always predicted by a very thin spike centered at its true value. The ability of RNADE to capture this spurious dependency is reassuring, but we wouldn't want our results to be dominated by it. Recent work by Zoran and Weiss [21] projects the data onto the leading 63 eigenvectors of each component when measuring the model likelihood [27]. For comparison amongst a range of methods, we advocate simply discarding the 64th (bottom-right) pixel.

We trained our model using patches drawn randomly from 180 images in the training subset of BSDS300. A validation dataset containing 1,000 random patches from the remaining 20 images in the training subset was used for early stopping when training RNADE. We measured the performance of each model by its log-likelihood on one million patches drawn randomly from the test subset, which is composed of 100 images not present in the training subset. Given the larger scale of this dataset, hyperparameters of the RNADE and MoG models were chosen manually using the performance of preliminary runs on the validation data, rather than by an extensive search.
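A minimal sketch of the patch preprocessing described earlier in this subsection, under the assumption of 8-bit pixel values: add uniform dequantization noise, rescale to [0, 1], and drop the bottom-right pixel of each 8-by-8 patch.

```python
import numpy as np

def preprocess_patches(patches, seed=0):
    """patches: integer array of shape (N, 8, 8) holding 8-bit brightness values.
    Returns an (N, 63) float array: dequantized, rescaled, last pixel dropped."""
    rng = np.random.default_rng(seed)
    x = patches.astype(np.float64) + rng.uniform(0.0, 1.0, size=patches.shape)
    x /= 256.0                              # map to the range [0, 1)
    return x.reshape(len(x), -1)[:, :-1]    # discard the 64th (bottom-right) pixel
```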
The RNADE model had 512 rectified-linear hidden units and a mixture of 20 one-dimensional Gaussian components per output. Training was done by minibatch gradient descent, with 25 datapoints per minibatch, for a total of 200 epochs, each comprising 1,000 minibatches. The learning rate was scheduled to start at 0.001 and decrease linearly to reach 0 after the last epoch. Gradient momentum with momentum factor 0.9 was used, but only from the beginning of the second epoch. A weight decay rate of 0.001 was applied to the input-to-hidden weight matrix only. Again, we found that multiplying the gradient of the mean output parameters by the standard deviation improved results. RNADE training was early-stopped but showed no signs of overfitting. We produced a further run with 1024 hidden units for 400 epochs, with still no signs of overfitting; even larger models might perform better.
The MoG model was trained using minibatch EM for 1,000 iterations. At each iteration 20,000 randomly sampled datapoints were used in an EM update. A step was taken from the previous mixture model towards the parameters resulting from the M-step: \(\theta_{t+1} = (1-\eta)\,\theta_t + \eta\,\hat{\theta}_{\mathrm{EM}}\), where the step size \(\eta\) was scheduled to start at 0.1 and linearly decreased to reach 0 after the last update. The training of the MoG was also early-stopped and also showed no signs of overfitting.
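A minimal sketch of the damped minibatch-EM update just described; the parameter containers and the M-step itself are assumed to come from the surrounding EM code.

```python
import numpy as np

def damped_em_update(old_params, m_step_params, step_size):
    """Move the mixture parameters part of the way from their previous values
    towards the values produced by the M-step on the current minibatch."""
    return {name: (1.0 - step_size) * old_params[name]
                  + step_size * m_step_params[name]
            for name in old_params}

# Hypothetical schedule: the step size starts at 0.1 and decays linearly to 0.
step_sizes = np.linspace(0.1, 0.0, num=1000)
```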
The results are shown in Table 2. We compare RNADE with a mixture of Gaussians model trained on 63 pixels, and with the MoG trained by Zoran and Weiss (downloaded from Daniel Zoran's website), from which we removed the 64th row and column of each covariance matrix. The best RNADE test log-likelihood is, on average, 0.7 nats per patch lower than that of Zoran and Weiss's MoG, which was trained with a different procedure than our mixture of Gaussians.
Model                                  Training LogL   Test LogL
MoG (Z&W)                                               152.8
MoG                                                     144.7
MoG                                                     150.4
MoG                                                     150.4
RNADE-MoG                                               149.1
RNADE-MoG                                               151.0
RNADE-MoG                                               149.7
RNADE-MoL                                               141.5
RNADE-MoL                                               141.1
RNADE-MoL                                               141.5
RNADE-MoG (sigmoid h. units)                            146.4
RNADE-MoG (1024 units, 400 epochs)                      152.1
Table 2: Average per-example log-likelihood of several mixture of Gaussians and RNADE models, with mixture of Gaussians (MoG) or mixture of Laplace (MoL) conditionals, on 8-by-8 patches of natural images. These results are measured in nats and were calculated using one million test patches. Standard errors due to the finite test sample size are small in every case. K gives the number of one-dimensional components for each conditional in RNADE, and the number of full-covariance components for MoG.

Figure 1 shows a few examples from the test set, and samples from the MoG and RNADE models. Some of the samples from RNADE are unnaturally noisy, with pixel values outside the legal range (see the fourth sample from the right in Figure 1). If we constrain the pixel values to a unit range, by rejection sampling or otherwise, these artifacts go away. Limiting the output range of the model would also improve test likelihood scores slightly, but not by much: log-likelihood does not strongly penalize models for putting a small fraction of probability mass on 'junk' images.
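A minimal sketch of constraining samples to the legal pixel range by rejection, assuming an ancestral sampler for the model (the sampler interface is hypothetical): draw whole patches and keep the first one whose pixels all lie in [0, 1].

```python
import numpy as np

def sample_in_unit_range(ancestral_sample, max_tries=100):
    """ancestral_sample() draws one patch by sampling x_1, then x_2 | x_1, and
    so on through the autoregressive model; reject until all values are legal."""
    for _ in range(max_tries):
        x = ancestral_sample()
        if np.all((x >= 0.0) & (x <= 1.0)):
            return x
    raise RuntimeError("no in-range sample found within the allowed tries")
```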
All of the results in this section were obtained by fitting the pixels in a raster-scan order. Perhaps surprisingly, but consistent with previous results on NADE [5] and by Frey [28], randomizing the order of the pixels made little difference to these results. The difference in performance was comparable to the differences between multiple runs with the same pixel ordering.
4.3 Speech acoustics
We also measured the ability of RNADE to model small patches of speech spectrograms, extracted from the TIMIT dataset [29]. The patches contained 11 frames of 20 filter-banks plus energy, totaling 231 dimensions per datapoint. This filter-bank encoding is common in speech recognition, and is better for visualization than the more frequently used MFCC features. A good generative model of speech could be used, for example, in denoising or speech-detection tasks.
We fitted the models using the standard TIMIT training subset, and compared RNADE with a MoG by measuring their log-likelihood on the complete TIMIT core-test dataset.

The RNADE model had 1024 rectified-linear hidden units and a mixture of 20 one-dimensional Gaussian components per output. Given the larger scale of this dataset, hyperparameter choices were again made manually using validation data, and the same minibatch training procedures for RNADE and MoG were used as for natural image patches.

The results are shown in Table 3. RNADE obtained, on average, 10 nats more per test example than a mixture of Gaussians. Figure 2 shows a few examples from the test set, and samples from the MoG and RNADE models. In contrast with the log-likelihood measure, there are no marked differences between the samples from each model. Both sets of samples look like blurred spectrograms, but RNADE seems to capture sharper formant structures (peaks of energy at the lower frequency bands characteristic of vowel sounds).
Model  Training LogL  Test LogL 

MoG          111.6   110.4
MoG          113.4   112.0
MoG          113.9   112.5
MoG          114.1   112.5
RNADE-MoG    125.9   123.9
RNADE-MoG    126.7   124.5
RNADE-MoL    120.3   118.0
RNADE-MoL    122.2   119.8
5 Discussion
Mixture Density Networks (MDNs) [6] are a flexible conditional model of probability densities that can capture skewed, heavy-tailed, and multimodal distributions. In principle, MDNs can be applied to multidimensional data. However, the number of parameters that the network must output grows quadratically with the number of targets, unless the targets are assumed independent. RNADE exploits an autoregressive framework to apply practical, one-dimensional MDNs to unsupervised density estimation.

To specify an RNADE we needed to set the parametric form for the output distribution of each MDN. A sufficiently large mixture of Gaussians can closely represent any density, but it is hard to learn the conditional densities found in some problems with this representation. The marginal for the brightness of a pixel in natural image patches is heavy-tailed, closer to a Laplace distribution than a Gaussian. Therefore, RNADE-MoG must fit its predictions of the first pixel, \(x_1\), with several Gaussians of different widths that coincidentally have zero mean. This solution can be difficult to fit, and RNADE with mixture of Laplace outputs predicted the first pixel of image patches better than with a mixture of Gaussians (Figure 3b and c). However, later pixels were predicted better with Gaussian outputs (Figure 3f); the mixture of Laplace model is not suitable for predicting with large contexts. For image patches, a scale mixture can work well [11], and could be explored within our framework. However, for general applications, scale mixtures within RNADE would be too restrictive (e.g., each conditional would be zero-mean and unimodal). More flexible one-dimensional forms may help RNADE to generalize better for different context sizes and across a range of applications.

One of the main drawbacks of RNADE, and of neural networks in general, is the need to decide the values of several training hyperparameters. The gradient-descent learning rate can be adjusted automatically using, for example, the techniques developed by Schaul et al. [30]. Also, methods for choosing hyperparameters more efficiently than grid search have recently been developed [31, 32]. These, and several other recent improvements in the neural network field, like dropout [33], should be directly applicable to RNADE, and might obtain even better performance than shown in this work. RNADE makes it relatively straightforward to translate advances in the neural-network field into better density estimators, or at least into new estimators with different inductive biases.

In summary, we have presented RNADE, a novel 'black-box' density estimator. Both likelihood computation time and the number of parameters scale linearly with the dataset dimensionality. Generalization across a range of tasks, representing arbitrary feature vectors, image patches, and auditory spectrograms, was excellent. Performance on image patches was close to a recently reported state-of-the-art mixture model [3], and RNADE outperformed mixture models on all other datasets considered.
Acknowledgments
We thank XXX, YYY and ZZZ for useful discussions.
Appendix A Implementation details
In this appendix we provide pseudocode for the calculation of densities and learning gradients. No new material is presented. A Python implementation of RNADE is available from http://www.benignouria.com/en/research/RNADE.
In Algorithm 1 we detail the pseudocode for calculating the density of a datapoint \(\mathbf{x}\) under an RNADE with mixture of Gaussians conditionals. The model has parameters \(\boldsymbol{\rho}\), \(\mathbf{W}\), \(\mathbf{c}\), and \(\{\mathbf{V}_d^{\alpha}, \mathbf{b}_d^{\alpha}, \mathbf{V}_d^{\mu}, \mathbf{b}_d^{\mu}, \mathbf{V}_d^{\sigma}, \mathbf{b}_d^{\sigma}\}\), as defined in Section 3.
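The pseudocode itself did not survive extraction, so the following Python sketch reconstructs the computation of Algorithm 1 directly from equations (2)-(8); array shapes and names are our own, not those of the reference implementation.

```python
import numpy as np

def log_mog_density(x_d, alpha, mu, sigma):
    """Log of a one-dimensional mixture-of-Gaussians density at x_d."""
    comp = (np.log(alpha) - 0.5 * np.log(2.0 * np.pi) - np.log(sigma)
            - 0.5 * ((x_d - mu) / sigma) ** 2)
    return np.logaddexp.reduce(comp)

def rnade_log_density(x, W, c, rho, Va, ba, Vm, bm, Vs, bs):
    """Log-density of x under an RNADE with mixture-of-Gaussians conditionals.
    x: (D,), W: (H, D), c: (H,), rho: (D,),
    Va/Vm/Vs: (D, H, K), ba/bm/bs: (D, K)."""
    a = c.copy()                                    # a_1 = c
    log_p = 0.0
    for d in range(len(x)):
        h = np.maximum(0.0, rho[d] * a)             # rescaled rectified-linear units
        z_alpha = Va[d].T @ h + ba[d]
        alpha = np.exp(z_alpha - np.logaddexp.reduce(z_alpha))  # softmax
        mu = Vm[d].T @ h + bm[d]                    # component means
        sigma = np.exp(Vs[d].T @ h + bs[d])         # component standard deviations
        log_p += log_mog_density(x[d], alpha, mu, sigma)        # log p(x_d | x_{<d})
        a += x[d] * W[:, d]                         # a_{d+1} = a_d + x_d W_{.,d}
    return log_p
```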
Training of an RNADE model can be done using a gradient-ascent algorithm on the log-likelihood of the model given the training data. Gradients can be calculated using automatic differentiation libraries (e.g. Theano [34]). However, we found our manual implementation to work faster in practice, possibly because our recomputation of the terms in the second for-loop of Algorithm 2 is more cache-friendly than storing them during the first loop.

Here we show the derivation of the gradients for each parameter of an RNADE model with MoG conditionals. We write \(\mathbf{z}_d^{\alpha} = \mathbf{V}_d^{\alpha\,\top}\mathbf{h}_d + \mathbf{b}_d^{\alpha}\), \(\mathbf{z}_d^{\mu} = \mathbf{V}_d^{\mu\,\top}\mathbf{h}_d + \mathbf{b}_d^{\mu}\), and \(\mathbf{z}_d^{\sigma} = \mathbf{V}_d^{\sigma\,\top}\mathbf{h}_d + \mathbf{b}_d^{\sigma}\) for the quantities inside the non-linearities of (6)-(8). Following [6], we define \(\phi_i(x_d)\) as the density of \(x_d\) under the \(i\)th component of the conditional:
\[ \phi_i(x_d) = \frac{1}{\sigma_{d,i}\sqrt{2\pi}} \exp\!\left\{ -\frac{(x_d - \mu_{d,i})^2}{2\sigma_{d,i}^2} \right\}, \qquad (9) \]
and \(\pi_i(x_d)\) as the "responsibility" of the \(i\)th component for \(x_d\):
\[ \pi_i(x_d) = \frac{\alpha_{d,i}\,\phi_i(x_d)}{\sum_{j=1}^{K} \alpha_{d,j}\,\phi_j(x_d)}. \qquad (10) \]
It is easy to find, just by taking derivatives of the log-density with respect to these quantities, that:
\[ \frac{\partial \log p(\mathbf{x})}{\partial z^{\alpha}_{d,i}} = \pi_i(x_d) - \alpha_{d,i} \qquad (11) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial z^{\mu}_{d,i}} = \pi_i(x_d)\, \frac{x_d - \mu_{d,i}}{\sigma_{d,i}^2} \qquad (12) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial z^{\sigma}_{d,i}} = \pi_i(x_d) \left( \frac{(x_d - \mu_{d,i})^2}{\sigma_{d,i}^2} - 1 \right) \qquad (13) \]
Using the chain rule, we can calculate the derivatives with respect to the parameters of the output layer:
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{b}^{\alpha}_{d}} = \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\alpha}_{d}} \qquad (14) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{V}^{\alpha}_{d}} = \mathbf{h}_d \left( \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\alpha}_{d}} \right)^{\!\top} \qquad (15) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{b}^{\mu}_{d}} = \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\mu}_{d}} \qquad (16) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{V}^{\mu}_{d}} = \mathbf{h}_d \left( \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\mu}_{d}} \right)^{\!\top} \qquad (17) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{b}^{\sigma}_{d}} = \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\sigma}_{d}} \qquad (18) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{V}^{\sigma}_{d}} = \mathbf{h}_d \left( \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\sigma}_{d}} \right)^{\!\top} \qquad (19) \]
By "backpropagating" these gradients we can calculate the partial derivatives with respect to the output of the hidden units, \(\mathbf{h}_d\), and their rectifier inputs, \(\boldsymbol{\psi}_d = \rho_d \mathbf{a}_d\):
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{h}_d} = \mathbf{V}^{\alpha}_{d}\, \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\alpha}_{d}} + \mathbf{V}^{\mu}_{d}\, \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\mu}_{d}} + \mathbf{V}^{\sigma}_{d}\, \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{z}^{\sigma}_{d}} \qquad (20) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \boldsymbol{\psi}_d} = \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{h}_d} \odot \mathbb{1}(\boldsymbol{\psi}_d > 0), \qquad (21) \]
where \(\odot\) denotes elementwise multiplication and the indicator is the derivative of the rectified-linear non-linearity (5),
and calculate the partial derivatives with respect to all other parameters in RNADE:
\[ \frac{\partial \log p(\mathbf{x})}{\partial \rho_d} = \left( \frac{\partial \log p(\mathbf{x})}{\partial \boldsymbol{\psi}_d} \right)^{\!\top} \mathbf{a}_d \qquad (22) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{a}_D} = \rho_D\, \frac{\partial \log p(\mathbf{x})}{\partial \boldsymbol{\psi}_D} \qquad (23) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{a}_d} = \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{a}_{d+1}} + \rho_d\, \frac{\partial \log p(\mathbf{x})}{\partial \boldsymbol{\psi}_d} \qquad (24) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{W}_{\cdot,d}} = x_d\, \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{a}_{d+1}} \qquad (25) \]
\[ \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{c}} = \frac{\partial \log p(\mathbf{x})}{\partial \mathbf{a}_{1}} \qquad (26) \]
Note that the gradients with respect to the activations \(\mathbf{a}_d\) are calculated recursively, due to (24), starting at \(d = D\) and progressing down to \(d = 1\).
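As a sanity check on a manual implementation of (11)-(26), one can compare its output against central finite differences of the log-density; a minimal sketch (our own, not part of the paper's pseudocode):

```python
import numpy as np

def finite_difference_grad(log_density, theta, eps=1e-5):
    """Central-difference gradient of a scalar function at theta, element by
    element; useful for checking hand-derived gradients on small models."""
    grad = np.zeros_like(theta, dtype=np.float64)
    for i in range(theta.size):
        delta = np.zeros_like(theta, dtype=np.float64)
        delta.flat[i] = eps
        grad.flat[i] = (log_density(theta + delta)
                        - log_density(theta - delta)) / (2.0 * eps)
    return grad
```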
References
[1] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
[2] T. Cacoullos. Estimation of a multivariate density. Annals of the Institute of Statistical Mathematics, 18(1):179–189, 1966.
[3] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In International Conference on Computer Vision, pages 479–486. IEEE, 2011.
[4] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872–879. Omnipress, 2008.
[5] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. Journal of Machine Learning Research W&CP, 15:29–37, 2011.
[6] C. M. Bishop. Mixture density networks. Technical Report NCRG 4288, Neural Computing Research Group, Aston University, Birmingham, 1994.
[7] B. J. Frey, G. E. Hinton, and P. Dayan. Does the wake-sleep algorithm produce good density estimators? In Advances in Neural Information Processing Systems 8, pages 661–670. MIT Press, 1996.
[8] Y. Bengio and S. Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. Advances in Neural Information Processing Systems, 12:400–406, 2000.
[9] H. Larochelle and S. Lauly. A neural autoregressive topic model. In Advances in Neural Information Processing Systems 25, 2012.
[10] Y. Bengio. Discussion of "the neural autoregressive distribution estimator". Journal of Machine Learning Research W&CP, 15:38–39, 2011.
[11] L. Theis, R. Hosseini, and M. Bethge. Mixtures of conditional Gaussian scale mixtures applied to multiscale image representations. PLoS ONE, 7(7), 2012. doi: 10.1371/journal.pone.0039857.
[12] N. Friedman and I. Nachman. Gaussian process networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 211–219. Morgan Kaufmann Publishers Inc., 2000.
[13] I. Murray and R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems 21, pages 1137–1144, 2009.
[14] L. Theis, S. Gerwinn, F. Sinz, and M. Bethge. In all likelihood, deep belief is not enough. Journal of Machine Learning Research, 12:3071–3096, 2011.
[15] M. A. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In Computer Vision and Pattern Recognition, pages 2551–2558. IEEE, 2010.
[16] A. Courville, J. Bergstra, and Y. Bengio. A spike and slab restricted Boltzmann machine. Journal of Machine Learning Research W&CP, 15, 2011.
[17] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, pages 807–814. Omnipress, 2010.
[18] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing: algorithms, architectures and applications, pages 227–236. Springer-Verlag, 1989.
[19] T. Robinson. SHORTEN: simple lossless and near-lossless waveform compression. Technical Report CUED/F-INFENG/TR.156, Engineering Department, Cambridge University, 1994.
[20] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep mixtures of factor analysers. In Proceedings of the 29th International Conference on Machine Learning, pages 505–512. Omnipress, 2012.
[21] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. Advances in Neural Information Processing Systems, 25:1745–1753, 2012.
[22] K. Bache and M. Lichman. UCI machine learning repository, 2013. http://archive.ics.uci.edu/ml.
[23] R. Silva, C. Blundell, and Y. W. Teh. Mixed cumulative distribution networks. Journal of Machine Learning Research W&CP, 15:670–678, 2011.
[24] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996.
[25] J. Verbeek. Mixture of factor analyzers Matlab implementation, 2005. http://lear.inrialpes.fr/~verbeek/code/.
[26] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In International Conference on Computer Vision, volume 2, pages 416–423. IEEE, July 2001.
[27] D. Zoran. Personal communication, 2013.
[28] B. Frey. Graphical models for machine learning and digital communication. MIT Press, 1998.
[29] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, N. L. Dahlgren, and V. Zue. TIMIT acoustic-phonetic continuous speech corpus. Linguistic Data Consortium, 10(5), 1993.
[30] T. Schaul, S. Zhang, and Y. LeCun. No More Pesky Learning Rates. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[31] J. Bergstra and Y. Bengio. Random search for hyperparameter optimization. Journal of Machine Learning Research, 13:281–305, 2012.
[32] J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25, pages 2960–2968, 2012.
[33] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[34] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.