
RBM-Flow and D-Flow: Invertible Flows with Discrete Energy Base Spaces

12/24/2020
by   Daniel O'Connor, et al.

Efficient sampling of complex data distributions can be achieved using trained invertible flows (IF), where the model distribution is generated by pushing a simple base distribution through multiple non-linear bijective transformations. However, the iterative nature of the transformations in IFs can limit the approximation to the target distribution. In this paper we seek to mitigate this by implementing RBM-Flow, an IF model whose base distribution is a Restricted Boltzmann Machine (RBM) with a continuous smoothing applied. We show that by using RBM-Flow we are able to improve the quality of samples generated, quantified by the Inception Scores (IS) and Frechet Inception Distance (FID), over baseline models with the same IF transformations, but with less expressive base distributions. Furthermore, we also obtain D-Flow, an IF model with uncorrelated discrete latent variables. We show that D-Flow achieves similar likelihoods and FID/IS scores to those of a typical IF with Gaussian base variables, but with the additional benefit that global features are meaningfully encoded as discrete labels in the latent space.


1 Introduction

Using generative models to efficiently and accurately represent high dimensional data has been widely explored with generative adversarial networks (GANs) [goodfellow_generative_2014], variational auto-encoders (VAEs) [kingma_auto-encoding_2014], energy-based models (EBM) [du_implicit_2019, lecun2006tutorial], diffusion models [sohl-dickstein_deep_2015, ho_denoising_2020], autoregressive models [pmlr-v15-larochelle11a, pmlr-v48-oord16, chen_pixelsnail_2017, parmar_image_2018], non-autoregressive invertible flows [dinh_nice_2015, dinh_density_2017, kingma_glow_2018, ho_flow_2019], and continuous normalizing flows [grathwohl_ffjord_2018, chen_neural_2019]. Each approach presents unique advantages and challenges, depending on the type of dataset or application.

Non-autoregressive invertible flows (IFs) are a promising class of models that allow for exact likelihood training (unlike VAEs), can be efficiently sampled from (unlike autoregressive models), and do not require a discriminator network (unlike GANs). They achieve this by implementing tractable, invertible, non-linear transformations whose Jacobian determinants can be computed efficiently using partitioned bijective functions. However, they have so far failed to reach the state-of-the-art density estimation benchmarks set by some of these other models. This issue can be attributed to the IF procedure resembling sequential local refinements rather than a global adjustment of the model distribution (see a visual demonstration of this in Ref. [grathwohl_ffjord_2018]).

Instead of looking to improve the expressiveness of the bijective transformations in IF models, in this paper we look at improving the expressiveness of the base distribution of the model. Typically, a Gaussian distribution with zero mean and unit variance is used as the base distribution in IF models; here we instead draw on the strengths of established EBMs in a new framework, which we call EBM-Flow. Our model uses an EBM prior to model global features, and then uses an IF for sequential refinement.

Joint adversarial training of IFs and EBMs was previously explored in Ref. [gao_flow_2020] in order to utilise the fact that both EBMs and IFs can be trained via maximum-likelihood estimation, and that EBMs do not make any assumption on the target distribution, which is modeled by a global scalar function using a relatively small number of parameters. However, the training of continuous EBMs requires expensive Markov Chain Monte Carlo (MCMC) techniques such as Hamiltonian Monte Carlo or Langevin dynamics [girolami_riemann_2011, song_chun_zhu_grade_1998].

Discrete EBMs such as restricted Boltzmann machines (RBMs) [ackley1985learning] can be trained more efficiently by taking advantage of techniques such as persistent contrastive divergence (PCD) with block Gibbs sampling, which allows for faster and more efficient MCMC mixing [tieleman_training_2008]. They are therefore less computationally expensive to train and scale than continuous EBMs. Furthermore, discrete EBMs can possibly take advantage of emerging quantum technologies that can simulate and speed up the training of BMs with, for example, quantum annealers [vinci_path_2019].

When integrating discrete EBMs with IF models, one could use IFs that operate with discrete variables [hoogeboom_integer_2019, tran_discrete_2019], but their density estimation benchmarks are notably worse than those of IF models that operate in the continuous domain. Therefore, in our specific implementation, we perform a continuous smoothing of a discrete EBM to obtain the base distribution, followed by a custom IF model (similar to the FLOW++ [ho_flow_2019] architecture, and outlined in Appendix A) for sequential refinement. Specifically, we use an RBM as our discrete EBM due to the efficient mixing obtained from PCD with block Gibbs sampling. There have been several attempts in the literature to smooth RBMs to model continuous variables [khoshaman_gumbolt_2018, chu_restricted_2018]. We found the most effective technique to be the Gaussian smoothing introduced early on in Ref. [zhang_continuous_2012], and used more recently in Ref. [hartnett_probability_2020]. The advantage of this technique is that it allows for unbiased log-likelihood maximization of the smoothed RBM, since inference from the continuous to the discrete latent variables is exact. Other properties of the model, such as efficient sampling and density estimation, are also retained. This yields the main model tested in this paper: RBM-Flow.

To study the performance of RBM-Flow and the advantages of using the smoothed RBM as the base distribution for the flow, we perform several ablation experiments in which we keep the same structure for the IF but replace the smoothed RBM with less expressive base distributions. Among all the models considered, RBM-Flow is unique in that it is the only model whose base distribution is capable of modeling multi-modal distributions. We thus show, both visually and quantitatively by evaluating the Inception Score (IS) and Frechet Inception Distance (FID), that RBM-Flow generates samples of better quality than the other baseline models when trained on CIFAR10, and does so with significantly lower training time.

We also focus on a particularly interesting ablation of the RBM-Flow model, D-Flow, which is obtained by simply setting all the couplings of the base RBM to zero. It turns out that the corresponding continuous base model of D-Flow is a product of mixtures of Gaussians whose centers are given by the values of a set of discrete latent variables. D-Flow can thus be seen as an implementation of an IF with discrete latent variables, generalizing previous work done within the VAE framework [khoshaman_gumbolt_2018, vahdat_dvae_2018, sadeghi_pixelvae_2019, khoshaman_quantum_2018, rolfe_discrete_2017]. As far as we are aware, all the approaches involving VAEs require smoothing techniques that result in training with respect to a biased loss function which is not guaranteed to be a lower bound to the likelihood [jang_categorical_2017, maddison_concrete_2017]. The advantage of the D-Flow implementation is that training is performed by maximization of the exact likelihood of a continuous model, which is connected to a discrete latent model via exact inference. As we show, D-Flow is also able to encode global features when trained on the CIFAR10 dataset, while achieving a log-likelihood similar to that of a standard IF with a Gaussian base model. Generative models with discrete latent variables are appealing for various applications, including scenarios in which unsupervised or semi-supervised learning is used to encode meaningful global features of the dataset into a set of discrete labels.

In this paper, we outline the background to RBM-Flow in section 2, before defining EBM-Flows and their construction in section 3. Experimental analysis is conducted using ablative tests in section 4, where we compare the RBM, Bernoulli, independent Gaussian, and multivariate Gaussian base distributions. Finally, we conclude in section 5.

2 Background

2.1 Invertible Flow Models

The foundation of non-autoregressive invertible flow (IF) models is learning a differentiable, invertible, non-linear bijective function $z = f(x)$ (i.e. the flow), such that one can map a base probability distribution $p_Z(z)$ to the model distribution $p_X(x)$ using a change of variables:

p_X(x) = p_Z\big(f(x)\big)\,\left|\det \frac{\partial f(x)}{\partial x}\right|    (1)

The usual choice is to have the base variables independently distributed according to a product of Gaussian distributions with zero mean and unit variance. IF models can then be trained by maximizing the log-likelihood of data samples passed through the trainable flow, $z = f(x)$, under the Gaussian prior. The key challenge is to provide an explicit parameterization of the flow that is powerful and expressive, and that also allows for efficient computation of the determinant of the Jacobian of $f$. These two properties are necessary to enable IFs to accurately model complex data distributions, and to be computationally scalable to high-dimensional datasets.
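As a concrete illustration of this training objective, the following minimal sketch evaluates the change-of-variables log-likelihood of Eq. 1. The functions flow_forward, flow_log_det_jac and log_p_base are hypothetical placeholders for a trained flow and its base density, not part of the original implementation.

```python
import numpy as np

def flow_log_likelihood(x, flow_forward, flow_log_det_jac, log_p_base):
    # log p_X(x) = log p_Z(f(x)) + log|det df/dx|   (change of variables, Eq. 1)
    z = flow_forward(x)
    return log_p_base(z) + flow_log_det_jac(x)

# Toy check with the identity flow and a standard Normal base distribution:
log_p_base = lambda z: -0.5 * np.sum(z**2, axis=-1) - 0.5 * z.shape[-1] * np.log(2.0 * np.pi)
x = np.random.randn(4, 3)
print(flow_log_likelihood(x, lambda v: v, lambda v: np.zeros(len(v)), log_p_base))
```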

The first step towards building a flow with the required properties is to add depth to the model through a chain of simpler flows called affine coupling layers:

y_{1:d} = x_{1:d}, \qquad y_{d+1:D} = x_{d+1:D} \odot \exp\!\big(s(x_{1:d})\big) + t(x_{1:d})    (2)

Notice that the affine coupling layer is the identity transformation over one subset of the partitioned variables, such that it can be used to condition the transformation of the other subset, and therefore retain a tractable Jacobian. This requires the flow to have a minimum of 4 coupling layers interleaved with permutations and/or 1x1 invertible convolutions [kingma_glow_2018] in order to fully transform a distribution. In general, recent state-of-the-art flows will also include invertible normalization layers and implement a multi-scale architecture via appropriate squeeze and reshape transformations [kingma_glow_2018, dinh_density_2017, ho_flow_2019].
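A minimal sketch of such a coupling layer is shown below, assuming the standard Real NVP form of Eq. 2; scale_net and shift_net stand in for the conditioning neural networks and are not the networks used in the paper.

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net):
    # Affine coupling (Eq. 2): the first half of the variables is left unchanged
    # and conditions an elementwise affine map of the second half.
    x1, x2 = np.split(x, 2, axis=-1)
    log_s, t = scale_net(x1), shift_net(x1)
    y2 = x2 * np.exp(log_s) + t
    log_det = np.sum(log_s, axis=-1)   # tractable log|det J| of the transformation
    return np.concatenate([x1, y2], axis=-1), log_det
```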

There are two more key technical advancements that are worth discussing with respect to IFs. The first one is the implementation of variational dequantization. Dequantization is an important procedure when training continuous probability models on discrete datasets, such as natural images stored with 8 or 16 bit precision. In fact, the log-likelihood of a continuous model on discrete data is unbounded, causing instabilities and divergences during training [uria_rnade_2013]. The standard technique to obtain a bounded loss function is to add uniform random noise to the training data, $x \leftarrow x + u$ with $u \sim \mathcal{U}[0,1)^D$. Variational dequantization improves this technique by using a trainable noise distribution, $u \sim q(u \mid x)$. The corresponding loss function can be seen as a (tighter) variational lower bound to the log-likelihood of the model on the original dataset. As shown in Ref. [ho_flow_2019], this greatly improves regularization and generalization of the model.
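The standard (non-variational) version of this procedure is a one-liner; the sketch below assumes 8-bit integer pixel inputs and is only meant to illustrate the idea.

```python
import numpy as np

def uniform_dequantize(x_uint8):
    # Add uniform noise u ~ U[0,1)^D to integer-valued pixels so that the continuous
    # model is trained on a bounded objective (standard, non-variational dequantization).
    return x_uint8.astype(np.float64) + np.random.rand(*x_uint8.shape)
```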

The second improvement is the use of more expressive coupling layers. Much of the previous work has exploited simple yet effective affine transformations (Eq. 2) [dinh_nice_2015, dinh_density_2017, kingma_glow_2018], but the state of the art held by Flow++ [ho_flow_2019] uses parameterized cumulative distribution functions for a mixture of logistics (MixLogCDF):

x \mapsto \sigma^{-1}\!\big( \mathrm{MixLogCDF}(x;\, \pi, \mu, s) \big) \cdot \exp(a) + b    (3)
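A minimal sketch of the forward MixLogCDF transform under this assumed Flow++ form is given below; the parameters pi, mu, log_s, a and b would be produced by the conditioning network and are placeholders here.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def mixlogcdf_forward(x, pi, mu, log_s, a, b, eps=1e-12):
    # Elementwise MixLogCDF coupling (form assumed from Flow++): pass x through the CDF
    # of a mixture of logistics, then through an inverse sigmoid and an affine transform.
    cdf = np.sum(pi * sigmoid((x[..., None] - mu) / np.exp(log_s)), axis=-1)
    cdf = np.clip(cdf, eps, 1.0 - eps)
    return (np.log(cdf) - np.log1p(-cdf)) * np.exp(a) + b
```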

With these advancements, IFs have recently been scaled to generate high-resolution natural images that compete with Generative Adversarial Networks (GANs) in visual quality. However, despite providing fast generation, as GANs do and unlike autoregressive models, IFs require a large number of parameters and training is computationally expensive.

We conclude this introduction to IFs by pointing out that IFs are typically discussed in the context of generative models with latent variables. IFs should more appropriately be considered fully visible models, because the relationship between $x$ and $z$ is deterministic, not probabilistic, and inference is not necessary.

2.2 Energy Based Models

Energy based models (EBM) are defined by a scalar energy function, $E(x)$, which represents a non-normalized log-probability for the configuration $x$:

p(x) = \frac{e^{-E(x)}}{Z}, \qquad Z = \int e^{-E(x)}\, dx    (4)

The key observation for training energy models is that evaluation of the normalization constant (partition function) is not necessary. Gradients can be computed as follows:

\nabla_\theta \log p_\theta(x) = -\nabla_\theta E_\theta(x) + \mathbb{E}_{x' \sim p_\theta}\big[ \nabla_\theta E_\theta(x') \big]    (5)

where the second term (the negative phase) is the expectation of the gradient of the energy function evaluated on samples $x'$ drawn from the model. However, when modelling high-dimensional distributions the negative phase becomes more difficult to evaluate: the model distribution can develop sharp modes, which makes evaluation computationally expensive due to the long mixing times of the MCMC methods employed.
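A minimal sketch of this positive/negative phase estimator is shown below, using a toy one-parameter quadratic energy purely for illustration; the energy and its gradient are assumptions, not the models used in the paper.

```python
import numpy as np

def grad_energy_wrt_theta(x):
    # dE/dtheta for a toy energy E_theta(x) = 0.5 * theta * x**2 (illustrative only)
    return 0.5 * x**2

def loglik_gradient(data_batch, model_samples):
    # Eq. 5: positive phase from data samples, negative phase from model samples.
    positive = grad_energy_wrt_theta(data_batch).mean()
    negative = grad_energy_wrt_theta(model_samples).mean()
    return -positive + negative
```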

Nonetheless, EBMs are very appealing for their mathematical elegance and ability to model complex probability distributions with global features using a relatively small number of parameters. Modern implementations of EBMs [du_implicit_2019, gao_flow_2020, pmlr-v100-du20a] have achieved impressive results on large scale datasets by using carefully designed deep neural networks to regularize the energy function and improve mixing of MCMC simulations, but heavily use expensive techniques such as Hamiltonian Monte Carlo and Langevin dynamics [girolami_riemann_2011, song_chun_zhu_grade_1998].

2.3 Boltzmann Machines

Discrete EBMs are appealing due to their likeness to physical systems in nature. Boltzmann Machines (BMs) are a class of discrete EBMs in which the space of configurations is composed of an ensemble of spins $s \in \{-1,+1\}^N$, with a quadratic energy functional:

E(s) = -\tfrac{1}{2}\, s^{T} W s - b^{T} s    (6)

The training of large BMs is still computationally expensive, but it is significantly more efficient for a limiting case called the restricted Boltzmann machine (RBM), in which the BM is restricted to a bipartite graph of hidden units connected to the visible units from which samples are taken. With RBMs we can scale to larger models thanks to the availability of less computationally expensive MCMC techniques such as contrastive divergence (CD) [hinton_fast_2006] and persistent CD (PCD) [tieleman_training_2008], coupled with block Gibbs sampling.
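The sketch below illustrates one block Gibbs sweep and a PCD gradient estimate for a binary RBM; it assumes {0,1} units and the standard sigmoid conditionals, and is not the paper's implementation.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def block_gibbs_step(v, W, b_v, b_h, rng):
    # One block Gibbs sweep: sample all hidden units given the visibles,
    # then all visible units given the hiddens.
    h = (rng.random((v.shape[0], W.shape[1])) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(v.shape) < sigmoid(h @ W.T + b_v)).astype(float)
    return v, h

def pcd_weight_gradient(v_data, v_chain, W, b_v, b_h, rng, k=1):
    # Persistent contrastive divergence: advance the persistent chain by k Gibbs sweeps,
    # then form positive/negative phase statistics for the weight gradient dW.
    for _ in range(k):
        v_chain, _ = block_gibbs_step(v_chain, W, b_v, b_h, rng)
    h_data = sigmoid(v_data @ W + b_h)
    h_model = sigmoid(v_chain @ W + b_h)
    dW = v_data.T @ h_data / len(v_data) - v_chain.T @ h_model / len(v_chain)
    return dW, v_chain
```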

2.4 Gaussian Smoothing of Boltzmann Machines

Probability distributions of binary discrete variables can be efficiently modelled with BMs; there has therefore been long-standing interest in the research community in adapting RBMs to model continuous variables as well. The simplest approach trains a BM by interpreting continuous data as the expectations needed to compute the gradients of the RBM. Other, more mathematically justified approaches involve fully continuous models such as the harmonium [smolensky_information_1986, welling_exponential_2005], and continuous-discrete models such as RBMs with Gaussian visible units and discrete latent units [chu_restricted_2018].

We are interested in a continuous smoothing of a BM which retains all the advantages of fast and efficient training with PCD sampling, and has a mathematically well-defined continuous formulation that enables log-likelihood training on continuous data. The smoothing technique we found ideal for our goals was introduced in Ref. [zhang_continuous_2012], and was used more recently in connection with a study of spin glasses in Ref. [hartnett_probability_2020]. The technique, which we describe now, is a simple application of Gaussian square completion. Given a BM with the following energy function:

E(s) = -\tfrac{1}{2}\, s^{T} W s - b^{T} s, \qquad s \in \{-1,+1\}^{N}    (7)

one introduces a set of continuous variables $z$ such that:

p(z \mid s) = \mathcal{N}\!\big(z;\, s,\, \Sigma\big), \qquad \Sigma = (W + \beta I)^{-1}    (8)

where $\mathcal{N}(z; s, \Sigma)$ is a multivariate Normal distribution with centers $s$ and covariance matrix $\Sigma$. Hence we refer to the variables $z$ as a Gaussian smoothing of the discrete variables $s$. The constant $\beta$ must be chosen large enough to ensure that the covariance matrix $\Sigma$ is positive-definite. The log-probability of the joint distribution $p(s, z)$ can then be written as follows:

\log p(s, z) = -\tfrac{1}{2}\, z^{T} \Sigma^{-1} z + s^{T}\big( \Sigma^{-1} z + b \big) + \mathrm{const}    (9)

The discrete variables $s$ appear linearly in the joint distribution, such that they can be marginalized out to get:

p(z) = \frac{e^{-\tilde{E}(z)}}{Z_z}, \qquad \tilde{E}(z) = \tfrac{1}{2}\, z^{T} \Sigma^{-1} z - \sum_{i} \log\!\Big[ 2\cosh\!\big( (\Sigma^{-1} z + b)_i \big) \Big]    (10)

We have thus smoothed the BM with the effective continuous scalar energy function defined above. Notice that sampling from the continuous model can be performed via ancestral sampling after first sampling from the discrete BM: $s \sim p(s)$, then $z \sim \mathcal{N}(z; s, \Sigma)$. Moreover, the normalization constant $Z_z$ can be computed from the knowledge of the normalization constant of the BM, $Z_s$, via the following relationship [hartnett_probability_2020]:

Z_z = (2\pi)^{N/2}\, |\Sigma|^{1/2}\, e^{\beta N / 2}\, Z_s    (11)

Sampling and evaluation via the discrete latent model is computationally less challenging, but has the trade-off of having to perform the inversion of the matrix $W + \beta I$ and to compute its determinant, respectively.
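For concreteness, a minimal sketch of evaluating the effective continuous energy, under the marginal form reconstructed in Eq. 10 (the notation z, W, b, beta is assumed, not taken from the original code):

```python
import numpy as np

def smoothed_bm_energy(z, W, b, beta):
    # Effective continuous energy of the Gaussian-smoothed BM (Eq. 10 as written above):
    #   E_tilde(z) = 0.5 z^T A z - sum_i log(2 cosh((A z + b)_i)),   with A = W + beta*I
    A = W + beta * np.eye(len(b))
    field = A @ z + b
    return 0.5 * z @ A @ z - np.sum(np.log(2.0 * np.cosh(field)))
```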

3 EBM-Flow

3.1 EBM-Flow Formalism

In this section we define the EBM-Flow, a class of EBMs whose scalar energy functional is obtained from a base energy functional via an IF-induced change of variables. EBM-Flow can be interpreted as follows: the generated samples of the target distribution are still samples from an EBM. It follows from Eqs. 1 and 4 that, by definition:

p_X(x) = p_Z\big(f(x)\big)\,\left|\det \frac{\partial f(x)}{\partial x}\right| = \frac{e^{-E'(x)}}{Z'}    (12)

where we have defined the EBM-Flow energy functional to be:

E'(x) = E\big(f(x)\big) - \log \left|\det \frac{\partial f(x)}{\partial x}\right|    (13)

The partition functions are also preserved through the change of variables:

Z' = \int e^{-E'(x)}\, dx = \int e^{-E(z)}\, dz = Z    (14)

Therefore, by joining the EBM framework with an IF model, the IF transformations simply become an extension of the EBM, resulting in a model that utilises the strengths of both EBMs and IFs. This permits EBM-Flow to have a more complex energy function, $E'(x)$, despite only sampling from a base EBM that corresponds to a potentially simpler base scalar energy functional, $E(z)$.
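A minimal sketch of evaluating the EBM-Flow energy of Eq. 13; base_energy, flow_forward and flow_log_det_jac are hypothetical placeholders for the base EBM and the trained flow.

```python
def ebm_flow_energy(x, base_energy, flow_forward, flow_log_det_jac):
    # E'(x) = E(f(x)) - log|det df/dx|   (EBM-Flow energy, Eq. 13)
    z = flow_forward(x)
    return base_energy(z) - flow_log_det_jac(x)
```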

3.2 RBM-Flow

RBM-Flow is a special implementation of an EBM-Flow in which the base model is a Gaussian-smoothed RBM, as described in section 2.4:

E_{\mathrm{RBM\text{-}Flow}}(x) = \tilde{E}\big(f(x)\big) - \log \left|\det \frac{\partial f(x)}{\partial x}\right|, \qquad \Sigma^{-1} = W + \beta I    (15)

where $\Sigma^{-1}$ is a positive-definite matrix, $W$ is chosen to be the connectivity of a Restricted Boltzmann machine, $\tilde{E}$ is the smoothed-RBM energy of Eq. 10, and $f$ is an IF of the Flow++ type (see details in Appendix A, and in Ref. [ho_flow_2019]). Note that we cannot definitively know a priori what choice of $\beta$ ensures that $\Sigma^{-1}$ remains positive-definite during training, a condition required for the Gaussian smoothing.

The advantage of implementing RBM-Flow is that sampling from the continuous base model can be performed via ancestral sampling of the discrete variables, which can be done very efficiently via block Gibbs sampling and PCD. To obtain the continuous base variables $z$, one then has to sample from the multivariate Normal distribution $\mathcal{N}(z; s, \Sigma)$. The cost of this operation is given by an inversion followed by a Cholesky decomposition of the matrix $\Sigma^{-1} = W + \beta I$, both of which have a complexity of $O(N^3)$. Given that CIFAR10 only requires matrices with dimensions of a few thousand, these operations are carried out relatively fast.
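A minimal sketch of this ancestral sampling procedure, under the notation assumed in section 2.4; rbm_sample_fn is a placeholder for the PCD/block Gibbs sampler of the underlying RBM.

```python
import numpy as np

def sample_smoothed_rbm(rbm_sample_fn, W, b, beta, n_samples):
    # 1) draw discrete samples s in {-1,+1}^N from the RBM (e.g. via PCD / block Gibbs);
    # 2) draw z ~ N(s, Sigma) with Sigma = (W + beta*I)^{-1}.
    N = len(b)
    Sigma = np.linalg.inv(W + beta * np.eye(N))   # O(N^3) matrix inversion
    L = np.linalg.cholesky(Sigma)                 # O(N^3) Cholesky factorization
    s = rbm_sample_fn(n_samples)                  # shape (n_samples, N)
    eps = np.random.randn(n_samples, N)
    return s + eps @ L.T                          # z = s + L eps, so Cov[z|s] = Sigma
```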

If scaling to larger sizes is needed, one can remove the bipartite restriction of the RBM to obtain either a fully connected BM, or an RBM where the visible units have inter-connectivity. This allows us to directly use a Cholesky-decomposed ansatz for the connectivity matrix, $\Sigma^{-1} = L L^{T}$, at the cost of sampling from a fully connected BM instead. Moreover, an ansatz that enables efficient inversion of $\Sigma^{-1}$ can be described by:

(16)

which describes an RBM where the visible layer now has inter-connectivity between units.

3.3 D-Flow

D-Flow is obtained by setting the coupling matrix $W$ of RBM-Flow to zero:

E_{\mathrm{D\text{-}Flow}}(x) = \tilde{E}\big(f(x)\big) - \log \left|\det \frac{\partial f(x)}{\partial x}\right|, \qquad \Sigma^{-1} = \beta I    (17)

where now $\beta$ is a positive constant. Notice that the resulting continuous base model of D-Flow is a mixture of Gaussians of variance $\beta^{-1}$, with locations given by the discrete variables $s_i = \pm 1$, which are distributed according to a set of independent Bernoulli variables with probabilities $p(s_i = \pm 1) \propto e^{\pm b_i}$.
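Sampling from this base model is particularly simple; the sketch below assumes the independent-spin form described above (per-dimension mixtures of two Gaussians centered at -1 and +1) and is illustrative only.

```python
import numpy as np

def sample_dflow_base(b, beta, n_samples):
    # Independent spins s_i = +/-1 with p(s_i = +1) = sigmoid(2 b_i),
    # followed by z_i ~ N(s_i, 1/beta).
    p_up = 1.0 / (1.0 + np.exp(-2.0 * b))
    s = np.where(np.random.rand(n_samples, len(b)) < p_up, 1.0, -1.0)
    return s + np.random.randn(n_samples, len(b)) / np.sqrt(beta)
```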

As we shall see in the next section, where we discuss our numerical experiments, this simplification allows D-Flow to meaningfully encode representations of the data in the discrete variables $s$. The training of D-Flow is also unbiased, performed via likelihood maximization, and has fully propagating gradients. This is to be contrasted with other techniques used in the literature to implement generative models with discrete latent variables (especially VAEs), which require biased smoothing of the original model to efficiently propagate gradients through the discrete variables. Notice that the continuous energy model at the base of D-Flow can thus be considered a smoothing technique for discrete variables, and could be used for the priors of VAEs as well.

4 Experiments

Figure 1: a) Density estimation performance of each model for the CIFAR10 dataset measured in bits per dimension. For true density estimation of models with EBM base distributions, the partition function was calculated using annealed importance sampling. b) The Frechet inception distance (FID) between the model generated CIFAR10 samples and the test dataset. c) Inception scores (IS) of model generated CIFAR10 samples. Each model has approximately 31.4M parameters and an IF model architecture as described in Appendix A.
CIFAR10
Model | Bits per Dimension | Inception Score | Frechet Inception Distance
RBM-Flow (500 epochs) |  |  |
D-Flow (500 epochs) |  |  |
MultiCov-Flow (500 epochs) |  |  |
Gaussian-Flow (500 epochs) |  |  |
Flow++ (converged) |  | - | -
Glow (converged) |  | - | -
Real-NVP (converged) |  | - | -
Table 1: Image modeling results. While RBM-Flow achieves similar performance to the other ablated models in terms of bits/dim, it significantly outperforms them in terms of the Inception Score and the Frechet Inception Distance.

The goal of our experiments is to understand if an EBM-Flow can improve density modeling of IFs. To do so, we focus on implementing an RBM-Flow, as described in the previous section, and perform a series of ablation experiments in order to determine the role played by the base model in the IF.

The IF used in all models is a custom implementation of Flow++, which is the state of the art for density modeling with IFs. The main difference between our implementation and Flow++ is that we do not perform data-dependent initialization, and we do not use a mixed logistic coupling layer in the variational dequantization model. While we have implemented self-attention, we have not used it in the results we report due to computational constraints. We describe the flow in more detail in Appendix A.

The models we chose for our comparison are RBM-Flow, D-Flow, a flow with an independent Normal base distribution (Gaussian-Flow), and a flow with a multivariate Normal base distribution (MultiCov-Flow). The RBM-Flow to D-Flow ablation removes all correlations from the base model but maintains the discrete latent variables. The RBM-Flow to MultiCov-Flow ablation keeps the correlations modeled by the full covariance matrix $\Sigma$, but removes the multi-modality that can be captured with the latent RBM. Finally, the RBM-Flow to Gaussian-Flow ablation removes multi-modality, covariance, and discrete latent variables, in line with the standard approach to density modeling with IFs.

(a)   RBM-Flow (500 epochs).
(b)   IF with Multivariate Gaussian (500 epochs).
(c)   D-Flow (500 epochs).
(d)   IF with independent Gaussian (500 epochs).
Figure 2: Qualitative analysis of the generated CIFAR10 samples agrees with the image quality metrics presented in Fig. 1, with there being more structure and color in the images generated using an RBM prior (Fig. 2(a)) compared to other priors, despite all models achieving similar likelihoods.

We trained all our models on a single NVIDIA V100 GPU, using a small batch size of 20, on the CIFAR10 dataset for 500 epochs. All ablation experiments used the same flow architecture, but with differing numbers of filters and components in the mixed logistic coupling layer such that all models have roughly 31.4 million trainable parameters, a number similar to that of Flow++ for the CIFAR10 dataset.

To implement RBM-Flow, we directly parameterized the weight matrix $\Sigma^{-1} = W + \beta I$, where $\beta$ is a fixed constant and the entries of $W$ are initialized to zero. Using L2 normalization of the RBM's weights and biases ensured the positivity of $\Sigma^{-1}$ during training of RBM-Flow. The same value of $\beta$ was also used for the MultiCov-Flow ablation. A smaller $\beta$ was picked in the case of D-Flow, since we noticed that D-Flow achieves better likelihood with a smaller $\beta$. We found that if $\Sigma^{-1}$ were to become negative definite during training, it typically happened within the first few epochs. To train the RBM of RBM-Flow, we used PCD with 200 block Gibbs updates per gradient update.
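A minimal sketch of the kind of positive-definiteness check implied here; the Cholesky-based test is an assumption for illustration, not necessarily how the original code monitored the matrix during training.

```python
import numpy as np

def is_positive_definite(W, beta):
    # Sigma^{-1} = W + beta*I must remain positive definite for the Gaussian
    # smoothing to stay well defined; Cholesky fails otherwise.
    try:
        np.linalg.cholesky(W + beta * np.eye(W.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False
```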

(a)   RBM-Flow on Fashion MNIST.
(b)   D-Flow on Fashion MNIST.
(c)   RBM-Flow on CIFAR10.
(d)   D-Flow on CIFAR10.
Figure 3: Generated images for both the Fashion MNIST and CIFAR10 datasets, where each column in every panel corresponds to a single discrete sample, which then conditions the Normal distribution in Eq. 8 to give that column of generated images. Discrete samples from both RBM-Flow and D-Flow can encode the global features of a sample on the Fashion MNIST dataset. However, RBM-Flow is unable to do this for the CIFAR10 dataset, as each discrete variable is highly correlated with the other discrete variables, such that no global features are visually encoded in the discrete sample.

Density modeling results are presented in Fig. 1(a) and Tab. 1: all models achieve similar performance in terms of bits/dim, with Gaussian-Flow and MultiCov-Flow performing slightly better, and RBM-Flow and D-Flow achieving slightly higher bits/dim, after the same number of epochs. However, these results are sensitive to the value of $\beta$ used in training, which was partially optimized to the aforementioned values. Attempts to optimize $\beta$ during training gave rise to instabilities in the model, caused by either diverging values and/or a non-positive-definite covariance matrix.

In Fig. 2 we present generated samples from all models for comparison and visual inspection. We notice that the samples generated by RBM-Flow appear to be of higher quality, with more defined global correlations in terms of both color and structure. We quantified this visual perception by computing the Inception Score (IS) and the Frechet Inception Distance (FID). Numerical results are presented in Fig. 1(b), Fig. 1(c), and Tab. 1, where it is shown that RBM-Flow achieves significantly better IS and FID scores than the other ablation models. In particular, Figs. 1(b) and 1(c) show that RBM-Flow reaches better FID and IS scores much more quickly than the other ablation models, a property that can also be confirmed by visually inspecting generated samples. RBM-Flow is the only model whose base distribution is able to capture complex correlations among the base variables with a multi-modal distribution. The results presented in this section are strong evidence that the smoothed RBM base model is key to improving the overall quality and consistency of the generated samples.

4.1 Encoding Global Features in Discrete Base Variables with D-Flow

In certain applications, there might be advantages in developing generative models with discrete latent spaces. Such models could more efficiently learn to encode global features of the original dataset, via unsupervised training, into a discrete representation. This problem has recently been solved in the context of VAEs [vahdat_dvae_2018, sadeghi_pixelvae_2019, khoshaman_gumbolt_2018, khoshaman_quantum_2018, rolfe_discrete_2017]. To the best of our knowledge, RBM-Flow and D-Flow represent the first implementation of an IF with a discrete latent space. We would like to understand the role played by the discrete variables $s$, and whether they learn meaningful features.

In Fig. 3 we show generated samples from models trained on Fashion MNIST and CIFAR10. For each column, we use the same discrete latent configuration $s$, which is then smoothed using Eq. 8. When trained on Fashion MNIST, both RBM-Flow and D-Flow encode global features of the generated samples in the discrete latent configurations. When trained on a more complex dataset such as CIFAR10, D-Flow still encodes global features in the discrete latent space while RBM-Flow does not. It is not exactly clear why RBM-Flow is less effective than D-Flow at encoding visible features in the discrete latent variables, and it might be worth investigating this issue further in subsequent work.

5 Conclusions

We have proposed EBM-Flows, i.e. Invertible Flow (IF) models with Energy Based Models (EBMs) as trainable base distributions. As we have shown in section 3, EBM-Flows are themselves EBMs, with a scalar energy function given by a simple coordinate transformation, induced by the IF, of the energy function of the base EBM. EBM-Flows thus represent an interesting class of generative models that can combine the strengths of both IFs and EBMs to produce samples of superior quality.

Using a Restricted Boltzmann machine (RBM) as the EBM, we also introduced an EBM-Flow sub-class called RBM-Flow, which has a Flow++-like architecture and a continuously smoothed RBM base model. Implementing RBM-Flow allowed us to perform fast and reliable sampling from the base model via block Gibbs sampling of the underlying RBM, and to obtain accurate evaluations of the log-likelihood of the model. We studied the performance of RBM-Flow via a series of ablative experiments in which we kept the number of trainable parameters constant but removed the capability of the base model to represent multi-modal probability distributions. While RBM-Flow performs similarly to the other ablated models in terms of log-likelihood, it generates samples of higher visual quality. We confirmed this quantitatively by evaluating the Inception Score (IS) and the Frechet Inception Distance (FID) for all models.

Among the ablated models considered, D-Flow is especially interesting. D-Flow is obtained from RBM-Flow by setting to zero all the couplings of the underlying latent RBM. While both RBM-Flow and D-Flow are built with a discrete latent space as part of their base distribution, only D-Flow is able to consistently encode global features of the generated samples into its discrete latent configurations. D-Flow can thus be seen as a genuine implementation of an IF with meaningful discrete latent space representations, generalizing to IFs similar work done with variational autoencoders (VAEs) [vahdat_dvae_2018, sadeghi_pixelvae_2019, khoshaman_gumbolt_2018, khoshaman_quantum_2018, rolfe_discrete_2017].

As future work, it would be interesting to explicitly implement an EBM-Flow model by leveraging recent advances in the implementation of EBMs [du_implicit_2019, song_generative_2019, gao_flow_2020] and their integration with generative models [NCP-VAE, VAEBM, gao_flow_2020]. We believe such EBM-Flows could improve the state of the art in generative sampling with standalone EBMs or IFs. Moreover, we notice that the smoothing technique introduced with RBM-Flow and D-Flow could easily be integrated with VAEs. It would be interesting to compare the performance of such models, using Gaussian smoothing of RBM and Bernoulli variables, to previously introduced VAEs with discrete latent variables.

Another interesting future line of research is the implementation of quantum-assisted training of RBM-Flows with quantum devices. Recent developments in quantum technologies suggest that early quantum devices could speed up sampling from Boltzmann machines [vinci_path_2019]. This idea has motivated the development of quantum/classical hybrid generative models based on VAEs that are amenable to quantum-assisted training. RBM-Flow now opens the possibility of investigating quantum-assisted training of hybrid models based on invertible flows.

Acknowledgements

We are grateful for support from NASA Ames Research Center, the AFRL Information Directorate under grant F4HBKC4162G001, and the Office of the Director of National Intelligence (ODNI) and the Intelligence Advanced Research Projects Activity (IARPA), via IAA 145483. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, AFRL, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purpose notwithstanding any copyright annotation thereon.

Appendix A: RBM-IF Architecture

The general framework used for RBM-IF is based on the current state-of-the-art model FLOW++ [ho_flow_2019]; an outline of its architecture can be seen in Fig. 4. The three key features carried over into RBM-IF from FLOW++ are the MixLogCDF transformations (Eq. 3), the variational dequantization model, and the residual gated networks used to support the MixLogCDF transformations. For the most part, each of these features was mirrored in RBM-IF, with the exception of minor alterations in the variational dequantization model and residual gated convolutions to reduce training times.

[Figure 4 diagram labels: Dequantization; Dequant. Flow; Pre-Process & Checkerboard Split; Flow; Inverse Checkerboard & Squeeze; Flow; Checkerboard Split; Flow. Flow = Normalization, 1x1 Invertible Convolution, MixLogCDF Transformation, Permutation.]

Figure 4: General overview of the framework used in FLOW++ [ho_flow_2019]. The network shown is that used for training the FLOW++ model, but for generation and inference the dequantization flow is removed. Checkerboard split and squeeze processes refer to dimension reshaping operations used in FLOW++ to increase the number of dimensions in the image channel such that the flow can ensure the equal partitioning of inputs.

Firstly, the variational dequantization model used in RBM-IF is still composed of 4 conditioned flow processes, each similar to what is seen in the top right of Fig. 4, but with an additional input into the MixLogCDF process to condition the transformation of the noise sample on the data sample. However, in RBM-Flow we replace the MixLogCDF transformation in the dequantization flow with a simpler affine transformation, as this was found not to significantly affect losses while improving training times. Secondly, we omit the multi-head self-attention found in the residual gated convolution process, also for the sake of training speed.

As detailed in the main text, the RBM prior is the major change in the IF architecture compared to the simpler Gaussian prior used in most IF models. The main overheads incurred by using the RBM prior come from three sources of extra computation. Firstly, a matrix inversion and a Cholesky decomposition are required in training to define the covariance matrix of the smoothed RBM prior; secondly, block Gibbs samples are generated during training; and finally, in order to evaluate the true likelihood, the logarithm of the partition function is also calculated, although this is only done once at the end of an epoch. In practice, these sources of additional overhead do not hinder the training process as much as one would expect, such that we still have efficient sampling and density estimation.

It is also worth noting that our reported loss was evaluated using raw test samples, and not with variational dequantization enabled. This meant that we measured against the exact likelihood instead of a variational bound introduced by the dequantization, so our results cannot be directly compared to FLOW++ as we do not use importance sampling.

Finally, we compensate for the additional parameters incurred by using energy-based priors by changing the number of parameters in the IF model. This is done by changing the number of convolutional filters and the number of components in the MixLogCDF transformation, as detailed in Tab. 2. Other model hyper-parameters include a constant learning rate, a batch size of 20, and dropout.

Prior | Gaussian | RBM | Bernoulli
No. Convolutional Filters | 100 | 96 | 100
MixLogCDF Components | 33 | 32 | 33
No. Gated Convolutions | 10 | 10 | 10
No. Trainable Parameters | 31,445,192 | 31,420,584 | 31,448,264
Table 2: Number of model parameters used for each prior in testing with CIFAR10, and their respective hyper-parameters used. All other aspects of each model are identical.

The code for the RBM-IF model was created within TensorFlow(TF) v2.1.0 using TensorFlow Probability (TFP) v0.9.0 bijectors contained within custom Keras layers. A custom training loop is also used in order to implement conditional transformations for the variational dequantization model, as well as for the passing of training arguments, which not all TFP bijectors support.
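For readers unfamiliar with the TFP building blocks mentioned here, the following minimal sketch shows a Gaussian base distribution pushed through a chain of coupling bijectors wrapped as a TransformedDistribution. This is not the RBM-IF code (which uses custom Keras layers, a custom training loop, and the smoothed RBM prior), only an illustration of the bijector API involved; the network sizes and dimensionality are arbitrary assumptions.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

DIMS = 8

def make_shift_and_log_scale_fn(num_out):
    # Small conditioner network for a RealNVP coupling bijector (illustrative only).
    net = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2 * num_out),
    ])
    def shift_and_log_scale(x, output_units, **kwargs):
        shift, log_scale = tf.split(net(x), 2, axis=-1)
        return shift, log_scale
    return shift_and_log_scale

# Two coupling layers separated by a fixed permutation, applied to a Gaussian base.
flow = tfb.Chain([
    tfb.RealNVP(num_masked=DIMS // 2,
                shift_and_log_scale_fn=make_shift_and_log_scale_fn(DIMS - DIMS // 2)),
    tfb.Permute(permutation=list(reversed(range(DIMS)))),
    tfb.RealNVP(num_masked=DIMS // 2,
                shift_and_log_scale_fn=make_shift_and_log_scale_fn(DIMS - DIMS // 2)),
])
model = tfd.TransformedDistribution(
    distribution=tfd.MultivariateNormalDiag(loc=tf.zeros(DIMS)),
    bijector=flow)

samples = model.sample(4)            # draw from the flow
log_prob = model.log_prob(samples)   # exact log-likelihood via the change of variables
```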