# Continuous-Time Flows for Deep Generative Models

Normalizing flows have recently been developed as a method for drawing samples from an arbitrary distribution. This method is attractive due to its intrinsic ability to approximate a target distribution arbitrarily well. In practice, however, normalizing flows only consist of a finite number of deterministic transformations, and thus there are no guarantees on the approximation accuracy. In this paper we study the problem of learning deep generative models with continuous-time flows (CTFs), a family of diffusion-based methods that are able to asymptotically approach a target distribution. We discretize the CTF to make training feasible, and develop theory on the approximation error. A framework is then adopted to distill knowledge from a CTF into an efficient inference network. We apply the technique to deep generative models, including a CTF-based variational autoencoder and an adversarial-network-like density estimator. Experiments on various tasks demonstrate the superiority of the proposed CTF framework over existing techniques.


## 1 Introduction

Efficient inference and robust density estimation are two important goals in unsupervised learning. In fact, they can be unified from the perspective of learning desired target distributions. In inference problems, one aims to learn a tractable distribution for a latent variable that is close to a given unnormalized distribution (e.g., a posterior distribution in a Bayesian model). In density estimation, one tries to learn an unknown data distribution based only on samples from it. It is also helpful to distinguish between two types of representations for learning distributions: explicit and implicit methods (Mohamed & Lakshminarayanan, 2017). Explicit methods provide a prescribed parametric form for the distribution, while implicit methods learn a stochastic procedure to directly generate samples from the unknown distribution.

Existing deep generative models can easily be identified within this taxonomy. For example, the standard variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) is an important example of an explicit inference method. Within the inference arm (encoder) of a VAE, recent research has focused on improving the accuracy of the approximation to the posterior distribution on latent variables (codes) using normalizing flows (NFs) (Rezende & Mohamed, 2015). NF is particularly interesting due to its ability to approximate the posterior distribution arbitrarily well, while maintaining explicit parametric forms. On the other hand, Stein VAE (Feng et al., 2017; Pu et al., 2017) is an implicit inference method, as it only learns to draw samples to approximate posteriors, without assuming an explicit form for the distribution. For density estimation on observed data, the generative adversarial network (GAN) can be regarded as an implicit density estimation method (Ranganath et al., 2016; Huszár, 2017; Mohamed & Lakshminarayanan, 2017), in the sense that one may sample from the distribution (regarded as a representation of the unknown distribution), but an explicit form for the distribution is not estimated. GAN has recently been augmented by Flow-GAN (Grover et al., 2017) to incorporate a likelihood term for explicit density estimation. Further, the real-valued non-volume preserving (real NVP) transformations algorithm (Dinh et al., 2017) was proposed to perform inference within the implicit density estimation framework.

Some aforementioned methods rely on the concept of flows. A flow defines a series of transformations for a random variable (RV), such that the distribution of the RV evolves from a simple distribution to a more complex one. When the sequence of transformations is indexed on a discrete-time domain (e.g., with integers) with a finite number of transformations, the method is referred to as a normalizing flow (Rezende & Mohamed, 2015). Various efficient implementations of NFs have been proposed, such as the planar, radial (Rezende & Mohamed, 2015), Householder (Tomczak & Welling, 2016), and inverse autoregressive flows (Kingma et al., 2016). One theoretical limitation of existing normalizing flows is that there is no guarantee on the approximation accuracy due to the finite number of transformations.

By contrast, little work has explored the applicability of continuous-time flows (CTFs) in deep generative models, where a sequence of transformations are indexed on a continuous-time domain (e.g., indexed with real numbers). There are at least two reasons encouraging research in this direction: i) CTFs are more general than traditional normalizing flows in terms of modeling flexibility, due to the intrinsic infinite number of transformations; ii) CTFs are more theoretically grounded, in the sense that they are guaranteed to approach a target distribution asymptotically (details provided in Section 2.2).

In this paper, we propose efficient ways to apply CTFs to the two motivating tasks. Based on the CTF, our framework learns to draw samples directly from desired distributions (e.g., the unknown posterior and data distributions) for both inference and density estimation via amortization. In addition, we are able to learn an explicit form of the unknown data distribution for density estimation (although the density is represented as an energy-based distribution with an intractable normalizer). The core idea of our framework is amortized learning, where knowledge in a CTF is distilled sequentially into another neural network (called an inference network in the inference task, and a generator in density estimation). The distillation relies on the distribution-matching technique recently proposed via adversarial learning (Li et al., 2017a). We conduct various experiments on both synthetic and real datasets, demonstrating excellent performance of the proposed framework relative to representative approaches.

## 2 Preliminaries

### 2.1 Efficient inference and density estimation

#### Efficient inference with normalizing flows

Consider a probabilistic generative model with observation $x$ and latent variable $z$, such that $p_\theta(x, z) = p_\theta(x|z)p(z)$ with model parameters $\theta$. For efficient inference of $z$, the VAE (Kingma & Welling, 2014) introduces the concept of an inference network (recognition model or encoder), $q_\phi(z|x)$, as a variational distribution in the VB framework. An inference network is typically a stochastic (nonlinear) mapping from the input $x$ to the latent $z$, with associated parameters $\phi$. For example, one of the simplest inference networks is defined as $q_\phi(z|x) = \mathcal{N}(z; \mu_\phi(x), \mathrm{diag}(\sigma^2_\phi(x)))$, where the mean function $\mu_\phi(\cdot)$ and the standard-deviation function $\sigma_\phi(\cdot)$ are specified via deep neural networks parameterized by $\phi$. Parameters are learned by minimizing the negative evidence lower bound (ELBO), i.e., the KL divergence between $q_\phi(z|x)$ and $p_\theta(x, z)$, typically via stochastic optimization (Bottou, 2012).

One limitation of the VAE framework is that $q_\phi(z|x)$ is often restricted to simple distributions for feasibility, e.g., the normal distribution discussed above, and thus the gap between $q_\phi(z|x)$ and the true posterior $p_\theta(z|x)$ is typically large for complicated posterior distributions. NF is a recently proposed VB-based technique designed to mitigate this problem (Rezende & Mohamed, 2015). The idea is to augment the initial sample $z_0 \sim q_\phi(z_0|x)$ via a sequence of deterministic invertible transformations $\{T_k\}_{k=1}^K$, such that $z_k = T_k(z_{k-1})$.

Note the transformations are typically endowed with different parameters, which we absorb into $\phi$. Because the transformations are deterministic, the distribution of $z_K$ can be written via the change-of-variable formula. As a result, the negative ELBO for normalizing flows becomes:

$$\mathrm{KL}\big(q_\phi(z_K|x)\,\|\,p_\theta(x,z)\big) = \mathbb{E}_{q_\phi(z_0|x)}\big[\log q_\phi(z_0|x)\big] - \mathbb{E}_{q_\phi(z_0|x)}\big[\log p_\theta(x, z_K)\big] - \mathbb{E}_{q_\phi(z_0|x)}\Big[\sum_{k=1}^{K}\log\Big|\det\frac{\partial T_k}{\partial z_{k-1}}\Big|\Big]~. \quad (1)$$

Typically, transformations of a simple parametric form are employed to make the computations tractable (Rezende & Mohamed, 2015). Our method generalizes these discrete-time transformations to continuous-time ones, ensuring convergence of the transformations to a target distribution.
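As a concrete instance of such a simple parametric transformation, a planar flow layer $T(z) = z + u\,\tanh(w^\top z + b)$ (Rezende & Mohamed, 2015) admits a cheap log-det-Jacobian via the matrix determinant lemma. The following NumPy sketch (our illustration; parameter values are arbitrary) applies one such transformation and tracks the change-of-variable correction appearing in (1):

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar-flow transform T(z) = z + u * tanh(w.z + b).
    Returns T(z) and log|det dT/dz| via the matrix determinant lemma."""
    a = z @ w + b                      # (n,)
    h = np.tanh(a)
    z_new = z + np.outer(h, u)         # (n, d)
    psi = (1 - h**2)[:, None] * w      # h'(a) * w, shape (n, d)
    log_det = np.log(np.abs(1 + psi @ u))
    return z_new, log_det

rng = np.random.default_rng(0)
d = 2
z0 = rng.standard_normal((1000, d))    # z0 ~ q(z0|x), here a standard normal
u, w, b = np.array([0.5, -0.3]), np.array([1.0, 1.0]), 0.1
zK, log_det = planar_flow(z0, u, w, b)

# change of variables: log q(zK) = log q(z0) - log|det dT/dz0|
log_q0 = -0.5 * np.sum(z0**2, axis=1) - 0.5 * d * np.log(2 * np.pi)
log_qK = log_q0 - log_det
```

Invertibility of the layer requires $w^\top u \ge -1$ (satisfied here, $w^\top u = 0.2$), which keeps the log-determinant well defined.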

#### Related density-estimation methods

There exist implicit and explicit density-estimation methods. Implicit density models such as GANs provide a flexible way to draw samples directly from unknown data distributions (via a deep neural network (DNN), called a generator, with stochastic inputs) without explicitly modeling their density forms; explicit models such as PixelRNN/CNN (van den Oord et al., 2016) define and learn explicit forms of the unknown data distributions. The latter has the advantage that the likelihood of a test data point can be evaluated explicitly; however, sample generation is typically time-consuming due to its sequential nature.

Similar to Wang & Liu (2017), our CTF-based approach in Section 4 provides an alternative way for this problem, by simultaneously learning an explicit energy-based data distribution (estimated density) and a generator whose generated samples match the learned data distribution. This not only gives us the advantage of explicit density modeling but also provides an efficient way to generate samples. Note that our technique differs from that of Wang & Liu (2017) in that distribution matching is adopted to learn an accurate generator, which is a key component in our framework.

### 2.2 Continuous-time flows

We notice two potential limitations of traditional normalizing flows: i) given specified transformations $\{T_k\}$, there is no guarantee that the distribution of $z_K$ could exactly match the true posterior $p_\theta(z|x)$; ii) randomness is only introduced in $z_0$ (from the inference network), limiting the representation power. We specify CTFs whose transformations are indexed by real numbers; they can thus be considered as consisting of infinitely many transformations. Further, we consider stochastic flows, where randomness is injected in a continuous-time manner. In fact, the concept of CTFs (such as the Hamiltonian flow) was introduced by Rezende & Mohamed (2015), without further development of efficient inference.

We consider a flow on $\mathbb{R}^d$, defined as a mapping $T: \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^d$ (we reuse the notation $T$ from the discrete case above for simplicity, and use $Z_t$ instead of $z_k$, which is reserved for the discrete-time setting, to denote the random variable in the continuous-time setting), such that $T(z, 0) = z$ and $T(T(z, t), s) = T(z, t + s)$ for all $z \in \mathbb{R}^d$ and $s, t \in \mathbb{R}$. Note we define continuous-time flows in terms of the latent variable $z$ in order to incorporate them into the inference setting; the same description applies when the flow is defined in data space, the setting of density estimation in Section 4. A specific form considered here is $T(z, t) = Z_t$ with $Z_0 = z$, where $Z_t$ is driven by a diffusion of the form:

$$\mathrm{d}Z_t = F(Z_t)\,\mathrm{d}t + V(Z_t)\,\mathrm{d}W~. \quad (2)$$

Here $F: \mathbb{R}^d \to \mathbb{R}^d$ and $V: \mathbb{R}^d \to \mathbb{R}^{d \times d}$ are called the drift term and diffusion term, respectively; $W$ is standard $d$-dimensional Brownian motion. In the context of inference, we seek to make the stationary distribution of $Z_t$ approach $p_\theta(z|x)$. One solution is to set $F(Z_t) = \nabla_z \log p_\theta(x, Z_t)$ and $V(Z_t) = I_d$, with $I_d$ the identity matrix. The resulting diffusion is called Langevin dynamics (Welling & Teh, 2011). Denoting the distribution of $Z_t$ as $\rho_t$, it is well known (Risken, 1989) that $\rho_t$ is characterized by the Fokker-Planck (FP) equation:

$$\partial_t \rho_t = \nabla_z \cdot \Big(-\rho_t F(Z_t) + \nabla_z \cdot \big(\rho_t V(Z_t)V^\top(Z_t)\big)\Big)~, \quad (3)$$

where $\nabla_z \cdot \mathbf{v} \triangleq \sum_i \partial v_i / \partial z_i$ for a vector $\mathbf{v}$, applied column-wise for a matrix.

For simplicity, we consider the flow defined by the Langevin dynamics specified above, though our results generalize to other stochastic flows (Dorogovtsev & Nishchenko, 2014). CTFs have previously been applied to scalable Bayesian sampling (Ding et al., 2014; Li et al., 2016a; Chen et al., 2016; Li et al., 2016b; Zhang et al., 2017). In this paper, we generalize this line of work by specifying an ELBO under a CTF, which can then be readily solved by a discretized numerical scheme, based on results from Jordan et al. (1998). An approximation error bound for the scheme is also derived. We defer proofs of our theoretical results to the Supplementary Material (SM) for conciseness.
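To make the Langevin case concrete, the following sketch simulates an Euler–Maruyama discretization of (2) with $F(z) = \nabla_z \log p(z)$ for a standard-normal target. We use the common convention $\mathrm{d}Z = \nabla \log p\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W$ here (the noise scale is a matter of convention, absorbed into $V$ in the paper's notation), and all numeric choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_p(z):
    # target p(z) = N(0, I), so grad log p(z) = -z
    return -z

h, K, burn_in = 0.05, 50_000, 5_000
z = np.full(2, 5.0)                    # start far from the target
samples = []
for k in range(K):
    # Euler-Maruyama step for dZ = grad log p(Z) dt + sqrt(2) dW
    z = z + h * grad_log_p(z) + np.sqrt(2 * h) * rng.standard_normal(2)
    if k >= burn_in:
        samples.append(z.copy())
samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))  # both drift toward (0,0) and (1,1)
```

Despite the distant initialization, the chain's empirical mean and variance approach those of the stationary distribution, illustrating the asymptotic guarantee that motivates CTFs.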

## 3 Continuous-Time Flows for Inference

For this task, we adopt the VAE/normalizing-flow framework with an encoder-decoder structure. An important difference is that instead of feeding data to an encoder and sampling a latent representation at the output as in the VAE, we concatenate the data with independent noise as input and directly generate output samples, constituting an implicit model. (Such a structure can represent much more complex distributions than a parametric form, which is useful for the follow-up procedures. In contrast, we will define an explicit energy-based distribution for the density in density-estimation tasks.) These outputs are then driven by the CTF to approach the true posterior distribution. In the following, we first show that directly optimizing the ELBO is infeasible. We then propose an amortized-learning procedure that sequentially distills the implicit transformations of the CTF into an inference network by distribution matching in Section 3.2.

### 3.1 The ELBO and discretized approximation

We first incorporate the CTF into the NF framework by writing out the corresponding ELBO. Note that there are two steps in the inference process. First, an initial $z_0$ is drawn from the inference network $q_\phi(z_0|x)$; second, $z_0$ is evolved via a diffusion such as (2) for time $T$ (via the transformation $Z_T = T(z_0, T)$). Consequently, the negative ELBO for the CTF can be written as

$$\mathcal{F}(x) = \mathbb{E}_{q_\phi(z_0|x)}\mathbb{E}_{\rho_T}\Big[\log\rho_T(Z_T) - \log p_\theta(x, Z_T) + \log\Big|\det\frac{\partial Z_T}{\partial z_0}\Big|\Big] \triangleq \mathbb{E}_{q_\phi(z_0|x)}\big[\mathcal{F}_1(x, z_0)\big]~. \quad (4)$$

Note the term $\mathcal{F}_1(x, z_0)$ is intractable to calculate: $\rho_T$ does not have an explicit form (problem 1), and the Jacobian $\partial Z_T / \partial z_0$ is generally infeasible to compute (problem 2). In the following, we propose an approximate solution for problem 1. Learning while avoiding problem 2 is presented in Section 3.2 via amortization.

For problem 1, a reformulation of the results from Jordan et al. (1998) leads to a convenient way of approximating $\rho_T$, given in Lemma 1. Note that in practice we adopt an implicit method, which uses samples to approximate the solution in Lemma 1 for feasibility, detailed in (6).

###### Lemma 1

Assume that $\log p_\theta(x, z)$ is infinitely differentiable and $\|\nabla_z \log p_\theta(x, z)\| \le C_1\|z\| + C_2$ for some constants $C_1, C_2$. Let $T = hK$ ($h$ is the stepsize and $K$ is the number of transformations), $\tilde{\rho}_0 \triangleq q_\phi(z_0|x)$, and $\tilde{\rho}_k$ be the solution of the functional optimization problem:

$$\tilde{\rho}_k = \operatorname*{arg\,min}_{\rho \in \mathcal{K}}\ \mathrm{KL}\big(\rho\,\|\,p_\theta(x, z)\big) + \frac{1}{2h}W_2^2(\tilde{\rho}_{k-1}, \rho)~, \quad (5)$$

where $W_2^2(\cdot, \cdot)$ is the square of the 2nd-order Wasserstein distance and $\mathcal{K}$ is the space of distributions with finite 2nd-order moment. Then $\tilde{\rho}_K$ converges to $\rho_T$ in the limit of $h \to 0$, i.e., $\lim_{h \to 0}\tilde{\rho}_K = \rho_T$, where $\rho_T$ is the solution of the FP equation (3) at time $T$.

Lemma 1 reveals an interesting way to compute $\rho_T$ via a sequence of functional optimization problems. Compared with the objective of traditional NFs, which minimizes the KL divergence between $q_\phi(z_K|x)$ and $p_\theta(x, z)$, each sub-problem in Lemma 1 minimizes the KL divergence between $\rho$ and $p_\theta(x, z)$, plus a regularization term given by the Wasserstein distance between $\rho$ and $\tilde{\rho}_{k-1}$. The extra Wasserstein-distance term arises naturally because the Langevin diffusion can be interpreted as a gradient flow whose geometry is equipped with the Wasserstein distance (Otto, 1998).

The optimization problem in Lemma 1 is, however, difficult to deal with directly. In practice, we instead approximate the discretization in an equivalent way by simulating the CTF. Starting from $z_0 \sim q_\phi(z_0|x)$, each $z_{k-1}$ is fed into a transformation $T_k$ (specified below), resulting in $z_k$ whose distribution coincides with $\tilde{\rho}_k$ in Lemma 1. The discretization procedure is illustrated in Figure 1. We must specify the transformations $T_k$. For each $k$, $\tilde{\rho}_k$ approximates the solution of the FP equation at time $kh$; from FP theory, this solution is obtained by evolving the diffusion (2) with the preceding distribution as initial condition. It is thus reasonable to specify $T_k$ as the $k$-th step of a numerical integrator for (2). Specifically, we specify $T_k$ stochastically:

$$z_k = T_k(z_{k-1}) \triangleq z_{k-1} + F(z_{k-1})\,h + V(z_{k-1})\,\zeta_k~, \quad (6)$$

where $\zeta_k$ is drawn from an isotropic normal distribution with variance proportional to the stepsize $h$. Note the transformation $T_k$ defined here is stochastic, so at the end we only obtain samples from $\tilde{\rho}_K$. A natural way to approximate $\tilde{\rho}_K$ is the empirical sample distribution $\bar{\rho}_T \triangleq \frac{1}{K}\sum_{k=1}^K \delta_{z_k}$, with $\delta_z$ a point mass at $z$. Afterwards, $\bar{\rho}_T$ (thus $\{z_k\}$) will be used to approximate the true $\rho_T$ from (3).
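A sketch of this empirical approximation (our toy instance, with illustrative constants): run the stochastic transformation (6) with a Langevin drift on a 1D standard-normal target, then average a 1-Lipschitz test function $\psi$ over the path $\{z_k\}$ to estimate $\mathbb{E}_{\rho_T}[\psi]$, which is 0 here by symmetry:

```python
import numpy as np

rng = np.random.default_rng(2)
h, K = 0.05, 40_000
psi = np.tanh                          # a 1-Lipschitz test function

def T_step(z):
    # one stochastic transformation T_k from Eq. (6):
    #   z_k = z_{k-1} + F(z_{k-1}) h + V(z_{k-1}) zeta_k,
    # with F(z) = grad log p(z) = -z for a N(0,1) target; the noise
    # variance 2h is one common convention for the discretized diffusion
    return z + h * (-z) + np.sqrt(2 * h) * rng.standard_normal()

z, path = 3.0, []
for _ in range(K):
    z = T_step(z)
    path.append(z)

# bar{rho}_T places a point mass at each z_k; its psi-average
# approximates E_{rho_T}[psi]
est = float(np.mean(psi(np.array(path))))
print(est)
```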

Better ways to approximate $\tilde{\rho}_K$ might be possible, e.g., by assigning more weight to more recent samples; however, the theoretical analysis then becomes more challenging, an interesting point left for future work. In the following, we study how well $\bar{\rho}_T$ approximates $\rho_T$. Following the literature on numerical approximation of Itô diffusions (Vollmer et al., 2016; Chen et al., 2015), we consider a 1-Lipschitz test function $\psi$, and use the mean square error (MSE) to measure the closeness of $\bar{\rho}_T$ and $\rho_T$, defined as $\mathrm{MSE}(\bar{\rho}_T, \rho_T; \psi) \triangleq \mathbb{E}\big(\frac{1}{K}\sum_{k=1}^K \psi(z_k) - \int \psi(z)\rho_T(z)\,\mathrm{d}z\big)^2$, where the expectation is taken over all randomness in the construction of $\bar{\rho}_T$. Note that our goal is related to, but different from, the standard setup in (Vollmer et al., 2016; Chen et al., 2015), which studies the closeness of the approximation to the stationary distribution. We adopt the assumptions from Vollmer et al. (2016); Chen et al. (2015), described in the SM; they are somewhat involved, but essentially require the coefficients of the diffusion (2) to be well-behaved. We derive the following bound for the MSE between the sampled approximation $\bar{\rho}_T$ and the true distribution $\rho_T$.

###### Theorem 2

Under Assumption 1 in the SM, assume further that $\psi$ is bounded and that there exists a constant $C > 0$ such that the solution of the FP equation (3) converges to its equilibrium exponentially fast with rate $2C$. Then the MSE is bounded as

$$\mathrm{MSE}(\bar{\rho}_T, \rho_T; \psi) = O\!\Big(\frac{1}{hK} + h^2 + e^{-2ChK}\Big)~.$$

The last assumption in Theorem 2 requires $\rho_t$ to evolve quickly through the FP equation, a standard assumption used to establish convergence to equilibrium for FP equations (Bolley et al., 2012). The MSE bound consists of three terms: the first two come from the numerical approximation of the continuous-time diffusion, whereas the third comes from the convergence bound of the FP equation in terms of the Wasserstein distance (Bolley et al., 2012). When the time $T = hK$ is large enough, the third term may be ignored due to its exponential decay. Moreover, in the infinite-time limit, the bound retains a bias proportional to $h^2$; this, however, can be removed by adopting a decreasing-stepsize scheme in the numerical method, as in standard stochastic gradient MCMC methods (Teh et al., 2016; Chen et al., 2015).

###### Remark 3

To examine the optimal bound in Theorem 2, we drop the $e^{-2ChK}$ term in the long-time regime (when $hK$ is large enough) for simplicity, as it is of much lower order than the other terms. The optimal MSE bound (over $h$) then decreases at a rate of $O(K^{-2/3})$, meaning that $K = O(\epsilon^{-3/2})$ steps of transformations in Figure 1 (right) are needed to reach an $\epsilon$-accurate approximation, i.e., $\mathrm{MSE} \le \epsilon$. This is computationally expensive. An efficient method for inference is thus imperative, and is developed in the next section.
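The rate in Remark 3 follows from balancing the first two terms of the bound: minimizing $1/(hK) + h^2$ over $h$ gives $h^\ast = (2K)^{-1/3}$, so the optimal bound is $O(K^{-2/3})$ and reaching $\mathrm{MSE} \le \epsilon$ needs $K = O(\epsilon^{-3/2})$ steps. A quick numeric check of this scaling:

```python
import numpy as np

def bound(h, K):
    # leading terms of the MSE bound in Theorem 2 (constants dropped)
    return 1.0 / (h * K) + h**2

scaled = []
for K in [10**3, 10**4, 10**5]:
    hs = np.logspace(-4, 0, 4000)        # grid search over stepsizes
    h_star = hs[np.argmin(bound(hs, K))]
    # analytic minimizer of 1/(hK) + h^2 is h = (2K)^(-1/3)
    scaled.append(bound(h_star, K) * K ** (2 / 3))
print(scaled)  # roughly constant across K: optimal bound scales as K^(-2/3)
```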

### 3.2 Efficient inference via amortization

Even though we can approximate $\rho_T$ with $\bar{\rho}_T$, it is still infeasible to apply it directly to the ELBO in (4), since $\bar{\rho}_T$ is discrete. To deal with this problem, we adopt the idea of "amortized learning" (Gershman & Goodman, 2014) for inference, alternately optimizing the two sets of parameters $\phi$ and $\theta$.

#### Updating ϕ

To explain the idea, first note that the negative ELBO can be equivalently written as

$$\mathcal{F}(x) = \mathbb{E}_{\rho_0 \triangleq q_\phi(z_0|x)}\mathbb{E}_{\rho_T}\big[\log\rho_0(z_0) - \log p_\theta(x, Z_T)\big]~. \quad (7)$$

When $\rho_0 = \rho_T$, it is easy to see that $\mathcal{F}(x) = \mathrm{KL}(\rho_0\,\|\,p_\theta(x, z))$, which essentially makes the gap between $\rho_0$ and $\rho_T$ vanish. As a result, our goal is to learn $\phi$ such that $q_\phi(z_0|x)$ approaches $\rho_T$. This is a distribution matching problem (Li et al., 2017a). As mentioned previously, we learn an implicit distribution for $q_\phi(z_0|x)$ (i.e., we learn how to draw samples from it rather than its explicit form), as this allows us to choose a candidate distribution from a much larger space than an explicitly defined family. (This is distinct from our density-estimation framework described in the next section, where an explicit form is assumed from the beginning for practical needs.) Consequently, $q_\phi(z_0|x)$ is implemented by a stochastic generator $Q_\phi$ (a DNN parameterized by $\phi$) whose input is the concatenation of $x$ and $\omega$, where $\omega$ is a sample from an isotropic Gaussian distribution. Our goal is now translated to updating the parameters $\phi$ to $\phi'$ such that the distribution of samples $z_0'$ from $Q_{\phi'}$ matches that of $z_1$ in the original generating process with $T_1$ in Figure 1. In this way, the generating process of $z_1$ via $T_1$ is distilled into the parameterized generator $Q_\phi$, eliminating the need to perform the specific transformation $T_1$ at test time, and is thus very efficient. Specifically, we update $\phi$ such that

$$\phi' = \operatorname*{arg\,min}_{\phi}\ \mathcal{D}\big(\{z_0'^{(i)}\}, \{z_1^{(i)}\}\big)~, \quad (8)$$

where $\{z_0'^{(i)}\}$ is a set of samples generated from $q_{\phi'}(z_0|x)$ via $Q_{\phi'}$, $\{z_1^{(i)}\}$ are samples drawn by applying $T_1$ to samples from $q_\phi(z_0|x)$, and $\mathcal{D}(\cdot, \cdot)$ is a metric between samples, specified below. We call this procedure distilling knowledge from $T_1$ to $Q_\phi$. In practice, one can choose to distill knowledge over several steps (e.g., $T_1 \circ \cdots \circ T_k$) instead of a single step at a time. Note the distillation idea is related to Bayesian dark knowledge (Korattikara et al., 2015), but with a different goal and approach.

After distilling knowledge from $T_1$, we apply the same procedure to the other transformations sequentially. The final inference network, represented by $q_\phi(z_0|x)$, can then closely approximate the CTF, i.e., the distribution of $z_0$ is close to $\rho_T$ from the CTF. This concept is illustrated in Figure 2. We note that choosing an appropriate $\mathcal{D}$ in (8) is important in order to make Theorem 2 applicable. Amortized SVGD (Wang & Liu, 2017) defines $\mathcal{D}$ as the standard Euclidean distance between samples. We show in Proposition 4 that this induces a large error in terms of approximation accuracy.

###### Proposition 4

Fix $\theta$. If $\mathcal{D}$ in (8) is defined as the summation of pairwise Euclidean distances between samples, then samples generated from $q_{\phi'}(z_0|x)$ converge to local modes of $p_\theta(z|x)$.

Consequently, it is crucial to impose a more general distance $\mathcal{D}$. As GAN has been interpreted as distribution matching (Li et al., 2017a), we define $\mathcal{D}$ via the Wasserstein distance, implemented as a discriminator parameterized by a neural network. Specifically, we adopt the ALICE framework (Li et al., 2017a), and use $\{z_0'^{(i)}\}$ as fake data and $\{z_1^{(i)}\}$ as real data to train the discriminator. More details are discussed in Section C of the SM.
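A deliberately simplified caricature of this failure mode (ours, not the paper's experiment): matching samples under an averaged Euclidean objective drives a near-deterministic generator toward a single central point (here the mean; Proposition 4 shows collapse to local modes in the full setting), discarding all of the target's spread, which is why a distributional $\mathcal{D}$ such as the Wasserstein critic is needed:

```python
import numpy as np

rng = np.random.default_rng(3)
# "target" samples z1 from a well-separated bimodal distribution
z1 = np.concatenate([rng.normal(-4, 0.3, 500), rng.normal(4, 0.3, 500)])

c, lr = 0.0, 0.1   # generator collapses to (roughly) a point mass at c
for _ in range(500):
    z0 = c + 0.01 * rng.standard_normal(500)          # generator samples
    # gradient of the average squared Euclidean distance E|z0 - z1|^2 / 2
    grad = np.mean(z0) - np.mean(rng.choice(z1, 500))
    c -= lr * grad

# c ends up near the overall mean (~0), between the two modes,
# while the target variance (~16) is entirely lost
print(c, z1.var())
```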

#### Updating θ

Given $\phi$, $\theta$ can be updated by simply optimizing the ELBO in (7), where $\rho_T$ is approximated by $\bar{\rho}_T$ from the discretized CTF. Specifically, the expectation with respect to $\rho_T$ in (7) is approximated by a sample average over $\{z_k\}$.

To sum up, there are three steps in learning a CTF-based VAE:

• Generate samples $\{z_k\}$ according to $q_\phi(z_0|x)$ and the discretized flow with transformations $\{T_k\}$;

• Update $\phi$ according to (8);

• Optimize $\theta$ by minimizing the ELBO (7) with the generated sample path.

In testing, we use only the finally learned inference network $q_\phi(z_0|x)$ (into which the CTF has been distilled), so testing proceeds as in a standard VAE. Since the discretized-CTF model is essentially a Markov chain, we call our model Markov-chain-based VAE (MacVAE).

## 4 CTFs for Energy-based Density Estimation

We assume that the density of the observation $x$ is characterized by a parametric Gibbsian-style probability model $p_\theta(x) = \frac{1}{Z(\theta)}\tilde{p}_\theta(x) \triangleq \frac{1}{Z(\theta)}e^{U(x;\theta)}$, where $\tilde{p}_\theta(x)$ is an unnormalized version of $p_\theta(x)$ with parameters $\theta$, $U(x;\theta)$ is called the energy function (Zhao et al., 2017), and $Z(\theta) \triangleq \int e^{U(x;\theta)}\,\mathrm{d}x$ is the normalizer. Note this form constitutes a very large class of distributions as long as the capacity of the energy function is large enough; this is easily achieved by implementing $U(x;\theta)$ with a DNN, the setting considered in this paper. Our model sits between existing implicit and explicit density-estimation methods, because we model the data density with an explicit distributional form, up to an intractable normalizer. Such distributions have proved useful in real applications; e.g., Haarnoja et al. (2017) used them to model policies in deep reinforcement learning.

Our goal is to learn $\theta$ given observations $\{x_i\}_{i=1}^N$, which can be achieved via standard maximum likelihood estimation (MLE): $\theta^* = \operatorname*{arg\,max}_\theta \mathcal{M}(\theta) \triangleq \frac{1}{N}\sum_{i=1}^N \log p_\theta(x_i)$.

The optimization can be achieved by standard stochastic gradient descent (SGD), with the following gradient formula:

$$\frac{\partial \mathcal{M}}{\partial\theta} = \frac{1}{N}\sum_{i=1}^{N}\frac{\partial U(x_i;\theta)}{\partial\theta} - \mathbb{E}_{p_\theta(x)}\Big[\frac{\partial U(x;\theta)}{\partial\theta}\Big] \quad (9)$$

The above formula requires an integration over the model distribution $p_\theta(x)$, which can be approximated by Monte Carlo integration with samples. Here we adopt the idea of CTFs and propose to use a DNN guided by a CTF, which we call a generator, to generate approximate samples from the model $p_\theta(x)$. Specifically, we require that samples from the generator well approximate the target $p_\theta(x)$. This can be done by adopting the CTF idea above, i.e., distilling knowledge of a CTF (which approaches $p_\theta(x)$) into the generator. In testing, instead of generating samples from $p_\theta(x)$ via MCMC (which is complicated and time-consuming), we generate samples from the generator directly. Furthermore, when evaluating the likelihood of test data, the constant $Z(\theta)$ can also be approximated by Monte Carlo integration with samples drawn from the generator.

Note the first term on the RHS of (9) is a model fit to the observed data, and the second term is a model fit to synthetic data drawn from $p_\theta(x)$; this resembles the discriminator in GANs (Arjovsky et al., 2017), but is derived directly from the MLE. More connections are discussed below.
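As an end-to-end illustration of (9) (a toy instance of ours): with $U(x;\theta) = -(x-\theta)^2/2$ we have $p_\theta = \mathcal{N}(\theta, 1)$, and, replacing the learned generator by exact sampling from $p_\theta$, stochastic ascent on (9) recovers the data mean, the MLE for this model:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(2.5, 1.0, 2000)      # observed samples; true mean 2.5

theta, lr = 0.0, 0.5
for _ in range(200):
    # dU/dtheta for U(x; theta) = -(x - theta)^2 / 2 is (x - theta)
    model_x = theta + rng.standard_normal(512)  # stand-in for generator samples
    grad = np.mean(data - theta) - np.mean(model_x - theta)
    theta += lr * grad                 # stochastic ascent on the log-likelihood

# theta converges near the data mean, the MLE for this Gaussian energy
print(theta, data.mean())
```

In MacGAN, the exact sampler above is replaced by the learned generator, and the same two-term gradient drives the energy parameters.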

### 4.1 Learning via Amortization

Our goal is to learn a generator whose samples match those from the original model $p_\theta(x)$. As in the inference setting, the generator is learned implicitly; however, we also learn an explicit density model for the data by SGD, using samples from the implicit generator to estimate the gradients in (9). Note that in this case the CTF is performed directly in data space, instead of in latent-variable space as in previous sections. Specifically, the sampling procedure combining the generator and a CTF is written as:

$$x_0 \sim q_\phi(x_0)\,, \qquad x_T \sim T(x_0, T)~.$$

Here $T$ is the continuous-time flow; a sample $x_0$ from $q_\phi(x_0)$ is produced by a deep neural network (the generator) with input $\omega \sim q_0(\omega)$, where $q_0$ is a simple distribution for a noise random variable, e.g., the isotropic normal distribution. The procedure is illustrated in Figure 3.

Specifically, denote the parameters in the $k$-th step with subscript $k$. For efficient sample generation, in the $k$-th step we again adopt the amortization idea from Section 3.2 to update the parameters $\phi$ of the generator network, such that samples from the updated generator match those from the current generator followed by a one-step transformation $T_k$. After that, $\theta$ is updated by drawing samples from the generator to estimate the expectation in (9). The algorithm is presented in Algorithm 1 in Section E of the SM.

### 4.2 Connections to WGAN and MLE

There is an interesting relation between our model and the WGAN framework (Arjovsky et al., 2017). To see this, let $p_r$ denote the data distribution. Substituting the model distribution $p_\theta(x)$ with the generator distribution $q_\phi(x)$ for the expectation in the gradient formula (9) and integrating out $\theta$, our objective becomes

$$\max_\theta\ \mathbb{E}_{x\sim p_r}\big[U(x;\theta)\big] - \mathbb{E}_{x\sim q_\phi}\big[U(x;\theta)\big] \quad (10)$$

This is an instance of the integral probability metrics (Arjovsky & Bottou, 2017). When $U$ is restricted to the family of 1-Lipschitz functions, it recovers WGAN. This connection motivates us to introduce weight clipping (Arjovsky et al., 2017) or alternative regularizers (Gulrajani et al., 2017) when updating $\theta$, for better theoretical properties. For this reason, we call our model Markov-chain-based GAN (MacGAN).

Furthermore, it can be shown by Jensen’s inequality that the MLE is bounded by (detailed derivations are provided in Section F of the SM)

$$\max_\theta \frac{1}{N}\sum_{i=1}^N \log p_\theta(x_i) \le \max_\theta\ \mathbb{E}_{x\sim p_r}\big[U(x;\theta)\big] - \mathbb{E}_{x\sim q_\phi}\big[U(x;\theta)\big] - \mathbb{E}_{x\sim q_\phi}\big[\log q_\phi(x)\big]~. \quad (11)$$

By inspecting (10) and (11), it is clear that: i) when learning the energy-based model parameters $\theta$, the objective can be interpreted as maximizing an upper bound of the MLE shown in (11); ii) when optimizing the parameters $\phi$ of the inference network, we adopt the amortized learning procedure presented in Algorithm 1, whose objective coincides with the last two terms in (11). In other words, both $\theta$ and $\phi$ are optimized via the same upper bound of the MLE, supporting convergence of the algorithm, although previous work has pointed out that maximizing an upper bound is not a well-posed problem in general (Salakhutdinov & Hinton, 2012).

###### Proposition 5

The optimal solution of MacGAN is the maximum likelihood estimator.

Note another difference between MacGAN and the standard GAN framework is the way the generator $q_\phi$ is learned. We adopt the amortization idea, which directly guides $q_\phi$ to approach $p_\theta(x)$; in GAN, by contrast, the generator is optimized via a min-max procedure to make it approach the empirical data distribution $p_r$. By explicitly learning $p_\theta(x)$, MacGAN is able to evaluate the likelihood of test data up to a constant.

## 5 Related Work

Our framework extends the idea of normalizing flows (Rezende & Mohamed, 2015) and gradient flows (Altieri & Duvenaud, 2015) to continuous-time flows, developing theoretical properties of the convergence behavior. Inference based on CTFs has been studied in (Sohl-Dickstein et al., 2015) based on maximum likelihood and in (Salimans et al., 2015) based on the auxiliary-variable technique. However, these works directly use discrete approximations of the flow, and the approximation accuracy is unclear. Moreover, their inference networks require simulating a long Markov chain for the auxiliary model, and are thus less efficient than ours. Finally, their inference networks are implemented as parametric distributions (e.g., Gaussian), limiting the representation power, a common setting in existing auxiliary-variable-based models (Tran et al., 2016). The idea of amortization (Gershman & Goodman, 2014) has recently been explored in various Bayesian-inference settings, such as variational inference (Kingma & Welling, 2014; Rezende et al., 2014) and Markov chain Monte Carlo (Wang & Liu, 2017; Li et al., 2017b; Pu et al., 2017). Both Wang & Liu (2017) and Pu et al. (2017) extend the idea of Stein variational gradient descent (Liu & Wang, 2016) with amortized inference for a GAN-based and a VAE-based model, respectively, which resemble our proposed MacGAN and MacVAE in concept. Li et al. (2017b) apply amortization to distill knowledge from MCMC to learn a student network; their ideas are similar to ours, but the motivation and underlying theory differ from those developed here. They also propose several divergence measures for distribution matching, including the Jensen-Shannon divergence, similar to our method.

## 6 Experiments

We conduct experiments to test our CTF-based framework on efficient inference and density estimation, comparing with related methods. Some experiments are based on the excellent code for SteinGAN (Wang & Liu, 2017), whose default parameter settings are adopted. The algorithm is robust to the discretization stepsize as long as it is set within a reasonable range; e.g., we set it equal to the stepsize in SGD. More experimental results are given in the SM, including a sensitivity analysis of model parameters in Section G.4.

### 6.1 CTFs for inference

#### Synthetic experiment

We examine our amortized learning framework with three toy experiments. In particular, we want to verify the necessity of the distribution matching defined in (8); i.e., we compare an implementation of (8) as a discriminator for the Wasserstein distance (adversarial-CTF) against an implementation with the standard Euclidean distance (-CTF), which can be considered an instance of amortized MCMC (Li et al., 2017b) with a Langevin-dynamics transition function and a Euclidean-distance-based divergence measure for samples. Two 2D distributions similar to those in (Rezende & Mohamed, 2015) are considered, defined in Section D of the SM. The inference network is defined to be a 2-layer MLP with isotropic normal random variables as input. Figure 4 plots the densities estimated with samples from the CTF transformations (before optimizing the inference network), as well as with samples generated directly from the inference network (after optimization). It is clear that amortized learning with the Wasserstein distance is able to distill knowledge from the CTF into the inference network, whereas the algorithm fails when the Euclidean distance is adopted.
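To make the transition function concrete, the following sketch runs Euler-Maruyama discretized Langevin dynamics on a toy target; the target density (a standard 2D Gaussian), stepsize, and chain length here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def langevin_flow(grad_log_p, z0, h=0.05, K=5000, rng=None):
    """Discretized continuous-time flow: K Euler-Maruyama steps of
    Langevin dynamics, z <- z + h * grad_log_p(z) + sqrt(2h) * noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = np.array(z0, dtype=float)
    samples = []
    for _ in range(K):
        z = z + h * grad_log_p(z) + np.sqrt(2.0 * h) * rng.standard_normal(z.shape)
        samples.append(z.copy())
    return np.stack(samples)

# Hypothetical toy target: a standard 2D Gaussian, so grad log p(z) = -z.
traj = langevin_flow(lambda z: -z, z0=[4.0, -4.0])
print(traj[-1000:].mean(axis=0), traj[-1000:].std(axis=0))  # roughly zero mean, unit std
```

Samples from such a chain play the role of the transformed particles that the inference network is trained to match.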

Next, we test MacVAE in a VAE setting on a simple synthetic dataset containing 4 data points, each a 4D one-hot vector with its non-zero element at a different position. The prior of the latent code is a 2D standard normal. Figure 5 plots the distributions of the learned latent codes for the VAE, the adversarial-CTF and the -CTF. Each color corresponds to the codes of one particular observation. We observe that the VAE divides the space into a mixture of 4 Gaussians (consistent with VAE theory), the adversarial-CTF learns complex posteriors, while the -CTF converges to the mode of each posterior (consistent with Proposition 4).

#### MacVAE on MNIST

Following (Rezende & Mohamed, 2015; Tomczak & Welling, 2016), we define the inference network as a deep neural network with two fully connected layers of size 300 with softplus activation functions. We compare MacVAE with the standard VAE and the VAE with normalizing flow, reporting testing ELBOs (Section G.1 of the SM describes how the ELBO is calculated). We do not compare with other state-of-the-art methods such as the inverse autoregressive flow (Kingma et al., 2016), because they typically employ more complicated inference networks (with more parameters), making the comparison unfair. We use the same inference network architecture for all models. Figure 7 (left) plots the testing ELBO versus training epochs. MacVAE outperforms the VAE and normalizing flows, achieving a better ELBO (around -85.62).

### 6.2 CTFs for density estimation

We test MacGAN on three datasets: MNIST, CIFAR-10 and CelebA. Following GAN-related methods, the model is evaluated by its ability to draw samples from the learned data distribution. Inspired by (Wang & Liu, 2017), we define a parametric form of the energy-based model in terms of an encoder and a decoder, implemented as deep convolutional and deconvolutional neural networks, respectively. For simplicity, we adopt the popular DCGAN architecture (Radford et al., 2016) for the encoder and decoder. The generator is defined as a 3-layer CNN with the ReLU activation function (except for the top layer, which uses tanh as the activation function; see Section G of the SM for details). Following (Wang & Liu, 2017), the stepsizes for updating the model parameters are annealed over the training epochs, and the stepsize in the CTF is set to 1e-3.

We compare MacGAN with DCGAN (Radford et al., 2016), the improved WGAN (WGAN-I) (Gulrajani et al., 2017) and SteinGAN (Wang & Liu, 2017). We plot images generated with MacGAN and its most closely related method, SteinGAN, in Figure 6 for the CelebA and CIFAR-10 datasets. More results are provided in Section G of the SM. We observe that MacGAN is able to generate visually clear images. Following (Wang & Liu, 2017), we also plot images generated by a random walk in the latent space in Figure 6.

Quantitatively evaluating a GAN-like model is challenging. Following the literature, we use the inception score (Salimans et al., 2016) to measure the quality of the generated images. Figure 7 (right) plots inception scores versus epochs for the different models. MacGAN obtains inception scores competitive with the popular DCGAN model. Quantitatively, we obtain a final inception score of 6.49 for MacGAN, compared to 6.35 for SteinGAN, 6.25 for WGAN-I and 6.58 for DCGAN.
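For reference, the inception score reduces to a short computation once class posteriors p(y|x) from a pretrained classifier are available; the sketch below assumes such posteriors are given as an (N, C) array, and is not tied to the paper's evaluation code.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, C) array of class posteriors p(y|x) from a pretrained
    classifier. IS = exp( mean_x KL(p(y|x) || p(y)) )  (Salimans et al., 2016)."""
    probs = np.asarray(probs, dtype=float)
    p_y = probs.mean(axis=0, keepdims=True)  # marginal class distribution p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Sanity checks: confident, diverse predictions give a high score;
# a collapsed generator (all mass on one class) gives a score of 1.
sharp = np.eye(10)                          # 10 samples, each a different class
collapsed = np.tile(np.eye(10)[0], (10, 1))  # 10 samples, all the same class
print(inception_score(sharp))      # ~10
print(inception_score(collapsed))  # ~1
```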

## 7 Conclusion

We study the problem of applying CTFs for efficient inference and explicit density estimation in deep generative models, two important tasks in unsupervised learning. Compared to discrete-time NFs, CTFs are more general and flexible, as their stationary distributions can be controlled without extra flow parameters. We develop theory on the approximation accuracy obtained when adopting a CTF to approximate a target distribution. We apply CTFs to two classes of deep generative models: a variational autoencoder for efficient inference, and a GAN-like density estimator for explicit density estimation and efficient data generation. Experiments show encouraging results for our framework in both models compared to existing techniques. One interesting direction for future work is to explore more efficient learning algorithms for the proposed CTF-based framework.

## Acknowledgements

We thank the anonymous reviewers for their useful comments. This research was supported in part by DARPA, DOE, NIH, ONR and NSF.

## References

• Altieri & Duvenaud (2015) Altieri, N. and Duvenaud, D. Variational inference with gradient flows. In NIPS workshop on Advances in Approximate Bayesian Inference, 2015.
• Arjovsky & Bottou (2017) Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
• Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN. In ICML, 2017.
• Bolley et al. (2012) Bolley, F., Gentil, I., and Guillin, A. Convergence to equilibrium in Wasserstein distance for Fokker–Planck equations. Journal of Functional Analysis, 263(8):2430–2457, 2012.
• Bottou (2012) Bottou, L. Stochastic gradient descent tricks. Technical report, Microsoft Research, Redmond, WA, 2012.
• Chen et al. (2015) Chen, C., Ding, N., and Carin, L. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In NIPS, 2015.
• Chen et al. (2016) Chen, C., Carlson, D., Gan, Z., Li, C., and Carin, L. Bridging the gap between stochastic gradient MCMC and stochastic optimization. In AISTATS, 2016.
• Ding et al. (2014) Ding, N., Fang, Y., Babbush, R., Chen, C., Skeel, R. D., and Neven, H. Bayesian sampling using stochastic gradient thermostats. In NIPS, 2014.
• Dinh et al. (2017) Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real NVP. In ICLR, 2017.
• Dorogovtsev & Nishchenko (2014) Dorogovtsev, A. A. and Nishchenko, I. I. An analysis of stochastic flows. Communications on Stochastic Analysis, 8(3):331–342, 2014.
• Feng et al. (2017) Feng, Y., Wang, D., and Liu, Q. Learning to draw samples with amortized Stein variational gradient descent. In UAI, 2017.
• Gershman & Goodman (2014) Gershman, S. J. and Goodman, N. D. Amortized inference in probabilistic reasoning. In Annual Conference of the Cognitive Science Society, 2014.
• Givens & Shortt (1984) Givens, C. R. and Shortt, R. M. A class of Wasserstein metrics for probability distributions. Michigan Math. J., 31, 1984.
• Grover et al. (2017) Grover, A., Dhar, M., and Ermon, S. Flow-GAN: Bridging implicit and prescribed learning in generative models. Technical Report arXiv:1705.08868, 2017.
• Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. Improved training of Wasserstein GAN. In NIPS, 2017.
• Haarnoja et al. (2017) Haarnoja, T., Tang, H., Abbeel, P., and Levine, S. Reinforcement learning with deep energy-based policies. In ICML, 2017.
• Huszár (2017) Huszár, F. Variational inference using implicit distributions. Technical Report arXiv:1702.08235, 2017.
• Jordan et al. (1998) Jordan, R., Kinderlehrer, D., and Otto, F. The variational formulation of the Fokker-Planck equation. SIAM J. MATH. ANAL., 29(1):1–17, 1998.
• Kingma et al. (2016) Kingma, D., Salimans, T. P., and Welling, M. Improving variational inference with inverse autoregressive flow. In NIPS, 2016.
• Kingma & Welling (2014) Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In ICLR, 2014.
• Korattikara et al. (2015) Korattikara, A., Rathod, V., Murphy, K., and Welling, M. Bayesian dark knowledge. In NIPS, 2015.
• Li et al. (2016a) Li, C., Chen, C., Carlson, D., and Carin, L. Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In AAAI, 2016a.
• Li et al. (2016b) Li, C., Steven, A., Chen, C., and Carin, L. Learning weight uncertainty with stochastic gradient MCMC for shape classification. In CVPR, 2016b.
• Li et al. (2017a) Li, C., Liu, H., Chen, C., Pu, Y., Chen, L., Henao, R., and Carin, L. ALICE: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017a.
• Li et al. (2017b) Li, Y., Turner, R. E., and Liu, Q. Approximate inference with amortised MCMC. Technical Report arXiv:1702.08343, 2017b.
• Liu & Wang (2016) Liu, Q. and Wang, D. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In NIPS, 2016.
• Mattingly et al. (2010) Mattingly, J. C., Stuart, A. M., and Tretyakov, M. V. Construction of numerical time-average and stationary measures via Poisson equations. SIAM Journal on Numerical Analysis, 48(2):552–577, 2010.
• Mohamed & Lakshminarayanan (2017) Mohamed, S. and Lakshminarayanan, B. Learning in implicit generative models. Technical Report arXiv:1610.03483, 2017.
• Otto (1998) Otto, F. Dynamics of Labyrinthine pattern formation in magnetic fluids: A mean-field theory. Arch. Rational Mech. Anal., pp. 63–103, 1998.
• Pu et al. (2017) Pu, Y., Gan, Z., Henao, R., Li, C., Han, S., and Carin, L. VAE learning via Stein variational gradient descent. In NIPS, 2017.
• Radford et al. (2016) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. Technical Report arXiv:1511.06434, January 2016.
• Ranganath et al. (2016) Ranganath, R., Altosaar, J., Tran, D., and Blei, D. M. Operator variational inference. In NIPS, 2016.
• Rezende & Mohamed (2015) Rezende, D. J. and Mohamed, S. Variational inference with normalizing flows. In ICML, 2015.
• Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
• Risken (1989) Risken, H. The Fokker-Planck equation. Springer-Verlag, New York, 1989.
• Salakhutdinov & Hinton (2012) Salakhutdinov, R. and Hinton, G. An efficient learning procedure for deep Boltzmann machines. Neural Computation, 24(8):1967–2006, 2012.
• Salimans et al. (2015) Salimans, T., Kingma, D. P., and Welling, M. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
• Salimans et al. (2016) Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In NIPS, 2016.
• Sohl-Dickstein et al. (2015) Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015.
• Teh et al. (2016) Teh, Y. W., Thiery, A. H., and Vollmer, S. J. Consistency and fluctuations for stochastic gradient Langevin dynamics. JMLR, 17(1):193–225, 2016.
• Tomczak & Welling (2016) Tomczak, J. M. and Welling, M. Improving variational auto-encoders using Householder flow. Technical Report arXiv:1611.09630, November 2016.
• Tran et al. (2016) Tran, D., Ranganath, R., and Blei, D. M. The variational Gaussian process. In ICLR, 2016.
• van den Oord et al. (2016) van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. Pixel recurrent neural networks. In ICML, 2016.
• Vollmer et al. (2016) Vollmer, S. J., Zygalakis, K. C., and Teh, Y. W. Exploration of the (non-)asymptotic bias and variance of stochastic gradient Langevin dynamics. JMLR, 1:1–48, 2016.
• Wang & Liu (2017) Wang, D. and Liu, Q. Learning to draw samples: With application to amortized MLE for generative adversarial learning. In ICLR workshop, 2017.
• Welling & Teh (2011) Welling, M. and Teh, Y. W. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.
• Zhang et al. (2017) Zhang, Y., Chen, C., Gan, Z., Henao, R., and Carin, L. Stochastic gradient monomial Gamma sampler. In ICML, 2017.
• Zhao et al. (2017) Zhao, J., Mathieu, M., and LeCun, Y. Energy-based generative adversarial networks. In ICLR, 2017.

## Appendix A Assumptions of Theorem 2

First, let us define the infinitesimal generator of the diffusion (2). Formally, the generator $\mathcal{L}$ of the diffusion (2) is defined for any compactly supported twice-differentiable function $f$, such that,

$$\mathcal{L}f(Z_t) \triangleq \lim_{h\rightarrow 0^{+}}\frac{\mathbb{E}\left[f(Z_{t+h})\right]-f(Z_t)}{h} = \Big(F(Z_t)\cdot\nabla + \frac{1}{2}\big(G(Z_t)G(Z_t)^{T}\big):\nabla\nabla^{T}\Big)f(Z_t)\;,$$

where $a \cdot b \triangleq a^{T}b$, $A : B \triangleq \operatorname{tr}(A^{T}B)$, and $h \rightarrow 0^{+}$ means $h$ approaches zero along the positive real axis.
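To make the generator concrete, the following sketch applies the formula to a 1D Ornstein-Uhlenbeck diffusion, an illustrative choice (not from the paper) with F(z) = -z and G(z) = sqrt(2), for which L f(z) = -z f'(z) + f''(z) exactly; the derivatives are approximated by central finite differences.

```python
import numpy as np

def generator_1d(F, G, f, z, dz=1e-4):
    """Apply the infinitesimal generator L = F(z) d/dz + (1/2) G(z)^2 d^2/dz^2
    to the test function f at point z, via central finite differences."""
    df = (f(z + dz) - f(z - dz)) / (2 * dz)          # first derivative
    d2f = (f(z + dz) - 2 * f(z) + f(z - dz)) / dz**2  # second derivative
    return F(z) * df + 0.5 * G(z) ** 2 * d2f

# OU example: F(z) = -z, G(z) = sqrt(2); for f(z) = z^2, L f(z) = -2 z^2 + 2.
z = 1.5
approx = generator_1d(lambda z: -z, lambda z: np.sqrt(2.0), lambda z: z**2, z)
print(approx, -2 * z**2 + 2)  # the two values agree
```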

Given an ergodic diffusion (2) with an invariant measure $\rho(z)$, the posterior average is defined as $\bar{\psi} \triangleq \int \psi(z)\rho(z)\mathrm{d}z$ for some test function $\psi(z)$ of interest. For a given numerical method with generated samples $\{z_k\}_{k=1}^{K}$, we use the sample average $\hat{\psi}_K \triangleq \frac{1}{K}\sum_{k=1}^{K}\psi(z_k)$ to approximate $\bar{\psi}$. We define a functional $\tilde{\psi}$ that solves the following Poisson equation:

$$\mathcal{L}\tilde{\psi}(z_k) = \psi(z_k) - \bar{\psi}\,. \tag{12}$$

We make the following assumptions on $\tilde{\psi}$.

###### Assumption 1

$\tilde{\psi}$ exists, and its derivatives up to 4th order, $\mathcal{D}^{k}\tilde{\psi}$, are bounded by a function $\mathcal{V}$, i.e., $\|\mathcal{D}^{k}\tilde{\psi}\| \le C_{k}\mathcal{V}^{p_{k}}$ for $k \le 4$ and constants $C_{k}, p_{k} > 0$. Furthermore, the expectation of $\mathcal{V}$ on $\{z_k\}$ is bounded, i.e., $\sup_{k}\mathbb{E}\,\mathcal{V}^{p}(z_k) < \infty$, and $\mathcal{V}$ is smooth such that $\sup_{s\in(0,1)}\mathcal{V}^{p}\big(s z + (1-s)y\big) \le C\big(\mathcal{V}^{p}(z) + \mathcal{V}^{p}(y)\big)$ for all $z, y$ and $p \le \max_{k} 2p_{k}$, for some $C > 0$.

## Appendix B Proofs for Section 3

Proof [Sketch proof of Lemma 1] First note that (5) in Lemma 1 corresponds to Eq. (13) in (Jordan et al., 1998), where the functional being minimized in (Jordan et al., 1998) takes the form of the free energy in our setting. Proposition 4.1 in (Jordan et al., 1998) then proves that (5) has a unique solution, and Theorem 5.1 in (Jordan et al., 1998) guarantees that the solution of (5) approaches the solution of the Fokker–Planck equation in (3) in the limit $h \rightarrow 0$. Since this is true for each step of the discretization, we conclude that the discretized flow approaches the true CTF in the limit $h \rightarrow 0$.

To prove Theorem 2, we first need a result on convergence to equilibrium in Wasserstein distance for Fokker–Planck equations, presented in (Bolley et al., 2012). Adapted to our setting, it yields the following lemma, based on Corollary 2.4 in (Bolley et al., 2012).

###### Lemma 6 ((Bolley et al., 2012))

Let $\rho_T$ be the solution of the FP equation (3) at time $T$, and let $p(x,z)$ denote the joint posterior distribution given $x$. Assume the regularity conditions of Corollary 2.4 in (Bolley et al., 2012) hold, so that there exists a constant $C > 0$ for which

$$W_2\big(\rho_T, p(x,z)\big) \le W_2\big(\rho_0, p(x,z)\big)\, e^{-CT}\,. \tag{13}$$
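Lemma 6's exponential decay can be checked in closed form for a 1D Ornstein-Uhlenbeck diffusion targeting N(0, 1), using the explicit W2 between Gaussians; the initial condition below is an arbitrary illustrative choice, not from the paper.

```python
import numpy as np

def w2_gauss_1d(m1, s1, m2, s2):
    """Closed-form 2-Wasserstein distance between 1D Gaussians N(m, s^2)."""
    return np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def ou_marginal(m0, s0, t):
    """Mean/std at time t of the OU diffusion dZ = -Z dt + sqrt(2) dW,
    started from N(m0, s0^2); its stationary distribution is N(0, 1)."""
    m_t = m0 * np.exp(-t)
    s_t = np.sqrt(1.0 + (s0**2 - 1.0) * np.exp(-2 * t))
    return m_t, s_t

m0, s0 = 3.0, 2.0  # illustrative initial condition
w0 = w2_gauss_1d(m0, s0, 0.0, 1.0)
for t in [0.5, 1.0, 2.0, 4.0]:
    m_t, s_t = ou_marginal(m0, s0, t)
    w_t = w2_gauss_1d(m_t, s_t, 0.0, 1.0)
    print(t, w_t, w0 * np.exp(-t))  # W2 decays at least as fast as e^{-Ct}, C = 1 here
```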

We further need to borrow convergence results from (Mattingly et al., 2010; Vollmer et al., 2016; Chen et al., 2015) to characterize the error bounds of a numerical integrator for the diffusion (2). Specifically, the goal is to evaluate the posterior average of a test function $\psi(z)$, defined as $\bar{\psi} \triangleq \int \psi(z)\,p(x,z)\,\mathrm{d}z$. When a numerical integrator is used to solve (2) and obtain samples $\{z_k\}_{k=1}^{K}$, the sample average $\hat{\psi}_K \triangleq \frac{1}{K}\sum_{k=1}^{K}\psi(z_k)$ is used to approximate the posterior average. The accuracy is characterized by the mean square error (MSE), defined as $\mathbb{E}\big(\hat{\psi}_K - \bar{\psi}\big)^2$. Lemma 7 derives the bound for the MSE.

###### Lemma 7 ((Vollmer et al., 2016))

Under Assumption 1, and for a 1st-order numerical integrator, the MSE is bounded, for a constant $C$ independent of $h$ and $K$, by

$$\mathbb{E}\big(\hat{\psi}_K - \bar{\psi}\big)^2 \le C\Big(\frac{1}{hK} + h^2\Big)\,.$$
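The bound trades off discretization bias (the $h^2$ term) against Monte Carlo variance (the $1/(hK)$ term); a quick numerical sketch recovers the optimal stepsize $h^* = (2K)^{-1/3}$ implied by the bound (the constant C = 1 below is an arbitrary choice).

```python
import numpy as np

def mse_bound(h, K, C=1.0):
    """The Lemma-7-style bound C * (1/(hK) + h^2) for a 1st-order integrator."""
    return C * (1.0 / (h * K) + h**2)

# Setting d/dh (1/(hK) + h^2) = 0 gives h* = (2K)^(-1/3), i.e. an O(K^{-2/3}) rate.
K = 10_000
hs = np.logspace(-4, 0, 400)
h_best = hs[np.argmin(mse_bound(hs, K))]
print(h_best, (2 * K) ** (-1 / 3))  # grid minimizer is close to the analytic h*
```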

Furthermore, in addition to the 2nd-order Wasserstein distance defined in Lemma 1, we define the 1st-order Wasserstein distance between two probability measures $\mu_1$ and $\mu_2$ as

$$W_1(\mu_1,\mu_2) \triangleq \inf_{p\in\mathcal{P}(\mu_1,\mu_2)} \int \|x-y\|_2\, p(\mathrm{d}x, \mathrm{d}y)\,. \tag{14}$$

According to the Kantorovich–Rubinstein duality (Arjovsky et al., 2017), $W_1$ is equivalently represented as

$$W_1(\mu_1,\mu_2) = \sup_{f\in\mathcal{F}_L} \mathbb{E}_{x\sim\mu_1}\left[f(x)\right] - \mathbb{E}_{y\sim\mu_2}\left[f(y)\right]\,, \tag{15}$$

where $\mathcal{F}_L$ is the space of 1-Lipschitz functions $f:\mathbb{R}^{d}\rightarrow\mathbb{R}$.

We have the following relation between $W_1$ and $W_2$.

###### Lemma 8 ((Givens & Shortt, 1984))

For any two distributions $\mu_1$ and $\mu_2$, we have $W_1(\mu_1,\mu_2) \le W_2(\mu_1,\mu_2)$.
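Lemma 8 is easy to verify empirically in 1D, where the optimal coupling between two equal-size empirical distributions matches sorted samples; the Gaussian samples below are an illustrative choice, not from the paper.

```python
import numpy as np

def wasserstein_1d(x, y, p):
    """p-Wasserstein distance between two equal-size 1D empirical
    distributions: sorting both samples gives the optimal coupling in 1D."""
    x, y = np.sort(x), np.sort(y)
    return float(np.mean(np.abs(x - y) ** p)) ** (1.0 / p)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)
y = rng.normal(2.0, 1.5, 1000)
w1 = wasserstein_1d(x, y, 1)
w2 = wasserstein_1d(x, y, 2)
print(w1, w2)  # W1 <= W2, as Lemma 8 states
```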

We are now ready to prove Theorem 2.

Proof [Proof of Theorem 2] The idea is to decompose the MSE into two parts: one part characterizing the MSE of the numerical method, and the other characterizing the distance between $\rho_T$ and the target $p(x,z)$, which can in turn be bounded using Lemma 6 above.

Specifically, we have

$$\begin{aligned}
\mathbb{E}\big(\hat{\psi}_K - \bar{\psi}\big)^2
=\; & \mathbb{E}\Big(\frac{1}{K}\sum_{k=1}^{K}\psi(z_k) - \int\psi(z)\rho_T(z)\mathrm{d}z\Big)^2 \\
=\; & \mathbb{E}\bigg(\Big(\frac{1}{K}\sum_{k=1}^{K}\psi(z_k) - \int\psi(z)p(x,z)\mathrm{d}z\Big) - \Big(\int\psi(z)\rho_T(z)\mathrm{d}z - \int\psi(z)p(x,z)\mathrm{d}z\Big)\bigg)^2 \\
\overset{(1)}{=}\; & \mathbb{E}\Big(\frac{1}{K}\sum_{k=1}^{K}\psi(z_k) - \int\psi(z)p(x,z)\mathrm{d}z\Big)^2 + \Big(\int\psi(z)\rho_T(z)\mathrm{d}z - \int\psi(z)p(x,z)\mathrm{d}z\Big)^2 \\
\overset{(2)}{\le}\; & \mathbb{E}\Big(\frac{1}{K}\sum_{k=1}^{K}\psi(z_k) - \int\psi(z)p(x,z)\mathrm{d}z\Big)^2 + W_1^2\big(\rho_T, p(x,z)\big) \\
\overset{(3)}{\le}\; & \mathbb{E}\Big(\frac{1}{K}\sum_{k=1}^{K}\psi(z_k) - \int\psi(z)p(x,z)\mathrm{d}z\Big)^2 + W_2^2\big(\rho_T, p(x,z)\big) \\
\overset{(4)}{\le}\; & C_1\Big(\frac{1}{hK}+h^2\Big) + W_2^2\big(\rho_0, p(x,z)\big)\, e^{-2CT} \\
=\; & O\Big(\frac{1}{hK}+h^2+e^{-2ChK}\Big)\,,
\end{aligned}$$

where “(1)” follows by the fact that