Extending Stan for Deep Probabilistic Programming

09/30/2018 · by Javier Burroni, et al. · IBM

Deep probabilistic programming combines deep neural networks (for automatic hierarchical representation learning) with probabilistic models (for principled handling of uncertainty). Unfortunately, it is difficult to write deep probabilistic models, because existing programming frameworks lack concise, high-level, and clean ways to express them. To ease this task, we extend Stan, a popular high-level probabilistic programming language, to use deep neural networks written in PyTorch. Training deep probabilistic models works best with variational inference, so we also extend Stan for that. We implement these extensions by translating Stan programs to Pyro. Our translation clarifies the relationship between different families of probabilistic programming languages. Overall, our paper is a step towards making deep probabilistic programming easier.




1 Introduction

Ideally, artificial-intelligence models would be both highly accurate and easy to understand. But while accuracy and understandability can each be achieved separately, they are hard to combine in the same model. On the one hand, deep neural networks can become highly accurate by learning from large-scale data to reshape distributions in intricate ways LeCun et al. (2015). On the other hand, probabilistic programming lets domain experts model how they understand data to be generated, based on real-world concepts with overt uncertainty Ghahramani (2015). A recent research paper proposed deep probabilistic programming to combine these advantages Tran et al. (2017). We aim at bringing deep probabilistic programming to the mainstream.

For our starting point, we look at mainstream technology in both fields. Stan is a popular probabilistic programming language that lets users write probabilistic models in a high-level, self-contained syntax with a static type system Carpenter et al. (2017). PyTorch is a popular framework for deep neural networks in Python that can learn sophisticated distributions from large-scale data Facebook (2016).

This paper shows how to extend Stan with deep neural networks. Instead of implementing a deep learning framework for Stan to add support for deep probabilistic programming, we chose to compile Stan programs to Pyro Uber (2017), a deep probabilistic programming language built on top of PyTorch that is currently in a beta release. This approach has the following advantages: (1) Since Pyro is built on top of PyTorch, the programmer can seamlessly import neural networks designed with the state-of-the-art API provided by PyTorch. (2) Variational inference was central in the design of Pyro, and programmers can easily craft their own inference guides, which is often critical to run variational inference on deep probabilistic models. (3) Pyro also offers alternative inference methods, like NUTS, so we can compare the results of our approach with the original Stan implementation on classic probabilistic models.

From one perspective, we contribute a new backend for Stan that can learn highly accurate distributions from large-scale data. From another perspective, we contribute a new frontend for Pyro that makes it easy for domain experts to express understandable models in a self-contained language.

For our language and compiler, we had to overcome two technical obstacles: variational inference and translating the semantics of Stan programs to Pyro.

Variational inference (VI) is a method for training probabilistic models. For decades, the dominant method for training probabilistic models has been Markov-chain Monte Carlo (MCMC) sampling. But VI is easier to scale to the large data needed to train the complex non-linear distribution shapes involved in deep probabilistic models Blei et al. (2017). Thus, MCMC is the default in Stan and VI is the default in Pyro. VI posits a family Q of densities for posteriors of variables to be learned, then finds the member of that family that is closest to the true posterior given the data. The user must specify the family Q, also known as the variational guide. We extend Stan with a high-level syntax for specifying variational guides.
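Closeness here is measured with the Kullback-Leibler (KL) divergence. As a minimal illustration (our own sketch, not part of the compiler), the KL divergence between two discrete distributions can be computed with the standard library alone:

```python
import math

def kl_divergence(q, p):
    """KL(q || p) for two discrete distributions given as probability lists."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p = [0.7, 0.3]  # "true posterior"
q = [0.6, 0.4]  # candidate guide

print(kl_divergence(p, p))  # 0.0: a guide matching the posterior has zero divergence
print(kl_divergence(q, p))  # > 0, and different from kl_divergence(p, q): KL is asymmetric
```

KL is zero only when the two distributions coincide, which is why the choice of the family Q matters: a poor family cannot get close to the true posterior no matter how the parameters are optimized.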

Although both Stan and Pyro are termed probabilistic programming languages, they belong to two different families. In Pyro, programs are expressed as generative models with conditioning statements. It is similar to BLOG (Milch et al., 2007), Church (Goodman et al., 2012), IBAL (Pfeffer, 2001), Anglican (Tolpin et al., 2015), or WebPPL (Goodman & Stuhlmüller, 2014). On the other hand, in Stan, a program describes a log-density function that will be optimized during the inference given a set of data. We thus had to design a compilation scheme that bridges the gap between these two language families.

This paper makes the following contributions:

  • Extending Stan for variational inference with high-level but explicit guides.

  • Extending Stan with a clean interface to neural networks written in Python.

  • A compiler from Stan to Pyro that carefully chooses the appropriate sampling semantics.

To evaluate our language and compiler, we implemented a variety of probabilistic models (both traditional and deep). We train them with both MCMC and VI, in both Stan and Pyro, where applicable. Our results show that the training yields almost the same probability distributions modulo implementation-specific deviations. Overall, this paper takes a substantial step towards making the combination of deep and probabilistic models more usable.

2 Language Extension Overview

This section uses examples to illustrate our Stan language extensions, DeepStan, which enable explicit variational inference and probabilistic models that involve deep neural networks.

We start with a simple Stan program from Section 2 of the Stan manual Stan Development Team (2016) that uses neither of our extensions. The program estimates the bias of a coin from a set of coin tosses. We write x_i = 1 if the result of the i-th coin toss is heads and x_i = 0 otherwise. We assume that individual coin tosses are independent and identically distributed (IID) and that each coin toss follows a Bernoulli distribution with parameter z, that is, p(x_i = 1 | z) = z and p(x_i = 0 | z) = 1 - z. The task is to infer the distribution of the latent (unobserved) variable z given the observations of the variables x_i. Figure 1 shows the corresponding Stan program and graphical model.

data {
  int N;
  int<lower=0,upper=1> x[N];
}
parameters {
  real<lower=0,upper=1> z;
}
model {
  z ~ Beta(1, 1);
  x ~ Bernoulli(z);
}
Figure 1: Learning the bias of a coin
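For this particular model the posterior is available in closed form: with a Beta(1, 1) prior and a Bernoulli likelihood, observing a sequence of tosses yields a Beta(1 + heads, 1 + tails) posterior. A small stdlib sketch of this conjugate update (our illustration; `coin_posterior` is not part of Stan or the compiler):

```python
def coin_posterior(x):
    """Closed-form posterior for the coin model with a Beta(1, 1) prior."""
    heads = sum(x)
    alpha, beta = 1 + heads, 1 + len(x) - heads
    posterior_mean = alpha / (alpha + beta)
    return alpha, beta, posterior_mean

# 10 tosses, 3 heads: posterior is Beta(4, 8), mean 4/12 = 1/3
alpha, beta, mean = coin_posterior([1, 0, 0, 0, 1, 0, 0, 0, 0, 1])
```

This closed form is what the inference results in Section 6 can be checked against.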

The data block introduces observations, which are placeholders for concrete data to be provided as input to the inference procedure. The parameters block introduces latent random variables, which will not be provided and must be inferred. The graphical model shows observed variables in gray and latent variables in white. All variables are explicitly typed, and the type can also declare constraints on their possible values. For example, real<lower=0,upper=1> z declares a continuous random variable z that can take any value between 0 and 1. The model block describes the model. It places a Beta(1, 1) (i.e., uniform) prior on z, that is, initially z can take all values between 0 and 1 with equal probability. The model then assumes that each coin toss x[i] follows a Bernoulli distribution with parameter z, using a vectorized statement which can also be written:

for (i in 1:N)
  x[i] ~ Bernoulli(z);

We can then launch the inference methods offered by Stan to fit the model to the provided data, thus learning the distribution of the latent variable z. By default, Stan uses NUTS (No-U-Turn Sampler), an optimized HMC (Hamiltonian Monte Carlo) algorithm, which in turn is a form of MCMC.
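NUTS itself is beyond a short example, but the underlying MCMC idea can be illustrated with a much simpler random-walk Metropolis sampler for the coin model (a stdlib sketch for intuition only, not the algorithm Stan uses):

```python
import math
import random

def log_post(z, x):
    # log Beta(1, 1) prior (constant on (0, 1)) + Bernoulli log-likelihood
    if not 0 < z < 1:
        return -math.inf
    return sum(math.log(z) if xi else math.log(1 - z) for xi in x)

def metropolis(x, steps=5000, scale=0.1, seed=0):
    rng = random.Random(seed)
    z, lp = 0.5, log_post(0.5, x)
    samples = []
    for _ in range(steps):
        z_new = z + rng.gauss(0, scale)
        lp_new = log_post(z_new, x)
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            z, lp = z_new, lp_new
        samples.append(z)
    return samples

samples = metropolis([1, 0, 0, 0, 1, 0, 0, 0, 0, 1])
mean = sum(samples) / len(samples)  # close to the exact posterior mean 1/3
```

The sampler only evaluates the unnormalized log-posterior, which is exactly the quantity a Stan model block defines.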

2.1 Explicit Variational Inference

Variational inference turns the inference problem into an optimization problem, and tends to be faster and better suited to large datasets than MCMC methods Blei et al. (2017).

Variational inference tries to find the member q*(z) of a family Q of simpler distributions that is the closest to the true posterior p(z | x). Members of the family are characterized by the values of the variational parameters θ. The fitness of a candidate is measured using the Kullback-Leibler (KL) divergence from the true posterior, an asymmetric measure of the difference between probability distributions.

In the following, we call guide the distribution family used to guide the variational inference.

Proposing a guide that can be used to compute meaningful results can be challenging. Stan offers a form of black-box variational inference called ADVI Kucukelbir et al. (2015) that automatically synthesizes a suitable guide. ADVI supports a broad class of models. However, for some models, in particular deep probabilistic models, the user may want to manually craft their own guide.

To allow the user to design their own guide, we extended Stan with two new blocks: guide parameters and guide. The guide block defines a distribution parameterized by the guide parameters. Variational inference then optimizes the values of these parameters to approximate the true posterior. Note that unlike Stan parameters, which define random variables to be used in the model, guide parameters are the hyper-parameters characterizing the family of distributions that will be optimized during inference.

Going back to the coin example, we can now explicitly ask the inference to find the best Beta distribution to approximate the true posterior.

(In this particular case, the exact posterior is itself a Beta distribution.)

guide parameters {
  real<lower=0> alpha_q;
  real<lower=0> beta_q;
}
guide {
  z ~ Beta(alpha_q, beta_q);
}

Inference then computes the optimal values of the parameters alpha_q and beta_q given the data. Notice that the graphical model represents guide parameters with rectangles to indicate that they are not random variables.

2.2 Deep Probabilistic Models

Deep probabilistic models are probabilistic models involving neural networks. Variational Auto-Encoders (VAEs) Kingma & Welling (2013); Rezende et al. (2014) are a popular example of such models.

The task is to learn a vector-space representation z for each observed data point x (e.g., a handwritten digit in the classic MNIST dataset). The computed representation can then be used as input to other models (e.g., a classifier). Each data point x depends on the latent representation z in a complex non-linear way, via a deep neural network: the decoder. The top half of Figure 2 shows the corresponding graphical model. The output of the decoder is a vector μ that parameterizes a Bernoulli distribution over each pixel in the image x. Each pixel is thus associated with a probability of being present in the image. The parameter θ of the decoder is global (i.e., shared by all data points).

Figure 2: Graphical model of the VAE (model and guide).

The main idea of the VAE is to use variational inference to learn the latent representation. The guide maps each x to a latent variable z via another neural network: the encoder. The bottom half of Figure 2 shows the graphical model of the guide. The encoder network returns, for each image x, the parameters μ and σ of a Gaussian distribution in the latent space. Again, the parameter φ of the network is global. Inference then tries to learn good values for the parameters θ and φ, simultaneously training the decoder and the encoder, according to the data.

networks {
  Decoder decoder;
  Encoder encoder;
}
data {
  int<lower=0, upper=1> x[28, 28];
}
parameters {
  real z[_];
}
model {
  real mu[_, _];
  z ~ Normal(0, 1);
  mu = decoder(z);
  x ~ Bernoulli(mu);
}
guide {
  real encoded[2, _] = encoder(x);
  real mu_z[_] = encoded[1];
  real sigma_z[_] = encoded[2];
  z ~ Normal(mu_z, sigma_z);
}
Figure 3: Variational Auto-Encoder (VAE)

Figure 3 shows the corresponding DeepStan code. We added a networks block to import neural network definitions. The network classes must be implemented in PyTorch. Figure 3 omits the dimension of the latent representation real z[_]. This information is already contained in the network definitions. To avoid error-prone redundant definitions, we implemented a shape inference that automatically computes missing dimensions (denoted by '_') when possible.
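One way to picture this shape inference is as unification of declared dimensions against shapes recovered from the network definitions. The function below is purely illustrative (our own sketch, not the actual implementation; the name `infer_shape` is made up):

```python
def infer_shape(declared, known):
    """Fill '_' placeholders in a declared shape using a known shape."""
    if len(declared) != len(known):
        raise TypeError(f"rank mismatch: {declared} vs {known}")
    result = []
    for d, k in zip(declared, known):
        if d == '_':
            result.append(k)       # fill in the missing dimension
        elif d != k:
            raise TypeError(f"dimension mismatch: {d} vs {k}")
        else:
            result.append(d)
    return result

# real z[_] with a decoder whose input dimension is known to be 5:
infer_shape(['_'], [5])        # -> [5]
infer_shape([2, '_'], [2, 5])  # -> [2, 5]
```

A mismatch between a declared dimension and the network's shape is reported as an error instead of being silently accepted.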

Section 5 will show that, in addition to using neural networks in the model, we can also lift the parameters of the network to random variables to obtain a Bayesian neural network Neal (2012). This is a neural network with weight uncertainty, whose parameters are probability distributions learnt during inference. It is then possible to sample from the computed distribution to obtain concrete instances of the network.

2.3 The Language Extensions

Figure 4 shows the grammar of DeepStan. As in Stan, a program is a succession of blocks. Objects defined in one block can be used in subsequent blocks. The non-terminals funs, data, tdata, params, tparams, and gquant correspond to the original Stan blocks (respectively functions, data, transformed data, parameters, transformed parameters, and generated quantities). The only mandatory block is model.

DeepStan extends Stan with three additional optional blocks. Block networks declares neural networks at the beginning of the pipeline. Blocks guide parameters and guide specify an explicit guide for variational inference after the model (see Section 4).

program  ::= nets?  funs?  data?  tdata?  params?  tparams?
             model  gparams?  guide?  generated?
nets     ::= 'networks' '{' ndecl* '}'
gparams  ::= 'guide parameters' '{' vdecl* '}'
guide    ::= 'guide' '{' vdecl* stmt* '}'
ndecl    ::= className var ';'
Figure 4: Syntax of the language extensions

The non-terminals vdecl and stmt correspond to variable declarations (type, name, dimensions) and statements. We omit their definitions here for space reasons. The complete grammar of the Stan language can be found in Appendix C of the manual Stan Development Team (2016).

3 Compilation: From Stan to Pyro

This section shows how to compile Stan programs into Pyro generative models. We give the intuition of the compilation technique on simple examples.

3.1 Intuition

In Stan, the model block represents the log of a probability density function (log-density function or lpdf) on the constrained parameter space defined in the parameters block Carpenter et al. (2017). (It operates in logarithm space to avoid numerical imprecision.) Given the value of the parameters, the log-density function provides the relative likelihood of the values of the random variables declared in the data block. A Stan program can thus be viewed as a function from parameters and data to the value of a special variable target that represents the log-density. Given a set of data points, the inference tries to find the optimal parameters that maximize the value of target.

A Stan model can be described using classic imperative statements (sequence, condition, loop), plus two special statements that modify the value of target. The first one, target += e, increments the value of target by e. The second one, expr ~ D(params), is semantically equivalent to target += D_lpdf(expr | params), where D is a probability distribution and D_lpdf is the corresponding lpdf.

In Pyro, the target function is a hidden state of the program that can be updated only through the pyro.sample primitive. To understand the compilation from Stan to Pyro, we first need to understand the semantics of pyro.sample.

The expression pyro.sample(name, D(params)) does three things: first, draw a sample value from the distribution D(params); second, add D_lpdf(value | params) to the log-density accumulator; and third, return value. Adding to the log-density accumulator is sometimes called factor or conditioning in the literature, and amounts to a soft constraint: it penalizes unlikely executions. The user must specify a unique name as a Python string. This name can be used by the programmer outside of the model to query the random variable corresponding to this sample statement, and is also used internally by the inference algorithms.

The expression pyro.sample(name, D(z), obs=value) differs from the variant without the optional obs argument. Instead of drawing a sample value, it uses the given concrete value. In other words, this expression just applies the factor, that is, updates the log-density accumulator. In a typical Pyro program, it is used for observed variables, using the concrete values provided in the training dataset.
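This hidden accumulator can be mimicked in a few lines of plain Python (a toy model of pyro.sample's semantics for intuition, not Pyro's actual internals; the names are ours):

```python
import math

# hidden log-density accumulator, updated by every sample statement
target = 0.0

def sample(name, lpdf, draw=None, obs=None):
    global target
    value = draw() if obs is None else obs  # with obs, nothing is drawn
    target += lpdf(value)                   # soft constraint ("factor")
    return value

# observing x = 1 from Bernoulli(z) with z = 0.25 adds log(0.25) to target
z = 0.25
bernoulli_lpdf = lambda v: math.log(z) if v == 1 else math.log(1 - z)
sample('x', bernoulli_lpdf, obs=1)
```

With obs set, the statement only applies the factor; without it, a value is drawn and the same factor is applied to the drawn value.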

Since the Stan ~ statement only updates target and does not draw a sample from a distribution, we translate it to a pyro.sample call with the optional obs argument. For example, we translate the Stan code z ~ Beta(1,1) into the Pyro code pyro.sample('z1', Beta(1.,1.), obs=z). Similarly, we translate x ~ Bernoulli(z) into pyro.sample('x', Bernoulli(z), obs=x).

In Pyro, inference does not optimize a set of parameters declared outside the model, but rather the random variables that are not observed (pyro.sample expressions without the obs argument). We therefore compile Stan parameters as unobserved sampled variables. For example, the declaration of the parameter real<lower=0,upper=1> z; is compiled to: z = pyro.sample('z', Uniform(0., 1.))

We choose to sample from the uniform distribution to draw a value in the definition domain of the parameter, and because the log-density of the uniform distribution with parameters a and b is the constant -log(b - a). The previous code thus applies the update target += -log(b - a). Since that value is constant (it is the same in all executions), inference will normalize it away.

Putting it all together, a Stan model is compiled to a Python function whose parameters are the data. For instance, Figure 1 is compiled to:

def model(N, x):
  # Initialization
  z = pyro.sample('z', Uniform(0., 1.))
  # Compiled model
  pyro.sample('z1', Beta(1., 1.), obs=z)
  pyro.sample('x', Bernoulli(z), obs=x)

3.2 Challenges

We had to solve several difficulties to compile more advanced Stan models to Pyro.

Improper priors.

The previous example is well defined because the constraints on z define a bounded support (here [0, 1]). But in Stan, it is also possible to only partially constrain parameters. Consider the following parameter definition:

parameters {
  real<lower=0> z;
}

In the generated code, we add implicit uniform priors to all the parameters, and these implicit uniform priors are improper if the variable has unbounded support Carpenter et al. (2017)(Section 4.5), that is, the density function does not integrate to 1. Formally, the constant -log(b - a) that we should then add to the target is not finite. To overcome this issue, we introduce new ad-hoc distributions to sample semi-constrained or unconstrained parameters without incrementing the target. The previous parameter declaration is thus compiled to:

z = pyro.sample(’z’,

Sampling arbitrary expressions.

In Stan the left-hand side of the ~ operator is an arbitrary expression. For example the following is valid Stan code:

model {
  log(x) ~ Normal(0, 1);
}

This is not an issue in the generated code, since the ~ operator is compiled to a Pyro observation, which can also be an arbitrary expression. The previous example is compiled to:

pyro.sample('e1', Normal(0., 1.), obs=log(x))

As in Stan, this also permits multiple updates of the same parameter, which are compiled as a sequence of Pyro observations on the same parameter.

Explicit updates of the log-density.

Finally, in Stan, it is also possible to directly update the target of the model using the reserved target variable:

model {
  target += f(z);
}

Pyro does not directly expose the target function to the programmer. To translate this statement, we use the exponential distribution with parameter 1, whose log-density function is x ↦ -x. Observing the expression -f(z) from this distribution applies the update target += f(z). The previous model thus compiles to:

pyro.sample('e1', Exponential(1), obs=-(f(z)))
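To see why this trick works, note that the log-density of the exponential distribution with rate 1 at x is log 1 - 1·x = -x, so observing -f(z) increments the accumulator by exactly f(z). A one-line stdlib check (assuming, as the compilation scheme requires, that the formula is applied even to negative observations):

```python
# log-density of Exponential(rate=1): log(1) - 1 * x = -x
def exponential1_lpdf(x):
    return -x

# observing -f(z) therefore increments target by exactly f(z)
f_z = 3.7
assert exponential1_lpdf(-f_z) == f_z
```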

4 Explicit Variational Inference

As explained in Section 2.1, variational inference approximates the true posterior p(z | x) with the closest member of a family of simpler distributions Q, the guide. Members of the guide family q_θ(z) are characterized by the value of the variational parameters θ. Inference optimizes these parameters to find the best candidate to approximate the true posterior: q*(z) = argmin over q_θ in Q of KL(q_θ(z) ‖ p(z | x)).

4.1 Definition of the guide

We extend Stan with two new blocks: guide parameters and guide. Variational parameters, defined in the guide parameters block, can be used in the guide block that describes the family Q. For instance, a guide for the model presented in Figure 1 is:

guide parameters {
  real<lower=0> alpha_q;
  real<lower=0> beta_q;
}
guide {
  z ~ Beta(alpha_q, beta_q);
}

Inference then tries to compute the optimal values of the parameters alpha_q and beta_q such that Beta(alpha_q, beta_q) is as close as possible to the true posterior of the model p(z | x).

The guide parameters and guide blocks follow the syntax of the parameters and model blocks, respectively. But we need additional restrictions to be able to generate Pyro code.

Sampling all parameters.

First, the optimization problem solved by the inference must be well defined, that is, the guide must be defined on the same parameter space as the model. All the parameters of the model must thus be sampled in the guide. The DeepStan compiler checks this restriction statically for early error reporting.

Distribution without inference.

Second, the guide should describe a distribution from which we can directly draw valid samples without running the inference first. In particular, compared to Section 3, we cannot interpret the guide as a log-density function and run inference to find the parameters and then draw samples from the inferred distribution. This requirement further limits what can be expressed in the guide block.

Consider for instance the example of Section 3: log(x) ~ Normal(0, 1). In theory, we could invert the log function and sample x from the corresponding log-normal distribution. However, in the general case, we do not know how to invert arbitrary expressions, or even which variable to choose to compute the inverse. For example, should we invert x - mu ~ Normal(0, 1) with respect to x or mu? To overcome these difficulties, the guide block only allows single variables on the left-hand side of the ~ operator.

Similarly, the guide block must not sample the same variable twice. For instance, the following sequence defines a valid model, but would be rejected in a guide.

x ~ Normal(0, 1);
x ~ Normal(0, 1);

In a model block, this sequence adds 2 × Normal_lpdf(x | 0, 1) to the target, which is equivalent, up to an additive constant, to x ~ Normal(0, sqrt(0.5)). Again, the computations required to simplify such expressions are in the general case impractical and thus forbidden in the guide block.
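The equivalence up to a constant can be checked numerically: the difference between twice the Normal(0, 1) log-density and the Normal(0, sqrt(0.5)) log-density is the same for every x (a stdlib sketch, our illustration):

```python
import math

def normal_lpdf(x, mu, sigma):
    # log-density of the normal distribution
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# twice Normal(0, 1) vs once Normal(0, sqrt(0.5)): the gap is constant in x
diffs = [2 * normal_lpdf(x, 0, 1) - normal_lpdf(x, 0, math.sqrt(0.5))
         for x in (-2.0, -0.5, 0.0, 1.3, 4.0)]
# all entries of diffs are equal: an additive constant that inference normalizes away
```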

Finally, the DeepStan compiler rejects explicit modifications of target, which is only used during inference.

4.2 Compiling the guide

Variational parameters declared in the guide parameters block are learned during inference. In Pyro, they must be registered using the pyro.param statement which creates a PyTorch learnable parameter from a unique name and an initial value. For instance, the guide parameters block of the coin model is compiled to:

alpha_q = pyro.param('alpha_q', torch.randn(()))
beta_q = pyro.param('beta_q', torch.randn(()))

By default, the initial values of the parameters are sampled from a standard normal distribution (torch.randn). The programmer can also choose arbitrary values that are assigned to the parameters before the inference.

With the restrictions discussed in the previous section, the compilation of the guide block becomes straightforward. The ~ operator can be directly compiled to a Pyro sample statement without observations. In Pyro, the guide is a Python function that takes the same arguments as the model. For example, the guide of the coin model is compiled to:

def guide(N, x):
  z = pyro.sample('z', Beta(alpha_q, beta_q))

Note that all the parameters defined in the parameters block are sampled in the guide.

The restrictions imposed on the guide do not prevent the guide from being arbitrarily complex. The guide of the VAE example shown in Figure 3 is a deep probabilistic model that involves a neural network.

5 Adding Neural Networks

Deep probabilistic models can be grouped into two broad categories: models involving deep neural networks to capture complex dynamics between random variables, e.g., the VAE of Figure 3, and models where the parameters of the deep neural networks are random variables themselves, i.e., Bayesian neural networks.

To express both types of models in DeepStan, we need a way to define deep neural networks. One possible option is to rely on existing Stan features and manually implement the networks using primitive data types like vectors and matrices. This approach may be difficult to scale to larger networks, and the models quickly become difficult to maintain. We propose instead to import network definitions written using a state-of-the-art deep learning framework, PyTorch, and compile the models to Pyro, which is already tightly integrated with PyTorch.

5.1 Importing Network Definitions

We added a new networks block to import neural network definitions. A network is introduced by the name of its class and a variable name. This variable can then be used in subsequent blocks, in particular the model block and the guide block (e.g., the VAE of Figure 3).

The network class must be implemented in PyTorch and the associated variable must be a valid instance of the class. The programmer can thus modify a network before the inference (e.g., use existing embeddings, or transfer already learned parameters for warm-start).

5.2 Lifting Parameters

Our language extension allows the programmer to lift parameters of a neural network to random variables to create a Bayesian neural network. Figure 5 shows a simple classifier for handwritten digits based on a multi-layer perceptron (MLP) where all the parameters are lifted to random variables. Compared to the networks used in the VAE (Figure 3), notice that the parameters (regrouped under the variable θ) are represented using a circle to indicate random variables.

The inference starts from prior beliefs about the parameters and learns distributions that fit observed data. We can then sample concrete weights and biases to obtain a concrete MLP. In fact, we do not need to stop at a single MLP: we can sample an ensemble of as many MLPs as we like. Then, we can feed a concrete image to all the sampled MLPs to get their predictions, followed by a vote.
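The voting step can be sketched with the standard library (the sampled predictions below are made up for illustration; in practice each would come from one MLP drawn from the learned posterior):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over the predictions of an ensemble of sampled networks."""
    [(label, _)] = Counter(predictions).most_common(1)
    return label

sampled_mlp_predictions = [7, 7, 1, 7, 9, 7, 7]  # one label per sampled MLP
ensemble_vote(sampled_mlp_predictions)  # -> 7
```

The spread of the votes also gives a rough measure of the ensemble's uncertainty on that image.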

Figure 5: Graphical model of the Bayesian network.

Figure 6 shows the corresponding DeepStan code, and Figure 7 shows the PyTorch implementation of the neural network declared in the networks block.

networks {
  MLP mlp;
}
data {
  int<lower=0, upper=1> img[28, 28];
  int<lower=0, upper=9> label;
}
parameters {
  real mlp.l1.weight[_];
  real mlp.l1.bias[_];
  real mlp.l2.weight[_];
  real mlp.l2.bias[_];
}
model {
  real logits[10];
  mlp.l1.weight ~ Normal(0, 1);
  mlp.l1.bias ~ Normal(0, 1);
  mlp.l2.weight ~ Normal(0, 1);
  mlp.l2.bias ~ Normal(0, 1);
  logits = mlp(img);
  label ~ CategoricalLogits(logits);
}
guide parameters {
  real w1_loc[_]; real w1_scale[_];
  real b1_loc[_]; real b1_scale[_];
  real w2_loc[_]; real w2_scale[_];
  real b2_loc[_]; real b2_scale[_];
}
guide {
  mlp.l1.weight ~ Normal(w1_loc, exp(w1_scale));
  mlp.l1.bias ~ Normal(b1_loc, exp(b1_scale));
  mlp.l2.weight ~ Normal(w2_loc, exp(w2_scale));
  mlp.l2.bias ~ Normal(b2_loc, exp(b2_scale));
}
Figure 6: A simple Bayesian network in DeepStan.
class MLP(nn.Module):
  def __init__(self, nx, nh, ny):
    super(MLP, self).__init__()
    self.l1 = torch.nn.Linear(nx, nh)
    self.l2 = torch.nn.Linear(nh, ny)
  def forward(self, img):
    x = img.view((-1, nx))
    h = relu(self.l1(x))
    logits = log_softmax(self.l2(h))
    return logits
mlp = MLP(nx, nh, ny)
Figure 7: A simple neural network in PyTorch.

Lifted parameters are declared in the parameters block like any other random variables. Network parameters are identified by the name of the network and a path, e.g., mlp.l1.weight. We use PyTorch naming conventions to identify the network parameters. The model assumes a prior distribution for the weights and biases of the two linear layers of the MLP. Then, for each image, the computed label follows a Categorical distribution parameterized by the output of the network. (A Categorical distribution associates a probability with each possible value of a discrete random variable.)

5.3 Compilation of Bayesian networks

To compile Bayesian networks we use the Pyro primitive random_module that takes a PyTorch network and a dictionary of prior distributions and turns the network into a distribution of networks where each parameter is sampled from the corresponding prior distribution.

We treat network parameters like any other random variables, that is, we apply the compilation scheme described in Section 3. A uniform prior is applied to all the lifted parameters. The distributions can then be adapted in the model block using the ~ operator. For instance, in Figure 6 the model assumes a normal distribution for all the parameters. The first part of Figure 8 shows the result of the compilation of the model defined in Figure 6. We use the PyTorch state_dict primitive to access the network parameters when compiling the conditioning statements that involve them.

def model(img, label):
  priors = {'l1.weight': ImproperUniform(nh, nx), 'l1.bias': ImproperUniform(nh),
            'l2.weight': ImproperUniform(ny, nh), 'l2.bias': ImproperUniform(ny)}
  lifted_mlp = pyro.random_module("mlp", mlp, priors)()
  params = lifted_mlp.state_dict()
  pyro.sample('l1.weight', Normal(zeros(nh, nx), ones(nh, nx)), obs=params['l1.weight'])
  pyro.sample('l1.bias', Normal(zeros(nh), ones(nh)), obs=params['l1.bias'])
  pyro.sample('l2.weight', Normal(zeros(ny, nh), ones(ny, nh)), obs=params['l2.weight'])
  pyro.sample('l2.bias', Normal(zeros(ny), ones(ny)), obs=params['l2.bias'])
  lhat = lifted_mlp(img)
  pyro.sample("obs", Categorical(logits=lhat), obs=label)
def guide(img, label):
  w1_loc = pyro.param("w1_loc", torch.randn((nh, nx)))
  w1_scale = pyro.param("w1_scale", torch.randn((nh, nx)))
  b1_loc = pyro.param("b1_loc", torch.randn(nh))
  b1_scale = pyro.param("b1_scale", torch.randn(nh))
  w2_loc = pyro.param("w2_loc", torch.randn((ny, nh)))
  w2_scale = pyro.param("w2_scale", torch.randn((ny, nh)))
  b2_loc = pyro.param("b2_loc", torch.randn(ny))
  b2_scale = pyro.param("b2_scale", torch.randn(ny))
  priors = {'l1.weight': Normal(w1_loc, exp(w1_scale)), 'l1.bias': Normal(b1_loc, exp(b1_scale)),
            'l2.weight': Normal(w2_loc, exp(w2_scale)), 'l2.bias': Normal(b2_loc, exp(b2_scale))}
  lifted_mlp = pyro.random_module("mlp", mlp, priors)()
Figure 8: Compilation of the Bayesian MLP with the guide.

Note that it is also possible to mix probabilistic parameters and non-probabilistic parameters. In Pyro, only the parameters that appear in the prior dictionary are lifted to random variables. In DeepStan, we only lift the parameters that are declared in the parameters block.

Compiling the guide.

We apply the compilation scheme described in Section 4. Variational parameters declared in the guide parameters block are compiled to learnable PyTorch parameters. Since they obey the restrictions listed in Section 4, we can directly lift the network using the distributions defined in the guide block. Each ~ statement associated with a network parameter is added to the dictionary of priors used by random_module. The second part of Figure 8 shows the result of the compilation of the guide defined in Figure 6.

6 Experiments

This section evaluates DeepStan on multiple examples. For basic examples, we run inference on the generated Pyro code using NUTS and compare the results with Stan. We show that using explicit VI on these examples gives comparable or even more accurate results. Finally, for deep probabilistic models, we compare the generated Pyro code against hand-written code and find comparable results.

6.1 Basic Examples

Figure 9: Plots of the distributions obtained for the basic examples: (a) coin, (b) double normal, (c) multimodal. Comparing Stan/DeepStan (left), Stan/DeepStan VI (center), and DeepStan/DeepStan VI (right).
          (a) Coin                       (b) Double Normal                  (c) Multimodal
      Stan   DeepStan  DeepStan VI   Stan    DeepStan  DeepStan VI   Stan  DeepStan  DeepStan VI
mean  0.249  0.247     0.257         1000.0  1000.0    1000.1         1.1   1.1       0.9
std   0.121  0.124     0.114            0.7     0.7       0.7         1.4   1.4       1.4
min   0.015  0.010     0.016          997.4   997.9     997.6        -3.4  -3.0      -2.1
25%   0.161  0.149     0.171          999.6   999.5     999.6         0.0   0.0      -0.3
50%   0.236  0.234     0.246         1000.0  1000.0    1000.0         1.0   1.1       0.5
75%   0.320  0.326     0.331         1000.5  1000.5    1000.5         2.1   2.2       2.1
max   0.794  0.748     0.686         1002.9  1002.6    1002.7         5.3   5.2       4.8
Figure 10: Summary statistics of the posterior distributions for the basic examples.

Our first example is the coin model presented in Figure 1. We compare the distribution inferred by Stan and DeepStan (via Pyro) with and without VI (we use the guide presented in Section 2.1). Figure 9(a) and Figure 10(a) show the results. In each of these three plots, the two distributions nicely overlap, which indicates comparable results.

To validate the compilation scheme presented in Section 3, we implemented a model that samples the same normal distribution twice:

parameters {
  real theta;
}
model {
  theta ~ Normal(1000.0, 1.0);
  theta ~ Normal(1000.0, 1.0);
}

The exact posterior distribution for this model is a normal distribution with mean 1000 and standard deviation 1/√2 ≈ 0.707. Again, we see in Figure 9(b) that Stan and DeepStan return comparable results. The distribution details presented in Figure 10(b) show that Stan, and DeepStan with and without VI, accurately compute the parameters of the distribution.
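For completeness, the posterior follows from multiplying the two identical Gaussian likelihood factors (theta has an implicit flat prior):

```latex
p(\theta \mid \cdot) \;\propto\;
  e^{-\frac{(\theta-1000)^2}{2}} \cdot e^{-\frac{(\theta-1000)^2}{2}}
  \;=\; e^{-\frac{(\theta-1000)^2}{2 \cdot (1/2)}},
```

i.e., a normal distribution with mean $1000$ and variance $1/2$, whose standard deviation $1/\sqrt{2} \approx 0.707$ matches the value of 0.7 reported in Figure 10(b).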

The third example compares Stan and DeepStan on a simple multimodal distribution. The model is a mixture of two Gaussians with different means but identical variance:

parameters {
  real cluster;
  real theta;
}
model {
  real mu;
  cluster ~ Normal(0, 1);
  if (cluster > 0) { mu = 2; }
  else { mu = 0; }
  theta ~ Normal(mu, 1);
}

Figure 9(c) shows that the results of Stan and DeepStan are comparable: using NUTS, both fail to identify the two modes. This is a known limitation of HMC on multimodal posteriors. Using explicit VI, on the other hand, we can provide a custom guide that correctly infers the two clusters. Note, however, that this approach requires a priori knowledge of the shape of the true posterior.

guide parameters {
  real mu_cluster;
  real mu1; real mu2;
  real log_sigma1; real log_sigma2;
}
guide {
  cluster ~ Normal(mu_cluster, 1);
  if (cluster > 0) {
    theta ~ Normal(mu1, exp(log_sigma1));
  } else {
    theta ~ Normal(mu2, exp(log_sigma2));
  }
}
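To see how such a guide captures both modes, the following plain-Python sketch samples from the same generative structure. It is an illustration, not the compiled Pyro code, and the variational parameter values are hypothetical stand-ins for fitted values:

```python
import math
import random

def sample_guide(mu_cluster, mu1, mu2, log_sigma1, log_sigma2):
    """Draw (cluster, theta) with the guide's structure: the sign of
    cluster selects which Gaussian component theta is drawn from."""
    cluster = random.gauss(mu_cluster, 1.0)
    if cluster > 0:
        theta = random.gauss(mu1, math.exp(log_sigma1))
    else:
        theta = random.gauss(mu2, math.exp(log_sigma2))
    return cluster, theta

random.seed(0)
# Hypothetical fitted values: mu_cluster = 0 splits the mass evenly
# between the component centered at 2 and the one centered at 0.
draws = [sample_guide(0.0, 2.0, 0.0, -1.0, -1.0) for _ in range(10000)]
high = [t for c, t in draws if c > 0]
low = [t for c, t in draws if c <= 0]
print(round(sum(high) / len(high), 1))  # close to 2.0
print(round(sum(low) / len(low), 1))    # close to 0.0
```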

Execution Time.

The table below summarizes the execution times for the basic examples, measured on a MacBook Pro with a 2.2 GHz 4-core Intel i7. Stan first compiles the model to C++, which takes significant time, but the inference is then impressively fast (less than a second on all experiments). In comparison, the compilation from DeepStan to Pyro is quasi-instantaneous, but Pyro's implementation of NUTS is slower (around 10s, except for the pathological case of the multimodal distribution). Except for the simple coin example, VI is typically much slower than NUTS, but it is able to compute the multimodal distribution.

                    Stan                   DeepStan
               Compilation  Inference    NUTS      VI
Coin              69.4         0.1       11.3     17.5
Double Normal     67.7         0.2       16.9    177.3
Multimodal        78.1         0.7     8073.1    522.8
All times are in seconds.

6.2 Deep Probabilistic Models

Since Stan lacks support for deep probabilistic models, we could not use it as a baseline for experimental comparison. Instead, we compare the performance of the code generated by our compiler with hand-written Pyro code. We consider two examples: the VAE described in Section 2.2 and the Bayesian MLP described in Section 5.2.


VAE.

Variational autoencoders were designed not as predictive models but as generative models that reconstruct images, so evaluating the performance of a VAE is not straightforward. We use the following experimental setting. We trained two VAEs on the MNIST dataset using VI: one hand-written in Pyro, the other written in DeepStan. For each image in the test set, the trained VAE computes a latent representation of dimension 5. We then cluster these representations using K-means with 10 clusters, and measure the performance of each VAE with the pairwise F1 metric: a pair of images of the same digit assigned to the same cluster counts as a true positive.
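The pairwise F1 metric can be computed directly over pairs of test images. A minimal sketch, brute-force over all pairs (quadratic in the number of images, but fine for illustration; the toy labels below are hypothetical):

```python
from itertools import combinations

def pairwise_f1(labels, clusters):
    """Pairwise precision/recall/F1: a pair of items is a true positive
    when both items share the same label (digit) and the same cluster."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels)), 2):
        same_label = labels[i] == labels[j]
        same_cluster = clusters[i] == clusters[j]
        if same_cluster and same_label:
            tp += 1
        elif same_cluster:
            fp += 1
        elif same_label:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return f1, precision, recall

# Toy example: two digits, one item placed in the wrong cluster.
# Yields f1 = 0.4, precision = 1/3, recall = 1/2.
print(pairwise_f1([0, 0, 1, 1], [0, 0, 0, 1]))
```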

The table below presents the scores of the two VAEs. These numbers show that compiling DeepStan to Pyro does not degrade the performance of such deep probabilistic models.

F1 Precision Recall
Pyro 0.41 0.43 0.40
DeepStan 0.43 0.44 0.42

Bayesian MLP.

Similarly, we trained two versions of the Bayesian MLP described in Section 5.2: one hand-written in Pyro, the other written in DeepStan. We trained both models for 10 epochs on the training set. Training for only 10 epochs means the models make more mistakes, which is useful for evaluating whether their mistakes agree.

On the test set, the accuracy of the Pyro model is  and the accuracy of the DeepStan model is . To compare the models on incorrect predictions, we computed the agreement between the two models' predictions regardless of the true label. The mean agreement between the models is , that is, they make similar mistakes. Again, these numbers show that compiling DeepStan models to Pyro has little impact on the performance of the models.
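The agreement measure is simply the fraction of test inputs on which the two models emit the same prediction, correct or not. A minimal sketch with hypothetical prediction vectors:

```python
def agreement(preds_a, preds_b):
    """Fraction of positions where the two models predict the same
    class, regardless of the true label."""
    assert len(preds_a) == len(preds_b)
    matches = sum(a == b for a, b in zip(preds_a, preds_b))
    return matches / len(preds_a)

# The two models disagree on one input out of five.
print(agreement([3, 1, 4, 1, 5], [3, 1, 4, 2, 5]))  # 0.8
```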

7 Related Work

In the literature, deep probabilistic models are designed as probabilistic models, e.g., the Variational Auto-Encoder (VAE) Kingma et al. (2014) or the Deep Markov Model (DMM) Krishnan et al. (2017). But the reference implementations often rely on ad-hoc encodings in existing deep-learning frameworks. This approach can be partly explained by the wide adoption of these frameworks.

In recent years, taking advantage of the maturity of deep-learning frameworks, multiple deep probabilistic programming languages have been proposed: Edward Tran et al. (2017) and ZhuSuan Shi et al. (2017) built on top of TensorFlow, Pyro Uber (2017) and ProbTorch Siddharth et al. (2017) built on top of PyTorch, and PyMC3 Salvatier et al. (2016) built on top of Theano. All these languages are implemented as libraries, so users need to master the entire technology stack: the library itself, the underlying deep-learning framework, and the host language (typically Python).

Stan Carpenter et al. (2017), on the other hand, provides a standalone language and a compiler with dedicated static analyses to ease the design of probabilistic models. However, this design choice makes it difficult to interact with existing deep learning libraries, and advanced neural networks cannot be easily implemented using only Stan primitive constructs or trained using only Stan samplers.

Our proposal aims at lowering the barrier to entry to deep probabilistic programming for the active community of Stan users. Our language extension is designed to be as non-invasive as possible. Stan users can thus start experimenting with our Pyro backend on simple examples, then move on to examples that involve off-the-shelf neural networks in PyTorch, and finally graduate to writing their own neural networks.

In addition, our language extension can express variational inference guides. This is of particular interest for deep probabilistic models where guides must often be carefully crafted Kingma & Welling (2013); Krishnan et al. (2017).

As mentioned in Section 2, Stan offers a black-box version of variational inference called ADVI Kucukelbir et al. (2015) that automatically synthesizes the guide from the model. The main idea is to generate a guide that samples each parameter from a normal distribution parameterized by two variational parameters (location and scale). This synthesis scheme satisfies all the restrictions listed in Section 4. It would thus be straightforward to implement ADVI in our Pyro backend.
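The synthesis scheme can be sketched as a small generator that emits a DeepStan-style mean-field guide from a list of parameter names. This is an illustration of the idea only, not Stan's actual ADVI implementation (in particular, it ignores the change of variables ADVI performs to map constrained parameters to the unconstrained space):

```python
def synthesize_meanfield_guide(parameters):
    """Emit a DeepStan-style mean-field guide: each latent parameter
    gets a learnable location and log-scale, and is then sampled from
    a Normal parameterized by them."""
    decls = []
    samples = []
    for name in parameters:
        decls.append(f"  real {name}_loc;")
        decls.append(f"  real {name}_log_scale;")
        samples.append(f"  {name} ~ Normal({name}_loc, exp({name}_log_scale));")
    return ("guide parameters {\n" + "\n".join(decls) + "\n}\n"
            "guide {\n" + "\n".join(samples) + "\n}")

# For the coin model of Figure 1, the single latent is theta.
print(synthesize_meanfield_guide(["theta"]))
```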

8 Conclusion

This paper introduces DeepStan, a language and compiler for deep probabilistic programming. The language is a superset of Stan, extending it with variational inference and with interfaces to deep neural networks written in PyTorch. The compiler translates both the existing Stan language and our extensions to Pyro. Before our work, data scientists could pick only two of the following three qualities: (1) deep neural networks for hierarchical representation learning, (2) probabilistic models for handling uncertainty with rigor, and (3) high-level languages for ease of use. This paper shows how to bring all three together at last.


  • Blei et al. (2017) Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
  • Carpenter et al. (2017) Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., and Riddell, A. Stan: A probabilistic programming language. Journal of Statistical Software, 76(1):1–37, 2017.
  • Facebook (2016) Facebook. Pytorch, 2016. URL http://pytorch.org/. (Retrieved September 2018).
  • Ghahramani (2015) Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452–459, May 2015.
  • Goodman et al. (2012) Goodman, N., Mansinghka, V., Roy, D. M., Bonawitz, K., and Tenenbaum, J. B. Church: a language for generative models. arXiv:1206.3255, 2012.
  • Goodman & Stuhlmüller (2014) Goodman, N. D. and Stuhlmüller, A. The Design and Implementation of Probabilistic Programming Languages, 2014. Retrieved September 2018.
  • Kingma & Welling (2013) Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
  • Kingma et al. (2014) Kingma, D. P., Mohamed, S., Rezende, D. J., and Welling, M. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
  • Krishnan et al. (2017) Krishnan, R. G., Shalit, U., and Sontag, D. Structured inference networks for nonlinear state space models. In AAAI, pp. 2101–2109, 2017.
  • Kucukelbir et al. (2015) Kucukelbir, A., Ranganath, R., Gelman, A., and Blei, D. Automatic variational inference in Stan. In NIPS, pp. 568–576, 2015.
  • LeCun et al. (2015) LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436–444, May 2015.
  • Milch et al. (2007) Milch, B., Marthi, B., Russell, S., Sontag, D., Ong, D. L., and Kolobov, A. BLOG: Probabilistic models with unknown objects. Statistical relational learning, pp. 373, 2007.
  • Neal (2012) Neal, R. M. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.
  • Pfeffer (2001) Pfeffer, A. IBAL: A probabilistic rational programming language. In IJCAI, pp. 733–740, 2001.
  • Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278–1286, 2014.
  • Salvatier et al. (2016) Salvatier, J., Wiecki, T. V., and Fonnesbeck, C. Probabilistic programming in Python using PyMC3. PeerJ Computer Science, 2:e55, 2016.
  • Shi et al. (2017) Shi, J., Chen, J., Zhu, J., Sun, S., Luo, Y., Gu, Y., and Zhou, Y. ZhuSuan: A library for Bayesian deep learning. arXiv:1709.05870, 2017.
  • Siddharth et al. (2017) Siddharth, N., Paige, B., van de Meent, J.-W., Desmaison, A., Goodman, N. D., Kohli, P., Wood, F., and Torr, P. Learning disentangled representations with semi-supervised deep generative models. In NIPS, pp. 5927–5937, 2017.
  • Stan Development Team (2016) Stan Development Team. Stan Modeling Language Users Guide and Reference Manual, version 2.14.0. 2016.
  • Tolpin et al. (2015) Tolpin, D., van de Meent, J.-W., and Wood, F. Probabilistic programming in Anglican. In ECML PKDD, pp. 308–311. Springer, 2015.
  • Tran et al. (2017) Tran, D., Hoffman, M. D., Saurous, R. A., Brevdo, E., Murphy, K., and Blei, D. M. Deep probabilistic programming. In International Conference on Learning Representations (ICLR), 2017.
  • Uber (2017) Uber. Pyro, 2017. URL http://pyro.ai/. (Retrieved September 2018).