Deep Probabilistic Programming Language
Deep probabilistic programming combines deep neural networks (for automatic hierarchical representation learning) with probabilistic models (for principled handling of uncertainty). Unfortunately, it is difficult to write deep probabilistic models, because existing programming frameworks lack concise, high-level, and clean ways to express them. To ease this task, we extend Stan, a popular high-level probabilistic programming language, to use deep neural networks written in PyTorch. Training deep probabilistic models works best with variational inference, so we also extend Stan for that. We implement these extensions by translating Stan programs to Pyro. Our translation clarifies the relationship between different families of probabilistic programming languages. Overall, our paper is a step towards making deep probabilistic programming easier.
Ideally, artificial-intelligence models would be both highly accurate and easy to understand. But while accuracy and understandability can each be achieved separately, they are hard to combine in the same model. On the one hand, deep neural networks can become highly accurate by learning from large-scale data to reshape distributions in intricate ways LeCun et al. (2015). On the other hand, probabilistic programming can model how domain experts understand data to be generated based on real-world concepts with overt uncertainty Ghahramani (2015). A recent research paper has proposed deep probabilistic programming to combine these advantages Tran et al. (2017). We aim at bringing deep probabilistic programming to the mainstream. For our starting point, we look at mainstream technology in both fields. Stan is a popular probabilistic programming language that lets users write probabilistic models in a high-level, self-contained syntax with a static type system Carpenter et al. (2017). PyTorch is a popular framework for deep neural networks in Python that can learn sophisticated distributions from large-scale data Facebook (2016).
This paper shows how to extend Stan with deep neural networks. Instead of implementing a deep learning framework for Stan to add support for deep probabilistic programming, we chose to compile Stan programs to Pyro Uber (2017), a deep probabilistic programming language built on top of PyTorch that is currently in a beta release. This approach has the following advantages: (1) Pyro is built on top of PyTorch, so the programmer can seamlessly import neural networks designed with the state-of-the-art API provided by PyTorch. (2) Variational inference was central in the design of Pyro. Programmers can easily craft their own inference guides, which is often critical to run variational inference on deep probabilistic models. (3) Pyro also offers alternative inference methods, like NUTS. We can thus compare the results of our approach with the original Stan implementation on classic probabilistic models. From one perspective, we contribute a new backend for Stan that can learn highly accurate distributions from large-scale data. From another perspective, we contribute a new frontend for Pyro that makes it easy for domain experts to express understandable models in a self-contained language.
For our language and compiler, we had to overcome two technical obstacles: variational inference and translating the semantics of Stan programs to Pyro.
Variational inference (VI) is a method for training probabilistic models. For decades, the dominant method for training probabilistic models has been Markov chain Monte Carlo (MCMC) sampling. But VI is easier to scale to the large data needed to train the complex nonlinear distribution shapes involved in deep probabilistic models Blei et al. (2017). Thus, MCMC is the default in Stan and VI is the default in Pyro. VI posits a family of densities for posteriors of variables to be learned, then finds the member that is closest to the data. The user must specify the family, also known as the variational guide. We extend Stan with a high-level syntax for specifying variational guides. Although both Stan and Pyro are termed probabilistic programming languages, they belong to two different families. In Pyro, programs are expressed as generative models with conditioning statements. It is similar to BLOG (Milch et al., 2007), Church (Goodman et al., 2012), IBAL (Pfeffer, 2001), Anglican (Tolpin et al., 2015), or WebPPL (Goodman & Stuhlmüller, 2014). In Stan, on the other hand, a program describes a log-density function that is optimized during inference given a set of data. We thus had to design a compilation scheme that bridges the gap between these two language families.
This paper makes the following contributions:
Extending Stan for variational inference with highlevel but explicit guides.
Extending Stan with a clean interface to neural networks written in Python.
A compiler from Stan to Pyro that carefully chooses the appropriate sampling semantics.
To evaluate our language and compiler, we implemented a variety of probabilistic models (both traditional and deep). We train them with both MCMC and VI, in both Stan and Pyro, where applicable. Our results show that the training yields almost the same probability distributions modulo implementation-specific deviations. Overall, this paper takes a substantial step towards making the combination of deep and probabilistic models more usable.
This section uses examples to illustrate our Stan language extensions, DeepStan, which enable explicit variational inference and probabilistic models that involve deep neural networks.
We start with a simple Stan program from Section 2 of the Stan manual Stan Development Team (2016)
that uses neither of our extensions. The program estimates the bias of a coin from a set of coin tosses. We write
x[i] = 1 if the result of the ith coin toss is heads and x[i] = 0 otherwise. We assume that individual coin tosses are independent and identically distributed (IID) and that each coin toss follows a Bernoulli distribution with parameter z, that is, x[i] ~ Bernoulli(z). The task is to infer the distribution of the latent (unobserved) variable z given the observations of the variables x[i]. Figure 1 shows the corresponding Stan program and graphical model. The data block introduces observed variables, which are placeholders for concrete data to be provided as input to the inference procedure. The parameters block introduces latent random variables, which will not be provided and must be inferred. The graphical model shows observed variables in gray and latent variables in white. All variables are explicitly typed, and the type can also declare constraints on their possible values. For example, z is a continuous random variable that can take any value between 0 and 1. The model block describes the model. It places a Beta(1,1) (i.e., uniform) prior on z, that is, initially z can take all values between 0 and 1 with equal probability. The model then assumes that each coin toss x[i] follows a Bernoulli distribution with parameter z, using a vectorized statement which can also be written:
We can then launch the inference methods offered by Stan to fit the model to the provided data, thus learning the distribution of the latent variable z. By default, Stan uses NUTS (No-U-Turn Sampler), an optimized HMC (Hamiltonian Monte Carlo) algorithm, which in turn is a form of MCMC.
Variational inference turns the inference problem into an optimization problem and tends to be faster and better suited to large datasets than MCMC methods Blei et al. (2017).
Variational inference tries to find the member of a family of simpler distributions that is the closest to the true posterior. Members of the family are characterized by the values of the variational parameters. The fitness of a candidate is measured using the Kullback-Leibler (KL) divergence from the true posterior, a similarity measure between probability distributions.
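As a small illustration of the fitness measure, the KL divergence between two univariate Gaussians has a well-known closed form. The helper below is purely illustrative (it is not part of Stan or Pyro), but it shows how the divergence vanishes when the candidate matches the target and grows as they drift apart:

```python
import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    """Closed-form KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2)
            - 0.5)

# Identical distributions: the divergence is zero.
print(kl_normal(0.0, 1.0, 0.0, 1.0))   # 0.0
# Shifting the candidate's mean increases the divergence.
print(kl_normal(0.0, 1.0, 1.0, 1.0))   # 0.5
```

Variational inference minimizes (a bound related to) this quantity over the variational parameters.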
In the following, we call guide the distribution family used to guide the variational inference.
Proposing a guide that can be used to compute meaningful results can be challenging. Stan offers a form of black-box variational inference called ADVI Kucukelbir et al. (2015) that automatically synthesizes a suitable guide. ADVI supports a broad class of models. However, for some models, in particular deep probabilistic models, the user may want to craft their own guide manually.
To allow the user to design their own guide, we extended DeepStan with two new blocks: guide parameters and guide. The guide block defines a distribution parameterized by the guide parameters. Variational inference then optimizes the values of these parameters to approximate the true posterior. Note that unlike Stan parameters, which define random variables used in the model, guide parameters are the hyperparameters characterizing the family of distributions that will be optimized during inference.
Going back to the coin example, we can now explicitly ask the inference to find the best Beta distribution to approximate the true posterior.
(In this particular case, the exact posterior is a Beta distribution.)

  guide parameters {
    real<lower=0> alpha_q;
    real<lower=0> beta_q;
  }
  guide {
    z ~ Beta(alpha_q, beta_q);
  }

Inference then computes the optimal values of the parameters alpha_q and beta_q given the data. Notice that the graphical model represents guide parameters with rectangles to indicate that they are not random variables. Deep probabilistic models are probabilistic models involving neural networks. Variational Auto-Encoders (VAE) Kingma & Welling (2013); Rezende et al. (2014) are a popular example of such models.
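For the coin model, the exact posterior is known by Beta-Bernoulli conjugacy: a uniform prior is Beta(1,1), and observing h heads out of n tosses yields Beta(1+h, 1+n-h). The sketch below (our illustrative helper, not DeepStan output) computes the parameters a perfect guide would converge to:

```python
def beta_posterior(tosses):
    """Exact posterior of the coin bias under a Beta(1,1) (uniform) prior."""
    heads = sum(tosses)
    tails = len(tosses) - heads
    return (1 + heads, 1 + tails)

# 10 tosses with 2 heads: the optimal guide parameters are alpha_q=3, beta_q=9.
print(beta_posterior([1, 1, 0, 0, 0, 0, 0, 0, 0, 0]))  # (3, 9)
```

This gives a ground truth against which the learned alpha_q and beta_q can be checked.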
The task is learning a vector-space representation z_i for each observed data point x_i (e.g., a handwritten digit in the classic MNIST dataset). The computed representation can then be used as input to other models (e.g., a classifier). Each data point x_i depends on the latent representation z_i in a complex nonlinear way, via a deep neural network: the decoder. The top half of Figure 2 shows the corresponding graphical model. The output of the decoder is a vector that parameterizes a Bernoulli distribution over each pixel in the image x_i. Each pixel is thus associated with a probability of being present in the image. The parameter θ of the decoder is global (i.e., shared by all data points). The main idea of the VAE is to use variational inference to learn the latent representation. The guide maps each x_i to a latent variable z_i via another neural network: the encoder. The bottom half of Figure 2 shows the graphical model of the guide. The encoder network returns, for each image x_i, the parameters μ_i and σ_i of a Gaussian distribution in the latent space. Again, the parameter φ of the network is global. Inference then tries to learn good values for the parameters θ and φ, simultaneously training the decoder and the encoder, according to the data. Figure 3 shows the corresponding DeepStan code. We added a networks block to import neural network definitions. The network classes must be implemented in PyTorch. Figure 3 omits the dimension of the latent representation real z[_]. This information is already contained in the network definitions. To avoid error-prone redundant definitions, we implemented a shape-inference analysis that automatically computes missing dimensions (denoted by '_') when possible.
Section 5 will show that, in addition to using neural networks in the model, we can also lift the parameters of the network to random variables to obtain a Bayesian neural network Neal (2012). This is a neural network with weight uncertainty, whose parameters are probability distributions learnt during inference. It is then possible to sample from the computed distribution to obtain concrete instances of the network.
Figure 4 shows the grammar of DeepStan. As in Stan, a program is a succession of blocks. Objects defined in one block can be used in subsequent blocks. The nonterminals funs, data, tdata, params, tparams, and gquant correspond to the original Stan blocks (respectively functions, data, transformed data, parameters, transformed parameters, and generated quantities). The only mandatory block is model.
DeepStan extends Stan with three additional optional blocks. Block networks declares neural networks at the beginning of the pipeline. Blocks guide parameters and guide specify an explicit guide for variational inference after the model (see Section 4).
The nonterminals vdecl and stmt correspond to variable declarations (type, name, dimensions) and statements. We omit their definitions here for space reasons. The complete grammar of the Stan language can be found in Appendix C of the manual Stan Development Team (2016).
This section shows how to compile Stan programs into Pyro generative models. We give the intuition of the compilation technique on simple examples.
In Stan, the model block represents the log of a probability density function (log-density function, or lpdf) on the constrained parameter space defined in the parameters block Carpenter et al. (2017). (It operates in logarithm space to avoid numerical imprecision.) Given the value of the parameters, the log-density function provides the relative likelihood of the values of the random variables declared in the data block. A Stan program can thus be viewed as a function from parameters and data to the value of a special variable target that represents the log-density. Given a set of data points, the inference tries to find the optimal parameters that maximize the value of target. A Stan model can be described using classic imperative statements (sequence, condition, loop), plus two special statements that modify the value of target. The first one, target += e, increments the value of target by e. The second one, expr ~ D(params), is semantically equivalent to target += D_lpdf(expr | params), where D is a probability distribution and D_lpdf is the corresponding lpdf.
In Pyro, the target function is a hidden state of the program that can be updated only through the pyro.sample primitive. To understand the compilation from Stan to Pyro, we first need to understand the semantics of pyro.sample.
The expression pyro.sample(name, D(params)) does three things: first, draw a sample value from the distribution D(params); second, add D_lpdf(value | params) to the log-density accumulator; and third, return value. Adding to the log-density accumulator is sometimes called factor or conditioning in the literature, and amounts to a soft constraint: it penalizes unlikely executions. The user must specify a unique name as a Python string. This name can be used by the programmer outside of the model to query the random variable corresponding to this sampler, and is also used internally by the inference algorithms.
The expression pyro.sample(name, D(params), obs=value) differs from the variant without the optional obs argument. Instead of drawing a sample value, it uses the given concrete value. In other words, this expression just applies the factor, that is, updates the log-density accumulator. In a typical Pyro program, it is used for observed variables, using the concrete values provided in the training dataset.
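The two behaviors of pyro.sample can be mimicked with a toy accumulator. This is a pure-Python sketch of the semantics just described, not Pyro's actual implementation; the Handler class and bernoulli helper are our illustrative names:

```python
import math
import random

class Handler:
    """A toy model of Pyro's hidden log-density accumulator."""
    def __init__(self):
        self.log_density = 0.0

    def sample(self, name, log_prob, draw, obs=None):
        # With obs: use the given value (a soft constraint / factor).
        # Without obs: actually draw a value from the distribution.
        value = obs if obs is not None else draw()
        self.log_density += log_prob(value)   # the "factor" step
        return value

def bernoulli(p):
    log_prob = lambda v: math.log(p if v == 1 else 1 - p)
    draw = lambda: 1 if random.random() < p else 0
    return log_prob, draw

h = Handler()
lp, dr = bernoulli(0.25)
h.sample('x', lp, dr, obs=1)        # observed: adds log(0.25) to the accumulator
print(round(h.log_density, 4))      # -1.3863
```

Both variants update the accumulator; only the unobserved one actually draws a value.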
Since the Stan ~ statement only updates target and does not draw a sample from a distribution, we translate it to a pyro.sample call with the optional obs argument. For example, we translate the Stan code z ~ Beta(1,1) into the Pyro code pyro.sample('z1', Beta(1.,1.), obs=z). Similarly, we translate x ~ Bernoulli(z) into pyro.sample('x', Bernoulli(z), obs=x).
In Pyro, inference does not optimize a set of parameters declared outside the model, but rather optimizes the random variables that are not observed (pyro.sample expressions without the obs argument). So, we compile Stan parameters to unobserved sampled variables. For example, the declaration of the parameter real<lower=0,upper=1> z; is compiled to: z = pyro.sample('z', Uniform(0., 1.))
We choose to sample from the uniform distribution because it draws a value in the definition domain of the parameter, and because the log-density of the uniform distribution with bounds a and b is the constant -log(b - a). The previous code thus applies the update target += -log(b - a). Since that value is constant (it is the same in all executions), inference will normalize it away. Putting it all together, a Stan model is compiled to a Python function whose parameters are the data. For instance, Figure 1 is compiled to:
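Reading the compiled coin model as a log-density function makes its behavior concrete. The function below is our pure-Python stand-in for those semantics (not the actual generated Pyro code): the uniform prior on z contributes a constant 0 on [0, 1], and each observed toss contributes its Bernoulli log-density:

```python
import math

def coin_log_density(z, x):
    """target for the coin model: z ~ Uniform(0,1); x[i] ~ Bernoulli(z)."""
    if not (0.0 < z < 1.0):
        return float('-inf')            # outside the constrained support
    target = 0.0                        # log Uniform(0,1) density is 0
    for xi in x:
        target += math.log(z if xi == 1 else 1.0 - z)
    return target

data = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
# With 2 heads in 10 tosses, z = 0.2 is more likely than z = 0.5.
print(coin_log_density(0.2, data) > coin_log_density(0.5, data))  # True
```

Inference explores this surface: NUTS samples z proportionally to exp(target), while VI fits a guide to it.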
We had to solve several difficulties to compile more advanced Stan models to Pyro.
The previous example is well defined because the constraints on z define a bounded support (here [0, 1]). But in Stan, it is also possible to partially constrain parameters. Consider the following parameter definition:
In the generated code, we add implicit uniform priors to all the parameters, and these implicit uniform priors are improper if the variable has unbounded support Carpenter et al. (2017) (Section 4.5), that is, the density function does not integrate to 1, and there is no well-defined constant to add to the target. To overcome this issue, we introduce new ad-hoc distributions to sample semi-constrained or unconstrained parameters without incrementing the target. The previous example of parameter declaration is thus compiled to:
In Stan, the left-hand side of the ~ operator is an arbitrary expression. For example, the following is valid Stan code:
This is not an issue in the generated code, since the ~ operator is compiled to a Pyro observation, which can also be an arbitrary expression. The previous example is compiled to:
As in Stan, this also permits multiple updates of the same parameter, which are compiled as a sequence of Pyro observations on the same parameter.
Finally, in Stan, it is also possible to directly update the target function of the model using the reserved target variable:
Pyro does not directly expose the target function to the programmer. To translate this statement, we use the exponential distribution with parameter 1, whose log-density function at v is -v. Observing the value -e from this distribution applies the update target += e. The previous model thus compiles to:

As explained in Section 2.1, variational inference approximates the true posterior with the closest member of a family of simpler distributions, the guide. Members of the guide family are characterized by the values of the variational parameters. Inference optimizes these parameters to find the best candidate to approximate the true posterior.
We extend DeepStan with two new blocks: guide parameters and guide. Variational parameters, defined in the guide parameters block, can be used in the guide block that describes the guide family. For instance, a guide for the model presented in Figure 1 is:
Inference then tries to compute the optimal values of the parameters alpha_q and beta_q such that the guide is as close as possible to the true posterior of the model.
The guide parameters and guide blocks follow the syntax of the parameters and model blocks, respectively. But we need additional restrictions to be able to generate Pyro code.
First, the optimization problem solved by the inference must be well defined, that is, the guide must be defined on the same parameter space as the model. All the parameters of the model must thus be sampled in the guide. The DeepStan compiler checks this restriction statically for early error reporting.
Second, the guide should describe a distribution from which we can directly draw valid samples without running the inference first. In particular, compared to Section 3, we cannot interpret the guide as a logdensity function and run inference to find the parameters and then draw samples from the inferred distribution. This requirement further limits what can be expressed in the guide block.
Consider for instance the example of Section 3: log(x) ~ Normal(0,1). In theory, we could invert the log function and sample x from the corresponding log-normal distribution. However, in the general case, we do not know how to invert arbitrary expressions, or even which variable to choose to compute the inverse. For example, should we invert x - mu ~ Normal(0, 1) with respect to x or mu? To overcome these difficulties, the guide block only allows single variables on the left-hand side of the ~ operator.
Similarly, the guide block must not sample the same variable twice. For instance, the following sequence defines a valid model, but would be rejected in a guide.
In a model block, this sequence adds the log-density of Normal(0, 1) at x to the target twice, which is equivalent, up to a constant, to the single statement x ~ Normal(0, sqrt(0.5)). Again, the computations required to simplify such expressions are in the general case impractical and thus forbidden in the guide block.
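The equivalence claimed above can be verified numerically: stating x ~ Normal(0,1) twice doubles the quadratic term, which matches Normal(0, sqrt(0.5)) up to an additive constant, and a constant shift of the target leaves the posterior unchanged. A quick check (helper names are ours):

```python
import math

def normal_lpdf(x, mu, sigma):
    """Log-density of Normal(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def twice(x):   # x ~ Normal(0, 1) stated twice
    return 2 * normal_lpdf(x, 0.0, 1.0)

def once(x):    # the equivalent single statement x ~ Normal(0, sqrt(0.5))
    return normal_lpdf(x, 0.0, math.sqrt(0.5))

# The difference is the same additive constant at every x,
# so both programs define the same posterior.
print(round(twice(0.3) - once(0.3), 6) == round(twice(-1.7) - once(-1.7), 6))  # True
```

Recognizing such algebraic identities automatically is what the compiler declines to attempt in guides.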
Finally, the DeepStan compiler rejects explicit modifications of target, which is only used during inference.
Variational parameters declared in the guide parameters block are learned during inference. In Pyro, they must be registered using the pyro.param statement which creates a PyTorch learnable parameter from a unique name and an initial value. For instance, the guide parameters block of the coin model is compiled to:
By default, the initial values of the parameters are drawn randomly (e.g., with torch.randn, which samples from a standard normal distribution). The programmer can also choose arbitrary values that are assigned to the parameters before the inference.
With the restrictions discussed in the previous section, the compilation of the guide block becomes straightforward. The ~ operator can be directly compiled to a Pyro sample statement without observations. In Pyro, the guide is a Python function that takes the same arguments as the model. For example, the guide of the coin model is compiled to:
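The registration behavior of pyro.param can be mimicked with a toy parameter store: the first call with a name registers the parameter (with an explicit or random initial value), and subsequent calls return the stored value. This ParamStore class is our illustrative analogue, not Pyro's implementation:

```python
import random

class ParamStore:
    """A toy analogue of the parameter store behind pyro.param."""
    def __init__(self, seed=0):
        self._params = {}
        self._rng = random.Random(seed)

    def param(self, name, init=None):
        # Register on first use; return the stored value on later calls.
        if name not in self._params:
            self._params[name] = init if init is not None else self._rng.random()
        return self._params[name]

store = ParamStore()
a = store.param('alpha_q', 1.0)    # explicit initial value
b = store.param('beta_q')          # default random initialization
print(a, store.param('alpha_q'))   # 1.0 1.0 -- same value on every call
```

During inference, an optimizer would then update the stored values in place between evaluations of the guide.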
Note that all the parameters defined in the parameters block are sampled in the guide.
The restrictions imposed on the guide do not prevent the guide from being arbitrarily complex. The guide of the VAE example shown in Figure 3 is a deep probabilistic model that involves a neural network.
Deep probabilistic models can be grouped into two broad categories: models involving deep neural networks to capture complex dynamics between random variables, e.g., the VAE of Figure 3, and models where the parameters of the deep neural networks are random variables themselves, i.e., Bayesian neural networks.
To express both types of models in DeepStan, we need a way to define deep neural networks. One possible option is to rely on existing Stan features and manually implement the networks using primitive data types like vectors and matrices. This approach may be difficult to scale to larger networks, and the models quickly become difficult to maintain. We propose instead to import network definitions written in a state-of-the-art deep learning framework, PyTorch, and compile the models to Pyro, which is already tightly integrated with PyTorch.
We added a new block, networks, to import neural network definitions. A network is introduced by the name of its class and a variable name. This variable can then be used in subsequent blocks, in particular the model block and the guide block (e.g., the VAE of Figure 3).
The network class must be implemented in PyTorch and the associated variable must be a valid instance of the class. The programmer can thus modify a network before the inference (e.g., use existing embeddings, or transfer already learned parameters for warmstart).
Our language extension allows the programmer to lift parameters of a neural network to random variables to create a Bayesian neural network. Figure 5
shows a simple classifier for handwritten digits based on a multi-layer perceptron (MLP) where all the parameters are lifted to random variables. Compared to the networks used in the VAE (Figure 3), notice that the parameters (regrouped under a single variable) are represented using a circle to indicate random variables.
The inference starts from prior beliefs about the parameters and learns distributions that fit observed data. We can then sample concrete weights and biases to obtain a concrete MLP. In fact, we do not need to stop at a single MLP: we can sample an ensemble of as many MLPs as we like. Then, we can feed a concrete image to all the sampled MLPs to get their predictions, followed by a vote.
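The sample-then-vote scheme can be sketched with a toy linear classifier standing in for the MLP. The weight posteriors here are hypothetical Gaussians, and sample_classifier is our illustrative helper, not the paper's code:

```python
import random
from collections import Counter

random.seed(42)

def sample_classifier(mu, sigma):
    """Draw one concrete linear classifier from a (toy) weight posterior."""
    w = [random.gauss(m, s) for m, s in zip(mu, sigma)]
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) > 0)

# Posterior beliefs about two weights, then an ensemble of 50 sampled classifiers.
mu, sigma = [1.0, -0.5], [0.1, 0.1]
ensemble = [sample_classifier(mu, sigma) for _ in range(50)]

x = [2.0, 1.0]                      # score 1.0*2 - 0.5*1 = 1.5 > 0 for most draws
votes = Counter(clf(x) for clf in ensemble)
print(votes.most_common(1)[0][0])   # 1
```

The vote aggregates the weight uncertainty: inputs near the decision boundary produce split votes, which is itself a useful uncertainty signal.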
Figure 6 shows the corresponding DeepStan code, and Figure 7 shows the PyTorch implementation of the neural network declared in the networks block.
Lifted parameters are declared in the parameters block like any other random variables. Network parameters are identified by the name of the network and a path, e.g., mlp.l1.weight. We use PyTorch naming conventions to identify the network parameters. The model assumes a prior distribution for the weights and biases of the two linear layers of the MLP. Then, for each image, the computed label follows a Categorical distribution parameterized by the output of the network. (A Categorical distribution associates a probability to each possible value of a discrete random variable.)
To compile Bayesian neural networks we use the Pyro primitive random_module, which takes a PyTorch network and a dictionary of prior distributions, and turns the network into a distribution of networks where each parameter is sampled from the corresponding prior distribution.
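The effect of lifting can be pictured on a flat dictionary of parameters: every parameter with an entry in the prior dictionary is replaced by a draw from its prior, while the others stay deterministic. This lift function is our toy analogue of random_module, not its actual implementation:

```python
import random

random.seed(0)

def lift(params, priors):
    """Toy analogue of pyro.random_module: replace each parameter that has a
    prior with a sample from that prior; leave the others untouched."""
    return {name: priors[name]() if name in priors else value
            for name, value in params.items()}

# A fake network state dict and priors on the first layer only.
net = {'l1.weight': 0.0, 'l1.bias': 0.0, 'l2.weight': 0.0}
priors = {'l1.weight': lambda: random.gauss(0, 1),
          'l1.bias':   lambda: random.gauss(0, 1)}

lifted = lift(net, priors)
print(lifted['l2.weight'])  # 0.0 -- no prior, so it stays deterministic
```

This also illustrates the mixing of probabilistic and non-probabilistic parameters mentioned below: only names present in the prior dictionary become random.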
We treat network parameters like any other random variables, that is, we apply the compilation scheme described in Section 3. A uniform prior is applied to all the lifted parameters. The distributions can be adapted in the model block using the ~ operator. For instance, in Figure 6 the model assumes a normal distribution for all the parameters. The first part of Figure 8 shows the result of the compilation of the model defined in Figure 6. We use the PyTorch state_dict primitive to access the network parameters when compiling the conditioning statements involving network parameters. Note that it is also possible to mix probabilistic and non-probabilistic parameters. In Pyro, only the parameters that appear in the prior dictionary are lifted to random variables. In DeepStan, we only lift the parameters that are declared in the parameters block.
We apply the compilation scheme described in Section 4. Variational parameters declared in the guide parameters block are compiled to learnable PyTorch parameters. Since they obey the restrictions listed in Section 4, we can directly lift the network using the distributions defined in the guide block. Each ~ statement associated with a network parameter is added to the dictionary of priors used by random_module. The second part of Figure 8 shows the result of the compilation of the guide defined in Figure 6.
This section evaluates DeepStan on multiple examples. For basic examples, we run inference on the generated Pyro code using NUTS and compare the results with Stan. We show that using explicit VI on these examples gives comparable or even more accurate results. Finally, for deep probabilistic models, we compare the generated Pyro code against handwritten code and find comparable results.



               (a) Coin                         (b) Double Normal                  (c) Multimodal
       Stan   DeepStan  DeepStan VI     Stan    DeepStan  DeepStan VI     Stan   DeepStan  DeepStan VI
mean   0.249  0.247     0.257           1000.0  1000.0    1000.1          1.1    1.1       0.9
std    0.121  0.124     0.114           0.7     0.7       0.7             1.4    1.4       1.4
min    0.015  0.010     0.016           997.4   997.9     997.6           3.4    3.0       2.1
25%    0.161  0.149     0.171           999.6   999.5     999.6           0.0    0.0       0.3
50%    0.236  0.234     0.246           1000.0  1000.0    1000.0          1.0    1.1       0.5
75%    0.320  0.326     0.331           1000.5  1000.5    1000.5          2.1    2.2       2.1
max    0.794  0.748     0.686           1002.9  1002.6    1002.7          5.3    5.2       4.8
Our first example is the coin model presented in Figure 1. We compare the distribution inferred by Stan and DeepStan (via Pyro) with and without VI (we use the guide presented in Section 2.1). Figure 9(a) and Figure 10(a) show the results. In each of these three plots, the two distributions nicely overlap, which indicates comparable results.
To validate the compilation scheme presented in Section 3, we implemented a model that samples the same normal distribution twice:
The exact posterior distribution for this model is again a normal distribution. We see in Figure 9(b) that Stan and DeepStan return comparable results. The distribution details presented in Figure 10(b) show that Stan, and DeepStan with and without VI, are able to accurately compute the parameters of the distribution.
The third example compares Stan and DeepStan on a simple multimodal distribution. The model is a mixture of two Gaussians with different means but identical variance:
Figure 9(c) shows that the results of Stan and DeepStan are comparable: using NUTS, they both fail to identify the two modes. This is a known limitation of HMC. On the other hand, using explicit VI, we can provide a custom guide that will correctly infer the two clusters. Note, however, that this approach requires a priori knowledge of the shape of the true posterior.
The table below summarizes the execution times for the basic examples, running on a MacBook Pro with a 2.2 GHz 4-core i7. Stan first compiles the model to C++, which takes significant time, but inference is then impressively fast (less than a second on all experiments). In comparison, the compilation from DeepStan to Pyro is quasi-instantaneous, but the Pyro version of NUTS is slower (around 10s, except for the pathological case of the multimodal distribution). Except for the simple coin example, VI is typically much slower than NUTS, but is able to compute the multimodal distribution.
Execution times (seconds):

                  Stan                         DeepStan
                  Compilation   Inference      NUTS      VI
Coin              69.4          0.1            11.3      17.5
Double Normal     67.7          0.2            16.9      177.3
Multimodal        78.1          0.7            8073.1    522.8
Since Stan lacks support for deep probabilistic models, we could not use it as a baseline for experimental comparison. Instead, we compare the performance of the code generated by our compiler with handwritten Pyro code. We consider two examples: the VAE described in Section 2.2 and the Bayesian MLP described in Section 5.2.
Variational auto-encoders were not designed as predictive models but as generative models to reconstruct images. Evaluating the performance of a VAE is thus non-obvious. We use the following experimental setting. We trained two VAEs on the MNIST dataset using VI: one handwritten in Pyro, the other written in DeepStan. For each image in the test set, the trained VAEs compute a latent representation of dimension 5. We then cluster these representations using K-means with 10 clusters. Finally, we measure the performance of a VAE with the pairwise F1 metric: true positives are pairs of images of the same digit that appear in the same cluster.
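One plausible reading of the pairwise F1 metric is the following sketch (the paper's exact implementation may differ): iterate over all pairs of points and count a true positive when a pair shares both the true digit label and the predicted cluster.

```python
from itertools import combinations

def pairwise_f1(labels, clusters):
    """Pairwise F1: a pair is a true positive when the two points share
    both the true label and the predicted cluster."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels)), 2):
        same_label = labels[i] == labels[j]
        same_cluster = clusters[i] == clusters[j]
        if same_cluster and same_label:
            tp += 1
        elif same_cluster:
            fp += 1        # clustered together but different digits
        elif same_label:
            fn += 1        # same digit but split across clusters
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# A perfect clustering yields F1 = 1.0.
print(pairwise_f1([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
```

Pairwise metrics sidestep the label-permutation problem: cluster identities never need to be matched to digit identities.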
The table below presents the scores of the two VAEs. These numbers show that compiling DeepStan to Pyro does not impact the performance of such deep probabilistic models.
            F1     Precision   Recall
Pyro        0.41   0.43        0.40
DeepStan    0.43   0.44        0.42
Similarly, we trained two versions of the Bayesian MLP described in Section 5.2: one handwritten in Pyro, the other written in DeepStan. We trained both models for 10 epochs on the training set. Training for only 10 epochs means the models make more mistakes, which is useful for evaluating whether their mistakes agree.
On the test set, the two models achieve comparable accuracy. To compare the models on incorrect predictions, we computed the agreement between the two models' predictions regardless of the correct value. The high mean agreement between the models indicates that they make similar mistakes. Again, these numbers show that compiling DeepStan models to Pyro has little impact on the performance of the models.
In the literature, deep probabilistic models are designed as probabilistic models, e.g., the Variational Auto-Encoder (VAE) Kingma et al. (2014) or the Deep Markov Model (DMM) Krishnan et al. (2017). But the reference implementations often rely on ad-hoc encodings in existing deep learning frameworks. This approach can be partly explained by the wide adoption of these frameworks. In recent years, taking advantage of the maturity of deep learning frameworks, multiple deep probabilistic programming languages have been proposed: Edward Tran et al. (2017) and ZhuSuan Shi et al. (2017) built on top of TensorFlow, Pyro Uber (2017) and ProbTorch Siddharth et al. (2017) built on top of PyTorch, and PyMC3 Salvatier et al. (2016) built on top of Theano. All these languages are implemented as libraries. Users thus need to master the entire technology stack: the library, the underlying deep-learning framework, and the host language (typically Python).
Stan Carpenter et al. (2017), on the other hand, provides a standalone language and a compiler with dedicated static analyses to ease the design of probabilistic models. However, this design choice makes it difficult to interact with existing deep learning libraries, and advanced neural networks cannot be easily implemented using only Stan primitive constructs or trained using only Stan samplers.
Our proposal aims at lowering the entry bar to deep probabilistic programming for the active community of Stan users. Our language extension is designed to be as non-invasive as possible. Stan users can thus start experimenting with our Pyro backend on simple examples, then move on to examples that involve off-the-shelf neural networks in PyTorch, and finally graduate to writing their own neural networks.
In addition, our language extension can express variational inference guides. This is of particular interest for deep probabilistic models where guides must often be carefully crafted Kingma & Welling (2013); Krishnan et al. (2017).
As mentioned in Section 2, Stan offers a black-box version of variational inference called ADVI Kucukelbir et al. (2015) that automatically synthesizes the guide from the model. The main idea is to generate a guide that samples each parameter from a normal distribution parameterized by two variational parameters (location and scale). This synthesis scheme satisfies all the restrictions listed in Section 4. It would thus be straightforward to implement ADVI in our Pyro backend.
This paper introduces DeepStan, a language and compiler for deep probabilistic programming. The language is a superset of Stan, extending it for variational inference and for interfacing with deep neural networks written in PyTorch. The compiler translates both the existing Stan language and our extensions to Pyro. Before our work, data scientists could only pick two out of the following three qualities: (1) deep neural networks for hierarchical representation learning, (2) probabilistic models for handling uncertainty with rigor, and (3) high-level languages for ease of use. This paper shows how to bring all three together at last.
Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278–1286, 2014.