Inference Compilation and Universal Probabilistic Programming

by Tuan Anh Le, et al.
University of Oxford

We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods. We call what we do "compilation of inference" because our method transforms a denotational specification of an inference problem in the form of a probabilistic program written in a universal programming language into a trained neural network denoted in a neural network specification language. When at test time this neural network is fed observational data and executed, it performs approximate inference in the original model specified by the probabilistic program. Our training objective and learning procedure are designed to allow the trained neural network to be used as a proposal distribution in a sequential importance sampling inference engine. We illustrate our method on mixture models and Captcha solving and show significant speedups in the efficiency of inference.





1 Introduction

Figure 1: Our approach to compiled inference. Given only a probabilistic program $p(\mathbf{x}, \mathbf{y})$, during compilation we automatically construct a neural network architecture comprising an LSTM core and various embedding and proposal layers specified by the probabilistic program, and train it using an infinite stream of training data generated from the model. When this expensive compilation stage is complete, we are left with a compilation artifact of weights and neural architecture specialized for the given probabilistic program. During inference, the probabilistic program and the compilation artifact are used in a sequential importance sampling procedure, where the artifact parameterizes the proposal distribution $q(\mathbf{x} \mid \mathbf{y}; \phi)$.

Probabilistic programming uses computer programs to represent probabilistic models (Gordon et al., 2014). Probabilistic programming systems such as STAN (Carpenter et al., 2015), BUGS (Lunn et al., 2000), and Infer.NET (Minka et al., 2014) allow efficient inference in a restricted space of generative models, while systems such as Church (Goodman et al., 2008), Venture (Mansinghka et al., 2014), and Anglican (Wood et al., 2014)—which we call universal—allow inference in unrestricted models. Universal probabilistic programming systems are built upon Turing complete programming languages which support constructs such as higher order functions, stochastic recursion, and control flow.

There has been a spate of recent work addressing the production of artifacts via "compiling away" or "amortizing" inference (Gershman and Goodman, 2014). This body of work is roughly organized into two camps. The one in which this work lives, arguably the camp organized around "wake-sleep" (Hinton et al., 1995), is about offline unsupervised learning of observation-parameterized importance-sampling distributions for Monte Carlo inference algorithms. In this camp, the approach of Paige and Wood (2016) is closest to ours in spirit; they propose learning autoregressive neural density estimation networks offline that approximate inverse factorizations of graphical models, so that at test time the trained "inference network" starts with the values of all observed quantities and progressively proposes parameters for latent nodes in the original structured model. However, inversion of the dependency structure is impossible in the universal probabilistic program model family, so our approach instead focuses on learning proposals for "forward" inference methods in which no model dependency inversion is performed. In this sense, our work can be seen as being inspired by that of Kulkarni et al. (2015) and Ritchie et al. (2016b), where program-specific neural proposal networks are trained to guide forward inference. Our aim, though, is to be significantly less model-specific. At a high level, what characterizes this camp is the fact that the artifacts are trained to suggest sensible yet varied parameters for a given, explicitly structured, and therefore potentially interpretable model.

The other related camp, emerging around the variational autoencoder (Kingma and Welling, 2014; Burda et al., 2016), also amortizes inference in the manner we describe, but additionally and simultaneously learns the generative model, within the structural regularization framework of a parameterized non-linear transformation of the latent variables. Approaches in this camp generally produce recognition networks that nonlinearly transform observational data at test time into parameters of a variational posterior approximation, albeit one with less conditional structure, excepting the recent work of Johnson et al. (2016). A chief advantage of this approach is that the learned model, as opposed to the recognition network, is simultaneously regularized both towards being simple to perform inference in and towards explaining the data well.

In this work, we concern ourselves with performing inference in generative models specified as probabilistic programs, while recognizing that alternative methods exist for amortizing inference while simultaneously learning model structure. Our contributions are twofold: (1) we work out ways to handle the complexities introduced when compiling inference for the class of generative models induced by universal probabilistic programming languages, and establish a technique to embed neural networks in forward probabilistic programming inference methods such as sequential importance sampling (Doucet and Johansen, 2009); (2) we develop an adaptive neural network architecture, comprising a recurrent neural network core and embedding and proposal layers specified by the probabilistic program, that is reconfigured on-the-fly for each execution trace and trained with an infinite stream of training data sampled from the generative model. This establishes a framework combining deep neural networks and generative modeling with universal probabilistic programs (Figure 1).

We begin by providing background information and reviewing related work in Section 2. In Section 3 we introduce inference compilation for sequential importance sampling, the objective function, and the neural network architecture. Section 4 demonstrates our approach on two examples, mixture models and Captcha solving, followed by the discussion in Section 5.

2 Background

2.1 Probabilistic Programming

Probabilistic programs denote probabilistic generative models as programs that include sample and observe statements (Gordon et al., 2014). Both sample and observe are functions that specify random variables in the generative model, taking a probability distribution object as an argument; observe, in addition, specifies the conditioning of this random variable upon a particular observed value supplied in a second argument. These observed values induce a conditional probability distribution over execution traces, whose approximations and expected values we want to characterize by performing inference.
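To make the semantics of sample and observe concrete, the following is a minimal sketch, not the Anglican implementation: the trace-recording interpreter, the Gaussian-only distributions, and the toy model are all illustrative assumptions.

```python
import random
import math

def log_normal_pdf(x, mu, sigma):
    """Log density of Normal(mu, sigma) evaluated at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

class Trace:
    """Records sampled values and accumulates the log joint density of a run."""
    def __init__(self):
        self.samples = []    # values drawn at sample statements
        self.log_joint = 0.0

    def sample(self, mu, sigma):
        # sample: draw a value, score it under the prior conditional density,
        # and append it to the execution trace
        x = random.gauss(mu, sigma)
        self.log_joint += log_normal_pdf(x, mu, sigma)
        self.samples.append(x)
        return x

    def observe(self, mu, sigma, value):
        # observe: condition on the given value; only the likelihood is scored,
        # and nothing is appended to the trace
        self.log_joint += log_normal_pdf(value, mu, sigma)
        return value

def model(trace, data):
    """A toy probabilistic program: unknown mean, Gaussian observations."""
    mu = trace.sample(0.0, 1.0)
    for y in data:
        trace.observe(mu, 0.5, y)
    return mu
```

Running `model` deterministically replays the program except at sample statements, exactly as the trace semantics above describe.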

An execution trace of a probabilistic program is obtained by successively executing the program deterministically, except when encountering sample statements, at which point a value is generated according to the specified probability distribution and appended to the execution trace. We assume the order in which the $N$ observe statements are encountered is fixed. Hence we denote the observed values by $\mathbf{y} = (y_n)_{n=1}^{N}$ for a fixed $N$ in all possible traces.

Figure 2: Results from counting and localizing objects detected in the PASCAL VOC 2007 dataset (Everingham et al., 2010). We use the corresponding categories of object detectors (i.e., person, cat, bicycle) from the MatConvNet (Vedaldi and Lenc, 2015) implementation of the Fast R-CNN (Girshick, 2015). The detector output is thresholded at a high detection confidence and each bounding box is summarized by its single central point. Inference using a single trained neural network was able to accurately identify both the number of detected objects and their locations for all categories. MAP results from 100 particles.

Depending on the probabilistic program and the values generated at sample statements, the order in which the execution encounters sample statements, as well as the number of encountered sample statements, may differ from one trace to another. Therefore, given a scheme which assigns a unique address to each sample statement according to its lexical position in the probabilistic program, we represent an execution trace of a probabilistic program as a sequence

$$(x_t, a_t, i_t)_{t=1}^{T} \;, \tag{1}$$

where $x_t$, $a_t$, and $i_t$ are respectively the sample value, address, and instance (call number) of the $t$th entry in a given trace, and $T$ is a trace-dependent length. Instance values count the number of sample values obtained from the specific sample statement at address $a_t$ up to time step $t$. For each trace, a sequence $\mathbf{x} = (x_t)_{t=1}^{T}$ holds the sampled values from the sample statements.

The joint probability density of an execution trace is

$$p(\mathbf{x}, \mathbf{y}) = \prod_{t=1}^{T} f_{a_t}(x_t \mid x_{1:t-1}) \prod_{n=1}^{N} g_n(y_n \mid x_{1:\tau(n)}) \;,$$

where $f_{a_t}$ is the probability distribution specified by the sample statement at address $a_t$ and $g_n$ is the probability distribution specified by the $n$th observe statement. $f_{a_t}(x_t \mid x_{1:t-1})$ is called the prior conditional density given the sample values $x_{1:t-1}$ obtained before encountering the $t$th sample statement. $g_n(y_n \mid x_{1:\tau(n)})$ is called the likelihood density given the sample values obtained before encountering the $n$th observe statement, where $\tau$ is a mapping from the index $n$ of the observe statement to the index of the last sample statement encountered before this observe statement during the execution of the program.

Inference in such models amounts to computing an approximation of the posterior $p(\mathbf{x} \mid \mathbf{y})$ and its expected values $\mathbb{E}_{p(\mathbf{x} \mid \mathbf{y})}[h(\mathbf{x})]$ over chosen functions $h$.

While there are many inference algorithms for universal probabilistic programming languages (Wingate et al., 2011; Ritchie et al., 2016a; Wood et al., 2014; Paige et al., 2014; Rainforth et al., 2016), we focus on algorithms in the importance sampling family, in the context of which we will develop our scheme for amortized inference. This is related to, but different from, approaches that adapt proposal distributions for the importance sampling family of algorithms (Gu et al., 2015; Cheng and Druzdzel, 2000).

2.2 Sequential Importance Sampling

Sequential importance sampling (SIS) (Arulampalam et al., 2002; Doucet and Johansen, 2009) is a method for performing inference over execution traces of a probabilistic program (Wood et al., 2014) whereby a weighted set of samples $\{(w^{(k)}, \mathbf{x}^{(k)})\}_{k=1}^{K}$ is used to approximate the posterior and the expectations of functions as

$$\hat{p}(\mathbf{x} \mid \mathbf{y}) = \sum_{k=1}^{K} w^{(k)} \, \delta\!\left(\mathbf{x}^{(k)} - \mathbf{x}\right) \Big/ \sum_{j=1}^{K} w^{(j)} \;,$$

where $\delta$ is the Dirac delta function.

SIS requires designing proposal distributions $q_{a,i}$ corresponding to the addresses $a$ of all sample statements in the probabilistic program and their instance values $i$. A proposal execution trace is built by executing the program as usual, except that when a sample statement at address $a$ is encountered at time $t$, a proposal sample value $x_t$ is drawn from the proposal distribution $q_{a,i}(\cdot \mid x_{1:t-1})$ given the proposal sample values until that point. We obtain $K$ proposal execution traces $\mathbf{x}^{(k)}$ (possibly in parallel), to which we assign weights

$$w^{(k)} = \prod_{n=1}^{N} g_n\!\left(y_n \,\middle|\, x^{(k)}_{1:\tau(n)}\right) \prod_{t=1}^{T^{(k)}} \frac{f_{a_t}\!\left(x^{(k)}_t \,\middle|\, x^{(k)}_{1:t-1}\right)}{q_{a_t, i_t}\!\left(x^{(k)}_t \,\middle|\, x^{(k)}_{1:t-1}\right)}$$

for $k = 1, \dots, K$, with $T^{(k)}$ denoting the length of the $k$th proposal execution trace.
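The weighting scheme above can be illustrated on a toy conjugate model; the model, proposal parameters, and particle count below are illustrative assumptions, and the single latent variable stands in for a full execution trace.

```python
import random
import math

def log_normal_pdf(x, mu, sigma):
    """Log density of Normal(mu, sigma) evaluated at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def importance_sample(data, proposal_mu, proposal_sigma, num_particles=5000):
    """Importance sampling for the toy model mu ~ N(0,1), y_i ~ N(mu,1),
    proposing mu from q = N(proposal_mu, proposal_sigma)."""
    log_weights, values = [], []
    for _ in range(num_particles):
        mu = random.gauss(proposal_mu, proposal_sigma)   # draw from proposal q
        # weight = likelihood * prior / proposal, accumulated in log space
        lw = log_normal_pdf(mu, 0.0, 1.0)
        for y in data:
            lw += log_normal_pdf(y, mu, 1.0)
        lw -= log_normal_pdf(mu, proposal_mu, proposal_sigma)
        log_weights.append(lw)
        values.append(mu)
    # self-normalize the weights (subtract the max for numerical stability)
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    total = sum(w)
    posterior_mean = sum(wi * v for wi, v in zip(w, values)) / total
    return posterior_mean
```

For four observations equal to 1.0, the exact posterior mean is 0.8, which the self-normalized estimate recovers when the proposal places mass near the posterior; a poorly matched proposal would need far more particles.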

3 Approach

We achieve inference compilation in universal probabilistic programming systems through proposal distribution adaptation, approximating $p(\mathbf{x} \mid \mathbf{y})$ in the framework of SIS. Assuming we have a set of adapted proposals $q_{a,i}(x_t \mid x_{1:t-1}, \mathbf{y})$ such that their joint $q(\mathbf{x} \mid \mathbf{y})$ is close to $p(\mathbf{x} \mid \mathbf{y})$, the resulting inference algorithm remains unchanged from the one described in Section 2.2, except for the replacement of $q_{a_t,i_t}(x_t \mid x_{1:t-1})$ by $q_{a_t,i_t}(x_t \mid x_{1:t-1}, \mathbf{y})$.

Inference compilation amounts to minimizing a function, specifically the loss of a neural network architecture, which makes the proposal distributions good in the sense that we specify in Section 3.1. The process of generating training data for this neural network architecture from the generative model is described in Section 3.2. At the end of training, we obtain a compilation artifact comprising the neural network components—the recurrent neural network core and the embedding and proposal layers corresponding to the original model denoted by the probabilistic program—and the set of trained weights, as described in Section 3.3.

3.1 Objective Function

We use the Kullback–Leibler divergence $D_{\mathrm{KL}}\!\left(p(\mathbf{x} \mid \mathbf{y}) \,\|\, q(\mathbf{x} \mid \mathbf{y}; \phi)\right)$ as our measure of closeness between $p(\mathbf{x} \mid \mathbf{y})$ and $q(\mathbf{x} \mid \mathbf{y}; \phi)$. To achieve closeness over many possible $\mathbf{y}$'s, we take the expectation of this quantity under the distribution of $\mathbf{y}$ and ignore the terms excluding $\phi$ in the last equality:

$$\mathcal{L}(\phi) = \mathbb{E}_{p(\mathbf{y})}\!\left[ D_{\mathrm{KL}}\!\left( p(\mathbf{x} \mid \mathbf{y}) \,\|\, q(\mathbf{x} \mid \mathbf{y}; \phi) \right) \right] = \mathbb{E}_{p(\mathbf{x}, \mathbf{y})}\!\left[ -\log q(\mathbf{x} \mid \mathbf{y}; \phi) \right] + \text{const.} \tag{7}$$

This objective function corresponds to the negative entropy criterion. Individual adapted proposals $q_{a_t,i_t}(x_t \mid \eta_t)$ depend on $\eta_t$, the output of the neural network at time step $t$, parameterized by $\phi$.

Considering the factorization

$$q(\mathbf{x} \mid \mathbf{y}; \phi) = \prod_{t=1}^{T} q_{a_t, i_t}\!\left(x_t \mid \eta_t(x_{1:t-1}, \mathbf{y}, \phi)\right) ,$$

the neural network architecture must be able to map to a variable number of outputs and incorporate sampled values in a sequential manner, concurrent with the running of the inference engine. We describe our neural network architecture in detail in Section 3.3.

3.2 Training Data

Since Eq. 7 is an expectation over the joint distribution $p(\mathbf{x}, \mathbf{y})$, we can use the following noisy unbiased estimate of its gradient to minimize the objective:

$$\nabla_\phi \mathcal{L}(\phi) \approx \frac{1}{M} \sum_{m=1}^{M} -\nabla_\phi \log q\!\left(\mathbf{x}^{(m)} \mid \mathbf{y}^{(m)}; \phi\right) .$$

Here, $(\mathbf{x}^{(m)}, \mathbf{y}^{(m)})$ is the $m$th training (probabilistic program execution) trace, generated by running an unconstrained probabilistic program corresponding to the original one. This unconstrained probabilistic program is obtained by a program transformation which replaces each observe statement in the original program by sample and ignores its second argument.
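A minimal sketch of this training loop, under simplifying assumptions: a one-latent-variable model, a unit-variance Gaussian proposal whose mean is an affine function of the observation (standing in for the LSTM-based network), minibatch size 1, and plain SGD rather than Adam.

```python
import random

def generate_trace():
    """Sample (x, y) from the unconstrained model: x ~ N(0,1), y ~ N(x,1)."""
    x = random.gauss(0.0, 1.0)
    y = random.gauss(x, 1.0)
    return x, y

def train(num_steps=100000, lr=0.002):
    """SGD on the amortized objective E_{p(x,y)}[-log q(x | eta(y))],
    with q a unit-variance Gaussian whose mean is eta(y) = a*y + b."""
    a, b = 0.0, 0.0
    for _ in range(num_steps):
        x, y = generate_trace()   # fresh trace each step; nothing is reused
        mean = a * y + b
        # gradient of -log N(x; mean, 1) with respect to mean is (mean - x)
        g = mean - x
        a -= lr * g * y           # chain rule through eta(y) = a*y + b
        b -= lr * g
    return a, b
```

For this conjugate model the exact posterior mean of $x$ given $y$ is $y/2$, so training should drive $a$ toward 0.5 and $b$ toward 0, illustrating how minimizing Eq. 7 makes the proposal track the true posterior.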

Universal probabilistic programming languages support stochastic branching and can generate execution traces with a changing (and possibly unbounded) number of random choices. We must, therefore, keep track of information about the addresses and instances of the samples in the execution trace, as introduced in Eq. 1. Specifically, we generate our training data in the form of minibatches (Cotter et al., 2011) sampled from the generative model $p(\mathbf{x}, \mathbf{y})$:

$$\left\{ \left( \left(x_t^{(m)}, a_t^{(m)}, i_t^{(m)}\right)_{t=1}^{T^{(m)}},\; \mathbf{y}^{(m)} \right) \right\}_{m=1}^{M} ,$$

where $M$ is the minibatch size and, for a given trace $m$, the sample values, addresses, and instances are respectively denoted $x_t^{(m)}$, $a_t^{(m)}$, and $i_t^{(m)}$, and the values sampled from the distributions in observe statements are denoted $\mathbf{y}^{(m)}$.

During compilation, training minibatches are generated on-the-fly from the probabilistic generative model and streamed to a stochastic gradient descent (SGD) procedure, specifically Adam (Kingma and Ba, 2015), for optimizing the neural network weights $\phi$.

Minibatches of this infinite stream of training data are discarded after each SGD update; we therefore have no notion of a finite training set and associated issues such as overfitting to a set of training data and early stopping using a validation set (Prechelt, 1998). We do sample a validation set that remains fixed during training to compute validation losses for tracking the progress of training in a less noisy way than that admitted by the training loss.

3.3 Neural Network Architecture

Our compilation artifact is a collection of neural network components and their trained weights, specialized for performing inference in the model specified by a given probabilistic program. The neural network architecture comprises a non-domain-specific recurrent neural network (RNN) core and domain-specific observation embedding and proposal layers specified by the given program. We denote the set of combined parameters of all neural network components by $\phi$.

RNNs are a popular class of neural network architectures well suited for sequence-to-sequence modeling (Sutskever et al., 2014), with a wide spectrum of state-of-the-art results in domains including machine translation (Bahdanau et al., 2014), video captioning (Venugopalan et al., 2014), and learning execution traces (Reed and de Freitas, 2016). We use RNNs in this work owing to their ability to encode dependencies over time in the hidden state. In particular, we use the long short-term memory (LSTM) architecture, which helps mitigate the vanishing and exploding gradient problems of RNNs (Hochreiter and Schmidhuber, 1997).

The overall architecture (Figure 3) is formed by combining the LSTM core with a domain-specific observe embedding layer $f^{\text{obs}}$ and several sample embedding layers $f^{\text{smp}}_{a,i}$ and proposal layers that are distinct for each address–instance pair $(a, i)$. As described in Section 3.2, each probabilistic program execution trace can be of different length and composed of a different sequence of addresses and instances. To handle this complexity, we define an adaptive neural network architecture that is reconfigured for each encountered trace by attaching the corresponding embedding and proposal layers to the LSTM core, creating new layers on-the-fly upon the first encounter with each $(a, i)$ pair.

Evaluation starts by computing the observe embedding $f^{\text{obs}}(\mathbf{y})$. This embedding is computed once per trace and repeatedly supplied as an input to the LSTM at each time step. An alternative is to supply this embedding only in the first time step, an approach preferred by Karpathy and Fei-Fei (2015) and Vinyals et al. (2015) to prevent overfitting (see also Section 4.2).

At each time step $t$, the input of the LSTM is constructed as a concatenation of

  1. the observe embedding $f^{\text{obs}}(\mathbf{y})$,

  2. the embedding of the previous sample, $f^{\text{smp}}_{a_{t-1}, i_{t-1}}(x_{t-1})$, using zero for $t = 1$, and

  3. the one-hot encodings of the current address $a_t$, instance $i_t$, and proposal type of the sample statement

for which the artifact will generate the parameter $\eta_t$ of the proposal distribution $q_{a_t, i_t}(x_t \mid \eta_t)$. The parameter $\eta_t$ is obtained by mapping the LSTM output through the corresponding proposal layer. The LSTM network has the capacity to incorporate all previous inputs in its hidden state; this allows the parametric proposal to take into account all previous samples and all observations.
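The on-the-fly reconfiguration described above amounts to lazily instantiating layers keyed by address–instance pairs. A schematic sketch follows, in which the layer factory and the dictionary "layers" it produces are placeholders for real neural network modules:

```python
class ProposalLayerStore:
    """Lazily creates one embedding/proposal layer per (address, instance)
    pair, mirroring how the architecture is reconfigured per trace."""

    def __init__(self, make_layer):
        self.layers = {}            # cache keyed by (address, instance)
        self.make_layer = make_layer

    def get(self, address, instance):
        key = (address, instance)
        if key not in self.layers:
            # first encounter of this pair: create a fresh layer on-the-fly
            self.layers[key] = self.make_layer(address, instance)
        return self.layers[key]
```

Because the cache persists across traces, layers created during one execution are reused (with their trained weights) whenever later traces visit the same address–instance pair, while traces that take different stochastic branches simply trigger the creation of additional layers.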

During training (compilation), we supply the actual sample values $x_t$ to the embedding layers, and we are interested in the parameter $\eta_t$ in order to calculate the per-sample gradient to use in SGD.

During inference, the evaluation proceeds by requesting proposal parameters $\eta_t$ from the artifact for specific address–instance pairs as these are encountered. The previous value $x_{t-1}$ is the one sampled from the proposal distribution in the previous time step.

Figure 3: The neural network architecture. $f^{\text{obs}}$: observe embedding; $f^{\text{smp}}_{a,i}$: sample embeddings; $x_{t-1}$: previous sample value; one-hot encodings of the current address $a_t$, instance $i_t$, and proposal type; the LSTM input at each time step is their concatenation, and the LSTM output is mapped through the proposal layers to the proposal parameters $\eta_t$. Note that the LSTM core can possibly be a stack of multiple LSTMs.

The neural network artifact is implemented in Torch (Collobert et al., 2011), and it uses a ZeroMQ-based protocol for interfacing with the Anglican probabilistic programming system (Wood et al., 2014). This setup also allows distributed training (e.g., Dean et al. (2012)) and inference with GPU support across many machines, which is beyond the scope of this paper. The source code for our framework and for reproducing the experiments in this paper can be found on our project page.

4 Experiments

We demonstrate our inference compilation framework on two examples. In our first example we demonstrate an open-universe mixture model. In our second, we demonstrate Captcha solving via probabilistic inference (Mansinghka et al., 2013). A video of inference on real test data for both examples is available online.

Figure 4: Typical inference results for an isotropic Gaussian mixture model with a fixed number of clusters. Shown in all panels: kernel density estimation of the distribution over maximum a posteriori values of the cluster means over 50 independent runs. This figure illustrates the uncertainty in the estimate of where the cluster means are for each given number of particles, or equivalently, a fixed amount of computation. The top row shows that, given more computation, inference, as expected, slowly becomes less noisy in expectation. In contrast, the bottom row shows that the proposal learned and used by inference compilation produces a low-noise, highly accurate estimate given even a very small amount of computation. Effectively, the encoder learns to simultaneously localize all of the clusters highly accurately.

4.1 Mixture Models

Mixture modeling, e.g. the Gaussian mixture model (GMM) shown in Figure 5, is about density estimation, clustering, and counting. The inference problems posed by a GMM, given a set of vector observations, are to identify how many clusters there are, where they are, and how big they are, and optionally, which data points belong to each cluster.

We investigate inference compilation for a two-dimensional GMM in which the number of clusters is unknown. Inference arises from observing the values of the data points (Figure 5, line 9) and inferring the posterior number of clusters $K$ and the set of cluster mean and covariance parameters $\{\mu_k, \Sigma_k\}_{k=1}^{K}$. We assume that the input data to this model have been translated to the origin and normalized to lie within $[-1, 1]$ in both dimensions.

In order to make good proposals for such inference, the neural network must be able to count, i.e., extract and represent information about how many clusters there are and, conditioned on that, localize the clusters. Towards that end, we select a convolutional neural network as the observation embedding $f^{\text{obs}}$, whose input is a two-dimensional histogram image of the binned observed data.
In presenting observational data assumed to arise from a mixture model to the neural network, there are some important considerations that must be accounted for. In particular, there are symmetries in mixture models (Nishihara et al., 2013) that must be broken in order for training and inference to work. First, there are $K!$ ways to label the classes. Second, there are $N!$ ways the individual data points could be permuted. Even in experiments like ours with modest $K$ and $N$, this presents a major challenge for neural network training. We break the first symmetry by, at training time, sorting the clusters by the Euclidean distance of their means from the origin and relabeling all points with a permutation that labels points from the cluster nearest the origin as coming from the first cluster, the next closest as the second, and so on. This is only approximate symmetry breaking, as many different clusters may be very nearly the same distance from the origin. We avoid the second symmetry by predicting only the number, means, and covariances of the clusters, not the individual cluster assignments. The net effect of the sorting is that the proposal mechanism will learn to propose the cluster nearest the origin first, as it receives training data always sorted in this manner.
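The first symmetry-breaking step can be sketched as follows; this is a hypothetical helper, with cluster means taken as 2D points and labels as component indices:

```python
import math

def break_label_symmetry(means, labels):
    """Relabel mixture components so that cluster 0 is the one whose mean is
    nearest the origin, cluster 1 the next closest, and so on.
    means: list of (x, y) cluster means; labels: per-point component indices."""
    # sort component indices by Euclidean distance of the mean from the origin
    order = sorted(range(len(means)), key=lambda k: math.hypot(*means[k]))
    # map each old component index to its new, distance-sorted index
    relabel = {old: new for new, old in enumerate(order)}
    new_means = [means[k] for k in order]
    new_labels = [relabel[z] for z in labels]
    return new_means, new_labels
```

Applying this to every training trace gives the network a canonical ordering to learn; ties (clusters at nearly equal distances) are why the breaking is only approximate.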

Figure 4, where we fix the number of clusters, shows that we are able to learn a proposal that makes inference dramatically more efficient than sequential Monte Carlo (SMC) (Doucet and Johansen, 2009). Figure 2 shows one kind of application such an efficient inference engine enables: simultaneous object counting (Lempitsky and Zisserman, 2010) and localization for computer vision, where we achieve counting by setting the prior over the number of clusters to be a uniform distribution over a range of integers.

1: procedure GaussianMixture
2:   K ← sample number of clusters
3:   for k = 1, ..., K do
4:     μ_k, Σ_k ← sample cluster parameters
5:   Generate data:
6:   π ← uniform weights over clusters
7:   for n = 1, ..., N do
8:     z_n ← sample class label from π
9:     y_n ← sample data point from cluster z_n
10:  return y_{1:N}
Figure 5: Pseudo-algorithm for generating Gaussian mixtures with a variable number of clusters. At test time we observe the data y_{1:N} and infer K and {μ_k, Σ_k}.
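The generative process of Figure 5 can be sketched in Python as follows; the specific priors (uniform cluster count, means uniform on the normalized square, and an isotropic standard deviation range) are illustrative assumptions rather than the paper's exact choices:

```python
import random

def gaussian_mixture(max_clusters=5, num_points=100):
    """Generative process of Figure 5: sample K, the cluster parameters,
    then data points with uniform mixture weights."""
    K = random.randint(1, max_clusters)            # number of clusters
    params = []
    for _ in range(K):
        # assumed priors: means on the normalized square, isotropic std
        mu = (random.uniform(-1, 1), random.uniform(-1, 1))
        sigma = random.uniform(0.01, 0.2)
        params.append((mu, sigma))
    data = []
    for _ in range(num_points):
        z = random.randrange(K)                    # class label, uniform weights
        mu, sigma = params[z]
        y = (random.gauss(mu[0], sigma), random.gauss(mu[1], sigma))
        data.append(y)
    return K, params, data
```

Running this repeatedly yields the infinite stream of labeled training pairs described in Section 3.2: the returned data play the role of $\mathbf{y}$, while $K$ and the cluster parameters are the latents the proposal network learns to predict.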

4.2 Captcha Solving

We also demonstrate our inference compilation framework by writing generative probabilistic models for Captchas (von Ahn et al., 2003) and comparing our results with the literature. Captcha solving is well suited to a generative probabilistic programming approach because its latent parameterization is low-dimensional and interpretable by design. Using conventional computer vision techniques, the problem has previously been approached with segment-and-classify pipelines (Starostenko et al., 2015; Bursztein et al., 2014; Gao et al., 2014, 2013), and state-of-the-art results have been obtained using deep convolutional neural networks (CNNs) (Goodfellow et al., 2014; Stark et al., 2015), at the cost of requiring very large (on the order of millions) labeled training sets for supervised learning.

1: procedure Captcha
2:   ν ← sample number of letters
3:   κ ← sample kerning value
4:   Generate letters:
5:   for i = 1, ..., ν do
6:     λ_i ← sample letter identity
7:   Render:
8:   γ ← render(λ_{1:ν}, κ)
9:   π ← sample noise parameters
10:  γ ← noise(γ, π)
11:  return γ
Figure 6: Pseudo-algorithm and a sample trace of the Facebook Captcha generative process, with noise stages including displacement-field, stroke, and ellipse noise. Variations include sampling font styles, coordinates for letter placement, and language-model-like letter identity distributions (e.g., for meaningful Captchas). Noise parameters may or may not be a part of inference. At test time we observe the image γ and infer the letters λ_{1:ν}.

We start by writing generative models for each of the types surveyed by Bursztein et al. (2014), namely Baidu 2011, Baidu 2013, eBay, Yahoo, reCaptcha, and Wikipedia. Figure 6 provides an overall summary of our modeling approach. The actual models include domain-specific letter dictionaries, font styles, and various types of renderer noise for matching each Captcha style. In particular, implementing the displacement fields technique of Simard et al. (2003) proved instrumental in achieving our results. Note that the parameters of stochastic renderer noise are not inferred in the example of Figure 6. Our experiments have shown that we can successfully train artifacts that also extract renderer noise parameters, but excluding these from the list of addresses for which we learn proposal distributions improves robustness when testing with data not sampled from the same model. This corresponds to the well-known technique of adding synthetic variations to training data for transformation invariance, as used by Simard et al. (2003), Varga and Bunke (2003), Jaderberg et al. (2014), and many others.

For the compilation artifacts we use a stack of two LSTMs of 512 hidden units each and an observe-embedding CNN consisting of six convolutions and two linear layers organized as [2×Convolution]-MaxPooling-[3×Convolution]-MaxPooling-Convolution-MaxPooling-Linear-Linear, where the convolutions are 3×3 with successively 64, 64, 64, 128, 128, 128 filters, the max-pooling layers are 2×2 with step size 2, and the resulting embedding vector is of length 1024. All convolutions and linear layers are followed by ReLU activations. Depending on the particular style, each artifact has approximately 20M trainable parameters. Artifacts are trained end-to-end using Adam (Kingma and Ba, 2015) with minibatches of size 128.

- Our method: Baidu 2011 99.8%, Baidu 2013 99.9%, eBay 99.2%, Yahoo 98.4%, reCaptcha 96.4%, Wikipedia 93.6%, Facebook 91.0%
- Bursztein et al. (2014): Baidu 2011 38.68%, Baidu 2013 55.22%, eBay 51.39%, Yahoo 5.33%, reCaptcha 22.67%, Wikipedia 28.29%
- Starostenko et al. (2015): 91.5%, 54.6%
- Gao et al. (2014): 34%, 55%, 34%
- Gao et al. (2013): 51%, 36%
- Goodfellow et al. (2014): reCaptcha 99.8%
- Stark et al. (2015): 90%
Table 1: Captcha recognition rates, per scheme where reported.

Table 1 reports inference results with test images sampled from the model, where we achieve very high recognition rates across the board. The reported results are obtained after approximately 16M training traces. With the resulting artifacts, running inference on a test Captcha takes approximately 100 ms, whereas durations ranging from 500 ms (Starostenko et al., 2015) to 7.95 s (Bursztein et al., 2014) have been reported with segment-and-classify approaches. We also compared our approach with that of Mansinghka et al. (2013). Their method is slow because it must be run anew for each Captcha, taking on the order of minutes to solve one Captcha in our implementation of their method. The probabilistic program must also be written in a way amenable to Markov chain Monte Carlo inference, such as having auxiliary indicator random variables for rendering letters to overcome multimodality in the posterior.

We subsequently investigated how the trained models would perform on Captcha images collected from the web. We identified Wikipedia and Facebook as two major services still making use of textual Captchas, and collected and labeled test sets of 500 images each (Facebook Captchas were collected from a page for accessing groups; Wikipedia Captchas appear on the account creation page). Initially obtaining low recognition rates (below 10%), after several iterations of model modifications (involving tuning of the prior distributions for font size and renderer noise), we were able to achieve 81% and 42% recognition rates on the real Wikipedia and Facebook datasets respectively, considerably higher than the 1% threshold needed to deem a Captcha scheme broken (Bursztein et al., 2011). The fact that we had to tune our priors highlights the issues of model bias and the "synthetic gap" (Zhang et al., 2015) when training models with synthetic data and testing with real data. Note that the synthetic/real boundary is not always clear: for instance, we assume that the Captcha results in Goodfellow et al. (2014) closely correspond to our results with synthetic test data, because the authors have access to Google's true generative process of reCaptcha images for their synthetic training data; Stark et al. (2015) both train and test their model with synthetic data.

In our experiments we also investigated feeding the observe embedding to the LSTM at all time steps versus only in the first time step. We empirically verified that both methods produce equivalent results, but the latter takes significantly (approximately 3 times) longer to train. This is because we are training end-to-end from scratch, and the former setup results in more frequent gradient updates for the observe embedding layer per training trace. (Both Karpathy and Fei-Fei (2015) and Vinyals et al. (2015), who feed CNN output to an RNN only once, use pretrained embedding layers.)

In summary, we only need to write a probabilistic generative model that produces Captchas sufficiently similar to those that we would like to solve. Using our inference compilation framework, we get the inference neural network architecture, training data, and labels for free. If you can create instances of a Captcha, you can break it.

5 Discussion

We have explored making use of deep neural networks for amortizing the cost of inference in probabilistic programming. In particular, we transform an inference problem given in the form of a probabilistic program into a trained neural network architecture that parameterizes proposal distributions during sequential importance sampling. The amortized inference technique presented here provides a framework within which to integrate the expressiveness of universal probabilistic programming languages for generative modeling and the processing speed of deep neural networks for inference. This merger addresses several fundamental challenges associated with its constituents: fast and scalable inference on probabilistic programs, interpretability of the generative model, an infinite stream of labeled training data, and the ability to correctly represent and handle uncertainty.

Our experimental results show that, for the family of models on which we focused, the proposed neural network architecture can be successfully trained to approximate the parameters of the posterior distribution in the sample space with nonlinear regression from the observe space. There are two aspects of this architecture that we are currently working on refining. Firstly, the structure of the neural network is not wholly determined by the given probabilistic program: the invariant LSTM core maintains long-term dependencies and acts as the glue between the embedding and proposal layers that are automatically configured for the address–instance pairs in the program traces. We would like to explore architectures where there is a tight correspondence between the neural artifact and the computational graph of the probabilistic program. Secondly, domain-specific observe embeddings such as the convolutional neural network that we designed for the Captcha-solving task are hand-picked from a range of fully-connected, convolutional, and recurrent architectures and trained end-to-end together with the rest of the architecture. Future work will explore automating the selection of potentially pretrained embeddings.
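The automatic configuration of embedding and proposal layers for address–instance pairs can be sketched as a lazily populated mapping; the layer constructor below is a toy stand-in for the real per-address networks:

```python
class ProposalLayers:
    """Lazily create one proposal head per (address, instance) pair seen
    in the program traces; the shared LSTM core feeds all of them."""

    def __init__(self, make_layer):
        self._make_layer = make_layer   # builds a head for a given address key
        self._layers = {}

    def __call__(self, address, instance, lstm_output):
        key = (address, instance)
        if key not in self._layers:     # first encounter: configure a new head
            self._layers[key] = self._make_layer(key)
        return self._layers[key](lstm_output)

# Toy "layer": tags the LSTM output with the address it proposes for.
layers = ProposalLayers(lambda key: (lambda h: {"addr": key, "params": h}))
out = layers("sample@line12", 0, [0.1, 0.2])
```

Because new heads are created only when a trace first visits an address, the architecture grows with the program's trace structure rather than being fixed in advance, which is what makes universal (unbounded-trace) programs tractable to compile.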

A limitation that comes with not learning the generative model itself—as is done by the models organized around the variational autoencoder (Kingma and Welling, 2014; Burda et al., 2016)—is the possibility of model misspecification (Shalizi et al., 2009; Gelman and Shalizi, 2013). Section 3.2 explains that our training setup is exempt from the common problem of overfitting to the training set. But as demonstrated by the fact that we needed to alter our Captcha model priors to handle real data, we do run the risk of overfitting to the model. We therefore need to keep the generative model as close as possible to the true data-generating process, and to remember that a misspecification towards broadness is preferable to a narrow but uncalibrated model.
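The preference for broad over narrow misspecification has a direct importance-sampling interpretation: when the model (and hence the proposal target) is too narrow relative to reality, importance weights degenerate. A toy sketch, comparing the effective sample size of a broad versus a narrow Gaussian proposal against a standard normal target:

```python
import math
import random

def ess(log_w):
    """Effective sample size of self-normalized importance weights."""
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    s = sum(w)
    return s * s / sum(wi * wi for wi in w)

random.seed(0)
log_target = lambda x: -0.5 * x * x   # log N(0, 1), unnormalized

def run(q_sigma, n=2000):
    xs = [random.gauss(0.0, q_sigma) for _ in range(n)]
    log_q = lambda x: -0.5 * (x / q_sigma) ** 2 - math.log(q_sigma)
    return ess([log_target(x) - log_q(x) for x in xs])

broad_ess = run(3.0)    # broader than the target: inefficient but safe
narrow_ess = run(0.3)   # narrower than the target: weight degeneracy
```

The broad proposal retains a healthy effective sample size, while the narrow one collapses onto its few tail samples; this mirrors why a broad but calibrated model misspecification is the safer failure mode.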


Acknowledgments

We would like to thank Hakan Bilen for his help with the MatConvNet setup and showing us how to use his Fast R-CNN implementation, and Tom Rainforth for his helpful advice. Tuan Anh Le is supported by EPSRC DTA and Google (project code DF6700) studentships. Atılım Güneş Baydin and Frank Wood are supported under DARPA PPAML through the U.S. AFRL under Cooperative Agreement FA8750-14-2-0006, Sub Award number 61160290-111668.


References

  • Arulampalam et al. (2002) M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174–188, 2002.
  • Bahdanau et al. (2014) D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2014.
  • Burda et al. (2016) Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In International Conference on Learning Representations (ICLR), 2016.
  • Bursztein et al. (2011) E. Bursztein, M. Martin, and J. Mitchell. Text-based CAPTCHA strengths and weaknesses. In Proceedings of the 18th ACM Conference on Computer and Communications Security, pages 125–138. ACM, 2011.
  • Bursztein et al. (2014) E. Bursztein, J. Aigrain, A. Moscicki, and J. C. Mitchell. The end is nigh: generic solving of text-based CAPTCHAs. In 8th USENIX Workshop on Offensive Technologies (WOOT 14), 2014.
  • Carpenter et al. (2015) B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. A. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: a probabilistic programming language. Journal of Statistical Software, 2015.
  • Cheng and Druzdzel (2000) J. Cheng and M. J. Druzdzel. AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks. Journal of Artificial Intelligence Research, 2000.
  • Collobert et al. (2011) R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A MATLAB-like environment for machine learning. In BigLearn, NIPS Workshop, EPFL-CONF-192376, 2011.
  • Cotter et al. (2011) A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated gradient methods. In Advances in Neural Information Processing Systems, pages 1647–1655, 2011.
  • Dean et al. (2012) J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. aurelio Ranzato, A. Senior, P. Tucker, K. Yang, Q. V. Le, and A. Y. Ng. Large scale distributed deep networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1223–1231. Curran Associates, Inc., 2012.
  • Doucet and Johansen (2009) A. Doucet and A. M. Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, 12(656–704):3, 2009.
  • Everingham et al. (2010) M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
  • Gao et al. (2013) H. Gao, W. Wang, J. Qi, X. Wang, X. Liu, and J. Yan. The robustness of hollow CAPTCHAs. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, pages 1075–1086. ACM, 2013.
  • Gao et al. (2014) H. Gao, W. Wang, Y. Fan, J. Qi, and X. Liu. The robustness of “connecting characters together” CAPTCHAs. Journal of Information Science and Engineering, 30(2):347–369, 2014.
  • Gelman and Shalizi (2013) A. Gelman and C. R. Shalizi. Philosophy and the practice of Bayesian statistics. British Journal of Mathematical and Statistical Psychology, 66(1):8–38, 2013.
  • Gershman and Goodman (2014) S. J. Gershman and N. D. Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society, 2014.
  • Girshick (2015) R. Girshick. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440–1448, 2015.
  • Goodfellow et al. (2014) I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet. Multi-digit number recognition from street view imagery using deep convolutional neural networks. In International Conference on Learning Representations (ICLR), 2014.
  • Goodman et al. (2008) N. D. Goodman, V. K. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: A language for generative models. In Uncertainty in Artificial Intelligence, 2008.
  • Gordon et al. (2014) A. D. Gordon, T. A. Henzinger, A. V. Nori, and S. K. Rajamani. Probabilistic programming. In Future of Software Engineering, FOSE 2014, pages 167–181. ACM, 2014.
  • Gu et al. (2015) S. Gu, Z. Ghahramani, and R. E. Turner. Neural adaptive sequential Monte Carlo. In Advances in Neural Information Processing Systems, pages 2611–2619, 2015.
  • Hinton et al. (1995) G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The wake-sleep algorithm for unsupervised neural networks. Science, 268(5214):1158–1161, 1995.
  • Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
  • Jaderberg et al. (2014) M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. In Workshop on Deep Learning, NIPS, 2014.
  • Johnson et al. (2016) M. J. Johnson, D. Duvenaud, A. B. Wiltschko, S. R. Datta, and R. P. Adams. Structured VAEs: Composing probabilistic graphical models and variational autoencoders. arXiv preprint arXiv:1603.06277, 2016.
  • Karpathy and Fei-Fei (2015) A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
  • Kingma and Ba (2015) D. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
  • Kingma and Welling (2014) D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
  • Kulkarni et al. (2015) T. D. Kulkarni, P. Kohli, J. B. Tenenbaum, and V. K. Mansinghka. Picture: a probabilistic programming language for scene perception. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • Lempitsky and Zisserman (2010) V. Lempitsky and A. Zisserman. Learning to count objects in images. In Advances in Neural Information Processing Systems, pages 1324–1332, 2010.
  • Lunn et al. (2000) D. J. Lunn, A. Thomas, N. Best, and D. Spiegelhalter. WinBUGS–a Bayesian modelling framework: concepts, structure, and extensibility. Statistics and Computing, 10(4):325–337, 2000.
  • Mansinghka et al. (2013) V. Mansinghka, T. D. Kulkarni, Y. N. Perov, and J. Tenenbaum. Approximate Bayesian image interpretation using generative probabilistic graphics programs. In Advances in Neural Information Processing Systems, pages 1520–1528, 2013.
  • Mansinghka et al. (2014) V. Mansinghka, D. Selsam, and Y. Perov. Venture: a higher-order probabilistic programming platform with programmable inference. arXiv preprint arXiv:1404.0099, 2014.
  • Minka et al. (2014) T. Minka, J. Winn, J. Guiver, S. Webster, Y. Zaykov, B. Yangel, A. Spengler, and J. Bronskill. Infer.NET 2.6, 2014. Microsoft Research Cambridge.
  • Nishihara et al. (2013) R. Nishihara, T. Minka, and D. Tarlow. Detecting parameter symmetries in probabilistic models. arXiv preprint arXiv:1312.5386, 2013.
  • Paige and Wood (2016) B. Paige and F. Wood. Inference networks for sequential Monte Carlo in graphical models. In Proceedings of the 33rd International Conference on Machine Learning, volume 48 of JMLR, 2016.
  • Paige et al. (2014) B. Paige, F. Wood, A. Doucet, and Y. W. Teh. Asynchronous anytime sequential Monte Carlo. In Advances in Neural Information Processing Systems, pages 3410–3418, 2014.
  • Prechelt (1998) L. Prechelt. Early stopping — but when? In Neural Networks: Tricks of the Trade, pages 55–69. Springer, 1998.
  • Rainforth et al. (2016) T. Rainforth, C. A. Naesseth, F. Lindsten, B. Paige, J.-W. van de Meent, A. Doucet, and F. Wood. Interacting particle Markov chain Monte Carlo. In Proceedings of the 33rd International Conference on Machine Learning, volume 48 of JMLR: W&CP, 2016.
  • Reed and de Freitas (2016) S. Reed and N. de Freitas. Neural programmer-interpreters. In International Conference on Learning Representations (ICLR), 2016.
  • Ritchie et al. (2016a) D. Ritchie, A. Stuhlmüller, and N. D. Goodman. C3: Lightweight incrementalized MCMC for probabilistic programs using continuations and callsite caching. In AISTATS 2016, 2016a.
  • Ritchie et al. (2016b) D. Ritchie, A. Thomas, P. Hanrahan, and N. Goodman. Neurally-guided procedural models: Amortized inference for procedural graphics programs using neural networks. In Advances In Neural Information Processing Systems, pages 622–630, 2016b.
  • Shalizi et al. (2009) C. R. Shalizi et al. Dynamics of Bayesian updating with dependent data and misspecified models. Electronic Journal of Statistics, 3:1039–1074, 2009.
  • Simard et al. (2003) P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition – Volume 2, ICDAR ’03, pages 958–962, Washington, DC, 2003. IEEE Computer Society.
  • Stark et al. (2015) F. Stark, C. Hazırbaş, R. Triebel, and D. Cremers. Captcha recognition with active deep learning. In GCPR Workshop on New Challenges in Neural Computation, Aachen, Germany, 2015.
  • Starostenko et al. (2015) O. Starostenko, C. Cruz-Perez, F. Uceda-Ponga, and V. Alarcon-Aquino. Breaking text-based CAPTCHAs with variable word and character orientation. Pattern Recognition, 48(4):1101–1112, 2015.
  • Sutskever et al. (2014) I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
  • Varga and Bunke (2003) T. Varga and H. Bunke. Generation of synthetic training data for an HMM-based handwriting recognition system. In Seventh International Conference on Document Analysis and Recognition, pages 618–622. IEEE, 2003.
  • Vedaldi and Lenc (2015) A. Vedaldi and K. Lenc. MatConvNet – convolutional neural networks for MATLAB. In Proceedings of the ACM International Conference on Multimedia, 2015.
  • Venugopalan et al. (2014) S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko. Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729, 2014.
  • Vinyals et al. (2015) O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015.
  • von Ahn et al. (2003) L. von Ahn, M. Blum, N. J. Hopper, and J. Langford. CAPTCHA: Using hard AI problems for security. In International Conference on the Theory and Applications of Cryptographic Techniques, pages 294–311. Springer, 2003.
  • Wingate et al. (2011) D. Wingate, A. Stuhlmüller, and N. Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, pages 770–778, 2011.
  • Wood et al. (2014) F. Wood, J. W. van de Meent, and V. Mansinghka. A new approach to probabilistic programming inference. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, pages 1024–1032, 2014.
  • Zhang et al. (2015) X. Zhang, Y. Fu, S. Jiang, L. Sigal, and G. Agam. Learning from synthetic data using a stacked multichannel autoencoder. In 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), pages 461–464, Dec 2015. doi: 10.1109/ICMLA.2015.199.