1 Introduction
In the context of Turing-complete, higher-order probabilistic programming languages [1, 2, 3], a probabilistic program is simultaneously a generative model and a procedure for sampling from it. Every probabilistic programming procedure is program text that describes how to generate a sample value conditioned on the values of its arguments; that is, a probabilistic programming procedure is a constructivist description of a conditional distribution. Deterministic procedures merely encode particularly simple, degenerate conditional distributions.
Higher-order probabilistic programming languages open up the possibility of doing inference over generative-model program text directly, via a generative prior over program text and the higher-order functionality of eval. This paper is a first step towards the ambitious goal of inferring generative-model program text directly from example data. Inference in the space of program text is hard so, as a start, we present an account of our effort to directly infer sampler program text that, when evaluated repeatedly, produces samples with summary statistics similar to those of observational data.
There are two reasons to make this specific effort itself. One is the potential automation of the development of new entries in the special collection of efficient sampling procedures that humankind has painstakingly developed over many decades for common distributions, for example the Marsaglia [4] and Box-Muller [5] samplers for the normal distribution (see [6] for others). In this paper we develop preliminary evidence suggesting that such automated discovery might indeed be possible. In particular, we perform successful leave-one-out experiments in which we are able to learn a sampling procedure for one distribution, e.g. the Bernoulli, given only program text for the others and observed samples. We do this by imposing a hierarchical generative model over sampling-procedure text, fitting it to out-of-sample, human-written sampler program text, then inferring the program text for the left-out random variate distribution type given only sample values drawn from it.

The second reason for making such an effort has to do with "compiling" probabilistic programs. What we mean by compilation of probabilistic programs is somewhat broader than both transformational compilation [7], which compiles a probabilistic program into an MH sampler for the same model, and ordinary compilation of a probabilistic program to machine code that encodes a parallel forward inference algorithm [8]. By probabilistic program compilation we mean the automatic generation of program text that, when run, generates samples distributed ideally identically to the posterior distribution of quantities of interest in the original program, conditioned on the observed data. Concisely: given samples resulting from posterior inference in a probabilistic program, our aim is to learn program text that, when evaluated, directly generates samples from the same distribution. The reason for expressing and approaching compilation in this generality is that simpler approaches to generalizing probabilistic programming posterior samples via less-expressive model families will suffer precisely because of the compromise in expressivity. Distributions over expressions are valid posterior marginals in higher-order probabilistic programming languages; compiled probabilistic programs must be capable of generating the same. This effort is also a first step towards such a compiler.
2 Related Work
Our approach to learning probabilistic programs relates to both program induction and statistical generalization from sampled observations. The former is usually treated as search in the space of program text where the objective is to find a deterministic function that exactly matches outputs given parameters. The latter, generalizing from data, is usually referred to as either density estimation or learning.
2.1 Automatic programming
An extensive review and introduction is given in [9] and its references. Modern examples of functional inductive programming include IGOR2 [10], in which search is used to find programs that match constraints specified by equations, and MagicHaskeller [11], which uses traditional search and brute-force enumeration to find programs that obey constraints specified in terms of parameter/output pairs. Similar search procedures are used to find constraint-satisfying hypotheses in inductive logic programming [12, 13] and probabilistic variants thereof [14, 15, 16]. Alternative search techniques such as genetic programming have also been used to find constraint-satisfying programs, and some of this work has suggested that search is easier in the space of functional programming languages than in imperative ones [17]. This insight supports an interesting choice made by Liang et al. [18] of searching over the space of combinatory logic rather than lambda calculus expressions. Our work is framed similarly to theirs (and to the theoretical suppositions in [19]) in that we impose a prior on program text and use Bayesian inference machinery to infer a distribution over program text given observations. Unlike [18], we learn stochastic programs from sampled observation data rather than deterministic programs from input/output pairs.

2.2 Generalizing from Data and Automated Modelling
Generalizing from data is one of the main objectives of the fields of machine learning and statistics. It is important to note that what we are doing here is a substantial departure from almost all prior art in these fields in the sense that the learned representation of the observed data is that of generative sampling program text rather than, say, a parametric or nonparametric model from which samples can be drawn using some extrinsic algorithm. In our work the model is the sampler itself and it is represented as program code.
The greedy search over generative model structures in [20] and kernel compositions in [21] are both related to our work in the sense that they search over a highly expressive generalization class in an unsupervised manner for models that explain observational data well. In contrast, we do full Bayesian inference, not greedy search, and the model family over which we search is ultimately more expressive, as it is a higher-order language with stochastic primitives and, as a result, is capable of representing all computable probability distributions [2].

Our work relies heavily on a Turing-complete, higher-order probabilistic programming language and system called Anglican [1], which borrows some of its modelling-language syntax and semantics from Venture [3] and generally inherits principles from Church [2]. What differentiates Anglican most substantially from the others is that it introduced and uses particle Markov chain Monte Carlo (PMCMC) [22] for probabilistic programming inference. Because we use Anglican, we use PMCMC and Metropolis-Hastings algorithms for inference.
3 Approach
Our approach can be described in terms of Markov chain Monte Carlo (MCMC) approximate Bayesian computation (ABC) [23] targeting

π(T, X | Y) ∝ p(T) p(X | T) g(S(X), S(Y)),    (1)

where, at a high level, g is a distance between summary statistics S(·) computed on the observed data Y and on data X generated by interpreting latent sampler program text T.
Consider first a single given data-generating distribution F(· | θ) with parameter vector θ. Let Y = {y_1, …, y_N} be a set of samples from F. Consider the task of learning program text T that, when repeatedly interpreted, returns samples whose distribution is close to F. Let X = {x_1, …, x_M} be a set of samples generated by repeatedly interpreting T, M times. Let S(·) be a summary function of a set of samples and let g be an unnormalized distribution function that returns high probability when S(X) ≈ S(Y). We refer to g as a penalty, distance, or compatibility function interchangeably.
We use probabilistic programming to write and perform inference in such a model, i.e. to generate samples of T from the marginal of (1) and from the generalizations of it to come. The particular system we employ [1] uses PMCMC and MH for inference. Refer to the probabilistic program code in Figure 1, where the first line establishes a correspondence between T and the variable program-text and then samples it from p(T), where productions is an adaptor-grammar-like [24] prior on program text described in Section 3.1. In this particular example Y is implicitly specified, since the learning goal here is to find a sampler for the standard normal distribution. Also, X corresponds to the program variable samples. Here S(X) and g are computed on the last four lines of the program, with S implicitly defined as returning a four-dimensional vector consisting of the estimated mean, variance, skewness, and kurtosis of the set of samples drawn from T. The distance function g is implicitly defined to be a multivariate normal with mean equal to the target summary vector and diagonal covariance. Note that this means we are seeking sampler text whose output is distributed with mean 0, variance 1, skew 0, and kurtosis 0, and that we penalize deviations from that by a squared exponential loss function with bandwidth named noise-level in the code.

This example highlights an important generalization of the original description of our approach. For the standard normal example we chose a form of g such that we can compute the summary statistics of Y analytically. There are at least three kinds of scenarios in which S(Y) can be computed in different ways. The first occurs when we search for efficient code for sampling from known distributions; in many such cases, as in the standard normal case just described, the summary statistics of Y can be computed analytically. The second occurs when we can only sample from the target. This corresponds to situations where, for instance, a running, computationally expensive MCMC sampler can be asked to produce additional samples; this is how we frame compilation of probabilistic programs. The third (how we originally described our approach) is the fixed-dataset-cardinality setting, and corresponds to learning a generative-model program text for arbitrary observed data.
[assume box-muller-normal
  (lambda (mean std)
    (+ mean (* std
      (* (cos (* 2 (* 3.14159
                     (uniform-continuous 0.0 1.0))))
         (sqrt (* -2
                  (log (uniform-continuous 0.0 1.0))))))))]

[assume poisson
  (lambda (rate)
    (begin (define L (exp (* -1 rate)))
           (define inner-loop (lambda (k p)
             (if (< p L) (dec k)
                 (begin (define u (uniform-continuous 0 1))
                        (inner-loop (inc k) (* p u))))))
           (inner-loop 1 (uniform-continuous 0 1))))]
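For comparison, the multiplication algorithm that the Poisson sampler above implements (due to Knuth [25]) can be written as the following Python sketch; the function name is ours:

```python
import math
import random

def knuth_poisson(rate, rng=random):
    """Sample from Poisson(rate) by multiplying uniform variates until the
    running product drops below exp(-rate); the number of factors used,
    minus one, is the sample."""
    limit = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1
```

The Anglican version above is the same loop written as a tail recursion, with the first uniform draw folded into the initial call.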

Figure 2 illustrates another important generalization of the formulation in (1). When learning a standard normal sampler we did not have to take parameter values into account. Interesting sampler program text is endowed with arguments, allowing it to generate samples from an entire family of parameterised distributions. Consider the well-known Box-Muller algorithm shown in Figure 3. It is parameterized by mean and standard deviation parameters; for this reason we will refer to it and others like it as conditional distribution samplers. Learning conditional distribution sampler program text requires recasting our MCMC-ABC target slightly to include the parameter θ of the distribution:

π(T, X | Y, θ) ∝ p(T) p(X | T, θ) g(S(X), S(Y)).    (2)
Here, in order to proceed, we must begin to make approximating assumptions. This is because in our case the prior over θ would need to be truly improper: our learned sampler program text should work for all possible input arguments and not simply a high-prior-probability subset of values. Assuming that program text that works for a few settings of the input parameters is fairly likely to generalize well to other parameter settings, we approximately marginalize our MCMC-ABC target (2) by choosing a small finite set of parameters {θ_1, …, θ_J}, yielding our approximate marginalized MCMC-ABC target:

π(T, X_1, …, X_J | Y_1, …, Y_J) ∝ p(T) ∏_j p(X_j | T, θ_j) g(S(X_j), S(Y_j)).    (3)
The probabilistic program for learning conditional sampler program text in Figure 2 shows an example of this kind of approximation. It samples from the candidate program text once per parameter setting, accumulating summary-statistic penalties for each invocation. In this case each individual summary-distance computation involves computing both a G-test statistic

G = 2 Σ_i O_i ln(O_i / E_i),

where O_i is the number of samples in X that take value i and E_i is the corresponding expected count, and its corresponding p-value under the null hypothesis that X are samples from F(· | θ). Since the G-test statistic is approximately χ²-distributed, we can construct g in this case by computing the probability of falsely rejecting the null hypothesis. Falsely rejecting a null hypothesis is equivalent to flipping a coin with probability given by the p-value of the test and having it turn up heads; these are the summary-statistic penalties accumulated in the observe lines in Figure 2.

As an aside, in the probabilistic programming compilation context θ could be all of the observe'd data in the original program. Parameterising compilation in this way links our approach to that of [26].
3.1 Grammar and production rules
As we have the expressive power of a higher-order probabilistic programming language at our disposal, our prior over conditional distribution sampler program text is quite expressive. At a high level it is similar to the adaptor grammar [24] prior used in [18] but diverges in details, particularly those having to do with the creation of local environments and the conditioning of subexpression choices on type signatures. In pseudocode, the production rules for an expression e of type t can be expressed as follows:

- e ↦ c, a random constant of type t. Constants with types integer, real, etc. are sampled from a Chinese restaurant process (CRP) representation of a marginalized discrete Dirichlet process prior pair, with a separate concentration parameter for each type, where the base distribution is itself a mixture distribution. For example, for type real we use a mixture of (normal 0 10), (uniform-continuous -100 100), and a uniform distribution over common constants.

- e ↦ (p e_1 … e_n), where p is a primitive or stochastic procedure in the global environment, sampled randomly among those with output type t.

- e ↦ (f e_1 … e_n), where f is a compound procedure sampled from a CRP representation of a marginalized discrete Dirichlet process prior pair. The base distribution generates compound procedures with return type t, a Poisson-distributed argument count, and random parameter types. The body of the compound procedure is generated using the same production rules, given an environment that incorporates the argument variable names and values.

- e ↦ a let expression, whose body is generated in an extended environment with the newly named variable and its value added.

- e ↦ a variable of a compatible type sampled from the current environment.

- e ↦ a recursive call to the compound procedure currently being generated.
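The CRP draws used for constants and compound procedures above can be sketched generically as follows. This is a textbook Chinese restaurant process step, not the paper's implementation; the base-distribution and concentration values in the usage are illustrative:

```python
import random

def crp_draw(table_counts, alpha, base_sample, rng=random):
    """One CRP draw: reuse an existing value with probability proportional
    to its count, or draw a fresh value from the base distribution with
    probability proportional to the concentration alpha."""
    total = sum(table_counts.values()) + alpha
    r = rng.random() * total
    for value, count in table_counts.items():
        r -= count
        if r < 0:
            table_counts[value] += 1
            return value
    fresh = base_sample()
    table_counts[fresh] = table_counts.get(fresh, 0) + 1
    return fresh
```

The reuse behaviour is what lets the prior share constants and compound procedures across a generated program: once a value has been drawn it becomes more likely to be drawn again.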
To avoid numerical errors while interpreting generated programs we replace functions like log(a) with safe-log(a), which returns a default finite value when a ≤ 0, and uniform-continuous(a, b) with safe-uc(a, b), which swaps its arguments when b < a and degenerates to returning a when a = b.
The set of types we used for our experiments was {real, bool}, and the general set of procedures in the global environment included +, -, *, safe-div, and safe-uc.
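Such guards can be sketched in a few lines of Python. The exact fallback values used by the paper's interpreter are not specified here, so the choices below (0.0 for the protected log) are illustrative assumptions:

```python
import math
import random

def safe_log(a):
    """Protected log: fall back to 0.0 for non-positive input
    (the fallback value is an illustrative assumption)."""
    return math.log(a) if a > 0 else 0.0

def safe_uc(a, b, rng=random):
    """Protected uniform-continuous: swap misordered bounds and
    degenerate to a point mass when the bounds coincide."""
    if b < a:
        a, b = b, a
    return a if a == b else rng.uniform(a, b)
```

Guards like these matter because randomly generated program text routinely feeds primitives arguments outside their mathematical domains, and a single runtime error would otherwise abort an entire MCMC proposal.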
3.2 Production rule probabilities
While it is possible to manually specify production-rule probabilities for the grammar in Section 3.1, we instead took a hierarchical Bayesian approach, learning them from human-written sampler source code. To do this we translated existing implementations of common one-dimensional statistical distribution samplers [6] into Anglican source; examples are provided in Figure 3. Conveniently, all of them require only one stochastic procedure, uniform-continuous, so we also include only that single stochastic procedure in our grammar.
We compute held-out production-rule prior probabilities from this corpus in a cross-validation manner, so that when we are inferring a probabilistic program to sample from a given distribution we update our priors using counts from all other sampling code in the corpus, specifically excluding the sampler we are attempting to learn. Our production-rule probability estimates are smoothed by Dirichlet priors. Note that in the following experiments (Sections 4.3 and 4.4) the production-rule priors were updated and then fixed during inference. True hierarchical coupling and joint inference approaches are straightforward from a probabilistic programming perspective [27] but result in inference runs that take longer to compute.
4 Experiments
The experiments we perform illustrate all three use cases outlined for automatically learning probabilistic programs. We begin by illustrating the expressiveness of our prior over sampler program text in Section 4.1. We then report results from experiments in which we test our approach in all three scenarios for computing the ABC penalty g. The first set of experiments (Section 4.2) tests our ability to learn probabilistic programs that produce samples from known one-dimensional probability distributions; in these experiments g either probabilistically conditions on the values of one-sample statistical hypothesis tests or on approximate moment matching. The second set of experiments (Section 4.3) addresses the case where only a finite number of samples from an unknown real-world source are provided. The final experiment (Section 4.4) is a preliminary study in probabilistic program compilation, where it is possible to gather a continuing stream of samples.

4.1 Samples from sampled probabilistic programs
To illustrate the flexibility of our prior, specifically the production rules we employ, we show samples generated by probabilistic programs sampled from the prior in Section 3.1. In Figure 4 we show six histograms of samples from six sampled probabilistic programs from our prior over probabilistic programs. Such randomly generated samplers constructively define considerably different distributions. Note in particular the variability of the domain, variance, and even number of modes.
(lambda (par stack-id) (* (begin (define sym0 0.0)
(exp (safe-uc 1.0 (safe-sqrt (safe-uc
(safe-div (safe-uc 0.0 (safe-uc 0.0 3.14159))
par) (+ 1.0 (safe-uc (begin (define sym2
(lambda (var1 var2 stack-id) (dec var2)))
(sym2 (safe-uc 2.0 (* (safe-uc 0.0 (begin
(define sym4 (safe-uc sym0 (* (+ (begin
(define sym5 (lambda (var1 var2 stack-id)
(safe-div (+ (safe-log (dec 0.0)) 1.0) var1)))
(sym5 (exp par) 1.0 0)) 1.0) 1.0))) (if (< (safe-uc
par sym4) 1.0) sym0 (safe-uc 0.0 1.0)))) sym0))
(safe-div sym0 (exp 1.0)) 0)) 0.0))))))) par))

(lambda (stack-id)
(* 2.0 (* (*
(* 1.0 (safe-uc 0.0 2.0))
(safe-uc (safe-uc 4.0
(+ (safe-log 2.0) 1.0))
(* (safe-div 2.0
55.61617747203855)
(if (< (safe-uc
(safe-uc
27.396810474207317
(safe-uc 1.0 2.0)) 2.0) 2.0)
4.0 1.0)))) 1.0)))

4.2 Learning sampler code for common onedimensional distributions
Source code exists for efficiently sampling from many if not all common onedimensional distributions. We conducted experiments to test our ability to automatically discover such sampling procedures and found encouraging results.
In particular, we performed a set of leave-one-out style experiments to infer sampler program text for six common one-dimensional distributions, among them the Bernoulli. For each distribution we performed MCMC-ABC inference, approximately marginalizing over the parameter space using a small random set of parameters and conditioning on statistical hypothesis tests or on moment matching, as appropriate. Note that the pre-training of the hierarchical program-text prior was never given the text of the sampler for the distribution being learned.
Representative histograms of samples from the best posterior program-text sample discovered, in terms of summary-statistics match, are shown in Figure 5. A pleasing result is the discovery of an exact sampler program, the text of which is shown in Figure 7. Figure 6 shows further examples of inferred sampler text. How to fully characterize the divergence between learned sampling algorithms and the true distribution, via mechanisms other than exhaustive computation and hypothesis testing, remains an open question.
4.3 Generalizing arbitrary data distributions
We also explored using our approach to learn generative models, in the form of sampler program text, for real-world data of unknown distribution. We arbitrarily chose three continuous indicator features from a credit approval dataset [28, 29] and inferred sampler program text using two-sample Kolmogorov-Smirnov distribution-equality tests (against the empirical data distribution), analogously to the G-test described before. Histograms of samples from the best inferred sampler program text versus the training empirical distributions are shown in Figure 8; an example inferred program is shown in Figure 6 (right). The data-distribution representation, despite being expressed in the form of sampler program text, matches salient characteristics of the empirical distribution well.
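The two-sample Kolmogorov-Smirnov statistic used here is the maximum absolute difference between the two empirical CDFs. A standard library-only sketch (the paper's test also converts this statistic to a p-value, which is omitted here):

```python
def ks_two_sample_statistic(xs, ys):
    """Two-sample KS statistic: max absolute gap between the empirical
    CDFs of xs and ys, computed with a merge over the sorted samples."""
    xs, ys = sorted(xs), sorted(ys)
    n, m = len(xs), len(ys)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        v = min(xs[i], ys[j])
        # advance past all ties of the current value on both sides
        while i < n and xs[i] == v:
            i += 1
        while j < m and ys[j] == v:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d
```

Identical samples give a statistic of zero, while samples with disjoint supports give the maximal value of one.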
4.4 Compilation of probabilistic programs
MCMC sampling, particularly in a Bayesian context, is usually quite costly and, further, requires large amounts of storage to represent the learned distribution as samples. Learning a representation of the posterior in terms of a sampling procedure that directly generates approximate samples from the posterior distribution of interest could potentially improve both, particularly for the purposes of repeated posterior predictive inference. In probabilistic programming, where sample-based posterior representations are the only option, the problem is particularly acute. Further, higher-order probabilistic programming languages require that the expressivity class of the approximating distribution be at least that of another probabilistic program. While the ultimate aim of compiling probabilistic program inference into a learned program that samples directly from the posterior of interest remains quite far off, our preliminary experiments are encouraging.
To explore this possibility we took an uncollapsed Beta-Binomial model with prior distribution theta ~ Beta(1, 1), and used Metropolis-Hastings to infer a sample-based representation of the posterior distribution over theta given four successful trials. The corresponding probabilistic program is given in Figure 9 (left). We then used our approach to learn a probabilistic program that, when repeatedly invoked, produces samples that statistically match the empirical posterior distribution. Examples of inferred probabilistic procedures are given in Figure 9 (right). The analytical posterior distribution in this case is Beta(5, 1), to which we found good approximations. Note that in the probabilistic program compilation experiment additional primitives, including beta, normal, and other higher-order stochastic procedures, were added to the program-text generative model.
[assume theta (beta 1.0 1.0)]
[observe (flip theta) True]
[observe (flip theta) True]
[observe (flip theta) True]
[observe (flip theta) True]
[predict theta]

[assume theta (safe-beta 4.440 1.0)]
[assume theta (safe-sqrt (safe-beta (safe-log 11.602) 1.0))]
[assume theta (safe-beta (safe-sqrt 27.810) 1.0)]
[assume theta (beta 5.0 1.0)]
[predict theta]
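The analytic posterior quoted above follows from Beta-Bernoulli conjugacy, which a few lines of Python can check (function names are ours):

```python
def beta_bernoulli_posterior(alpha, beta, successes, failures):
    """Conjugate update: a Beta(alpha, beta) prior with Bernoulli
    observations gives a Beta(alpha + successes, beta + failures)
    posterior."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)
```

For the program in Figure 9 (left), the update gives Beta(1 + 4, 1 + 0) = Beta(5, 1) with mean 5/6, consistent with the compiled samplers in Figure 9 (right), which all concentrate mass near 1.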

5 Discussion
Our novel approach to program synthesis via probabilistic programming raises at least as many questions as it answers. One key high-level question this kind of work sharpens is: what, really, is the goal of program synthesis? By framing program synthesis as a probabilistic inference problem we are implicitly naming our goal to be that of estimating a distribution over programs that obey some constraints, rather than a search for a single best program that does the same. On one hand, the notion of regularising via a generative model is natural, as doing so predisposes inference towards discovery of programs that preferentially possess characteristics of interest (length, readability, etc.). On the other hand, exhaustive computational inversion of a generative model that includes evaluation of program text will clearly remain intractable for the foreseeable future. For this reason greedy and stochastic search inference strategies are essentially the only options available. We employ the latter, and MCMC in particular, to explore the posterior distribution of programs whose outputs match constraints, knowing full well that its actual effect in this problem domain, in finite time, is more or less that of stochastic search. We could add an annealing temperature and schedule [30] to clarify our use of MCMC as search; however, while ergodic, our system is sufficiently stiff not to require quenching (and as a result almost certainly will not achieve maxima in general).
It is pleasantly surprising, however, that the Monte Carlo techniques we use were able to find exemplar programs in the posterior distribution that do a good job of generalising the observed data in the experiments we report. It remains an open question whether sampling procedures are the best stochastic search technique to use for this problem in general. Perhaps by directly framing the problem as one of search we might do better, particularly if our goal is a single best program. Techniques ranging from genetic algorithms [31] to Monte Carlo tree search [32] all show promise and bear consideration.

One interesting way to take this work forward is to introduce techniques from the cumulative/incremental learning community [33], perhaps by adding time-dependent and hierarchical dimensions to the program-text generative model. In the specific context of learning sampler program text, it would be convenient if, for instance when learning the program text for sampling from a parameterised normal distribution, one had access to an already-learned subroutine for sampling from a standard normal. In related work from the field of inductive programming, large gains in performance were observed when the learning task was structured in this way [34].
Our example inference tasks are just the start. What inspired and continues to inspire us is the internal experience of our own ability to reason about procedure. Given examples, humans clearly are able to generate program text for procedures that compute or otherwise match those examples. Humans can physically simulate Turing machines and, it would seem clear, are capable of doing something at least as powerful when deducing the action of a particular piece of program text from the text itself. No candidate artificial intelligence solution will be complete without such an ability; those without it will always be deficient in the sense that humans evidently can internally represent and reason about procedure. Perhaps some generalised representation of procedure is the actual expressivity class of human reasoning. It certainly cannot be less.
Acknowledgments
The authors thank many people for their help, wholesome discussions, suggestions, and comments, including Brooks Paige, Jan-Willem van de Meent, David Tolpin and Tejas Kulkarni. This work was supported by a Xerox faculty research award and a Somerville College scholarship. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of any of the above sponsors.
This material is based on research sponsored by DARPA through the U.S. Air Force Research Laboratory under Cooperative Agreement number FA8750-14-2-0004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation hereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, the U.S. Air Force Research Laboratory, or the U.S. Government.
References
 Wood et al. [2014] Frank Wood, Jan-Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. 2014.
 Goodman et al. [2008] Noah D. Goodman, Vikash K. Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum. Church: A language for generative models. pages 220–229, 2008.
 Mansinghka et al. [2014] Vikash Mansinghka, Daniel Selsam, and Yura Perov. Venture: a higherorder probabilistic programming platform with programmable inference. arXiv preprint arXiv:1404.0099, 2014.
 Marsaglia and Bray [1964] George Marsaglia and Thomas A Bray. A convenient method for generating normal variables. 6(3):260–264, 1964.
 Box et al. [1958] George EP Box, Mervin E Muller, et al. A note on the generation of random normal deviates. 29(2):610–611, 1958.
 Devroye [1986] Luc Devroye. Nonuniform random variate generation, 1986.
 Wingate et al. [2011] David Wingate, Andreas Stuhlmueller, and Noah D Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. page 131, 2011.
 Paige and Wood [2014] Brooks Paige and Frank Wood. A compilation target for probabilistic programming languages. 2014.
 Gulwani et al. [2014] Sumit Gulwani, Emanuel Kitzelmann, and Ute Schmid. Approaches and Applications of Inductive Programming (Dagstuhl Seminar 13502). 3(12):43–66, 2014. ISSN 21925283. doi: http://dx.doi.org/10.4230/DagRep.3.12.43. URL http://drops.dagstuhl.de/opus/volltexte/2014/4507.
 Kitzelmann [2009] Emanuel Kitzelmann. Analytical inductive functional programming. pages 87–102. Springer, 2009.
 Katayama [2011] Susumu Katayama. Magichaskeller: System demonstration. page 63, 2011.
 Muggleton and Feng [1992] Stephen Muggleton and Cao Feng. Efficient induction of logic programs. 38:281–298, 1992.
 Muggleton and De Raedt [1994] Stephen Muggleton and Luc De Raedt. Inductive logic programming: Theory and methods. 19:629–679, 1994.
 Muggleton [1996] Stephen Muggleton. Stochastic logic programs. 32:254–264, 1996.
 Kersting [2005] Kristian Kersting. An inductive logic programming approach to statistical relational learning. pages 1–228. IOS Press, 2005. URL http://people.csail.mit.edu/kersting/FAIAilpsrl/.
 De Raedt and Kersting [2008] Luc De Raedt and Kristian Kersting. Probabilistic inductive logic programming. Springer, 2008.
 Briggs and O’neill [2006] Forrest Briggs and Melissa O’neill. Functional genetic programming with combinators. pages 110–127, 2006.
 Liang et al. [2010] Percy Liang, Michael I Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. pages 639–646, 2010.
 Mansinghka [2009] Vikash Kumar Mansinghka. Natively probabilistic computation. PhD thesis, Massachusetts Institute of Technology, 2009.
 Grosse et al. [2012] Roger Grosse, Ruslan R Salakhutdinov, William T Freeman, and Joshua B Tenenbaum. Exploiting compositionality to explore a large space of model structures. 2012.
 Duvenaud et al. [2013] David Duvenaud, James Robert Lloyd, Roger Grosse, Joshua B Tenenbaum, and Zoubin Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. 2013.
 Andrieu et al. [2010] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010.
 Marjoram et al. [2003] Paul Marjoram, John Molitor, Vincent Plagnol, and Simon Tavaré. Markov chain monte carlo without likelihoods. 100(26):15324–15328, 2003.
 Johnson et al. [2007] Mark Johnson, Thomas L Griffiths, and Sharon Goldwater. Adaptor grammars: A framework for specifying compositional nonparametric bayesian models. 19:641, 2007.
 Knuth [1998] Donald E Knuth. The art of computer programming, 3rd edn., vol. 2. 1998.

 Hwang et al. [2011] Irvin Hwang, Andreas Stuhlmüller, and Noah D Goodman. Inducing probabilistic programs by Bayesian program merging. 2011.
 Maddison and Tarlow [2014] Chris J Maddison and Daniel Tarlow. Structured generative models of natural source code. 2014.
 Quinlan [1987] J. Ross Quinlan. Simplifying decision trees. 27(3):221–234, 1987.
 Bache and Lichman [2013] K. Bache and M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
 Van Laarhoven and Aarts [1987] Peter JM Van Laarhoven and Emile HL Aarts. Simulated annealing. Springer, 1987.
 Poli et al. [2008] Riccardo Poli, William B Langdon, Nicholas F McPhee, and John R Koza. A field guide to genetic programming. Lulu. com, 2008.
 Browne et al. [2012] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. 4(1):1–43, 2012.
 Dechter et al. [2013] Eyal Dechter, Jon Malmaud, Ryan P Adams, and Joshua B Tenenbaum. Bootstrap learning via modular concept discovery. pages 1302–1309. AAAI Press, 2013.
 Henderson [2010] Robert Henderson. Incremental learning in inductive programming. pages 74–92. Springer, 2010.