For computers to program computers, we must first address how programming problems will be represented and how performance will be evaluated. In the field of program synthesis, the two main approaches for specifying problems are: (a) by examples [Gulwani, Harris, and Singh2012], in which a number of example input-output pairs are provided as input and the goal is to output a function satisfying those examples while possibly minimizing other criteria (e.g., being short); and (b) by specification [Manna and Waldinger1980], in which a formal specification in some particular language is given. More generally, there is a utility (usefulness score) for any synthesized program. Assuming this utility function can be written as a program-scoring program, we propose the approach of giving the synthesizer direct access to the scoring program's source code: (c) program synthesis as optimizing the glass-box program-scoring objective that will be used to evaluate it. (Ironically, the term white box is commonly used to indicate transparency even though white boxes are not necessarily transparent; hence, we use glass box.) In this paper, we illustrate the potential of the glass-box program synthesis approach by designing a system that learns to synthesize programs that maximize the corresponding utilities of various glass-box objectives.
Figure 1 (glass-box scoring program for the most-frequent-character problem):

    def score(x: string, y: char):
        return x.count(y)

    sum([score(x, f(x)) for x in tests()])
Glass-box program synthesis.
To better understand the glass-box representation, consider a program synthesis contest. In programming contests among humans, problems are often described in English and scored automatically by automated scoring programs based on their output on certain inputs (and other factors such as runtime and time of submission). Certainly, it is difficult for computers to understand English descriptions, so instead, we propose describing the problem to the synthesis system through the source code of the scoring program. In this "glass-box" model, there is no need for a separate problem description or examples – just the scoring program. As illustrated in Figure 1, if the problem was to find the most frequent character in a string, the score of a program outputting a character y on a string x would be the number of occurrences of y in x, and the total score would be the sum of its scores over some randomly generated strings. Note that to specify the problem by input-output examples (Figure 1 top), one needs to solve the problem on several examples, and there may be ambiguities in that there may be multiple different functions mapping the example inputs to the example outputs.
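As a concrete sketch of this setup, the scoring program and a candidate solution for the most-frequent-character problem can be written directly in python; the test pool below is a fixed illustrative stand-in for the randomly generated strings described above.

```python
def tests():
    # A fixed pool of evaluation inputs; in the actual system these
    # would be randomly generated strings.
    return ["abracadabra", "hello", "mississippi"]

def score(x, y):
    # Score of outputting character y on input string x:
    # the number of occurrences of y in x.
    return x.count(y)

def total_score(f):
    # Total glass-box utility of a candidate program f.
    return sum(score(x, f(x)) for x in tests())

# A correct solution: return the most frequent character of x.
best = lambda x: max(set(x), key=x.count)
```

A weaker candidate, e.g. `lambda x: x[0]`, receives a strictly lower total score, which is exactly the signal the synthesizer optimizes.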
In casting the problem of programming as optimizing a glass-box program-scoring program, one must write a program that precisely defines the score/utility of the synthesized program. However, this is arguably a necessary step to posing a problem in general, not only for programming contests. The term glass-box contrasts with black-box access. Black-box access would mean the ability to score arbitrary programs without any other access to the scoring program. Glass-box access is of course at least as powerful as black-box access because one can run the scoring program itself. A few cases where glass-box program synthesis can be applied are:
Traditional optimization problems such as linear programming or the Traveling Salesman Problem (where the objective is the length of the tour).
Number theory problems such as Greatest Common Divisor (GCD) or factoring. These problems can be efficiently scored because it is easy to verify factors and primality.
Programming by example (PBE). In this case, a program is scored by its accuracy in mapping fixed inputs to the corresponding outputs, combined with a regularization term, e.g., a penalty on program length, to prevent over-fitting.
Optimizing an algorithm’s performance in simulation. An example would be designing a network protocol to be evaluated in a network simulator. In this case, the scoring program could measure performance in a large network.
Meta-optimization, learning to learn to learn. The problem of synthesizing program synthesis can itself be posed in the form of a meta-scorer that generates problems (i.e., scorers) from various domains, runs the candidate synthesizer on these problems, and averages the resulting program scores.
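The PBE case above can be sketched as a glass-box scorer in a few lines; the I/O pairs, the length argument, and the regularization weight `lam` are illustrative assumptions, not the paper's actual values.

```python
# Hypothetical example I/O pairs; the target behavior is doubling the input.
IO_PAIRS = [(1, 2), (2, 4), (5, 10)]
lam = 0.01  # assumed regularization weight

def pbe_score(f, length):
    # Accuracy on the fixed I/O pairs minus a length penalty.
    # `length` stands in for a program-size measure (e.g., rule count).
    acc = sum(1 for x, y in IO_PAIRS if f(x) == y) / len(IO_PAIRS)
    return acc - lam * length
```

Under this scorer, a fully correct program of the same size always outscores a partially correct one, while among equally accurate programs the shorter wins.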
Learning to synthesize solutions by synthesizing problems.
After defining the representation and evaluation of a programming problem, we demonstrate the feasibility of this approach through a system that learns to synthesize programs. Just as athletes do various exercises to improve performance at a sport, a program synthesis system may improve by practicing and learning from synthesizing solutions to various problems. [Menon et al.2013] introduced a Machine Learning (ML) approach to PBE synthesis that learns to synthesize across problems. Because their repository of real-world problems was relatively small, they selected a small number of features suited to text-processing PBE.
To get around this shortage of data, we generate our own problems, which we then use to practice synthesizing solutions. In particular, once we have a set of practice problems, we iteratively find improved solutions to these problems by interleaving search and training a logistic regression-based model that guides the search intelligently, following [Menon et al.2013]. Recall that in glass-box synthesis, a problem is a scoring function, so we immediately know how to score any synthesized program. A similar approach of creating artificial problems was introduced independently by [Balog et al.2016]. Two key differences between their work and ours are that they perform PBE synthesis while we perform glass-box synthesis, and that they utilize deep learning while we make use of multi-class logistic regression.
We show that in practice, it is possible to synthesize solutions for a number of problems of interest, such as the GCD, that would be prohibitively slow to find with a naive approach. We also show that our Glass-Box Program Synthesis system (GlassPS) can improve itself even in a domain-agnostic framework, where a union of grammars from various domains is considered, although better results can be achieved with domain-specific grammars.
In this paper, we formalize learning to synthesize successful solutions to programming problems as a machine learning problem, using glass-box optimization. Our main contributions are three-fold:
We introduce glass-box program-scoring programs as a novel alternative for specifying problems.
We formalize a machine learning framework for glass-box optimization that synthesizes practice problems and learns patterns among the problems and the solutions discovered so far.
We present experiments that demonstrate the ability of our framework to learn to generate well-formed python programs across domains.
The rest of this paper is organized as follows. After discussing related work, we introduce key concepts needed to understand our approach, and then present the details of our proposed learning-to-write-programs framework, GlassPS. Then, we present experimental results that measure the empirical performance of GlassPS in a range of problem domains.
Learning to write computer programs has recently received a lot of attention from multiple viewpoints. Here we mention only a few notable related works due to space constraints.
In Programming by Example (PBE), a program synthesis system attempts to infer a program from input/output (I/O) example pairs, searching for a composition of some base functions. PBE has had success in various domains [Gulwani2012], one notable example being "Flash Fill" for string manipulation in Microsoft Excel [Gulwani, Harris, and Singh2012]. In [Raza et al.2015], the end-user can give both I/O pairs and a natural language description of the task.
Recent advances in deep learning and the augmentation of deep networks with end-to-end trainable abstractions [Graves et al.2016], such as the Neural Turing Machine [Graves et al.2014], Hierarchical Attentive Memory [Andrychowicz and Kurach2016], and the Neural Stack [Joulin and Mikolov2015], have given rise to the neural programming paradigm [Neelakantan et al.2015, Zaremba and Sutskever2014]. Most of these works are trained using I/O examples, except for [Reed and de Freitas2015, Cai et al.2017], where the model (neural programmer-interpreter) is trained with rich supervision in the form of execution traces.
The works of [Menon et al.2013, Yessenov et al.2013, Balog et al.2016, Dechter et al.2013, Devlin et al.2017, Parisotto et al.2016] combine the learning-to-program approaches of machine learning and program synthesis to perform guided search over the space of synthesized programs.
Other successful views have been the probabilistic programming perspective [Lake et al.2015], i.e., representing a program as a generative probabilistic model, and the programming-via-specification approach [Solar-Lezama2008, Gaunt et al.2016b], i.e., specifying a partial program 'sketch' capturing the high-level structure of the implementation and letting the computer synthesize the low-level details.
Relationship to Other Works.
Our work, from the perspective of machine-learned program synthesis, introduces glass-box introspection along with contextual features to inform the search. The two closest works to ours are [Parisotto et al.2016] and [Balog et al.2016]. The key differences from our work are: (i) they use I/O examples to condition the search, while we propose and use the glass-box problem representation; (ii) they use deep networks, while we use logistic regression. Also, while in our work and [Parisotto et al.2016] problem-specific learned weights and a partial program representation guide the search, in [Balog et al.2016] a separate model has to be learned per task.
Another factor differentiating the various works is the expressiveness of the Domain Specific Language used. While many learning-to-program works demonstrate program writing for single domains like string processing, our approach can generate code in a general-purpose programming language covering various domains such as number theory, strings, and root finding, which can open up interesting possibilities for general problem solving [Mikolov et al.2015].
Similar to [Balog et al.2016], in our approach we utilize the problem-solution pairs found thus far to inform the continuous learning of our system; hence, our work can be put in the context of lifelong learning [Gaunt et al.2016a].
Having said this, our goal in this paper is not to compete with existing PBE systems or Neural Programmer Interpreter ones. Instead, we wish to provide glass-box representation as an alternative tool to guide the program generation.
To formalize the problem of learning to synthesize programs as solutions to glass-box optimization problems, we start with a discussion of high-level concepts:
Program. A program computes a function f: X → Y, where X is a set of inputs and Y is a set of outputs.
We distinguish the program from the function it computes because two different programs may compute the same function. In GlassPS, the input set X is the set of python objects (including numbers, strings, arrays, and functions) and the output set Y is the set of python objects plus the special symbol ⊥, which indicates that the program crashed or did not produce an output in the allotted time.
Glass-box problems. Glass-box synthesis is defined over a set of problems. Each problem is represented by a glass-box scoring program p, which computes a function score_p mapping a candidate solution program g to a real-valued score score_p(g).
Synthesizer. A synthesizer S generates a solution program based on the scoring program p with which it will be scored. Hence, the goal of a synthesizer is to attempt to maximize the score: S(p) ∈ argmax_g score_p(g).
Importantly, S takes the scoring program p's source code as glass-box input. Though this can be used to simulate black-box access to p by generating I/O examples, the synthesizer can potentially achieve higher scores than with black-box access alone.
Solutions. We refer to the programs output by the synthesizer as solutions. Hence, note that both problems and solutions are programs.
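The synthesizer's objective can be sketched over an explicit candidate set; the problem ("output the larger of two numbers"), its fixed input domain, and the candidate pool below are all hypothetical illustrations.

```python
def synthesize(score_p, candidates):
    # A toy synthesizer: return the candidate solution with the
    # highest glass-box score, i.e., S(p) in argmax over candidates.
    return max(candidates, key=score_p)

# Hypothetical problem: reward outputting the larger of two numbers.
inputs = [(1, 2), (7, 3), (5, 5)]
score_p = lambda f: sum(1 for a, b in inputs if f(a, b) == max(a, b))

# A tiny candidate pool standing in for the space of generated programs.
candidates = [lambda a, b: a, lambda a, b: b, lambda a, b: max(a, b)]
best = synthesize(score_p, candidates)
```

A real synthesizer searches a grammar-defined space instead of a fixed list, but the objective it optimizes is exactly this one.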
Grammars. A program can be expressed as a composition of building blocks; these blocks constitute the rules of a context-free grammar (CFG) that allows for recursive and compositional structure. We denote the CFG for solution programs by G_sol and the CFG for problem programs by G_prob.
The solution CFG G_sol that GlassPS uses generates program trees that are converted to strings and then evaluated by the python interpreter. It includes rules such as (R1) E → (E + E), (R2) E → (lambda x: E), and (R3) E → E.lower().
For instance, (R1) generates code to add numbers, concatenate strings, or combine any two objects supported by the python + operator. Rule (R2) creates a function of one variable, x. Rule (R3) converts a string to lower-case. The grammar supports iteration through recursion. We avoid halting issues [Skiena1998] by bounding the total number of routine calls allowed. For simplicity, G_sol does not have types and uses only one non-terminal, leaving it to the learning system to learn to generate programs that do not raise exceptions. Note that our CFG formulation does not support directly "reaching in" to the scoring program, e.g., to extract constants, though such functionality could be added to our system.
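Top-down generation from such a single-non-terminal grammar can be sketched as follows; the three-rule grammar and the depth bound are small illustrative stand-ins, not the system's full rule set.

```python
import random

# Each rule: (name, string template, arity). The 'E' slots recurse on
# the single non-terminal, mirroring the untyped grammar described above.
RULES = [
    ("add", "({0} + {1})", 2),   # (R1)-style combination via +
    ("var", "x", 0),             # terminal: the function argument
    ("one", "1", 0),             # terminal: a constant
]

def sample(depth=0, max_depth=4, rng=random):
    # Bound the depth to guarantee termination, analogous to bounding
    # the total number of routine calls in the real system.
    pool = RULES if depth < max_depth else [r for r in RULES if r[2] == 0]
    name, tmpl, arity = rng.choice(pool)
    return tmpl.format(*(sample(depth + 1, max_depth, rng) for _ in range(arity)))

expr = sample(rng=random.Random(0))
f = eval("lambda x: " + expr)  # the tree's string form, run by the interpreter
```

Here every sampled expression happens to be well-typed; with a richer mixed-type rule set, most samples would raise exceptions, which is exactly the behavior the learning system must overcome.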
The problem CFG G_prob that GlassPS uses also generates code to be evaluated by the python interpreter. G_prob contains multiple non-terminals, roughly grouped by python type, so as to facilitate generating well-formed scoring programs.
Program Tree. Problem and solution programs are derived from the corresponding CFG (G_prob or G_sol) and are represented as rooted trees in which each node is associated with a rule from the CFG. Problem/solution trees are constructed top-down probabilistically, sampling from the rule probabilities of G_prob or G_sol, respectively. The choice of trees for representing programs is convenient, as it is easy to extract features from trees for machine learning purposes.
Learning from (practice problems, solutions).
It is natural to try to learn to synthesize programs based on a collection of problem-solution pairs. To do so, one would ideally have access to a large repository of samples of problem and solution programs. [Menon et al.2013] provide a small set of problem/solution pairs for text-processing PBE. Since the set is relatively small, they use domain knowledge to construct a small number of hand-coded features for learning.
Instead, similarly to [Balog et al.2016], we synthesize practice problems of our own, and synthesize solutions to these problems. In our glass-box synthesis approach, this amounts to synthesizing scorers, i.e., the glass-box problems. For each such scorer, we synthesize a number of solution programs specific to that scorer and choose the highest-scoring program. We then learn from this collection of problem-solution pairs to improve the model used in synthesis. We iteratively find improved solutions to these problems by interleaving search and training a model that helps guide the search intelligently. An overview of our framework is shown in Figure 2 and the specifics are described next.
Synthesizing practice problems. Since glass-box problems are programs themselves, they can be represented as program trees and synthesized by randomly expanding nodes from the problem CFG G_prob with uniform rule probabilities. Duplicate problems are removed, and the synthesized problems are divided into training and test sets with a 90-10 random train/test split; we refer to the training problems as practice problems.
Challenge Problems. Apart from the practice problems that we synthesize, and the set of test problems held out from this pool of synthesized problems for validation, we also use "challenge" problems. Challenge problems are ten problems that we created manually, written in terms of G_prob, and intended to be "representative" of programming challenges that are naturally expressed as glass-box synthesis problems. These problems are given in the Experiments Section.
Synthesizing solutions. For each practice/test/challenge problem, a fixed number of candidate solution program trees are synthesized top-down probabilistically using the solution CFG G_sol, and the best one, according to the scoring function of the problem at hand, is chosen.
Every rule of the CFG is associated with a probability. A program, which is a collection of rules, has as its probability the product of the probabilities of the rules that comprise it. Before learning, the probabilities of the rules in G_sol are equal (uniformly random expansion). The purpose of using machine learning is to learn problem-specific rule probabilities for G_sol, so as to make successful solution programs more probable, and thus easier to find. In what follows, we specify how to formalize learning to write programs that optimize glass-box functions as a machine learning problem.
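The product-of-rule-probabilities view can be made concrete; the rule names, the illustrative "tree" (flattened to its multiset of rules), and the learned distribution below are hypothetical.

```python
from functools import reduce

def program_probability(rule_probs, rules_used):
    # A program's probability is the product of the probabilities of
    # the rules appearing in its tree (with multiplicity).
    return reduce(lambda p, r: p * rule_probs[r], rules_used, 1.0)

# Hypothetical three-rule grammar; "mod" appears twice in this program.
tree = ["if", "mod", "mod", "var"]
uniform = {"if": 1/3, "mod": 1/3, "var": 1/3}
learned = {"if": 0.25, "mod": 0.5, "var": 0.25}  # shifted toward "mod"
```

Shifting probability mass toward the rules a successful program actually uses raises that program's probability, which is precisely what makes it easier for the guided search to find.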
As input to the machine learning classifier, we use two types of features: glass-box problem features and context features. The problem features used for learning are a "bag-of-rules" representation of the problem. That is to say, for each problem rule in G_prob, there is a feature counting its number of occurrences in the problem's program tree. By context we refer to the partial candidate program created so far. Specifically, since synthesis is top-down, we mean the path of rules from the root of the generated tree to the current node whose rule is to be assigned. For simplicity, to represent the context, we only use the one-hot encoding of the parent node's rule and the one-hot encoding of the current node's child index (i.e., whether it is the first/second/etc. child of the parent node). Thus, the one-hot-encoding vectors of the glass-box problem, the parent-node rule, and the child index are concatenated, and comprise the input to the learner. (Although the bag-of-rules representation may seem simplistic, as it loses the program structure information, preliminary experiments using richer representations, such as sequence models, did not seem to offer enough value to justify the computational expense.)
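The featurization just described can be sketched directly; the rule vocabularies and the maximum child index are small hypothetical stand-ins for the real grammars.

```python
# Hypothetical rule vocabularies for the problem and solution grammars.
PROBLEM_RULES = ["mod", "mul", "const"]
SOLUTION_RULES = ["if", "mod", "recurse"]
MAX_CHILDREN = 3

def featurize(problem_rule_counts, parent_rule, child_index):
    # Bag-of-rules counts for the problem's program tree...
    bag = [problem_rule_counts.get(r, 0) for r in PROBLEM_RULES]
    # ...concatenated with one-hot parent rule and one-hot child index.
    parent = [1 if r == parent_rule else 0 for r in SOLUTION_RULES]
    child = [1 if i == child_index else 0 for i in range(MAX_CHILDREN)]
    return bag + parent + child

x = featurize({"mod": 2, "mul": 1}, parent_rule="if", child_index=0)
```

The resulting fixed-length vector is what the logistic regression model consumes; the same problem contributes different vectors at different tree positions because the context part changes.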
Learning rule probabilities of G_sol. For every node of a candidate solution program tree, we want to predict, using the features described above, which rule of G_sol is the most probable. Thus, the problem of program inference, i.e., searching over the space of programs expressed by the CFG G_sol, can be reduced to a multi-label classification problem, as a solution program is a collection of rules that are simultaneously present. This can be further reduced to multi-class classification, predicting for each class (rule) separately whether it should be present or not. The number of classes equals the number of rules in G_sol.
We represent the target label of the correct rule in the successful solution program for a glass-box problem as a one-hot-encoding vector, with a 1 in the entry of the next node's rule present in the solution and 0 in the other entries.
Given the general formulation, any multi-class classifier is applicable. In this work, for the purposes of a proof-of-concept illustration, we use a multi-class logistic regression model; in particular, we learn one parameter vector per class. An illustration of the mapping of features to a solution rule is shown in Figure 3.
It is important to note that the parameters are learned based on a constantly updated dataset consisting of the successful (problem, solution) program pairs found so far. Given the featurization procedure described, each successful solution program with, say, k nodes contributes k samples to the learning of the classifier; the input features of these samples differ because the context features change from node to node.
Scoring Solution Programs. In order to include a successful problem-solution pair in the constantly updated training dataset, a notion of success needs to be specified. Recall that a solution program is successful if it maximizes the score of the corresponding problem. Although our framework is entirely capable of handling arbitrary continuous scoring functions, the nature of the problems induced by the grammars used in our experiments is such that all scoring programs return scores in {0, 1}, except that a solution that throws an exception receives the minimum score. This makes it easier to evaluate our system, as the average score indicates the fraction of problems optimally solved. Over iterations of learning, given that the successful pairs found so far are used to inform the problem-specific search over G_sol, poor-scoring programs become less likely.
Learning to write typed programs. While it would be nice if GlassPS generated only well-typed python programs, in fact it initially generates many nonsensical (un-typable) programs because G_sol uses only a single non-terminal. Thus, unsurprisingly, at first the grammar mostly generates programs that raise exceptions by doing things like trying to lower-case the number 1. Our learning system, utilizing the pairs of glass-box problems and solutions found so far, progressively learns to compose well-formed programs, in addition to learning to solve problems.
To summarize our framework, a formal description of the procedure followed is given in Algorithm 1. Our system operates over learning rounds. At each round, GlassPS calls the SOLVE module in order to attempt to solve the test problems, which are synthesized based on G_prob. In SOLVE, the system generates its own practice problems from the CFG G_prob, using uniform probabilities. To solve these generated practice problems, the system calls the TRAIN module. The goal of TRAIN is to (a) construct a training dataset of problems and solutions, so that (b) the learner's parameters can be learned. To achieve (a), the system uses the parameters of the previous round's logistic regression model to guide how the program trees of the solution programs will be built (SEARCH). Specifically, for every node of the solution program tree under construction, features are extracted via the module FEATURIZE (discussed in the previous section), which creates the bag-of-rules features for the problem and context. Given these features, the current learner's model is used to predict the probability of every rule of the solution CFG (inside the module LR-PREDICT). These inferred probabilities are used as per-rule weights to perform weighted sampling of which rule should be next in the candidate program tree. This is how the learning guides the search over programs.
Via this procedure, a set of candidate solution programs is constructed for every problem. Each candidate program is scored by the corresponding scoring program. The programs that successfully solve the respective glass-box problems, i.e., those with the maximum score (using shortest length to break ties), are used to construct a (problem, context) → solution-rule training dataset, calling the FEATURIZE routine again for the representation of (problem, context). This constructed training dataset is then used to learn a new logistic regression model, which is used to find solutions for the test and challenge problems, and which is subsequently used in the next learning round to guide the search over candidate solution programs.
Illustrative Example: how guided search works
Before we move to our experimental results, to better understand how our learning framework works, let us demonstrate how we could successfully synthesize the solution program of computing the greatest common divisor (GCD) of two positive integers using our framework GlassPS.
The particular glass-box scoring function for GCD gives an output y on inputs (m, n) the score y if y divides both m and n, and a negative score otherwise; hence the maximum score is achieved when y is the greatest common divisor of m and n. In python, this program can be written succinctly as lambda m, n, y: ntprog(-mod(m, y) or (-mod(n, y) or y)), where ntprog aggregates the score over a domain of (m, n) pairs. It also implements an early-stopping optimality criterion, which improves efficiency without changing the behavior of the algorithm.
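The pointwise GCD scorer can be reproduced directly; the small input domain and the summing aggregator below stand in for ntprog, whose exact behavior we treat as an assumption.

```python
import math

def mod(a, b):
    return a % b

def gcd_pointwise_score(m, n, y):
    # Returns y if y divides both m and n; otherwise returns a
    # negative value, exploiting python's short-circuiting `or`:
    # -mod(m, y) is 0 (falsy) exactly when y divides m.
    return -mod(m, y) or (-mod(n, y) or y)

def total_gcd_score(f, domain):
    # Hypothetical ntprog-style aggregation: sum the pointwise scores
    # of candidate f over a fixed domain of (m, n) pairs.
    return sum(gcd_pointwise_score(m, n, f(m, n)) for m, n in domain)
```

For example, the true GCD function attains the maximum total score on any such domain, while a constant candidate like `lambda m, n: 1` scores strictly lower.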
The successful solution program that we are after is def f(m, n): return f(mod(n, m), m) if n else m, i.e., a recursive form of Euclid's GCD algorithm.
For illustration purposes, let us consider the toy CFGs shown in Table 1, which are actually subsets of the grammars used in our experiments. They contain rules typically useful for number theory problems.
Calling the FEATURIZE module of Algorithm 1, the glass-box problem of GCD is written as a bag-of-rules vector whose non-zero entries are the numbers of occurrences of the respective rules in the GCD scorer's program tree.
Recall that to decide which rule will become the next program tree node, a logistic regression (LR) model, learned from the problem-solution pairs found so far, is used in inference mode to predict, given the problem and context features, which target class (i.e., rule of the solution CFG) is the most likely. Thus, to construct the successful GCD-finding program tree top-down, guided search proceeds by sampling rules from G_sol for the next node using the predicted LR probabilities:
For the root node: the problem features will be concatenated with an all-zero context vector, as the root does not have a parent. The sampled target class, i.e., the rule with the maximum sampled predicted probability from the LR model, should be the recursive function-definition rule of Table 1.
For the second node, the problem features will be the same as for the previous node, the parent-rule one-hot vector will have a 1 in the entry of the root's rule, and the child-index one-hot vector will have a 1 in its first entry, as this is the first child. The sampled target class from the predicted probability vector should be the if rule.
Continue for each program tree node, until no more nodes are to be expanded, i.e., they are leaf nodes, and the entire program tree is constructed.
In short, the LR model should learn that the relevant rules have high probability under the GCD problem features and the corresponding parent and child-index contexts. When a solution maximizing the glass-box GCD scorer is found, the (problem, solution) pair is added to the training dataset used to learn the next round's LR model.
We use as evaluation metric the fraction of test problems successfully solved. The sets of practice/train and test problems do not change throughout the experiment. Programs were limited to at most 20 nodes. For generating solutions, instead of generating programs independently at random (which results in a vast majority of duplicates), we generate the most likely programs using the search algorithm of [Menon et al.2013], which does not produce duplicates. Since problem generation is not a bottleneck, problems are generated more simply by sampling uniformly from the grammar and then removing duplicates.
We consider CFGs from the following domains:
Number Theory. Example target problems include GCD or finding the largest non-trivial factor of a (small) number, using a brute-force approach. Numerous trivial functions also arise in the practice set, such as, given two numbers, outputting the smaller of the two.
Finding Roots. Example target problems are finding the root of algebraic expressions, i.e., solving for x as a function of y.
Summation Formulas. Example target problems are finding a closed-form expression for the sum of a function of the first n numbers. For example, 1 + 2 + ... + n = n(n + 1)/2.
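In this domain, a candidate closed form is easy to score in a glass box: simply compare it against the explicit sum on a range of n values, as in this sketch of the first-n sum.

```python
def explicit_sum(n):
    # The sum 1 + 2 + ... + n, computed directly.
    return sum(i for i in range(1, n + 1))

def closed_form(n):
    # The target closed-form expression n(n + 1)/2.
    return n * (n + 1) // 2
```

A scorer would award 1 when the candidate matches the explicit sum on every n in a bounded domain, and 0 otherwise.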
Strings. For simplicity, we considered problems where the desired output is a single character from the input meeting a certain objective. An example problem is finding the most frequent character in a string or finding the alphabetically first character in a string.
Table 2: Time (h:mm) to solve each challenge problem.

    Challenge problem number         1      2     3     4     5     6     7     8     9     10
    Without learning                 30:00  2:15  2:15  0:05  2:15  2:15  2:15  2:15  2:15  2:15
    With all-domain learning         2:15   2:15  0:01  0:01  0:01  0:03  2:15  0:01  2:15  0:01
    With domain-specific learning    0:10   2:15  0:01  0:01  0:01  0:01  0:05  0:01  2:15  0:01
Verification. For each of these problems, the performance is evaluated on a bounded domain of possible inputs and range of possible outputs, which makes it easy to check for optimality, and enables early stopping for efficiency.
Grammars. Briefly, our grammars, both the typed program-scoring grammar G_prob and the solution grammar G_sol, include arithmetic operators such as +, -, *, /, and mod, math functions such as pow, abs, tan, tanh, arctanh, log, exp, constants such as 0, 1, 2, or the passed arguments, operators such as max, boolean operators such as or, if expressions, and string functions such as startswith, endswith, count, upper, lower, ord. Additionally, the grammar contains recursive functions (with one or two arguments passed) and operators on lists, sets, and tuples.
Each generated program was converted to a string and passed to the native python eval. A timeout of 1 second was used per evaluation.
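The evaluation step described above can be sketched as follows; the ⊥-style sentinel and the signal-based 1-second alarm are our assumptions about the mechanism (signal.alarm requires a Unix main thread).

```python
import signal

BOTTOM = object()  # sentinel for "crashed or exceeded the time limit"

def evaluate(code, arg, timeout=1):
    # Evaluate a generated program string on one input, mapping any
    # exception or timeout to BOTTOM, as in the program definition above.
    def on_timeout(signum, frame):
        raise TimeoutError

    old_handler = signal.signal(signal.SIGALRM, on_timeout)
    signal.alarm(timeout)
    try:
        return eval(code)(arg)
    except Exception:
        # Covers SyntaxError, type errors, recursion limits, and the
        # TimeoutError raised by the alarm handler.
        return BOTTOM
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old_handler)
```

Collapsing all failure modes into a single bottom value is what lets the scorer assign a uniform minimum score to ill-formed candidates.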
Results on Practice/ Test Problems
To empirically evaluate our proposed framework, we conducted experiments under two experimental conditions. In the first condition, we considered only the context-free grammar relevant to the problems we were trying to solve; for example, for solving string problems, we considered the string CFG, and so on. In the second condition, we took the union of all grammar rules from our four different domains and considered 250 problems from each of the four domains. We refer to this as the all-domain experiment.
For the first condition, we considered 1,000 practice train problems and 100 test problems. We progressively performed four training rounds, as more rounds did not seem to improve learning. Each round of training was run for 45 minutes. We show in Figure 4(a-c) the fraction of practice train/test problems solved by GlassPS as rounds of learning progress from 0 to 4. We can see that performing 4 rounds of our framework typically solves about 70% of the target problems. The string domain is not shown because 100% of the problems were solved in each round, even prior to learning.
For the second setup of the all-domain experiment, we show the results in Figure 4(d). In order to achieve a similar fraction of train/test problems solved as in the first setup, every training round was run for double the time, i.e., 90 minutes instead of 45. We conclude that although better results can be achieved with domain-specific learning, the all-domain experiment is a proof of concept that even without knowing the domain, GlassPS can improve itself over time.
Results on Challenge problems
Finally, we consider the performance of our framework on the following ten challenge problems.
GCD (number theory),
Greatest non-trivial factor of a number (number theory); the time limits were such that a brute-force loop could find the largest factor in the allotted time, so no advanced factoring algorithms were necessary,
Most frequent character in a string (strings),
Alphabetically first character in a string (strings),
Solve for x in a first algebraic equation (roots),
Solve for x in a second algebraic equation (roots),
Find a closed-form expression for: sum1(1, n, lambda i: (i * i)) (sums),
Find a closed-form expression for: sum1(1, n, lambda i: (i *(i * i))) (sums),
Find a closed-form expression for: sum1(1, n, lambda i: pow(2, (-i))) (sums),
Find a closed-form expression for: sum1(1, n, lambda i: 1 / (i * (1 + i))) (sums).
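Two of the sum challenges above have well-known closed forms, which a glass-box scorer can verify numerically; sum1 is reproduced here as the straightforward summation helper its usage suggests.

```python
def sum1(a, n, f):
    # Sum f(i) for i = a..n, matching the usage in the challenge problems.
    return sum(f(i) for i in range(a, n + 1))

def squares_closed_form(n):
    # Known identity for challenge 7: sum of i^2 for i = 1..n.
    return n * (n + 1) * (2 * n + 1) // 6

def telescoping_closed_form(n):
    # Known identity for challenge 10: sum of 1/(i(i+1)) telescopes to n/(n+1).
    return n / (n + 1)
```

A synthesized candidate for either challenge is optimal exactly when it agrees with the explicit sum on the scorer's bounded input domain.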
In Table 2, we report the time in hours and minutes needed to solve each of the 10 challenge problems with our system. We compare the times needed (i) when the search is not guided by learning, i.e., brute force (2nd row); (ii) with the all-domain setup (3rd row); and (iii) with domain-specific learning (4th row), which is the best performing across problems.
Notably, our framework can solve problems that would otherwise be prohibitively slow to solve without learning. For instance, GCD was found after 30 hours with no learning, while it took only 10 minutes using domain-specific learning, and 2 hours and 15 minutes in the all-domain experiment.
Conclusions & Future Directions
In this paper, we introduced a general framework for learning to write computer programs that maximize the score of glass-box objective functions.
We have formulated this as a machine learning problem, showing as a proof of concept that a simple classifier such as logistic regression can successfully learn patterns relating features of the scoring programs to features of the generated solution programs being scored.
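As a minimal stdlib-only sketch of this kind of learning setup (the shared-identifier features, the toy training data, and plain SGD are all illustrative assumptions; the paper's actual feature set is richer), one could train a logistic-regression scorer over joint features of a scoring program and a candidate program:

```python
import math
import re

def tokens(src):
    # Crude tokenizer over identifiers in a program's source.
    return set(re.findall(r"[A-Za-z_]+", src))

def features(score_src, cand_src):
    s, c = tokens(score_src), tokens(cand_src)
    # Two toy features: identifier overlap with the scoring
    # program, and candidate size.
    return [float(len(s & c)), float(len(c))]

def train_logreg(data, lr=0.1, epochs=500):
    # data: list of (feature_vector, label in {0, 1}).
    dim = len(data[0][0])
    w = [0.0] * (dim + 1)  # final weight is the bias term
    for _ in range(epochs):
        for x, y in data:
            xb = x + [1.0]
            z = sum(wi * xi for wi, xi in zip(w, xb))
            p = 1.0 / (1.0 + math.exp(-z))
            for j, xj in enumerate(xb):
                w[j] += lr * (y - p) * xj  # SGD step on log-loss
    return w

def predict(w, x):
    # Probability that candidate features x are useful.
    z = sum(wi * xi for wi, xi in zip(w, x + [1.0]))
    return 1.0 / (1.0 + math.exp(-z))
```

Trained on examples where candidates sharing identifiers with the scorer are labeled useful, the model learns to rank such candidates higher, which is the kind of pattern a guided search can exploit.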
We have shown experimentally that our framework learns over time to generate typed Python code that solves problems across domains, even though we did not provide types and did not reveal which domain each problem comes from.
While the learning component could certainly be improved (e.g., using deep learning), the approach is shown to be sound and flexible enough to combine multiple synthesis domains in a single system. This opens up interesting directions for multi-domain learning systems that learn to solve programming problems.
Acknowledgements. The authors would like to thank Huseyn Melih Elibol for the useful discussions.
- [Andrychowicz and Kurach2016] Andrychowicz, M., and Kurach, K. 2016. Learning efficient algorithms with hierarchical attentive memory. arXiv preprint arXiv:1602.03218.
- [Balog et al.2016] Balog, M.; Gaunt, A. L.; Brockschmidt, M.; Nowozin, S.; and Tarlow, D. 2016. Deepcoder: Learning to write programs. arXiv preprint arXiv:1611.01989.
- [Cai, Shin, and Song2017] Cai, J.; Shin, R.; and Song, D. 2017. Making neural programming architectures generalize via recursion. arXiv preprint arXiv:1704.06611.
- [Dechter et al.2013] Dechter, E.; Malmaud, J.; Adams, R. P.; and Tenenbaum, J. B. 2013. Bootstrap learning via modular concept discovery. IJCAI.
- [Devlin et al.2017] Devlin, J.; Uesato, J.; Bhupatiraju, S.; Singh, R.; Mohamed, A.-r.; and Kohli, P. 2017. Robustfill: Neural program learning under noisy i/o. arXiv preprint arXiv:1703.07469.
- [Gaunt et al.2016a] Gaunt, A. L.; Brockschmidt, M.; Kushman, N.; and Tarlow, D. 2016a. Lifelong perceptual programming by example. arXiv preprint arXiv:1611.02109.
- [Gaunt et al.2016b] Gaunt, A. L.; Brockschmidt, M.; Singh, R.; Kushman, N.; Kohli, P.; Taylor, J.; and Tarlow, D. 2016b. Terpret: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428.
- [Graves et al.2016] Graves, A.; Wayne, G.; Reynolds, M.; Harley, T.; Danihelka, I.; Grabska-Barwińska, A.; Colmenarejo, S. G.; Grefenstette, E.; Ramalho, T.; Agapiou, J.; et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538(7626):471–476.
- [Graves, Wayne, and Danihelka2014] Graves, A.; Wayne, G.; and Danihelka, I. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401.
- [Gulwani, Harris, and Singh2012] Gulwani, S.; Harris, W. R.; and Singh, R. 2012. Spreadsheet data manipulation using examples. Communications of the ACM 55(8):97–105.
- [Gulwani2012] Gulwani, S. 2012. Synthesis from examples: Interaction models and algorithms. In SYNASC, 8–14. IEEE.
- [Joulin and Mikolov2015] Joulin, A., and Mikolov, T. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, 190–198.
- [Lake, Salakhutdinov, and Tenenbaum2015] Lake, B. M.; Salakhutdinov, R.; and Tenenbaum, J. B. 2015. Human-level concept learning through probabilistic program induction. Science 350(6266):1332–1338.
- [Manna and Waldinger1980] Manna, Z., and Waldinger, R. 1980. A deductive approach to program synthesis. ACM Transactions on Programming Languages and Systems (TOPLAS) 2(1):90–121.
- [Menon et al.2013] Menon, A. K.; Tamuz, O.; Gulwani, S.; Lampson, B. W.; and Kalai, A. 2013. A machine learning framework for programming by example. ICML (1) 28:187–195.
- [Mikolov, Joulin, and Baroni2015] Mikolov, T.; Joulin, A.; and Baroni, M. 2015. A roadmap towards machine intelligence. arXiv preprint arXiv:1511.08130.
- [Neelakantan, Le, and Sutskever2015] Neelakantan, A.; Le, Q. V.; and Sutskever, I. 2015. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834.
- [Parisotto et al.2016] Parisotto, E.; Mohamed, A.-r.; Singh, R.; Li, L.; Zhou, D.; and Kohli, P. 2016. Neuro-symbolic program synthesis. arXiv preprint arXiv:1611.01855.
- [Raza, Gulwani, and Milic-Frayling2015] Raza, M.; Gulwani, S.; and Milic-Frayling, N. 2015. Compositional program synthesis from natural language and examples.
- [Reed and de Freitas2015] Reed, S., and de Freitas, N. 2015. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279.
- [Skiena1998] Skiena, S. 1998. The Algorithm Design Manual. Springer.
- [Solar-Lezama2008] Solar-Lezama, A. 2008. Program Synthesis by Sketching. Ph.D. Dissertation, University of California, Berkeley.
- [Yessenov et al.2013] Yessenov, K.; Tulsiani, S.; Menon, A.; Miller, R. C.; Gulwani, S.; Lampson, B.; and Kalai, A. 2013. A colorful approach to text processing by example. In UIST, 495–504. ACM.
- [Zaremba and Sutskever2014] Zaremba, W., and Sutskever, I. 2014. Learning to execute. arXiv preprint arXiv:1410.4615.