Source code is ubiquitous, and a great deal of human effort goes into developing it. An important goal is to develop tools that make the development of source code easier, faster, and less error-prone, and tools that are able to better understand pre-existing source code. To date these problems have largely been studied outside of machine learning, and many of them do not appear to be well-suited to current machine learning technologies. Yet source code is some of the most widely available data, with many public online repositories. Additionally, massive open online courses (MOOCs) have begun to collect source code homework assignments from tens of thousands of students (huang2013syntactic). At the same time, the software engineering community has recently observed that it is useful to think of source code as natural—written by humans and meant to be understood by other humans (hindle2012naturalness). This natural source code (NSC) has a great deal of statistical regularity that is ripe for study in machine learning.
The combination of these two observations—the availability of data,
and the presence of amenable statistical structure—has opened up the possibility that machine learning tools could become useful in various tasks related to source code. At a high level, there are several potential areas of contribution for machine learning. First, code editing tasks could be made easier and faster. Current autocomplete suggestions rely primarily on heuristics developed by an Integrated Development Environment (IDE) designer. With machine learning methods, we might be able to offer much improved completion suggestions by leveraging the massive amount of source code available in public repositories. Indeed, hindle2012naturalness have shown that even simple n-gram models are useful for improving code completion tools, and nguyen2013statistical have extended these ideas. Other related applications include finding bugs (kremenek2007factor), mining and suggesting API usage patterns (bruch2009learning; nguyen2012graphbased; wang2013mining), serving as a basis for code complexity metrics (allamanis2013mining), and helping to enforce coding conventions (allamanis2014learning). Second, machine learning might open up whole new applications such as automatic translation between programming languages, automatic code summarization, and learning representations of source code for the purposes of visualization or discriminative learning. Finally, we might hope to leverage the large amounts of existing source code to learn improved priors over programs for use in programming by example (halbert1984programming; gulwani2011automating) or other program induction tasks.
One approach to developing machine learning tools for NSC is to improve specific one-off tasks related to source code. Indeed, much of the work cited above follows in this direction. An alternative, which we pursue here, is to develop a generative model of source code with the aim that many of the above tasks then become different forms of query on the same learned model (e.g., code completion is conditional sampling; bug fixing is model-based denoising; representations may be derived from latent variables (hinton2006reducing)
or from Fisher vectors (jaakkola98exploiting)). We believe that building a generative model focuses attention on the challenges that source code presents. It forces us to model all aspects of the code, from the high-level structure of classes and method declarations, to constraints imposed by the programming language, to the low-level details of how variables and methods are named. We believe building good models of NSC to be a worthwhile modelling challenge for machine learning research to embrace.
In Section 2 we introduce notation and motivate the requirements of our models: they must capture the sequential and hierarchical nature of NSC, its naturalness, and code-specific structural constraints. In Sections 3 and 4
we introduce Log-bilinear Tree-Traversal models (LTTs), which combine natural language processing models of trees with log-bilinear parameterizations, and additionally incorporate compiler-like reasoning. In Section 5 we discuss how to learn these models efficiently, and in Section 7 we show empirically that they far outperform the standard NLP models that have previously been applied to source code. As an introductory result, fig:intro_fig shows samples generated by a Probabilistic Context Free Grammar (PCFG)-based model (fig:intro_fig (b)) versus samples generated by the full version of our model (fig:intro_fig (c)). Although these models apply to any common imperative programming language like C/C++/Java, we focus specifically on C#. This decision is based on (a) the fact that large quantities of data are readily available online, and (b) the fact that the recently released Roslyn C# compiler (ROSLYN) exposes APIs that allow easy access to a rich set of internal compiler data structures and processing results.
2 Modelling Source Code
In this section we discuss the challenges in building a generative model of code. In the process we motivate our choice of representation and model and introduce terminology that will be used throughout.
Hierarchical Representation. The first step in compilation is to lex code into a sequence of tokens. Tokens are strings such as “sum”, “.”, or “int” that serve as the atomic syntactic elements of a programming language. However, representing code as a flat sequence leads to very inefficient descriptions. For example, in a C# for loop, there must be a sequence containing the tokens for, (, an initializer statement, a condition expression, an increment statement, the token ), then a body block. A representation that is fundamentally flat cannot compactly represent this structure, because for loops can be nested. Instead, it is more efficient to use the hierarchical structure that is native to the programming language. Indeed, most source code processing is done on tree structures that encode this hierarchy. These trees are called abstract syntax trees (ASTs) and are constructed, either explicitly or implicitly, by compilers after parsing valid token sequences. The leaf nodes of the AST are the tokens produced by the lexer. The internal nodes are specific to a compiler and correspond to expressions, statements, or other high-level syntactic elements such as Block or ForStatement. The children tuple of an internal node is a tuple of nodes or tokens. An example AST is shown in fig:ast. Note how the EqualsValueClause node has a subtree corresponding to the code = sum. Because many crucial properties of the source code can be derived from an AST, they are a primary data structure used to reason about source code. For example, the tree structure is enough to determine which variables are in scope at any point in the program. For this reason we choose the AST as the representation for source code and consider generative models that define distributions over ASTs.
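The paper works with Roslyn's C# ASTs; the same hierarchical structure can be illustrated with Python's standard-library `ast` module. This is a stand-in for intuition only, not the paper's toolchain, and the node names shown are Python's rather than Roslyn's.

```python
import ast

# Parse a small program. ast.parse plays the role that Roslyn's parser plays
# for C#: the flat token sequence becomes a tree whose internal nodes are
# statements and expressions, and whose leaves correspond to the tokens.
tree = ast.parse("for i in range(10):\n    total = total + i")

loop = tree.body[0]
print(type(loop).__name__)          # For: the loop's internal node
print(type(loop.target).__name__)   # Name: the loop variable "i"
print(type(loop.body[0]).__name__)  # Assign: the loop body statement
```

Note how the nested structure (the Assign subtree under the For node) mirrors the EqualsValueClause example in fig:ast.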
Modelling Context Dependence. A PCFG seems like a natural choice for modelling ASTs. PCFGs generate ASTs from the root to the leaves by repeatedly sampling children tuples given a parent node, recursing until all leaves are tokens. This procedure produces nodes in a depth-first traversal order and samples each children tuple independently of the rest of the tree. Unfortunately, this independence assumption produces a weak model; fig:intro_fig (b) shows samples from such a model. While basic constraints like matching of parentheses and braces are satisfied, most important contextual dependencies are lost. For example, identifier names (variable and method names) are drawn independently of each other given the internal nodes of the AST, leading to nonsense code constructions.
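The independence assumption can be made concrete with a toy PCFG sampler. The grammar fragment below is hypothetical (a loose caricature of C#-like statements, not the paper's grammar); the key point is that each expansion in `sample` depends only on the parent symbol, never on the rest of the tree.

```python
import random

# A toy PCFG: nonterminal -> list of (children tuple, probability).
# Each node expands independently given its parent -- exactly the
# independence assumption criticized in the text.
RULES = {
    "Stmt": [(("Id", "=", "Expr", ";"), 0.7), (("For",), 0.3)],
    "For": [(("for", "(", "Stmt", "Expr", ";", "Expr", ")", "{", "Stmt", "}"), 1.0)],
    "Expr": [(("Id",), 0.6), (("Id", "+", "Id"), 0.4)],
    "Id": [(("i",), 0.5), (("sum",), 0.5)],
}

def sample(symbol, rng):
    if symbol not in RULES:            # terminal token: emit as-is
        return [symbol]
    tuples, probs = zip(*RULES[symbol])
    children = rng.choices(tuples, weights=probs)[0]
    out = []
    for child in children:
        out.extend(sample(child, rng))  # depth-first recursion
    return out

print(" ".join(sample("Stmt", random.Random(0))))
```

Because `Id` is expanded independently at every occurrence, sampled identifiers bear no relation to one another, which is precisely why PCFG samples contain nonsense variable usage.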
The first source of context dependence in NSC comes from the naturalness of software. People have many stylistic habits that significantly limit the space of programs that we might expect to see. For example, when writing nested for loops, it is common to name the outer loop variable i and the inner loop variable j. The second source of context dependence comes from additional constraints inherent in the semantics of code. Even though the syntax is context free, conformance to the grammar does not ensure that a program compiles. For example, variables must be declared before they are used.
Our approach to dealing with dependencies beyond what a PCFG can represent will be to introduce traversal variables that evolve sequentially as the nodes are being generated. Traversal variables modulate the distribution over children tuples by maintaining a representation of context that depends on the state of the AST generated so far.
3 Log-bilinear Tree-Traversal Models
LTTs are a family of probabilistic models that generate ASTs in a depth-first order (Algorithm 1). First, the stack is initialized and the root is pushed (lines 1-4). Elements are popped from the stack until it is empty. If an internal node is popped (line 6), then it is expanded into a children tuple and its children are pushed onto the stack (lines 10-11). If a token is popped, we label it and continue (line 14). This procedure has the effect of generating nodes in a depth-first order. In addition to the tree that is generated in a recursive fashion, traversal variables are updated whenever an internal node is popped (line 9). Thus, they traverse the tree, evolving sequentially, with each corresponding to some partial tree of the final AST. This sequential view will allow us to exploit context, such as variable scoping, at intermediate stages of the process (see sec:extending).
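The stack-based procedure above can be sketched directly. In this sketch, `p_root`, `p_children`, and `p_transition` stand in for the learned distributions of the model; the toy tables passed in below are hypothetical, chosen only to make the traversal order visible.

```python
import random

def generate(p_root, p_children, p_transition, internal, rng):
    """Sketch of Algorithm 1: depth-first AST generation with a stack."""
    node, h = p_root(rng)                    # sample root node and traversal variables
    stack = [node]
    tokens = []
    while stack:
        node = stack.pop()
        if node in internal:
            h = p_transition(h, rng)         # update traversal variables (line 9)
            children = p_children(node, h, rng)   # expand into a children tuple
            for child in reversed(children):      # reversed push => left-to-right, depth-first
                stack.append(child)
        else:
            tokens.append(node)              # a token: emit it (line 14)
    return tokens

# Toy deterministic "distributions" for illustration.
internal = {"S", "Expr"}
def p_root(rng): return "S", 0
def p_transition(h, rng): return h + 1       # h here is just a step counter
def p_children(node, h, rng):
    return {"S": ("Expr", ";"), "Expr": ("i", "+", "j")}[node]

tokens = generate(p_root, p_children, p_transition, internal, random.Random(0))
print(tokens)   # ['i', '+', 'j', ';']
```

The emitted token order matches the left-to-right order of the program text, which is why the depth-first traversal is the natural choice here.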
Algorithm 1 produces a sequence of internal nodes $n_1, \ldots, n_N$, traversal variables $h_1, \ldots, h_{N+1}$, and the desired tokens. It is defined by three distributions: (a) the prior over the root node and traversal variables, $p(n_1, h_1)$; (b) the distribution over children tuples conditioned on the parent node and traversal variables, denoted $p(C(n_i) \mid n_i, h_i)$; and (c) the transition distribution for the $h_i$'s, denoted $p(h_{i+1} \mid h_i)$. The joint distribution over the elements produced by Algorithm 1 is

$$p(\{n_i\}, \{h_i\}) = p(n_1, h_1) \prod_{i=1}^{N} p(h_{i+1} \mid h_i)\, p(C(n_i) \mid n_i, h_i). \qquad (1)$$
Thus, LTTs can be viewed as a Markov model equipped with a stack—a special case of a Probabilistic Pushdown Automaton (PPDA) (abney1999relating). Depth-first order is particularly well-suited because it produces tokens in the same order in which they are observed in the code. Other traversal orders, such as right-to-left or breadth-first, also produce valid distributions over trees; because we compare to sequential models, we consider only depth-first order.
Parameterizations. Most of the uncertainty in generation comes from generating children tuples. In order to avoid an explosion in the number of parameters for the children tuple distribution $p(C(n_i) \mid n_i, h_i)$, we use a log-bilinear form. For all other distributions we use a simple tabular parameterization.
The log-bilinear form consists of a real-valued vector representation of $(n_i, h_i)$ pairs, $R_{con}(n_i, h_i)$, a real-valued vector representation for the children tuple, $R_{ch}(C)$, and a bias term for the children, $b_{ch}(C)$. These are combined via an inner product, which gives the negative energy of the children tuple:

$$\psi(C, n_i, h_i) = R_{con}(n_i, h_i)^\top R_{ch}(C) + b_{ch}(C).$$

As is standard, this is then exponentiated and normalized to give the probability of sampling the children: $p(C \mid n_i, h_i) \propto \exp\{\psi(C, n_i, h_i)\}$. We take the support over which to normalize this distribution to be the set of children tuples observed as children of nodes of type $n_i$ in the training set.
The representation functions rely on the notion of a matrix $W_r$ that can be indexed into with objects $x$ to look up $D$-dimensional real-valued vectors: $W_r[x]$ denotes the row of the matrix corresponding to any variable equal to $x$. For example, if $x = $ "foo" and $y = $ "foo", then $W_r[x] = W_r[y]$. These objects may be tuples, and in particular $W_r[(x, y)] \neq W_r[(y, x)]$ in general. Similarly, $b_r[x]$ looks up a real number. In the simple variant, each unique children tuple $C$ receives the representation $R_{ch}(C) = W_r[C]$ and $b_{ch}(C) = b_r[C]$. The representations for $(n_i, h_i)$ pairs are defined as sums of representations of their components. If $h_i = (h_i^1, \ldots, h_i^K)$ is a collection of variables ($h_i^k$ representing the $k$th variable at the $i$th step), then

$$R_{con}(n_i, h_i) = W_c^0 W_r[n_i] + \sum_{k=1}^{K} W_c^k W_r[h_i^k].$$
The $W_c^k$ are matrices (diagonal, for computational efficiency) that modulate the contribution of a variable in a position-dependent way. In other variants the children tuple representations will also be defined as sums of their component representations. The log-bilinear parameterization has the desirable property that the number of parameters grows linearly in the number of traversal variables, so we can afford to have high-dimensional traversal variables without worrying about exponentially bad data fragmentation.
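A minimal numerical sketch of this children distribution follows. The embedding table, biases, and diagonal context scales are random stand-ins for learned parameters, and the node/children names are hypothetical; the point is the shape of the computation: a context vector built as a position-modulated sum of component embeddings, scored against each candidate children tuple and softmax-normalized over the observed support.

```python
import math, random

rng = random.Random(0)
D = 8
def vec():
    return [rng.gauss(0, 1) for _ in range(D)]

embed = {x: vec() for x in ["ForStatement", "h1", "C_a", "C_b", "C_c"]}  # W_r rows
bias = {"C_a": 0.1, "C_b": 0.0, "C_c": -0.2}                             # b_r entries
scale = [vec(), vec()]   # diagonals of the context matrices W_c^k

def r_context(node, hs):
    # Position-dependent (diagonal) modulation of component embeddings, summed.
    parts = [node] + list(hs)
    return [sum(scale[k][d] * embed[x][d] for k, x in enumerate(parts))
            for d in range(D)]

def p_children(node, hs, support):
    r = r_context(node, hs)
    logits = [sum(a * b for a, b in zip(r, embed[c])) + bias[c] for c in support]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

p = p_children("ForStatement", ["h1"], ["C_a", "C_b", "C_c"])
print(p)
```

Note that the parameter count grows with the number of embedded objects and positions, not with the cross-product of contexts, which is the fragmentation-avoidance property claimed above.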
4 Extending LTTs
The extensions of LTTs in this section allow (a) certain traversal variables to depend arbitrarily on previously generated elements of the AST; (b) annotating nodes with richer types; and (c) letting the children representations be compositionally defined, which becomes powerful when combined with deterministic reasoning about variable scoping.
We distinguish between deterministic and latent traversal variables. The former can be computed deterministically from the current partial tree (the tree nodes and tokens that have been instantiated by step $i$), while the latter cannot. To refer to a collection of both deterministic and latent traversal variables we continue to use the unqualified term “traversal variables”.
Deterministic Traversal Variables. In the basic generative procedure, traversal variables satisfy the first-order Markov property, but it is possible to condition on any part of the tree that has already been produced. That is, we can replace $p(h_{i+1} \mid h_i)$ by $p(h_{i+1} \mid h_i, T_i)$ in (1), where $T_i$ is the current partial tree. Inference becomes complicated unless these variables are deterministic traversal variables (inference is explained in Section 5) and the unique value that has support can be computed efficiently. Examples of such variables include the set of node types that are ancestors of a given node, and the most recent tokens or internal nodes that have been generated. Variable scoping, a more elaborate deterministic relationship, is considered later.
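Two of the deterministic features just mentioned can be sketched as a small bookkeeping structure maintained alongside the traversal. The class and its interface are hypothetical; what matters is that every field is a deterministic function of the partial tree, so no inference over these variables is ever needed.

```python
from collections import deque

class DeterministicContext:
    """Tracks the last K emitted tokens and an ancestor history of
    (node kind, child index) pairs, both deterministic given the partial tree."""

    def __init__(self, k=10):
        self.k = k
        self.recent_tokens = deque(maxlen=k)   # last K tokens generated
        self.ancestors = []                    # stack of (kind, child index)

    def push_node(self, kind, child_index):
        # Entering an internal node: record which child slot we descended into.
        self.ancestors.append((kind, child_index))

    def pop_node(self):
        # Leaving an internal node's subtree.
        self.ancestors.pop()

    def emit_token(self, tok):
        self.recent_tokens.append(tok)

    def features(self):
        # The conditioning context handed to the children distribution.
        return list(self.recent_tokens), self.ancestors[-self.k:]

ctx = DeterministicContext(k=2)
ctx.push_node("Block", 0)
for t in ["for", "(", "i"]:
    ctx.emit_token(t)
print(ctx.features())   # (['(', 'i'], [('Block', 0)])
```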
Annotating Nodes. Other useful features may not be deterministically computable from the current partial tree. Consider knowing that a BinaryExpression will evaluate to an object of type int. This information can be encoded by letting nodes take values in the cross-product space of the node type space and the annotation space. For example, when adding type annotations we might have nodes take the value (BinaryExpression, int) where before they were just BinaryExpression. This can be problematic, because the cardinality of the parent node space increases exponentially as we add annotations. Because the annotations are uncertain, there are more choices of node values at each step of the generative procedure, and this incurs a cost in log probabilities when evaluating a model. Experimentally, we found that simply annotating expression nodes with type information led to worse log probabilities on held-out data: while the cost of generating tokens decreased because the model had access to type information, the increased cost of generating the type annotations along with the node types outweighed the improvement.
Identifier Token Scoping. The source of greatest uncertainty when generating a program is the children of IdentifierToken nodes. IdentifierToken nodes are very common and are parents of all tokens (e.g., variable and method names) that are not built-in language keywords (e.g., IntKeyword or EqualsToken) or constants (e.g., StringLiterals). Knowing which variables have previously been declared and are currently in scope is one of the most powerful signals when predicting which IdentifierToken will be used at any point in a program. Other useful cues include how recently a variable was declared and what type it has. Here we develop a scope model for LTTs.
Scope can be represented as a set of variable feature vectors, one for each variable that is in scope. (Technically, we view the scope as a deterministic traversal variable, but it does not contribute to the context representation.) Each feature vector contains a string identifier corresponding to the variable along with other features as (key, value) tuples, for example (type, int). A variable is “in scope” if there is a feature vector in the scope set whose string identifier is the same as the variable’s identifier.
When sampling an identifier token, there is a two-step procedure. First, decide whether this identifier token will be sampled from the current scope. This is accomplished by annotating each IdentifierToken internal node with a binary variable that takes the states global or local. If local, proceed to use the local scope model defined next. If global, sample from a global identifier token model that gives support to all identifier tokens. Note that we consider the global option a necessary smoothing device, although ideally we would have a scope model complete enough to cover all possible identifier tokens.
The scope set can be updated deterministically as we traverse the AST by recognizing patterns that correspond to when variables should be added to or removed from the scope. We implemented this logic for three cases: parameters of a method, locally declared variables, and class fields defined earlier in the class definition. We do not handle class fields defined after the current point in the code, or variables and methods available in included namespaces. This incompleteness necessitates the global option described above, but these three cases are very common and cover many interesting uses.
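The deterministic scope updates can be sketched as a tree walk in which declarations push feature vectors onto the scope set and a closing block pops them. The dictionary-based node shapes below are hypothetical stand-ins for Roslyn's AST, and only the locally-declared-variable case is shown.

```python
def walk(node, scope, in_scope_at_use):
    """Walk a toy AST, maintaining the scope set and recording, for each
    variable use, whether the variable was in scope at that point."""
    kind = node["kind"]
    if kind == "Block":
        added = 0
        for child in node.get("children", []):
            if child["kind"] == "Declaration":
                # Declaration: push a variable feature vector onto the scope set.
                scope.append({"id": child["id"], "type": child["type"]})
                added += 1
            else:
                walk(child, scope, in_scope_at_use)
        for _ in range(added):          # block closed: its declarations leave scope
            scope.pop()
    elif kind == "Use":
        in_scope_at_use[node["id"]] = any(v["id"] == node["id"] for v in scope)
    else:
        for child in node.get("children", []):
            walk(child, scope, in_scope_at_use)

tree = {"kind": "Block", "children": [
    {"kind": "Declaration", "id": "i", "type": "int"},
    {"kind": "Use", "id": "i"},
    {"kind": "Block", "children": [{"kind": "Use", "id": "j"}]},
]}
uses = {}
walk(tree, [], uses)
print(uses)   # {'i': True, 'j': False}
```

In the full model, an out-of-scope identifier like `j` here is exactly the case the global fallback must cover.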
Given the scope set containing variable feature vectors $v$ and the parent node (IdentifierToken, local), the probability of selecting token child $v$ is proportional to $\exp\{R_{con}(n_i, h_i)^\top R_{ch}(v) + b_{ch}(v)\}$, where we normalize only over the variables currently in scope. Specifically, we let $R_{ch}(v)$ and $b_{ch}(v)$ be defined as sums over the representations of the variable's features $v^k$:

$$R_{ch}(v) = \sum_{k} W_s^k W_r[v^k], \qquad b_{ch}(v) = \sum_{k} b_r[v^k].$$
For example, if a variable in scope has feature vector (numNodes, (type, int), (recently-declared, 0)), then its corresponding $R_{ch}(v)$ would be a context matrix-modulated sum of the representations $W_r[\text{numNodes}]$, $W_r[(\text{type}, \text{int})]$, and $W_r[(\text{recently-declared}, 0)]$. This representation is then combined with the context representation as in the basic model. The string identifier feature is the same object as token nodes with the same string, so they share their representation.
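The scope model's identifier distribution can be sketched numerically as follows. Parameters are random stand-ins, the per-position context matrices are omitted for brevity (each variable representation is a plain sum of its feature embeddings), and the variable names and features are hypothetical; the essential point is that normalization runs over the scope set only.

```python
import math, random

rng = random.Random(1)
D = 6
table = {}
def embed(feature):
    # Lazily allocated embedding rows; shared across occurrences of a feature,
    # mirroring how identifier strings share representations with token nodes.
    if feature not in table:
        table[feature] = [rng.gauss(0, 1) for _ in range(D)]
    return table[feature]

def p_identifier(context, scope):
    """Score each in-scope variable (a tuple of features) against the context
    vector and normalize over the scope set only."""
    reps = [[sum(embed(f)[d] for f in v) for d in range(D)] for v in scope]
    logits = [sum(c * r for c, r in zip(context, rep)) for rep in reps]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

scope = [
    ("sum", ("type", "int"), ("recently-declared", 0)),   # variable feature vectors
    ("i", ("type", "int"), ("recently-declared", 1)),
]
context = [rng.gauss(0, 1) for _ in range(D)]
p = p_identifier(context, scope)
print(p)
```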
5 Inference and Learning in LTTs
In this section we briefly consider how to compute gradients and probabilities in LTTs.
At this point it is useful to distinguish between two cases. In the first case, all traversal variables are deterministic, and computation will be very straightforward. In the second case, latent traversal variables are allowed, and we will need to use dynamic programming to compute log probabilities and expectation maximization (EM) for learning.
Only Deterministic Traversal Variables. If all traversal variables can be computed deterministically from the current partial tree, we use the compiler to compute the full AST corresponding to the program's token sequence. From the AST we compute the only valid setting of the traversal variables. Because both the AST and the traversal variables can be deterministically computed from the token sequence, all variables in the model can be treated as observed. Since LTTs are directed models, the total log probability is a sum of log probabilities at each production, and learning decomposes into independent problems at each production. Thus, we can simply stack all productions into a single training set and follow standard gradient-based procedures for training log-bilinear models. More details are given in Section 7, but generally we follow the Noise-Contrastive Estimation (NCE) technique employed in MnihTeh2012.
Latent Traversal Variables. In the second case, we allow latent traversal variables that are not deterministically computable from the AST. In this case, the traversal variables couple the learning across different productions from the same tree. For simplicity and to allow efficient exact inference, we restrict these latent traversal variables to just be a single discrete variable at each step (although this restriction could easily be lifted if one was willing to use approximate inference). Because the AST is still a deterministic function of the tokens, computing log probabilities corresponds to running the forward-backward algorithm over the latent states in the depth-first traversal of the AST. We can formulate an EM algorithm adapted to the NCE-based learning of log-bilinear models for learning parameters. The details of this can be found in the Supplementary Material.
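Because the latent traversal states form a chain along the depth-first traversal, the marginal log probability is an HMM-style sum-product. The sketch below implements the forward pass over toy tables (in log space for the accumulator); the paper's actual emissions are the log-bilinear production probabilities, and the table shapes here are illustrative assumptions.

```python
import math

def log_marginal(prior, trans, emit_ll):
    """Forward algorithm over a chain of discrete latent states.

    prior[s]      -- p(l_1 = s)
    trans[s][s2]  -- p(l_{i+1} = s2 | l_i = s)
    emit_ll[i][s] -- log p(production i | state s)
    Returns log p(all productions), marginalizing the latent states.
    """
    S = len(prior)
    alpha = [math.log(prior[s]) + emit_ll[0][s] for s in range(S)]
    for i in range(1, len(emit_ll)):
        alpha = [
            math.log(sum(math.exp(alpha[s]) * trans[s][s2] for s in range(S)))
            + emit_ll[i][s2]
            for s2 in range(S)
        ]
    return math.log(sum(math.exp(a) for a in alpha))

# Sanity check: uniform everything over 2 states and 3 productions with
# emission probability 0.5 each gives marginal 0.5**3 = 0.125.
emit = [[math.log(0.5)] * 2 for _ in range(3)]
lp = log_marginal([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], emit)
print(math.exp(lp))   # 0.125
```

A backward pass of the same shape yields the posteriors Q needed for the E step described in the Supplementary Material.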
6 Related Work
The LTTs described here are closely related to several existing models. Firstly, a Hidden Markov Model (HMM) can be recovered by having all children tuples contain a token and a Next node, or just a token (which terminates the sequence), and having a single discrete latent traversal variable. If the traversal variable has only one state and the children distributions all have finite support, then an LTT becomes equivalent to a PCFG. PCFGs and their variants are components of state-of-the-art parsers of English (mcclosky2006effective), and many variants have been explored: internal node annotation (charniak1997statistical) and latent annotations (matsuzaki2005probabilistic). Aside from the question of the order of the traversals, the traversal variables make LTTs special cases of Probabilistic Pushdown Automata (PPDA) (for a definition and weak equivalence to PCFGs, see abney1999relating). Log-bilinear parameterizations have been applied widely in language modeling, for n-gram models (saul1997aggregate; MnihHinton2007; MnihTeh2012) and PCFG models (charniak2000maxentparse; klein2002fast; titov2007incrbayesnets; henderson2010incrsigbeliefnets). To be clear, our claim is not that general tree-traversal models or log-bilinear parameterizations are novel; rather, we believe the full LTT construction, including the tree traversal structure, the log-bilinear parameterization, and the incorporation of deterministic logic, to be novel and of general interest.
In general, PPDAs and PCFGs are equivalent classes of distributions over terminal tokens, although they are subject to different inductive biases (abney1999relating). However, if the traversal variables of LTTs are latent and marginalized out, the resulting model is not context free with respect to the tree.
This approach can be seen as a low-rank approximation to a full tabular parameterization that assigns each probability its own parameter, an approach that enjoys a large literature (SalMnih08; MnihHinton2007). Approaches that reduce the number of parameters with low-rank approximations in n-gram models deal effectively with analogous fragmentation issues. In fact, log-linear parameterizations have been used in PCFGs (charniak2000maxentparse), and factorized models that ameliorate fragmentation in ways similar to low-rank parameterizations have also been explored (klein2002fast).
The problem of modeling source code is relatively understudied in machine learning. We previously mentioned hindle2012naturalness and allamanis2013mining, which tackle the same task as us but with simple NLP models. Very recently, allamanis2014mining explored more sophisticated nonparametric Bayesian grammar models of source code for the purpose of learning code idioms. liang10programs use a sophisticated nonparametric model to encode the prior that programs should factorize repeated computation, but there is no learning from existing source code, and the prior is only applicable to a functional programming language with quite simple syntax rules. Our approach builds a sophisticated, learned model and supports the full language specification of a widely used imperative programming language.
7 Experimental Analysis
In all experiments, we used a dataset that we collected from TopCoder.com. There are 2261 C# programs which make up 140k lines of code and 2.4M parent nodes in the collective abstract syntax trees. These programs are solutions to programming competitions, and there is some overlap in programmers and in problems across the programs. We created training splits based on the user identity, so the set of users in the test set are disjoint from those in the training or validation sets (but the training and validation sets share users). The overall split proportions are 20% test, 10% validation, and 70% train. The evaluation measure that we use throughout is the log probability under the model of generating the full program. All logs are base 2. To make this number more easily interpretable, we divide by the number of tokens in each program, and report the average log probability per token.
All experiments use a validation set to choose hyperparameter values. These include the strength of a smoothing parameter and the epoch at which to stop training (if applicable). We did a coarse grid search in each of these parameters, and the numbers we report (for train, validation, and test) are all for the settings that optimized validation performance. For the gradient-based optimization, we used AdaGrad (duchi2011adaptive) with stochastic minibatches. Unless otherwise specified, the dimension of the latent representation vectors was set to 50. Occasionally the test set contains tokens or children tuples unobserved in the training set. In order to avoid assigning zero probability to the test set, we locally smoothed every children distribution with a default model that gives support to all children tuples. The numbers we report are a lower bound on the log probability of the data under a mixture of our models with this default model. Details of this smoothed model, along with additional experimental details, appear in the Supplementary Materials. There is an additional question of how to represent novel identifiers in the scope model. We set the representations of all features in the variable feature vectors that were unobserved in the training set to the all-zeros vector.
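The local smoothing just described amounts to a per-production mixture. A minimal sketch, assuming a hypothetical mixture weight `lam` (in practice a validation-tuned parameter) and a broad default distribution:

```python
import math

def smoothed_log2_prob(p_model, p_default, lam=0.99):
    """Log (base 2) probability of one children tuple under the mixture of the
    learned model and a default model with full support. The mixture guarantees
    unseen tuples (p_model == 0) still receive finite log probability."""
    return math.log2(lam * p_model + (1 - lam) * p_default)

# A tuple unseen in training still gets a finite, if expensive, cost:
print(smoothed_log2_prob(0.0, 1e-4))   # log2(0.01 * 1e-4) ≈ -19.93
```

Summing such terms over a program and dividing by its token count gives the average log probability per token reported throughout the experiments.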
Baselines and Basic Log-bilinear Models. The natural choices for baseline models are n-gram models and PCFG-like models. In the n-gram models we use additive smoothing, with the strength of the smoothing hyperparameter chosen to optimize validation set performance. Similarly, there is a smoothing parameter in the PCFG-like models chosen to optimize validation set performance. We explored the effect of the log-bilinear parameterization in two ways. First, we trained a PCFG model identical to the standard PCFG but with a log-bilinear parameterization. This is the most basic LTT model, with no traversal variables (LTT-∅). The result was nearly identical to the standard PCFG. Next, we trained a 10-gram model with a standard log-bilinear parameterization, which is equivalent to the models discussed in MnihTeh2012. This approach dominates the basic n-gram models, allowing longer contexts and generalizing better. Results appear in fig:baselines.
Deterministic Traversal Variables. Next, we augmented the LTT-∅ model with deterministic traversal variables that include hierarchical and sequential information. The hierarchical information is the depth of a node, the kind of a node's parent, and a sequence of 10 ancestor history variables, which store, for each of the last 10 ancestors, the kind of the node and the index of the child that must be recursed upon to reach the current point in the tree. The sequential information is the last 10 tokens that were generated.
In fig:hiandseq we report results for three variants: hierarchy only (LTT-Hi), sequence only (LTT-Seq), and both (LTT-HiSeq). The hierarchy features alone perform better than the sequence features alone, but their contributions are complementary enough that the combination provides a substantial gain over either individually.
Latent Traversal Variables. Next, we considered latent traversal variable LTT models trained with EM. In all cases, we used 32 discrete latent states. Here, results were more mixed. While the latent-augmented LTT (LTT-latent) outperforms the LTT-∅ model, the gains are smaller than those achieved by adding the deterministic features. As a baseline, we also trained a log-bilinear-parameterized standard HMM, and found its performance to be far worse than the other models. We also tried a variant where we added latent traversal variables to the LTT-HiSeq model from the previous section, but the training was too slow to be practical due to the cost of computing normalizing constants in the E step. See fig:latent.
Scope Model. The final set of models that we trained incorporates the scope model from Section 4 (LTT-HiSeq-Scope). The features of variables that we use are the identifier string, the type, where the variable appears in a list sorted by when it was declared (also known as a de Bruijn index), and where it appears in a list sorted by when it was last assigned a value. Here, the additional structure provides a large improvement over the previous best model (LTT-HiSeq). See fig:scopes.
Analysis. To better understand where the improvements in the different models come from, and where there is still room for improvement, we break down the log probabilities from the previous experiments by the value of the parent node. The results appear in fig:tree_token_breakdowns. The first column is the total log probability reported previously. In the next columns, the contribution is split into the costs incurred by generating tokens and trees, respectively. We see, for example, that the full scope model pays a slightly higher cost than the LTT-HiSeq model to generate the tree structure, because it must properly choose whether IdentifierTokens are drawn from the local or global scope, but it makes up for this by paying a much smaller cost when generating the actual tokens.
In the Supplementary Materials, we go further into the breakdowns for the best performing model, reporting the percentage of total cost that comes from the top parent kinds. IdentifierTokens from the global scope are the largest cost (30.1%), with IdentifierTokens covered by our local scope model (10.9%) and Blocks (10.6%) next. This suggests that there would be value in extending our scope model to include more IdentifierTokens and an improved model of Block sequences.
Samples. Finally, we qualitatively evaluate the different methods by drawing samples from the models. Samples of for loops appear in fig:intro_fig. To generate these samples, we ask (b) the PCFG and (c) the LTT-HiSeq-Scope model to generate a ForStatement. For (a) the LBL n-gram model, we simply insert a for token as the most recent token. We also initialize the traversal variables to reasonable values: e.g., for the LTT-HiSeq-Scope model, we initialize the local scope to include string words. We also provide samples of full source code files (CompilationUnit) from the LTT-HiSeq-Scope model in the Supplementary Material, along with additional for loops. Notice the structure that the model is able to capture, particularly related to high-level organization and variable use and re-use. It also learns quite subtle things, such as that int variables often appear inside square brackets.
8 Discussion
Natural source code is a highly structured source of data that has been largely unexplored by the machine learning community. We have built probabilistic models that capture some of the structure that appears in NSC. A key to our approach is to leverage the great deal of work that has gone into building compilers. The result is models that not only yield large improvements in quantitative measures over baselines, but also qualitatively produce far more realistic samples.
There are many remaining modeling challenges. One question is how to encode the notion that the point of source code is to do something. Relatedly, how do we represent and discover high-level structure related to accomplishing such tasks? There are also a number of specific sub-problems ripe for further study. Our model of Block statements is naive, and we see that it is a significant contributor to log probabilities. It would be interesting to apply more sophisticated sequence models to the children tuples of Blocks. Applying the compositional representation used in our scope model to other children tuples would also be interesting, as would extending our scope model to handle method calls. Another high-level piece of structure that we only briefly experimented with is type information. We believe there is great potential in properly handling typing rules, but we found that the simple approach of annotating nodes actually hurt our models.
More generally, this work’s focus was on generative modeling. An observation that has become popular in machine learning lately is that learning good generative models can be valuable when the goal is to extract features from the data. It would be interesting to explore how this might be applied in the case of NSC.
In sum, we argue that probabilistic modeling of source code provides a rich source of problems with the potential to drive forward new machine learning research, and we hope that this work helps illustrate how that research might proceed forward.
We are grateful to John Winn, Andy Gordon, Tom Minka, and Thore Graepel for helpful discussions and suggestions. We thank Miltos Allamanis and Charles Sutton for pointers to related work.
Supplementary Materials for “Structured Generative Models of Natural Source Code”
EM Learning for Latent Traversal Variable LTTs
Here we describe EM learning of LTTs with latent traversal variables. Consider the probability of the observed source $X$ with deterministic traversal variables $h^{d}$ and latent traversal variables $h^{l}$ (where $h$ represents the union of $h^{d}$ and $h^{l}$):
$$p(X) = \sum_{h^{d}} \sum_{h^{l}} p(X, h^{d}, h^{l}).$$
Firstly, the terms involving $h^{d}$ drop out because, as above, we can use the compiler to compute the AST from $X$ and then use the AST to deterministically fill in the only legal values for the $h^{d}$ variables, which makes these terms always equal to 1. It then becomes clear that the remaining sum over $h^{l}$ can be computed using the forward-backward algorithm. For learning, we follow the standard EM formulation and lower bound the data log probability with a free energy of the following form (which for brevity drops the prior and entropy terms):
$$\mathcal{F}(Q, \theta) = \sum_{h^{l}} Q(h^{l}) \log p(X, h^{d}, h^{l} \mid \theta).$$
In the E step, the $Q$'s are updated optimally given the current parameters using the forward-backward algorithm. In the M step, given the $Q$'s, the learning decomposes across productions. We represent the transition probabilities using a simple tabular representation and use stochastic gradient updates. For the emission terms, it is again straightforward to use standard log-bilinear model training. The only difference from the previous case is that there are now multiple training examples for each traversal step, one for each possible value of $h^{l}_{t}$, which are weighted by their corresponding $Q(h^{l}_{t})$. A simple way of handling this so that log-bilinear training methods can be used unmodified is to sample values of $h^{l}_{t}$ from the corresponding $Q$ distributions, then to add unweighted examples to the training set with $h^{l}_{t}$ given its sampled value. This can then be seen as a stochastic incremental M step.
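The E-step posteriors over a chain of latent traversal variables can be computed with a standard forward-backward pass. The sketch below assumes a simplified tabular interface (initial distribution, transition matrix, and per-step emission likelihoods as plain lists); it is a generic illustration of the algorithm, not the authors' C++ implementation.

```python
def forward_backward(init, trans, emit):
    """Posterior marginals Q(h_t) for a chain of K latent states over T steps.

    init:  length-K initial distribution.
    trans: K x K matrix, trans[i][j] = p(h_{t+1}=j | h_t=i).
    emit:  T x K matrix, emit[t][k] = p(observation_t | h_t=k).
    Each alpha/beta vector is renormalized for numerical stability; this does
    not change the normalized posteriors.
    """
    T, K = len(emit), len(init)

    def normalize(v):
        s = sum(v)
        return [x / s for x in v]

    # Forward pass: alpha[t][k] ∝ p(obs_{1..t}, h_t=k)
    alpha = [normalize([init[k] * emit[0][k] for k in range(K)])]
    for t in range(1, T):
        alpha.append(normalize([
            emit[t][k] * sum(alpha[t - 1][j] * trans[j][k] for j in range(K))
            for k in range(K)]))

    # Backward pass: beta[t][k] ∝ p(obs_{t+1..T} | h_t=k)
    beta = [[1.0] * K for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = normalize([
            sum(trans[k][j] * emit[t + 1][j] * beta[t + 1][j] for j in range(K))
            for k in range(K)])

    # Pointwise product, renormalized per step, gives the marginals Q(h_t).
    return [normalize([alpha[t][k] * beta[t][k] for k in range(K)])
            for t in range(T)]
```

The resulting per-step marginals are exactly the $Q(h^{l}_{t})$ weights used in the M step, or the distributions from which latent values are sampled in the stochastic variant.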
More Experimental Protocol Details
For all hyperparameters that were not validated over (such as minibatch size, scale of the random initializations, and learning rate), we chose a subsample of the training set and manually chose the setting that did best at optimizing the training log probabilities. For EM learning, we divided the data into databatches, each containing 10 full programs, ran forward-backward on a databatch, then created a set of minibatches on which to do an incremental M step using AdaGrad. All hyperparameters were then held fixed throughout the experiments, with the exception that we re-optimized them for the models that required EM learning, and we scaled the learning rate when the latent dimension changed. Our code used properly vectorized Python for the gradient updates and a C++ implementation of the forward-backward algorithm, but was otherwise not particularly optimized. Run times (on a single core) ranged from a few hours to a couple of days.
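For reference, the per-parameter adaptive update that AdaGrad applies during such incremental M steps has the generic form sketched below. This is a minimal illustration, not the authors' code; the learning rate and epsilon values are placeholders.

```python
import math

class AdaGrad:
    """Minimal AdaGrad updater: each parameter's step size shrinks with the
    running sum of its squared gradients, so frequently-updated parameters
    take smaller steps. A toy sketch with flat list parameters."""

    def __init__(self, dim, lr=0.1, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.g2 = [0.0] * dim  # accumulated squared gradients, per parameter

    def step(self, params, grads):
        for i, g in enumerate(grads):
            self.g2[i] += g * g
            params[i] -= self.lr * g / (math.sqrt(self.g2[i]) + self.eps)
        return params
```

The per-parameter scaling is convenient here because gradients for rare productions arrive infrequently, and AdaGrad keeps their effective step sizes relatively large.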
In order to avoid assigning zero probability to the test set, we assumed knowledge of the set of all possible tokens, as well as all possible internal node types – information available in the Roslyn API. Nonetheless, because we specify distributions over tuples of children, there are tuples in the test set with no support. Therefore we smooth every children-tuple distribution by mixing it with a default distribution over children that gives broad support.
For distributions whose children are all 1-tuples of tokens, the default model is an additively smoothed model of the empirical distribution of tokens in the train set. For other distributions we model the number of children in the tuple as a Poisson distribution, then model the identity of the children independently (smoothed additively).
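A minimal sketch of such a default model for the multi-child case, assuming a hypothetical `DefaultChildModel` class and additive smoothing constant `alpha` (both invented here for illustration): tuple length is modeled as a Poisson with the training-set mean length as its rate, and child identities are drawn independently from an additively smoothed empirical token distribution with one extra bucket reserved for unseen tokens.

```python
import math
from collections import Counter

class DefaultChildModel:
    """Fallback distribution over children tuples: Poisson length model plus
    independent, additively smoothed token identities. Illustrative sketch."""

    def __init__(self, train_tuples, alpha=1.0):
        lengths = [len(t) for t in train_tuples]
        self.lam = sum(lengths) / len(lengths)  # Poisson rate (MLE: mean length)
        self.counts = Counter(tok for t in train_tuples for tok in t)
        self.total = sum(self.counts.values())
        self.vocab = len(self.counts)
        self.alpha = alpha

    def log_prob(self, children):
        n = len(children)
        # Poisson log pmf for the tuple length.
        lp = n * math.log(self.lam) - self.lam - math.lgamma(n + 1)
        # Independent smoothed token identities; +1 bucket for unseen tokens.
        den = self.total + self.alpha * (self.vocab + 1)
        for tok in children:
            lp += math.log((self.counts.get(tok, 0) + self.alpha) / den)
        return lp
```

Because every token (including unseen ones, via the smoothing bucket) and every length receive positive probability, mixing any children-tuple distribution with this default guarantees broad support on the test set.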
This smoothing introduces trees other than the Roslyn AST with positive support. This opens up the possibility that there are multiple trees consistent with a given token sequence, and we can no longer compute the probability of the token sequence exactly in the manner discussed in sec:inference_and_learning. Still, we report the log probability of the Roslyn AST, which is now a lower bound on the log probability of the token sequence.
| Parent Kind | % Log prob | Count |