Neural Programming by Example

03/15/2017 · by Chengxun Shu, et al. · University of Newcastle; NetEase, Inc.

Programming by Example (PBE) aims to automatically infer a computer program that accomplishes a certain task from sample inputs and outputs. In this paper, we propose a deep neural network (DNN) based PBE model called Neural Programming by Example (NPBE), which learns from input-output strings and induces programs that solve string manipulation problems. Our NPBE model has four neural-network-based components: a string encoder, an input-output analyzer, a program generator, and a symbol selector. We demonstrate the effectiveness of NPBE by training it end-to-end to solve some common string manipulation problems in spreadsheet systems. The results show that our model can induce string manipulation programs effectively. Our work is one step towards teaching DNN to generate computer programs.


Introduction

Programming by Example (PBE, also called programming by demonstration, or inductive synthesis) [Lieberman2001, Cypher and Halbert1993, Gulwani2011] gives machines the ability to reason and generate new programs without a substantial amount of human supervision. In PBE systems, users (often non-professional programmers) provide a machine with input-output examples of a task they would like to perform, and the machine automatically infers a program to accomplish the task. The concept of PBE has been successfully used for string manipulation in spreadsheet systems such as Microsoft Excel [Gulwani et al.2015], computer-aided education [Gulwani2014], and data extraction systems [Le and Gulwani2014].

As an example, if a user provides the following input and output examples:

john@example.com john

james@company.com james

A PBE system should understand that the user would like to extract the user name from the email address. It will automatically synthesize a program Select(Split(x, ‘@’), 0), where x is the input string, Split is to split a string according to a delimiter, and Select is to select a substring from an array of strings. Given a new email address jacob@test.com, the program will output the string jacob.
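To make the semantics of such a program concrete, here is a minimal Python sketch (our illustration, not a PBE system's implementation) of what Select(Split(x, ‘@’), 0) computes:

```python
# Minimal sketch: the synthesized program expressed with plain Python string operations.

def split(s, delim):
    return s.split(delim)

def select(strs, index):
    return strs[index]

def extract_user_name(x):
    return select(split(x, '@'), 0)   # Select(Split(x, '@'), 0)

print(extract_user_name('jacob@test.com'))  # -> 'jacob'
```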

Lau et al. (2003) applied version space algebra to search for possible programs. More recent PBE methods [Gulwani2011] mainly adopt search techniques to find a composition of predefined functions (such as string Split and Concatenate) that satisfies the input-output examples. These methods create a Directed Acyclic Graph (DAG) and search through the sequences of functions that can generate the output string from a given input state. They can generate complex string manipulation programs effectively, but require the design of complex program synthesis algorithms.

In this paper, we propose a Deep Neural Networks (DNN) based approach to Programming by Example. We train neural networks to automatically infer programs from input-output examples. During the past few years, research on DNN has achieved significant results in a variety of fields such as computer vision [Krizhevsky, Sutskever, and Hinton2012], speech recognition [Mohamed, Dahl, and Hinton2012], natural language processing [Collobert et al.2011], and API learning [Gu2016]. Recently, researchers have explored the feasibility of applying DNN to solve programming and computation related problems [Neelakantan, Le, and Sutskever2015, Reed and de Freitas2015, Graves, Wayne, and Danihelka2014, Kurach, Andrychowicz, and Sutskever2015]. Our work is based on a similar idea of applying DNN to infer and execute computer programs. Different from the existing work, we target the PBE problem and train a neural PBE model with triples of input, output and program.

Our approach, called NPBE (Neural Programming by Example), teaches DNN to compose a set of predefined atomic operations for string manipulation. Given an input-output string pair, the NPBE model is trained to synthesize a program, that is, a sequence of functions and corresponding arguments that transforms the input string into the output string. The program is generated from the atomic functions, and one function may use the execution results of previous functions. Thus the model is able to compose complex programs using only a few predefined operations.

We have experimentally evaluated NPBE on a large number of input-output strings for 45 string manipulation tasks and the results are encouraging. We find that our model can generalize beyond the training input/output strings and the training argument settings. Our work is one of the early attempts to apply DNN to the problem of Programming by Example.

Related Work

Reed et al. (2015) developed a framework called Neural Programmer-Interpreter to induce and execute programs using neural networks. Their method treats programs as embeddings and uses neural networks to generate functions and arguments for program execution. Their model is trained with the supervision of execution traces and can be used in many different scenarios. However, their model cannot be directly applied to the problem of PBE, as the input to their model is the environment encoded with the task, while our work is dedicated to PBE and the inputs to our model are input-output examples. Neural Programmer [Neelakantan, Le, and Sutskever2015] is a neural network augmented with a set of operations that can be called over several steps. It is trained to output the result of program execution, while our model is trained to output the program represented by symbols. Neural Enquirer [Yin et al.2015] is a fully neural, end-to-end differentiable model capable of modeling and executing table query related programs. Neural Enquirer executes queries “softly” on data tables using neural networks, while our work does not apply soft execution to input-output strings. Bošnjak et al. (2016) proposed a neural implementation of an abstract machine for the language Forth. It can learn program behaviour from input-output data, but requires program sketches as input.

Our model is also related to work that uses recurrent neural networks to solve programming and computation related problems. Graves et al. (2014) developed the Neural Turing Machine, which is capable of learning and executing simple programs using an external memory. Zaremba et al. (2015) used execution traces to train recurrent neural networks to learn simple algorithms. Ling et al. (2016) developed a model to generate program code from natural language and structured specifications. Pointer Networks [Vinyals, Fortunato, and Jaitly2015] use an attentional recurrent model to solve difficult algorithmic problems.

Some machine learning methods have also been proposed to tackle PBE problems. Lau et al. (2003) applied version space algebra to efficiently search for possible programs. Given input-output pairs, genetic programming [Banzhaf et al.1998] can evolve useful programs from candidate populations. Liang et al. (2010) proposed a hierarchical Bayesian approach to learn simple programs given only a few examples. Menon et al. (2013) used machine learning to speed up the search for possible programs by learning weights related to textual features. Their method needs carefully designed features to reduce the search space, while our method reduces the search space and avoids feature engineering by learning representations using DNN.

The NPBE Model

Problem Statement: Let S denote the set of strings. For an input string x ∈ S and an output string y ∈ S, our model is to induce a function f, so that f(x) = y. The input to our model is the input-output pair (x, y), and we wish to obtain the correct function f, so that when the user inputs another string x′, we can generate the desired output f(x′) without the need for the user to specify f explicitly.

Figure 1: Overall architecture of NPBE.

The proposed NPBE model (Figure 1) consists of four modules:

  • A string encoder to encode input and output strings;

  • An input-output analyzer which generates the transformation embedding describing the relationship between input and output strings;

  • A program generator which produces the function/arguments embeddings over a few steps;

  • A symbol selector to decode the function/arguments embeddings and generate human readable symbols.

The program generator runs for a few steps, and at each step it may access the outputs of previous steps; this enables the model to compose powerful programs using only a few predefined atomic functions. While traditional deep learning models try to generate the output y given the input x, and eventually fit the function f such that y = f(x), our model learns to generate the function f directly, fitting the higher-order function g for which f = g(x, y) and f satisfies f(x) = y.

String Encoder

Given a string composed of a sequence of characters c_1, c_2, ..., c_n, the string encoder outputs a string embedding and a list of character embeddings. First, each character c_i in the sequence is mapped to an 8-dimensional raw embedding e_i via a randomly initialized and trainable embedding matrix. To better represent and attend to a character, the context of each character is fused with the character's raw embedding to build the character embedding. Let l_i denote the left context of the character c_i and r_i the right context of c_i. l_i and r_i are calculated using Equation (1) and Equation (2), respectively, with the boundary contexts initialized to zero vectors. In our implementation, f_L and f_R are the update functions of an LSTM [Hochreiter1997Long].

(1) l_i = f_L(l_{i-1}, e_i)
(2) r_i = f_R(r_{i+1}, e_i)

The character embedding full_i for each character c_i combines the left/right contexts of c_i and the character itself, as shown in Equation (3), where ⊕ denotes the concatenation of vectors and max denotes element-wise max-pooling; W_full and b_full are the parameter matrix and vector for building the full character embedding.

(3)

The character embeddings full_1, ..., full_n will be used by the attention mechanism in the program generator.

We also need a representation which can summarize the whole string, so that we can induce the transformation from the string embeddings of the input and output strings. We use a multilayer bidirectional LSTM [Graves2005Framewise] to summarize the character embeddings. The outputs of the forward and backward LSTMs at each layer are concatenated and become the input of the next layer's forward and backward LSTMs. The topmost layer's last hidden states of the forward and backward LSTMs are merged to generate the string embedding through a final fully connected layer. The processing for input and output strings is separate but shares the same neural network architecture and parameters, thus producing e_I and e_O for the input and output strings, respectively.
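As an illustration, the following sketch shows one possible implementation of the string encoder in PyTorch (the paper does not specify a framework; layer sizes other than the 8-dimensional raw character embeddings, the use of tanh, and the way contexts are fused are our assumptions, and the max-pooling step mentioned above is omitted for brevity):

```python
import torch
import torch.nn as nn

class StringEncoder(nn.Module):
    def __init__(self, vocab_size, char_dim=8, ctx_dim=32, str_dim=128, layers=2):
        super().__init__()
        self.raw = nn.Embedding(vocab_size, char_dim)               # trainable raw character embeddings
        self.left = nn.LSTM(char_dim, ctx_dim, batch_first=True)    # left context l_i
        self.right = nn.LSTM(char_dim, ctx_dim, batch_first=True)   # right context r_i
        self.full = nn.Linear(char_dim + 2 * ctx_dim, ctx_dim)      # fuse contexts and raw embedding
        self.summarizer = nn.LSTM(ctx_dim, str_dim, num_layers=layers,
                                  batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * str_dim, str_dim)                  # final fully connected layer

    def forward(self, chars):                        # chars: (batch, n) character ids
        e = self.raw(chars)                          # (batch, n, char_dim)
        l, _ = self.left(e)                          # left-to-right contexts
        r, _ = self.right(torch.flip(e, dims=[1]))   # run right-to-left ...
        r = torch.flip(r, dims=[1])                  # ... and re-align with the sequence
        full = torch.tanh(self.full(torch.cat([l, e, r], dim=-1)))  # character embeddings
        _, (h_n, _) = self.summarizer(full)          # multilayer bidirectional LSTM
        string_emb = torch.tanh(self.out(torch.cat([h_n[-2], h_n[-1]], dim=-1)))
        return string_emb, full                      # string embedding and character embeddings
```

The same module, with shared parameters, is applied to both the input and the output string.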

Input/output Analyzer

The input/output analyzer converts the input and output string embeddings into the transformation embedding, which describes the relationship between the input and output strings. Let trans denote the transformation embedding, and let e_I and e_O denote the input and output string embeddings, respectively. The input/output analyzer can be represented as Equation (4). In our implementation, f_analyzer is just a 2-layer fully connected neural network with a nonlinear activation function.

(4) trans = f_analyzer(e_I, e_O)
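A minimal sketch of the analyzer, assuming a PyTorch implementation; the hidden size and the tanh activation are our assumptions:

```python
import torch
import torch.nn as nn

class InputOutputAnalyzer(nn.Module):
    def __init__(self, str_dim=128, trans_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * str_dim, trans_dim), nn.Tanh(),   # layer 1
            nn.Linear(trans_dim, trans_dim), nn.Tanh(),     # layer 2
        )

    def forward(self, e_in, e_out):
        # trans = f_analyzer(e_I, e_O), cf. Equation (4)
        return self.net(torch.cat([e_in, e_out], dim=-1))
```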
Figure 2: Diagram of the program generator.

Program Generator

Function Name Arguments Return Type Description
Split (String str, Char delim) String[] Split str using delim as a delimiter, return an array of strings
Join (String[] strs, Char delim) String Join the array of strings strs using delim as a delimiter, return a string
Select (String[] strs, Int index) String Select the element at position index of a string array, return a string
ToLower (String str) String Turn a string into lower case
ToUpper (String str) String Turn a string into upper case
Concatenate (String str or Char c, …) String Concatenate a sequence of strings or characters into a string
GetConstString (null) String Special function indicating a constant string
NoFunc (null) null Special symbol indicating that no more functions are needed
Table 1: Atomic functions.
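For illustration, the atomic functions of Table 1 map directly onto plain Python string operations; the sketch below is ours, and GetConstString and NoFunc are special symbols handled by the surrounding interpreter rather than ordinary functions:

```python
# Plain-Python sketch of the atomic functions in Table 1.

def Split(s, delim):          # String, Char -> String[]
    return s.split(delim)

def Join(strs, delim):        # String[], Char -> String
    return delim.join(strs)

def Select(strs, index):      # String[], Int -> String
    return strs[index]

def ToLower(s):               # String -> String
    return s.lower()

def ToUpper(s):               # String -> String
    return s.upper()

def Concatenate(*parts):      # sequence of strings/chars -> String
    return ''.join(parts)
```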

The program generator (Figure 2) is the core of NPBE. It generates the embeddings of several functions and their corresponding arguments step by step. The function embedding at time t, f_t, is calculated as in Equation (5), which is a fully connected neural network taking as input the transformation embedding trans and the execution history of the program generator, h_{t-1}. Similarly, the raw arguments embedding at time t, a_t, is calculated as in Equation (6). However, the function's arguments are often very complex and hard to predict accurately, so an attention mechanism (similar to the one used in neural machine translation models [bahdanau2014neural]) is applied to refine the raw arguments embedding with attention on the input and output strings. This is summarized in Equation (7). Finally, the function embedding f_t, the refined arguments embedding a'_t, and the previous history embedding h_{t-1} are merged into the new history embedding h_t, as shown in Equation (8), where W_h and b_h are the parameter matrix and vector for generating the new history, respectively.

(5) f_t = φ_f(trans, h_{t-1})
(6) a_t = φ_a(trans, h_{t-1})
(7) a'_t = f_att(a_t, {full_i^I}, {full_j^O})
(8) h_t = tanh(W_h (f_t ⊕ a'_t ⊕ h_{t-1}) + b_h)

Functions φ_f and φ_a can be multilayer fully connected neural networks; in our experiments we just use a one-layer neural network. The function f_att in Equation (7) is implemented by attending to the input and output strings as follows:

(9)
(10)
(11)
(12)
(13)
(14)

The parameters in Equations (9)-(14) are used for attending to the input and output strings, respectively. Note that the attention architectures for the input and output strings are the same but with different parameters. The final arguments embedding a'_t is generated by combining the attention over the input, the attention over the output, and the raw arguments embedding:

(15)

where an additional parameter matrix and bias vector are learned for the combination.
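The following sketch illustrates one program-generator step as we read Equations (5)-(8); it is written in PyTorch, and the attention refinement uses standard dot-product attention as a stand-in for Equations (9)-(15), whose exact form is not reproduced here:

```python
import torch
import torch.nn as nn

class ProgramGeneratorStep(nn.Module):
    def __init__(self, trans_dim=256, hist_dim=256, func_dim=16, arg_dim=64, char_dim=32):
        super().__init__()
        self.func_net = nn.Linear(trans_dim + hist_dim, func_dim)          # Equation (5)
        self.arg_net = nn.Linear(trans_dim + hist_dim, arg_dim)            # Equation (6)
        self.query_in = nn.Linear(arg_dim, char_dim)                       # query for input attention
        self.query_out = nn.Linear(arg_dim, char_dim)                      # query for output attention
        self.combine = nn.Linear(arg_dim + 2 * char_dim, arg_dim)          # combination, cf. Equation (15)
        self.history = nn.Linear(func_dim + arg_dim + hist_dim, hist_dim)  # Equation (8)

    def attend(self, query, chars):
        # chars: (batch, n, char_dim); dot-product attention over character embeddings
        scores = torch.bmm(chars, query.unsqueeze(-1)).squeeze(-1)         # (batch, n)
        weights = torch.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), chars).squeeze(1)           # (batch, char_dim)

    def forward(self, trans, h_prev, in_chars, out_chars):
        f_t = torch.tanh(self.func_net(torch.cat([trans, h_prev], dim=-1)))
        a_raw = torch.tanh(self.arg_net(torch.cat([trans, h_prev], dim=-1)))
        att_in = self.attend(self.query_in(a_raw), in_chars)
        att_out = self.attend(self.query_out(a_raw), out_chars)
        a_t = torch.tanh(self.combine(torch.cat([a_raw, att_in, att_out], dim=-1)))
        h_t = torch.tanh(self.history(torch.cat([f_t, a_t, h_prev], dim=-1)))
        return f_t, a_t, h_t
```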

Symbol Selector

The symbol selector uses the function embedding f_t and the refined arguments embedding a'_t generated by the program generator to select a proper function and the corresponding arguments of that function. The probability distribution p_t^f over the atomic functions is produced by Equation (16), where U_f is the matrix storing the representations of the atomic functions. The arguments embedding a'_t (representing a summary of the arguments) is decoded by an RNN [Cho2014Learning], which is conditioned on its previous output and on a'_t, as shown in Equation (17); s_{t,k} are the hidden states of the LSTM. In this way, the sequence of arguments is generated by the RNN. The probability distribution p_t^{a_k} of the k-th argument at time t over the possible arguments is produced using Equation (18), where U_a is the matrix storing the representations of the possible arguments.

(16) p_t^f = softmax(U_f f_t)
(17) s_{t,k} = LSTM(s_{t,k-1}, a'_t)
(18) p_t^{a_k} = softmax(U_a s_{t,k})
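A sketch of the symbol selector, assuming a PyTorch implementation; the decoder wiring and the vocabulary sizes are our assumptions:

```python
import torch
import torch.nn as nn

class SymbolSelector(nn.Module):
    def __init__(self, n_funcs=8, n_args=30, func_dim=16, arg_dim=64, max_args=5):
        super().__init__()
        self.func_matrix = nn.Parameter(torch.randn(n_funcs, func_dim))  # representations of atomic functions
        self.arg_matrix = nn.Parameter(torch.randn(n_args, arg_dim))     # representations of possible arguments
        self.decoder = nn.LSTMCell(arg_dim, arg_dim)
        self.max_args = max_args

    def forward(self, f_t, a_t):
        func_logits = f_t @ self.func_matrix.t()             # Equation (16)
        func_probs = torch.softmax(func_logits, dim=-1)

        arg_probs = []
        h, c = torch.zeros_like(a_t), torch.zeros_like(a_t)
        for _ in range(self.max_args):                       # Equation (17): RNN over argument slots
            h, c = self.decoder(a_t, (h, c))
            arg_probs.append(torch.softmax(h @ self.arg_matrix.t(), dim=-1))  # Equation (18)
        return func_probs, arg_probs
```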

Training

We train the NPBE model end-to-end using input-output string pairs as well as the programs represented as a sequence of functions and corresponding arguments. Each program P_i can transform x_i to y_i, where i indexes the i-th training example. For every input-output pair we can generate a sequence of functions, and every function has an argument list in which each argument is drawn from the set of possible arguments; the length of the list is the maximum number of arguments one function can take. In our implementation, the maximum number of arguments is 5.

The training is conducted by directly maximizing the log-likelihood of the correct program P_i given (x_i, y_i):

(19) θ* = argmax_θ Σ_i log P(P_i | x_i, y_i; θ)

where θ denotes the parameters of our model. Random Gaussian noise [Neelakantan et al.2015] is injected into the transformation embedding and the arguments embedding to improve the generalization ability and stability of NPBE.
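A sketch of the corresponding training loss, written as a summed cross-entropy over the 5 function slots and 5×5 argument slots (equivalent to maximizing the log-likelihood of the correct program), together with schematic Gaussian noise injection; the noise scale is our assumption:

```python
import torch
import torch.nn.functional as F

def program_loss(func_probs, arg_probs, func_targets, arg_targets, eps=1e-8):
    """func_probs: list of 5 tensors (batch, n_funcs); func_targets: (batch, 5).
    arg_probs: list of 5 lists of 5 tensors (batch, n_args); arg_targets: (batch, 5, 5)."""
    loss = 0.0
    for t in range(5):
        loss = loss + F.nll_loss(torch.log(func_probs[t] + eps), func_targets[:, t])
        for k in range(5):
            loss = loss + F.nll_loss(torch.log(arg_probs[t][k] + eps), arg_targets[:, t, k])
    return loss

def inject_noise(embedding, std=0.01):
    # random Gaussian noise added to the transformation / arguments embeddings during training
    return embedding + torch.randn_like(embedding) * std
```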

Constant Type Constant
Delimiter “(space)”, “(newline)”, “,”, “.”, “\”, “@”, “:”, “;”, “_”, “=”, “-”, “/”
Integer 0, 1, 2, 3, -1, -2, -3
Special Symbol x, o1, o2, o3, o4
Table 2: Constant symbols of our model.
Input-output pair 1. Input string: john@company.com; Output string: Hello john, have fun!
  t=1: GetConstString
  t=2: GetConstString
  t=3: Split x “@”
  t=4: Select o3 0
  t=5: Concatenate o1 o4 o2
Input-output pair 2. Input string: 17/apr/2016; Output string: APR-17
  t=1: Split x “/”
  t=2: Select o1 0
  t=3: Select o1 1
  t=4: ToUpper o3
  t=5: Concatenate o4 “-” o2
Input-output pair 3. Input string: /home/foo/file.cpp; Output string: file
  t=1: Split x “/”
  t=2: Select o1 -1
  t=3: Split o2 “.”
  t=4: Select o3 0
Table 3: Examples of input-output pairs for NPBE, with the function and arguments produced at each time step t. The corresponding programs (from top to bottom) are: 1) Concatenate(“Hello ”, Select(Split(x, “@”), 0), “, have fun!”); 2) Concatenate(ToUpper(Select(Split(x, “/”), 1)), “-”, Select(Split(x, “/”), 0)); and 3) Select(Split(Select(Split(x, “/”), -1), “.”), 0). Note that GetConstString is a placeholder; in the first program, the first GetConstString evaluates to “Hello ” and the second to “, have fun!”.
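To illustrate the step/oK convention used in Table 3, the following sketch (ours) interprets a program as a list of (function, arguments) steps, where each step may refer to the input x or to the result oK of an earlier step; it reuses the atomic-function sketch given after Table 1 and runs the third example:

```python
# Illustrative interpreter for the step/oK convention in Table 3.

def run_program(steps, x):
    env = {'x': x}
    for t, (func, args) in enumerate(steps, start=1):
        resolved = [env.get(a, a) if isinstance(a, str) else a for a in args]
        env['o%d' % t] = func(*resolved)
    return env['o%d' % len(steps)]

# Table 3, third example: Select(Split(Select(Split(x, "/"), -1), "."), 0)
steps = [
    (Split, ['x', '/']),      # o1 = Split(x, "/")
    (Select, ['o1', -1]),     # o2 = Select(o1, -1)
    (Split, ['o2', '.']),     # o3 = Split(o2, ".")
    (Select, ['o3', 0]),      # o4 = Select(o3, 0)
]
print(run_program(steps, '/home/foo/file.cpp'))  # -> 'file'
```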

Experiments

Experimental Design

The NPBE model is required to induce a program consisting of a sequence of functions based on only one input-output string pair. In this section, we describe our evaluation of the NPBE model. In our experiments, we define 7 basic string manipulation functions and 1 null function as the atomic functions (Table 1). For simplicity, each program is allowed to be composed of at most 5 functions. We define a set of constant symbols (Table 2) from which our model can choose arguments. The constant symbols include integers and delimiters. The integers are used by the Select function as indexes into an array of strings; the negative integers are used to access array elements from the tail. This design of the integer symbols supports access to arrays of at most 7 elements. The delimiters are used by Split and Join to split a string or join an array of strings by a delimiter. We also define some special symbols. For example, the symbol x refers to the input string, and o1, o2, o3 and o4 refer to the outputs of the first, second, third and fourth operation, respectively. Another special symbol indicates that no argument is expected at the current position. Note that although in our experiments we place some constraints on the functions and arguments in a program, our model can easily be extended to support new functions and arguments.

To obtain training data, we first generate programs at various levels of complexity according to 45 predefined tasks. A task is a sequence of functions in a specific order, but the arguments of each function are not fixed. The reason for defining tasks is that we want the programs generated by our model to be syntactically correct and meaningful. The 45 tasks range from simple ones, such as the concatenation of the input with some constant string, to more complex ones comprising the Split, Join, Select, ToUpper and Concatenate functions. For example, the task Split, Join is to first split the input string by a delimiter, then join the resulting string array using another delimiter. A program derived from this task could be Join(Split(x, “/”), “:”), which splits the input string according to the delimiter “/” and then joins the resulting substrings using the delimiter “:”. The average number of functions for accomplishing a task is 3.5.

For all the tasks, we generate a total of around 69,000 programs for training. For each program P we generate a random input string x, which should be a valid input to P. Next, we apply the program to x by actually running the Python program implementing P and obtain the output string y. We constrain x and y to be at most 62 characters long. After that, we use the pair (x, y) as the input data to our model and P as the training target. The programs are generated in such a way that if there are multiple programs that can produce the same y from a given x, only one specific program is chosen; therefore, there is no ambiguity for the model in predicting the desired program. Given the program P, the input string x is always generated dynamically and randomly to decrease overfitting. Table 3 gives some concrete input-output examples for our model.
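A sketch of this data-generation loop for one task (Split, Join), reusing the atomic functions and the interpreter sketched earlier; the field lengths and alphabet are our assumptions:

```python
import random
import string

DELIMS = [' ', ',', '.', '@', ':', ';', '_', '=', '-', '/']

def random_input(delim, n_fields=3, max_field_len=8):
    fields = [''.join(random.choices(string.ascii_lowercase, k=random.randint(1, max_field_len)))
              for _ in range(n_fields)]
    return delim.join(fields)

def make_example():
    # task "Split, Join": split by one delimiter, join with another
    d1, d2 = random.sample(DELIMS, 2)
    steps = [(Split, ['x', d1]), (Join, ['o1', d2])]
    x = random_input(d1)
    y = run_program(steps, x)
    if len(x) <= 62 and len(y) <= 62:
        return x, y, steps
    return make_example()
```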

To train NPBE, we choose RMSProp [Tieleman and Hinton2012] as the optimizer and set the mini-batch size to 200. We set the dimensionality of the transformation embedding and the history embedding to 256, of the function embedding to 16, and of the arguments embedding to 64. The actual training process relies on an adaptive curriculum [Reed and de Freitas2015] in which the frequency with which a specific task is trained is proportional to its error rate on a test set. Every 10 epochs we estimate the prediction errors. We then apply a softmax with an adequate temperature over the error rates to obtain the frequency with which each task is sampled for training during the next 10 epochs, so tasks with higher error rates are sampled more frequently than tasks with lower error rates.
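A sketch of the curriculum sampling step described above; the temperature value is our assumption:

```python
import math
import random

def task_frequencies(error_rates, temperature=0.2):
    # tempered softmax over per-task error rates
    exps = [math.exp(e / temperature) for e in error_rates]
    total = sum(exps)
    return [v / total for v in exps]

def sample_task(tasks, error_rates):
    # harder tasks (higher error rate) are sampled more often
    return random.choices(tasks, weights=task_frequencies(error_rates), k=1)[0]
```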

The evaluation of NPBE is conducted to answer the following research questions:

Task LSTM LSTM-A NPBE-Top1 NPBE-Top3 NPBE-Top5
Case Change 100.0% 100.0% 100.0% 100.0% 100.0%
Duplicate Input String 9.4% 9.8% 99.1% 99.1% 99.1%
Case Change and Concatenate with Input String 9.4% 9.3% 99.9% 99.9% 99.9%
Concatenate with Constant 83.1% 67.5% 88.4% 88.4% 88.4%
Split, Select, Case Change 8.7% 12.8% 92.3% 96.4% 97.3%
Split, Join, Select, Case Change, Concatenate 0.1% 0.1% 92.4% 96.6% 97.8%
GetConstString, Split, Select, Case Change, Concatenate 2.9% 5.2% 81.2% 85.6% 86.7%
GetConstString, Split, Select, Concatenate 0.1% 0.1% 24.7% 54.4% 68.1%
Split, Select, Select, Concatenate 0.1% 0.1% 9.8% 46.4% 82.0%
Split, Select, Select, Case Change, Concatenate 0.1% 0.2% 35.8% 73.2% 94.0%
Split, Select, Split, Select 0.1% 0.1% 31.2% 64.3% 72.2%
Average over All 45 Tasks 20.4% 22.0% 74.1% 85.8% 91.0%
Table 4: Results of generating programs.
Task Seen Unseen
Split, Join 93.6% 93.9%
GetConstString, Split, Join, Concatenate 91.4% 91.5%
Split, Join, Concatenate 95.0% 94.9%
Split, Join, Select, Concatenate 90.8% 89.5%
Split, Select, Select, Concatenate 82.0% 82.1%
Average over 19 Tasks 87.6% 87.4%
Table 5: Results with seen and unseen program arguments.

RQ1: What is the accuracy of NPBE in generating programs?

In this RQ, we use randomly generated input-output strings to evaluate the accuracy of NPBE in generating programs. For example, given a random input-output pair 25/11/16 and 25:11:16 (which does not appear in the training data), we would like to test whether the correct program Join(Split(x, “/”), “:”) can still be generated. To answer this RQ, we generate random input-output strings for each task 1,000 times and apply the trained NPBE model. A program produced by NPBE is regarded as correct only if the model predicts all five functions (padded with the NoFunc symbol if there are fewer than five) and all the arguments of the functions (also padded with the no-argument symbol) correctly, thus a total of 30 positions. We also compare our model with RNN encoder-decoder models [Cho2014Learning] implemented using LSTM and LSTM with an attention mechanism, both of which have a total number of parameters similar to our model's.
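A sketch of this exact-match criterion (the name NoArg for the no-argument symbol is ours, for illustration):

```python
def program_correct(pred_funcs, pred_args, gold_funcs, gold_args,
                    no_func='NoFunc', no_arg='NoArg'):
    # pad to 5 functions and 5 arguments each, then require an exact match on all positions
    pad = lambda seq, n, fill: list(seq) + [fill] * (n - len(seq))
    if pad(pred_funcs, 5, no_func) != pad(gold_funcs, 5, no_func):
        return False
    for p, g in zip(pad(pred_args, 5, []), pad(gold_args, 5, [])):
        if pad(p, 5, no_arg) != pad(g, 5, no_arg):
            return False
    return True
```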

RQ2: Can NPBE generate programs with previously unseen arguments?

In this RQ, we test the generalization ability of NPBE. We evaluate our model using programs whose argument settings do not appear in the training set. For example, if the program Join(Split(x, “/”), “:”) appears in the training set, we would like to know whether NPBE can work for the program Join(Split(x, “@”), “-”), which does not appear in the training set. To answer this RQ, we design a test set consisting of around 19,000 programs with previously unseen arguments. The experiment is conducted on 19 selected tasks that have complex argument combinations (for simple tasks there are few arguments to choose from, so we skip them).

Experimental Results

RQ1: What is the accuracy of NPBE in generating programs?

Table 4 gives the evaluation results of NPBE on predicting programs. The average Top1 accuracy achieved by NPBE is 74.1%, which means that for 74.1% of the input-output pairs in the test set, NPBE successfully generates the corresponding program. We found that the model's prediction errors are most likely to occur on the integer argument of Select, because neural networks are not good at counting. So we also let the model give 3 or 5 predictions when it tries to predict the integer argument of Select. The average Top3 and Top5 accuracies are 85.8% and 91.0%, which means that for 85.8% and 91.0% of the input-output pairs in the test set, NPBE successfully returns the corresponding program within the top 3 and top 5 results, respectively. The results show that the NPBE model can generate correct programs for most tasks.

As an example, given the input string “17/apr/2016” and the output string “APR-17”, our model needs to induce a program comprising Split, Select, Select, Case Change, Concatenate. For this task, our model gives a completely correct program in 35.8% of cases. If we allow the model to give 3 or 5 predictions for the integer argument of Select, the accuracy increases to 73.2% or 94.0%, respectively.

The results for LSTM and LSTM with the attention mechanism (denoted as LSTM-A) are also shown in Table 4. Note that the Top1, Top3 and Top5 accuracies for LSTM and LSTM-A are almost the same, so only the Top1 accuracy is reported. We found that the ordinary encoder-decoder models can solve the simplest tasks but cannot tackle harder ones. The results show that NPBE significantly outperforms the ordinary encoder-decoder models.

RQ2: Can NPBE generate programs with previously unseen arguments?

We test the generalization ability of NPBE on programs with previously unseen argument settings. The average Top5 accuracy results are given in Table 5. The results show that the accuracies achieved by our model on seen and unseen argument settings differ very little. For example, the task Split, Join first splits an input string by a delimiter (such as “/”) and then joins the resulting substrings with another delimiter (such as “:”). This task achieves 93.6% accuracy on argument settings seen during training. For different arguments (e.g., first splitting the input string by “@” and then joining the resulting substrings with “-”) that do not exist in the training set, NPBE still achieves 93.9% accuracy. The results show that NPBE generalizes to unseen program arguments without overfitting to particular argument combinations.

Discussions and Future Work

The intention behind NPBE is to make the model learn relevant features from input-output strings automatically and use the learned features to induce correct programs. The purpose of this paper is not to directly compete with existing PBE systems. Instead, we show that DNN can recognize features in string transformations and learn accurate programs from input-output pairs.

Currently, NPBE cannot generalize to completely unseen tasks (such as Split, Join, Join, Concatenate) that never appeared in the training set. In our future work, we will try to build a model that really “understands” the meaning of the atomic functions, to make it possible to generalize to unseen tasks.

Conclusion

In this paper, we propose NPBE, a Programming by Example (PBE) model based on DNN. NPBE can induce string manipulation programs from simple input-output pairs by inferring a composition of functions and corresponding arguments. We have shown that DNN can be successfully applied to developing Programming by Example systems. Our work also explores a way of learning higher-order functions in deep learning, and is one step towards teaching DNN to generate computer programs.

References