1 Introduction
With the ever-increasing role of computing, there has been tremendous growth in interest in learning programming and computing skills. Computer science enrollments in universities have been growing steadily, and it is becoming more and more challenging to meet this increasing demand. Recently, several online education initiatives such as edX, Coursera, and Udacity have started providing Massive Open Online Courses (MOOCs) to tackle the challenge of providing quality education at scale that is easily accessible to students worldwide. While MOOCs have several benefits (access to quality course material and instruction, lower cost than traditional university courses, learning at one's own pace, etc.), they also have several drawbacks. One important drawback is that students enrolled in MOOCs typically do not get quality feedback on assignments compared to the feedback provided in traditional classroom settings, since it is prohibitively expensive to hire enough instructors and teaching assistants to provide individual feedback to thousands of students. In this paper, we address the problem of providing automated feedback on syntax errors in programming assignments using machine learning techniques.
The problem of providing feedback on programming assignments at scale has seen a lot of interest lately. Existing approaches fall into two broad categories: peer-grading [11] and automated grading techniques [18, 13]. In the peer-grading approach, students rate and provide feedback on other students' submissions based on a grading rubric. Some MOOCs have made this step mandatory before students get feedback on their own assignments. While peer-grading has been shown to be effective for student learning, it also presents several challenges. First, it can take a long time (sometimes days) to get any useful feedback; second, there is a potential for inaccuracies in the feedback, especially when the students providing it are themselves struggling to learn the material.
The second approach, automated feedback generation, aims to automatically provide feedback on student submissions. Most recent approaches for automated grading have focused on providing feedback on the functional correctness and stylistic aspects of student programs. AutoProf [18] is a system for providing automated feedback on the functional correctness of buggy student solutions. It uses constraint-based synthesis techniques to find the minimum number of changes to an incorrect student submission such that it becomes functionally equivalent to a reference teacher implementation. The Codewebs system [13] is a search engine for coding assignments that allows querying a massive dataset of student submissions using "code phrases", which are subgraphs of the AST in the form of subtrees, subforests, and contexts. A teacher provides feedback on a handful of submissions, which is then propagated to thousands of submissions by querying the dataset using code phrases.
While providing feedback on functional and stylistic elements of student submissions is important, a significant fraction of submissions (more than 34% in our dataset) contain syntax errors, and providing feedback on syntactic errors has largely remained unexplored. Many of the techniques described previously for automated grading cannot provide feedback on syntactic errors, since they inherently depend on analyzing the AST of the student submission, which is unfortunately not available for programs with syntax errors. Although compilers have improved a lot at finding error locations and describing syntax errors with better error messages, they cannot in general provide feedback on how to fix these errors, since they are developed for general-purpose scenarios.
In this paper, we present a technique to automatically provide feedback on student programs with syntax errors by leveraging a large dataset of correct student submissions. Our hypothesis is that even though there are thousands of student submissions, the diversity of solution strategies for a given problem is relatively small, and the fixes to syntactic errors can be learnt from correct submissions. For a given programming problem, we use the set of (possibly functionally incorrect) student submissions without syntax errors to learn a sequence model of tokens, which is then used to hypothesize possible fixes to syntax errors in a student solution. Our system incorporates the suggested changes into the incorrect program, and if the modified program passes the compiler syntax check, it presents those changes as possible fixes for the syntax error. We use a Recurrent Neural Network (RNN) [17] to learn the token sequence model, which can capture large contextual dependencies between tokens.
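The check-with-the-compiler loop described above can be sketched in a few lines of Python; the helper names, the buggy program, and the candidate fixes below are illustrative, not our actual implementation.

```python
# Sketch: a candidate fix is accepted only if the rewritten program
# passes the Python compiler's syntax check.

def passes_syntax_check(src):
    """Return True if `src` parses as valid Python."""
    try:
        compile(src, "<student>", "exec")
        return True
    except (SyntaxError, IndentationError):
        return False

def first_working_fix(candidate_srcs):
    """Return the first candidate rewrite that compiles, or None."""
    for cand in candidate_srcs:
        if passes_syntax_check(cand):
            return cand
    return None

buggy = "def f(x):\n    if x = 0:\n        return 1\n"
candidates = [
    "def f(x):\n    if x == = 0:\n        return 1\n",  # insertion attempt
    "def f(x):\n    if x == 0:\n        return 1\n",     # replacement attempt
]
fix = first_working_fix(candidates)
```

In the full system, the candidate rewrites are generated from the token sequences predicted by the learnt model rather than enumerated by hand.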
Our approach is inspired by the recent pioneering work on learning probabilistic models of source code from large code repositories for many different applications [8, 14, 3, 2, 1, 4, 16]. Hindle et al. [8] learn an n-gram language model to capture the repetitiveness present in a code corpus and show that n-gram models are effective at capturing local regularities. They used this model for suggesting next tokens, and it was already quite effective compared to the state-of-the-art type-based IDE suggestions. Nguyen et al. [14] enhanced this model for code autocompletion to also include semantic knowledge about the tokens (such as types) and the scope and dependencies amongst the tokens, in order to consider global relationships amongst them. The Naturalize framework [1] learns an n-gram language model for learning coding conventions and suggesting changes to increase the stylistic consistency of code. More recently, other probabilistic models such as conditional random fields and log-bilinear context models have been presented for suggesting names for variables, methods, and classes [16, 2]. We also learn a language model to encode the set of valid token sequences, but our approach differs from these approaches in four key ways: i) our application of using a language model learnt from syntactically correct programs to fix syntax errors is novel and different from previous applications; ii) since we cannot obtain the Abstract Syntax Tree (AST) of programs with syntax errors, many of the techniques that use AST information for learning the language model are not applicable; iii) we learn recurrent neural networks (RNNs) that can capture more complex dependencies between tokens than n-gram or log-bilinear models; and finally iv) instead of learning one language model for the complete code corpus, we learn individual RNN models for different programming assignments so that we can generate individualized repair feedback for different problems.
We evaluate the effectiveness of our technique on student solutions for 5 programming problems taken from the Introduction to Programming class (6.00x) offered on the edX platform. Our technique can suggest tokens that completely fix the syntax errors for 31.69% of the submissions with syntax errors. Moreover, for an additional 6.39% of the programs, our technique can suggest fixes that correct the first syntax error in the program but do not fully correct the program because of the presence of multiple syntax errors.
This paper makes the following key contributions:

We formalize the problem of finding fixes for syntax errors in student submissions as a token sequence learning problem using recurrent neural networks (RNNs).

We present the SynFix algorithm, which uses the predicted token sequences to find repairs for syntax errors, performing different code transformations including insertion and replacement of predicted sequences in a ranked order.

We evaluate the effectiveness of our system on more than 14,000 student submissions with syntax errors from an online introductory programming class. Our system can completely correct the syntax errors in 31.69% of the submissions and partially correct the errors in an additional 6.39% of the submissions.
2 Motivating Examples
We now present a few examples of the different types of syntax errors we encounter in student submissions from our dataset, and the repair corrections our system is able to generate using the token sequence model learnt from the syntactically correct student submissions. The example corrections are shown in Figure 1 for student submissions for the recurPower problem taken from the Introduction to Programming MOOC (6.00x) on edX. The recurPower problem asks students to write a recursive Python program to compute the value of base^exp given a real value base and an integer value exp as inputs.
Our syntax correction algorithm considers two types of parsing errors in Python programs: i) Syntax errors, and ii) Indentation errors. It uses the offset information provided by the Python compiler to locate the potential locations for syntax errors, and then uses the program statements from the beginning of the function to the error location as the prefix token sequence for performing the prediction. However, there are many cases such as the ones shown in Figure 1(c) where the compiler is not able to accurately find the exact offset location for the syntax error. In such cases, our algorithm ignores the tokens present in the error line and considers the prefix ending at the previous line. Using the prefix token sequence, the algorithm uses a neural network to perform the prediction of next tokens that are most likely to follow the prefix sequence, which are then either inserted at the error location or are used to replace the original token sequence at the error location.
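The error-localization and prefix-construction steps described above can be sketched with Python's standard compile and tokenize modules; the function names and the sample program are illustrative, not our actual implementation.

```python
# Sketch: ask the compiler for the error position, then tokenize the
# program text up to that position to obtain the prefix token sequence.
import io
import tokenize

def error_location(src):
    """Return (lineno, offset) of the first syntax error, or None."""
    try:
        compile(src, "<student>", "exec")
        return None
    except (SyntaxError, IndentationError) as e:
        return (e.lineno, e.offset or 1)

def prefix_tokens(src, lineno, offset):
    """Token strings from the start of the program up to the error offset."""
    lines = src.splitlines(True)
    head = "".join(lines[: lineno - 1]) + lines[lineno - 1][: offset - 1]
    toks = []
    try:
        for tok in tokenize.generate_tokens(io.StringIO(head).readline):
            # keep only substantive tokens for the sequence model
            if tok.type in (tokenize.NAME, tokenize.NUMBER,
                            tokenize.OP, tokenize.STRING):
                toks.append(tok.string)
    except tokenize.TokenError:
        pass  # an unterminated construct at the cut point is expected
    return toks

buggy = "def f(exp):\n    if exp = 0:\n        return 1\n"
loc = error_location(buggy)
toks = prefix_tokens(buggy, *loc)
```

The exact column reported for a given error varies across Python versions, which is one reason the full algorithm also tries a prefix ending one token earlier and a prefix ending at the previous line.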
A sample of syntax errors and the fixes generated by our algorithm (highlighted in the figure) based on inserting the predicted tokens at the offset location is shown in Figure 1(a). For correcting syntax errors in this class, our algorithm first queries the learnt language model to predict the next token sequence using the prefix token sequence ending either at the offset (error) location or one token before the offset location (Offset-1). It then tries inserting tokens from the predicted sequence, in increasing order of length, at the corresponding offset location until the syntax error in the line is fixed. The kinds of fixes in this class typically include inserting unbalanced parentheses, completing partial expressions (such as exp to exp-1), adding syntactic tokens such as : after if and else expressions, etc.
Some example syntax errors that require replacing the original tokens in the incorrect program with the predicted tokens are shown in Figure 1(b). These fixes typically include replacing an incorrect operator with another operator (such as replacing = with *, or = with == in comparisons), deleting additional mismatched parentheses, etc. Our algorithm uses a similar technique for generating prefix token sequences as in the previous class of syntax errors that require token insertion. The only difference is that instead of inserting tokens from the predicted token sequence, it replaces the original tokens with the predicted tokens.
There are several cases in which the Python compiler is not able to accurately locate the error offset. Some examples are shown in Figure 1(c); these include misspelled keywords (retrun instead of return, f instead of if), a malformed expression in a return statement, etc. For fixing such syntax errors, our algorithm generates a prefix token sequence that ends at the last token of the line preceding the error line, ignoring all tokens in the error line. It then queries the model to predict a token sequence that ends at a newline token, and replaces the error line completely with the predicted token sequence.
Finally, a sample of indentation errors is shown in Figure 1(d). These errors typically involve mistyped operators or wrong indentation after function definitions, conditionals, and loop expressions. Our algorithm tries the same strategies described previously for syntax errors, inserting or replacing tokens at the offset location.
An interesting point to note is that our system currently predicts token sequences for fixing the syntax errors in the code that may or may not correspond to the correct semantic fix, i.e. the suggested fix will pass the parser check but may not compute the desired result (or may even throw a runtime exception). For example, in some cases such as the incorrect expression recurPower(base,exp=1), the top token sequence prediction results in the expression recurPower(base,exp-11), which is a syntactically correct expression but does not compute the desired result base^exp. Even in such cases, the generated fix can still provide some hints to students about the correct usage of expressions in the corresponding program contexts. However, in many cases, the suggested repair for syntax correction also happens to correspond to the correct semantic fix, as shown in Figure 1.
Figure 1: Example student submissions for the recurPower problem with syntax errors and the fixes generated by our system. Inserted or replacement tokens are shown in [brackets]; original tokens that are replaced or deleted are shown in {braces}.

(a) SyntaxError - Insert Token

    if exp <= 0:
        return 1
    return base * recPower(base, exp - 1[)]

    if exp <= 0:
        return 1
    return base * recPower(base, exp[-1])

    if exp == 1:
        return base
    return base * (recPower(base, (exp - 1))[)]

    if exp > 1:
        return base * recurPower(base, exp-1)
    else[:]
        return 1

(b) SyntaxError - Replace Token

    if exp == 0:
        return 1
    return base {=}[*] recurPower(base, exp-1)

    total = base
    if (exp == 0):
        return total
    else:
        total *= base
        return total + recurPower(base, exp{=}[-1]1)

    if exp {=}[==] 0:
        return 1;
    else:
        return base*recurPower(base, exp-1)

    if exp == 0:
        return 0
    elif exp == 1:
        return base
    else:
        return base*recurPower(base, exp-1){)}

(c) SyntaxError - Previous Line Insert

    {f exp == 1:}
    [if exp == 1:]
        return base
    return base * recurPower(base, (exp - 1))

    if exp == 1:
        return base
    else:
        {retrun base * recurPower(base, exp - 1)}
        [return base * recurPower(base, exp - 1)]

    if exp == 0:
        {return = exp + 1}
        [return base]
    else:
        return (base*recurPower(base, exp-1))

    if exp == 0:
        return 1
    if exp == 1:
        return base
    if exp > 1:
        {return exp = 1}
        [return base * recurPower(base, exp-1)]
    else:
        return recurPower(base, exp-1)

(d) IndentationError - Insert Token

    if exp == 0:
    {return 1}
        [return 1]
    return base * recurPower(base, exp-1)

    x = base
    while (exp > 0):
        x *= base
        {= 1}
        [exp -= 1]
    return base
3 Approach
An overview of the workflow of our system is shown in Figure 2. For a given programming problem, we first use the set of all syntactically correct student submissions to train a neural network, in the training phase, to learn a problem-specific model of all valid token sequences. We then use the SynFix algorithm to find small corrections to a student submission with syntax errors, using the token sequences predicted from the learnt model. These corrections are then used to provide feedback in terms of potential fixes for the syntax errors. We now describe the two key phases in our workflow: i) the training phase, and ii) the SynFix algorithm.
[Figure: (a) Training Phase; (b) Prediction Phase]
3.1 Neural Network Model
The simplest class of neural networks are feedforward neural networks [6], the first type of artificial neural network devised. These networks accept a fixed-size vector as input (e.g. a bag-of-words model of a piece of text) and produce a fixed-size vector as output (e.g. the sentiment label for the text). In these networks, information moves in only one forward direction, from the input nodes to the hidden layers to the output layer. Feedforward networks have been found to be quite successful for a variety of classification tasks including sentiment analysis, image recognition, and document relevance. However, they have two big limitations: 1) they only accept fixed-size input vectors, and 2) they can perform only a fixed number of computational steps (determined by the fixed number of hidden layers).
To overcome these limitations, another class of neural networks called Recurrent Neural Networks (RNNs) has been devised that can operate over sequences of input and output vectors as opposed to fixed-length vectors. Moreover, in addition to the feedforward structure of the network, the output of a hidden layer is connected back to its own input (cyclic paths), generating feedback in the network. This feedback property gives RNNs memory to retain information from previous steps and use it for processing the current and future states. These additional capabilities make RNNs a very powerful computational model that can theoretically represent long context patterns. Although RNNs are much more expressive than n-gram models and feedforward networks, the conventional wisdom has been that RNNs are more difficult to train. With some recent algorithmic and computational advances, however, they have been shown to be efficiently learnable and have recently been used successfully for many tasks such as machine translation, video classification by frames, and speech recognition. In this section, we describe how we use RNNs to model our problem of learning token sequences from syntactically correct student programs and then predicting token sequences for repairing incorrect programs.
We first give a brief overview of the computational model of a simple RNN with a single hidden layer. Consider an input sequence of length T and an RNN with K inputs, a single hidden layer with N hidden units, and K output units. Let x_t denote the input at time step t (encoded as a vector), h_t denote the hidden state at time step t, W_xh denote the weight matrix on connections from the input layer to the hidden layer, W_hh the (recurrent) weight matrix from the hidden layer to itself, and W_hy the weight matrix from the hidden layer to the output layer. A simple RNN architecture of this form is shown in Figure 4. The computational model of the RNN can be defined using the following equations:

    h_t = f(W_xh x_t + W_hh h_{t-1})
    y_t = softmax(W_hy h_t)

The hidden state vector h_t at time step t is computed by applying an activation function f (e.g. tanh or sigmoid) to a weighted sum of the input vector x_t and the previous hidden state vector h_{t-1}. The output vector y_t is computed by applying the softmax function to the weighted state vector value W_hy h_t. Artificial neurons are analogous to neurons in the human body, which continuously receive electrochemical signals through their dendrites and, when the sum of these signals surpasses a certain threshold, send (fire) electrochemical signals through their axons. The hidden units and output units use a similar activation strategy to determine the state of the units at a particular time step: the hidden units take the weighted sum as input and map it to a bounded value, e.g. in (-1, 1) using tanh or in (0, 1) using the sigmoid function, to model nonlinear activation relationships.

During the training phase, the RNN uses backpropagation through time (BPTT) [20] to calculate the gradient and adjust the weights. BPTT is an extension of the backpropagation algorithm that takes into account the recursive nature of the hidden layers from one time step to the next: the loss function depends not only on the direct influence of the hidden layer but also on its influence on the values of the hidden layer at the next time step. The loss function minimized during training is the cross-entropy error between the training output label and the predicted output label.
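The forward pass defined by these equations can be sketched in plain Python; the dimensions and the random weights below are placeholders, not learned values.

```python
# Minimal sketch of a single-hidden-layer RNN forward pass:
#   h_t = tanh(W_xh x_t + W_hh h_{t-1}),  y_t = softmax(W_hy h_t)
import math
import random

def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(a - m) for a in z]
    s = sum(e)
    return [a / s for a in e]

def rnn_forward(xs, W_xh, W_hh, W_hy):
    """Run the RNN over a sequence of one-hot inputs; return output dists."""
    h = [0.0] * len(W_hh)                     # h_0 = zero vector
    ys = []
    for x in xs:
        pre = [a + b for a, b in zip(matvec(W_xh, x), matvec(W_hh, h))]
        h = [math.tanh(a) for a in pre]       # h_t
        ys.append(softmax(matvec(W_hy, h)))   # y_t
    return ys

random.seed(0)
K, N = 5, 8                                   # vocabulary size, hidden units
rand_mat = lambda r, c: [[random.uniform(-0.1, 0.1) for _ in range(c)]
                         for _ in range(r)]
W_xh, W_hh, W_hy = rand_mat(N, K), rand_mat(N, N), rand_mat(K, N)
one_hot = lambda i: [1.0 if j == i else 0.0 for j in range(K)]
xs = [one_hot(i) for i in (0, 3, 1)]          # a toy one-hot token sequence
ys = rnn_forward(xs, W_xh, W_hh, W_hy)
```

Each output y_t is a probability distribution over the K-token vocabulary, which is what the prediction phase samples from when suggesting the next token.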
There are two common ways to feed each word of a sequence to the input layer of the RNN: 1) words are represented as one-hot vectors, which are multiplied by the weight matrix and used for the forward pass, or 2) words are mapped to high-dimensional vectors and an embedding matrix is used to perform lookups. While training the network for learning sequence models, the target sequence is the input sequence shifted left by 1, since the model is trained to minimize the negative log likelihood of the actual next token in the sequence.
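The shifted-target construction can be illustrated with a short helper (the token list is an illustrative example):

```python
# Sketch: a training pair for the sequence model. The target sequence is
# the input sequence shifted left by one token, so that target[i] is the
# token the model should predict after seeing input[0..i].
def training_pair(tokens):
    return tokens[:-1], tokens[1:]

toks = ["if", "exp", "==", "0", ":"]
inp, tgt = training_pair(toks)
```

Here the model is asked to predict "exp" after "if", "==" after "if exp", and so on.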
We now describe how we model our syntax repair problem for a given programming assignment using an RNN. We first use the syntactically correct student submissions to obtain the set of all valid token sequences. We then use a threshold frequency value to decide whether to relabel a token to a generic IDENT token, to handle rarely used tokens (such as infrequent variable/method names). A token is encoded as a fixed-length one-hot vector that contains 1 at the index corresponding to the token's index in the vocabulary and 0 in all other places. The size of the one-hot vector is equal to the size of the training vocabulary.
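The vocabulary thresholding and one-hot encoding can be sketched as follows; the threshold value and the toy corpus are illustrative, not those used in our experiments.

```python
# Sketch: tokens rarer than a frequency threshold are relabeled IDENT,
# and each token is then encoded as a one-hot vector over the vocabulary.
from collections import Counter

def build_vocab(sequences, threshold):
    counts = Counter(t for seq in sequences for t in seq)
    vocab = sorted({t if counts[t] >= threshold else "IDENT"
                    for seq in sequences for t in seq})
    return {t: i for i, t in enumerate(vocab)}

def one_hot(token, vocab):
    v = [0] * len(vocab)
    v[vocab.get(token, vocab["IDENT"])] = 1  # unknown tokens map to IDENT
    return v

seqs = [["if", "exp", "==", "0", ":"],
        ["if", "n", "==", "0", ":"]]
vocab = build_vocab(seqs, threshold=2)  # 'exp' and 'n' occur once -> IDENT
```

Note how the two rare variable names collapse to the same IDENT vector, which is exactly the effect the relabeling is meant to have on infrequent identifiers.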
In the training phase, we provide the token sequences to the input layer of the RNN and the input token sequence shifted left by 1 as the target token sequence to the output layer as shown in Figure 3
(a). The figure also shows the equations to compute the output probabilities for output tokens and the weights associated with connections from input to hidden layer, hidden to hidden layer, and hidden to output layer. After learning the network from the set of syntactically correct token sequences, we use the model to predict next token sequences given a prefix of the token sequence to the input layer as shown in Figure
3(b). The first output token is predicted at the output layer using the input token sequence. For predicting the next output token, the predicted token is used as the next input token in the input layer, as shown in the figure.

Long Short-Term Memory networks (LSTMs): LSTMs [9] are a special kind of RNN that are capable of learning long-term dependencies and have been shown to outperform general RNNs on a variety of tasks. In theory, RNNs are capable of handling any form of long-term dependency on past information because of the recursive connections. In practice, however, RNNs only perform well when the gap between the required context information and the place where it is needed is small; as the gap becomes larger, it becomes more difficult for RNNs to learn to connect the desired information. LSTMs are explicitly designed to avoid this long-term dependency issue. Instead of regular network units, LSTMs contain LSTM blocks that intuitively determine whether the input is significant enough to remember, when the value should be forgotten, and when the value should be used in other layers. In the evaluation section, we also use different LSTM models to learn token sequences and compare their performance with the RNN models.
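The prediction loop that feeds each predicted token back as the next input can be sketched with a stubbed-out model; toy_model below is a hypothetical stand-in for the trained network, not an actual RNN.

```python
# Sketch of the prediction phase: after consuming the prefix, each
# greedily chosen token is appended to the input before the next query.
def predict_sequence(prefix, next_token_dist, length):
    """Greedily extend `prefix` by `length` tokens using the model stub."""
    seq = list(prefix)
    out = []
    for _ in range(length):
        dist = next_token_dist(seq)        # P(next token | seq)
        tok = max(dist, key=dist.get)      # greedy choice
        out.append(tok)
        seq.append(tok)                    # feed prediction back as input
    return out

# A toy "model": completes '==' with '0', then '0' with ':'.
def toy_model(seq):
    table = {"==": {"0": 0.9, "1": 0.1},
             "0": {":": 0.8, ")": 0.2}}
    return table.get(seq[-1], {":": 1.0})

pred = predict_sequence(["if", "exp", "=="], toy_model, length=2)
```

In the real system, next_token_dist is the softmax output of the trained RNN/LSTM conditioned on the whole prefix, not just the last token.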
3.2 The SYNFIX Algorithm
The SynFix algorithm, shown in Algorithm 1, takes as input a program P (with syntax errors) and a token sequence model M, and returns either a fixed program (if possible) or ⊥, denoting that the program cannot be fixed. The algorithm first uses a parser to obtain the type of the error err and the token location loc where the error occurs, and computes the prefix of the token sequence running from the beginning of the program to the error token location loc. We use the notation s[i:j] to denote the subsequence of a sequence s starting at index i (inclusive) and ending at index j (exclusive). The algorithm then queries the model M to predict the token sequence of a constant length k that is most likely to follow the prefix token sequence.
After obtaining the predicted token sequence, the algorithm iteratively tries predicted subsequences of increasing length (1, 2, ..., k) until either inserting or replacing the token sequence at the error location results in a fixed program with no syntax errors. If the algorithm cannot find a token sequence that fixes the syntax errors in the program, it creates another prefix of the original token sequence that ignores all previous tokens on the same line as the error token location. It then predicts another token sequence using the model M for the new prefix, and selects the subsequence that ends at a newline token. Finally, the algorithm checks whether replacing the line containing the error location with the predicted token sequence results in no syntax errors. If yes, it returns the fixed program. Otherwise, the algorithm returns ⊥, denoting that no fix could be found for the syntax error.
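A condensed Python sketch of this algorithm is shown below; the prediction model is stubbed out as the function arguments predict_tokens and predict_line, which are hypothetical stand-ins for queries to the learnt model, and the detokenization is deliberately naive.

```python
# Sketch of SynFix: try inserting/replacing predicted tokens at the
# parser-reported error offset, then fall back to replacing the whole
# error line with a predicted line.
def syntax_ok(src):
    try:
        compile(src, "<student>", "exec")
        return True
    except (SyntaxError, IndentationError):
        return False

def error_pos(src):
    try:
        compile(src, "<student>", "exec")
        return None
    except (SyntaxError, IndentationError) as e:
        return e.lineno, (e.offset or 1)

def synfix(src, predict_tokens, predict_line, max_len=3):
    lineno, offset = error_pos(src)
    lines = src.splitlines(True)
    before = "".join(lines[: lineno - 1])
    after = "".join(lines[lineno:])
    line = lines[lineno - 1]
    pred = predict_tokens(src)                  # ranked predicted tokens
    for k in range(1, min(max_len, len(pred)) + 1):
        frag = "".join(pred[:k])                # naive detokenization
        ins = line[: offset - 1] + frag + line[offset - 1:]      # insert
        rep = line[: offset - 1] + frag + line[offset - 1 + len(frag):]
        for cand_line in (ins, rep):            # try insert, then replace
            cand = before + cand_line + after
            if syntax_ok(cand):
                return cand
    cand = before + predict_line(src) + after   # replace the whole line
    return cand if syntax_ok(cand) else None    # None plays the role of ⊥

buggy = "def f(exp):\n    if exp == 0\n        return 1\n    return 0\n"
fixed = synfix(buggy,
               lambda s: [":"],                 # stubbed token prediction
               lambda s: "    if exp == 0:\n")  # stubbed line prediction
```

In the full system, both predictions come from the per-problem RNN/LSTM model, and the prefix is also retried ending one token before the offset (the Offset-1 variant).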
Example: Consider the Python program shown in Figure 5. The Python parser reports a syntax error in line 2, with the error offset corresponding to the location of the = token. The SynFix algorithm first constructs a prefix of the token sequence consisting of the tokens from the start of the program up to the error location. It then queries the learnt model to predict the most likely token sequence that can follow the input prefix sequence. Let us assume the length of the predicted sequence is set to 3 and the model returns a predicted token sequence beginning with the token '=='. The algorithm first tries the smaller prefixes of the predicted token sequence (in this case '==') to see if the syntax error can be fixed. It first tries to insert '==' into the original program, but that results in the expression if exp == = 0:, which still has a syntax error. It then tries to replace the original token at the error location with the predicted token, which results in the expression if exp == 0: that passes the parser check. The algorithm then returns the corresponding feedback of replacing the token '=' with the token '==' for fixing the syntax error.
Consider another incorrect Python attempt, shown in Figure 6, where there is a syntax error at the token 'exp' in line 3, retrun exp (a wrong spelling of the return keyword). The algorithm similarly constructs the prefix token sequence ending at the error location. For this prefix, the algorithm is not able to either insert or replace the predicted token sequence in the original program in a way that removes the syntax error. The algorithm then removes from the prefix token sequence all the tokens that occur in the error line (in this case the token 'retrun'), and queries the model again to predict another token sequence from the updated prefix such that the predicted sequence ends in a newline token. In this case, the algorithm predicts the token sequence corresponding to the statement return base, which fixes the original syntax error.
4 Evaluation
We now present the evaluation of our system on Python submissions taken from the Introduction to Programming in Python course on the edX MOOC platform. The first question we investigate is whether it is possible to learn the RNN models for token sequences that can capture syntactically valid sequences. We then evaluate in how many cases our system can fix the syntax errors with the predicted sequences using different algorithmic choices in the SynFix algorithm. Finally, we also experiment with different RNN and LSTM configurations and the vocabulary threshold value to evaluate their effect on the final result.
4.1 Benchmarks
Our benchmark set consists of student submissions to five programming problems (recurPower, iterPower, oddTuples, evalPoly, and compDeriv) taken from the edX course. The recurPower problem asks students to write a recursive function that takes as input a number base and an integer exp, and computes the value base^exp. The iterPower problem has the same functional specification as the recurPower problem but asks students to write an iterative solution instead of a recursive one. The oddTuples problem asks students to write a function that takes as input a tuple and returns another tuple consisting of every other element of the input tuple. The evalPoly problem asks students to write a function to compute the value of a polynomial at a given point, where the coefficients of the polynomial are represented as a list of doubles. Finally, the compDeriv problem asks students to write a function that returns the polynomial (represented as a list) corresponding to the derivative of a given polynomial.
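For concreteness, correct solutions to two of these problems might look as follows; these are our own illustrative implementations (including the parameter name aTup), not programs drawn from the dataset.

```python
# Illustrative solutions to two benchmark problems.

def recurPower(base, exp):
    """Recursively compute base**exp for integer exp >= 0."""
    if exp <= 0:
        return 1
    return base * recurPower(base, exp - 1)

def oddTuples(aTup):
    """Return a tuple containing every other element of aTup."""
    return aTup[::2]
```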
The number of student submissions for each problem in our evaluation set is shown in Table 1. In total, our evaluation set consists of 40,835 student submissions. The numbers of submissions for the evalPoly and compDeriv problems are relatively smaller than those for the other problems, because these problems were still in the submission stage at the time we obtained the data snapshot from the edX platform. But this also gives us a way to evaluate how well our technique performs when we have thousands of correct attempts in the training phase as opposed to only hundreds. Another interesting aspect to observe from the table is that a large fraction of student submissions have syntax errors (34.78%). For each problem, we use the set of syntactically correct student submissions for learning the recurrent neural network, and use the submissions with syntax errors as the test set to evaluate the learnt model.
Problem    | Total Attempts | Syntactically Correct | Syntax Errors (Percentage)
recurPower | 10247          | 8176                  | 2071 (20.21%)
iterPower  | 11855          | 9194                  | 2661 (22.45%)
oddTuples  | 17057          | 8233                  | 8824 (51.73%)
evalPoly   | 1148           | 824                   | 324 (28.22%)
compDeriv  | 528            | 205                   | 323 (61.18%)
Total      | 40835          | 26632                 | 14203 (34.78%)
4.2 Training Phase
During the training phase, we use all student submissions with no syntax errors for learning the token sequence model for a given problem. The student submissions are first tokenized into a set of token sequences, which are then fed into the neural network for learning the token sequence model. The total number of tokens obtained from the syntactically correct student submissions for each problem is shown in Table 2. The table also presents the initial vocabulary size (the set of unique tokens in the student submissions) and the training vocabulary size, which is obtained by replacing all tokens whose occurrence frequency falls under a threshold with IDENT. For our experiments, we use a fixed frequency threshold.
To train the recurrent neural network, we used a fixed learning rate, sequence length, and batch size. We use the batch gradient descent method with rmsprop (decay rate 0.97) to learn the edge weights, with the gradients clipped at a fixed threshold. As we will see later, we experiment with both RNN and LSTM networks with 1 or 2 hidden layers, each with either 128 or 256 hidden units. These neural networks were trained for a bounded number of epochs, and the time required to train each network for the different problems was on the order of hours. The experiments were performed on a 1.4 GHz Intel Core i5 CPU with 4 GB RAM.

Problem    | Correct Attempts | Total Tokens | Vocab Size | Training Vocab
recurPower | 8176             | 338,958      | 191        | 117
iterPower  | 9194             | 358,849      | 795        | 526
oddTuples  | 8233             | 385,264      | 554        | 317
evalPoly   | 824              | 55,370       | 373        | 276
compDeriv  | 205              | 18,557       | 226        | 150
4.3 Number of Corrected Submissions
We first present the overall results of our system, in terms of how many student submissions are corrected using the predicted tokens, in Table 3. Since our algorithm currently considers only one syntax error in a student submission, and many submissions have multiple syntax errors, we also report the number of cases where the suggested correction fixes the first syntax error but the submission is not completely fixed because of other errors. We call this class of programs Fixed (Other). In total, our system is able to provide suggestions that completely fix the syntax errors in 31.69% of the cases. Additionally, it is able to fix the first syntax error on a given error line, without fixing other syntax errors on later lines, in 6.39% of the cases. The system is not able to provide any fix for the remaining 61.92% of the submissions. The numbers of programs that are completely and partially fixed for each individual problem are also shown in the table.
We can first observe that even with a relatively small number of total attempts for the evalPoly and compDeriv problems, the system is able to repair a significant number of syntax errors (40.43% + 11.73% = 52.16% for evalPoly). We do get some improvement with a larger number of correct submissions, but the RNNs are able to learn comprehensive language models even with a few hundred correct submissions. Another interesting observation is that the system is able to completely fix the syntax errors for a large fraction of the submissions for every problem except oddTuples. On further manual inspection, we found that this was because the student attempts for the oddTuples problem contained a large number of indentation errors. Moreover, there was also a large number of diverse solution strategies that were not represented in the training set.
Table 3: Number of student submissions completely and partially fixed for each problem.

Problem     Incorrect   Completely      Fixed
            Attempts    Fixed           (Other)
recurPower  2071        1061 (51.23%)   281 (13.57%)
iterPower   2661        1599 (60.09%)   276 (10.37%)
oddTuples   8824        1575 (17.85%)   303 (3.43%)
evalPoly    324         131 (40.43%)    38 (11.73%)
compDeriv   323         135 (41.79%)    10 (3.09%)
Total       14203       4501 (31.69%)   908 (6.39%)
Table 4: Breakdown of completely and partially fixed submissions by algorithmic choice.

                         Completely Fixed                          Fixed (Other Line)
Problem     Incorrect    Offset           Offset1          Prev    Offset           Offset1
            Attempts     Insert  Replace  Insert  Replace  Line    Insert  Replace  Insert  Replace
recurPower  2071         48      48       467     708      856     16      15       215     310
iterPower   2661         9       8        672     869      1206    191     214      241     360
oddTuples   8824         29      28       464     622      1368    11      15       306     351
evalPoly    324          7       5        44      47       108     15      15       30      40
compDeriv   323          1       1        43      71       99      3       3        13      20
Total       14203        94      90       1690    2371     3637    236     262      805     1081
A more detailed breakdown of the number of submissions corrected or partially corrected by our system is shown in Table 4. The table reports the number of cases in which the syntax errors were fixed by the predicted token sequences using five different algorithmic choices: (i) Offset: the prefix token sequence is constructed from the start of the program to the error token reported by the parser; (ii) Offset1: the prefix token sequence is constructed up to one token before the error token; (iii) PrevLine: the prefix token sequence is constructed up to the previous line, and the error line is replaced by the predicted token sequence; (iv) Insert: the predicted token sequence is inserted at the Offset location; and (v) Replace: the original tokens starting at the Offset location are replaced by the predicted token sequence. As we can see, no single choice works better than all the others. This motivates the need for the SynFix algorithm, which tries the different algorithmic choices in order: it first looks for an insertion or replacement fix using predicted token sequences of increasing length, and only then applies the PrevLine method. This ordering prefers smaller changes to the original program.
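The search over these choices can be sketched as follows, using Python's own compile as the syntactic parser check. This is a simplified character-level sketch of the idea, not the paper's token-level implementation, and the candidates argument stands in for the trained language model's predictions:

```python
def parses(src):
    """Syntax check via Python's parser."""
    try:
        compile(src, "<submission>", "exec")
        return True
    except SyntaxError:
        return False

def synfix(lines, err_line, offset, candidates):
    """Try Insert then Replace at Offset and Offset1, then fall back to
    replacing the whole error line (the PrevLine choice). `candidates` plays
    the role of predicted sequences of increasing length."""
    line = lines[err_line]
    for pos in (offset, max(offset - 1, 0)):               # Offset, then Offset1
        for cand in candidates:
            for fixed in (line[:pos] + cand + line[pos:],  # Insert at position
                          line[:pos] + cand):              # Replace rest of line
                trial = lines[:err_line] + [fixed] + lines[err_line + 1:]
                if parses("\n".join(trial)):
                    return trial
    for cand in candidates:                                # PrevLine fallback
        trial = lines[:err_line] + [cand] + lines[err_line + 1:]
        if parses("\n".join(trial)):
            return trial
    return None
```

For example, on the submission `def f(x)` / `    return x` with the error reported at the end of the first line, inserting a predicted `:` at the error offset already yields a parse, so the smaller Insert change wins before PrevLine is ever tried.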
We can observe that the PrevLine choice performs the best for the completely fixed case. The reason is that the algorithm has more freedom to make larger changes to the original program by completely replacing the error line. However, this can sometimes lead to undesirable semantic changes that may not correspond to the student's intent, which is why PrevLine changes are explored only after the Insert/Replace choices in the SynFix algorithm. Replacing the original tokens with the predicted token sequences performs a little better than inserting them. Another interesting observation is that generating prefix token sequences for querying the language model that end one token before the error token (Offset1) performs much better than using prefix sequences that end at the error token (Offset). Finally, we observe that many student submissions are fixed uniquely by each one of the five choices, so the algorithm needs to consider all of them.
There are also some additional student submissions (amongst the 61.92% for which our technique cannot generate a complete or partial repair) for which we can provide some repair feedback using the PrevLine choice. In these submissions, replacing the erroneous line with the predicted line fixes the error on that line but does not necessarily make the program syntactically correct. We do not report these cases in the earlier tables as partially fixed programs because the replaced line itself often introduces new syntax errors into the submission. However, we believe that providing such fixes might still be beneficial to students as hints about the likely statements that should occur in place of the error line.
Another interesting point to note is that in some cases the number of partially fixed programs reported in Table 4 is larger than the number of partially fixed programs in Table 3. For instance, for the recurPower problem, the Offset1 and Replace combination can partially fix 310 submissions, whereas Table 3 reports only 281 partially fixed submissions for the same problem. This happens because some of those submissions are completely corrected by other algorithmic choices and are therefore counted in the Completely Fixed category in Table 3.
4.4 Different Neural Network Baselines
In this section, we compare different baseline neural networks for learning the token sequence models and their respective effectiveness in correcting the syntax errors for the recurPower problem. In particular, we consider 6 baselines: (i) RNN(1,128), (ii) RNN(2,128), (iii) RNN(2,256), (iv) LSTM(1,128), (v) LSTM(2,128), and (vi) LSTM(2,256), where NN(x,y) denotes a neural network (RNN or LSTM) consisting of x hidden layers with y units each. The results for the 6 baseline networks are presented in Table 5. There is not a large difference among the performance of the different networks. The RNN(1,128) model fixes the largest number of student submissions completely (1078), and also has the best performance after including the partially corrected submissions (1365 in total). Interestingly, adding an additional hidden layer or more hidden units actually degrades the performance of the network on our dataset. Our hypothesis for this phenomenon is that networks with more hidden layers and more hidden units overfit the token sequences during training and do not generalize as well as the networks with fewer hidden units. Another interesting observation is that RNNs perform slightly better than LSTMs in our scenario of fixing syntax errors.
Table 5: Results for the different baseline networks on the recurPower problem.

Baseline      Total       Completely   Fixed     Total
Network       Incorrect   Fixed        (Other)   Fixed
RNN(1,128)    2071        1078         287       1365
RNN(2,128)    2071        1062         267       1329
RNN(2,256)    2071        990          302       1292
LSTM(1,128)   2071        1028         294       1322
LSTM(2,128)   2071        1061         281       1342
LSTM(2,256)   2071        1045         293       1338
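One rough way to quantify the capacity differences behind this overfitting hypothesis is to count parameters. The formulas below assume standard vanilla RNN and LSTM layers with a softmax output over the vocabulary; this is an approximation we introduce for illustration, not a measurement from the paper:

```python
def rnn_params(vocab, hidden, layers):
    """Approximate parameter count: per layer W_xh, W_hh, and bias,
    plus a softmax output layer over the vocabulary."""
    total, in_size = 0, vocab
    for _ in range(layers):
        total += hidden * in_size + hidden * hidden + hidden
        in_size = hidden
    return total + vocab * hidden + vocab

def lstm_params(vocab, hidden, layers):
    """An LSTM layer has four gate blocks, so roughly 4x the recurrent weights."""
    total, in_size = 0, vocab
    for _ in range(layers):
        total += 4 * (hidden * in_size + hidden * hidden + hidden)
        in_size = hidden
    return total + vocab * hidden + vocab
```

With the recurPower training vocabulary of 117 tokens, RNN(1,128) has roughly 47K parameters under these formulas, while LSTM(2,256) has nearly 1M, about a 20x capacity gap trained on the same data, which is consistent with the larger networks overfitting.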
4.5 Effect of Different Threshold Values
We also evaluate the performance of our system while varying the threshold value used for constructing the training vocabulary. The results for 3 different threshold values (t = 1, 4, and 8) are shown in Table 6. As the threshold increases, a larger number of tokens are labeled as the IDENT token, thereby decreasing the size of the training vocabulary. Without any thresholding (t=1), the system fixes the fewest syntax errors for the recurPower problem. Several incorrect submissions cannot be corrected in this case because the learnt model performs poorly on prefix token sequences consisting of rarely used tokens (such as infrequent variable names). We can also observe that the threshold value of 4 performs better than the threshold value of 8. One hypothesis for this phenomenon is that the larger threshold overgeneralizes some of the tokens to IDENT and loses the specific token information needed for fixing some syntax errors.
Table 6: Effect of different vocabulary threshold values on the recurPower problem.

Threshold   Initial   Training   Completely   Fixed
Values      Vocab     Vocab      Fixed        (Other)
t=1         191       191        904          307
t=4         191       117        1078         287
t=8         191       86         1068         277
5 Related Work
In this section, we describe related work on learning language models for big code, automated code grading approaches, and machine learning based grading techniques for other domains.
Language Models for Big Code: The most closely related work to ours learns language models of source code from a large code corpus and then uses these models for applications such as learning natural coding conventions, code suggestion and autocompletion, improving code style, and suggesting variable and method names. Hindle et al. [8] use an n-gram model to capture the regularity of local project-specific contexts in software. They apply the learnt model to suggest next tokens in the context of the Java language, and show that the simple language model, even without syntax or type information, outperformed the state-of-the-art Eclipse IDE token suggestion engine. Nguyen et al. [14] extended this syntactic n-gram language model to also include semantic annotations that describe token types and their semantic roles (such as variable, function call, etc.), and combined it with topic modeling to obtain an n-gram topic model that also captures global technical concerns of the source files and pairwise associations of code tokens. They apply this enhanced model to code suggestion and show that it improves accuracy over the syntactic n-gram approach by 18% to 68%. Allamanis et al. [3] applied the technique of learning n-gram models to a much larger code corpus containing over 350 million lines of code, and showed that training on such a large corpus can significantly increase the models' predictive capabilities.
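To make the n-gram approach concrete, a minimal token-level trigram suggester can be written as follows; this is our own illustrative sketch, far simpler than the cited systems:

```python
from collections import Counter, defaultdict

def train_ngram(token_seqs, n=3):
    """Count which token follows each (n-1)-token context."""
    model = defaultdict(Counter)
    for seq in token_seqs:
        padded = ["<s>"] * (n - 1) + list(seq)
        for i in range(len(seq)):
            model[tuple(padded[i:i + n - 1])][padded[i + n - 1]] += 1
    return model

def suggest(model, context, n=3):
    """Rank next-token candidates given the last n-1 tokens of context."""
    ctx = tuple((["<s>"] * (n - 1) + list(context))[-(n - 1):])
    return [tok for tok, _ in model[ctx].most_common()]
```

Trained on tokenized Java or Python files, such a model captures exactly the local regularities the cited work exploits: after the context `in range`, for instance, an opening parenthesis is overwhelmingly the most likely next token.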
Naturalize [1] is a language-independent framework that uses the n-gram language model to learn the coding conventions and coding style of a project codebase, and suggests revisions to improve stylistic consistency. It was used to suggest natural identifier names and formatting conventions, and achieved 94% accuracy for suggesting identifier names as its top suggestion. The framework constructs an input snippet using the abstract syntax tree for the identifier for which suggestions are needed and selects n-grams from this snippet containing the identifier. N-grams containing that identifier are also selected from a training corpus (source code from the project whose conventions are to be adopted). The learnt n-gram model is then used to score all the candidates selected from the training corpus as replacements for the identifier in the input codebase.
Allamanis et al. [2] recently presented a technique for suggesting method names from their bodies and class names from their methods, using a neural probabilistic language model. The input to these models is a finite-length sequence that is mapped to a lower-dimensional vector (word representation). These vectors are then fed to the hidden layer (a nonlinear function) of the neural network. The final output layer is a softmax layer that evaluates the conditional probability of each word in the vocabulary given the input sequence.
JSNice [16] is a scalable prediction engine for predicting identifier names and type annotations of variables in JavaScript programs. The key idea in JSNice is to transform the input program into a representation that enables formulating the problem of inferring identifier names and type annotations as structured prediction with conditional random fields (CRFs). Given an input program, it first converts the program into a dependency network that captures relationships between program elements whose properties are to be predicted and elements whose properties are known. It then uses Maximum a Posteriori (MAP) inference to perform structured prediction on the network.
Our technique is inspired by this previous work on learning language models from a corpus of code, but differs from it in four key ways. First, our application of using the language model to compute fixes for syntax errors in student submissions is different from the applications considered by previous approaches, such as suggesting identifier, method, and class names, code autocompletion and suggestion, and coding convention inference. Second, we use a recurrent neural network (RNN) to capture long-range relationships amongst tokens in a token sequence, unlike previous approaches that use n-gram models, CRFs, and log-bilinear neural networks. RNNs are traditionally considered hard to train, but we leverage recent advances in efficiently training RNNs to learn the token sequence models. Third, since we cannot obtain abstract syntax trees for programs with syntax errors, many of the techniques that depend on analyzing ASTs are not applicable in our setting. Finally, we learn different RNN models for different programming assignments, as opposed to learning a single model from the whole corpus. This allows us to find more accurate repairs for syntax errors that are problem dependent.
Automated Code Grading and Feedback:
The automated grading approaches can broadly be classified into two categories: 1) programming languages based approaches, and 2) machine learning based approaches. AutoProf [18] is a system for providing automated feedback on the functional correctness of introductory programming assignments. In addition to an incorrect student submission, it also takes as input a reference implementation specifying the intended functional behavior of the programming problem, and an error model consisting of rewrite rules corresponding to common mistakes that students make for the given problem. AutoProf uses constraint-based synthesis techniques [19] to find the minimum number of changes (guided by the error model) to the incorrect student submission that make it functionally equivalent to the reference implementation. Another approach [7] based on dynamic program analysis was recently presented for providing feedback on performance problems. It runs student submissions on a set of test cases to capture certain key values that occur during program execution, which are then used to identify the high-level strategy used by the student submission and provide corresponding feedback.
There has also been a lot of interest in the machine learning community in automated feedback generation and grading for programming problems. Huang et al. [10] present an approach to automatically cluster syntactically similar Matlab/Octave programs based on AST tree edit distance, using an efficient approach based on dynamic programming. Codewebs [13] creates a queryable index that allows for fast searches of code phrases over a massive dataset of student submissions. It accepts three forms of queries: subtrees, subforests, and contexts, which are subgraphs of an AST. A teacher provides detailed feedback on a handful of labeled submissions, which is then propagated to thousands of student submissions by understanding the underlying structure present in the labeled submissions and querying the search engine.
Another recently proposed approach uses neural networks to simultaneously embed both the precondition and postcondition spaces of a set of programs into a feature space, where programs are considered as linear maps on this space. The elements of the embedding matrix of a program are then used as code features for automatically propagating instructor feedback at scale [15].
The key difference between our technique and the programming languages and machine learning based techniques above is that they rely on the ability to generate an AST for the student submission before performing further analysis. With syntax errors, it is not possible to obtain such ASTs, which unfortunately prevents these techniques from providing feedback on syntactic errors in student submissions.
Machine Learning for Grading in Other Domains: Similar automated grading systems have been developed for domains other than programming, such as mathematics and short answer questions. The Mathematical Language Processing (MLP) [12] framework leverages solutions from a large number of learners to evaluate the correctness of student solutions to open response mathematical questions. It first converts an open response to a set of numerical features, which are then used for clustering to uncover structures of correct, partially correct, and incorrect solutions. A teacher assigns a grade/feedback to one solution in a cluster, which is then propagated to all solutions in the cluster. Basu et al. [5] present an approach that trains a similarity metric between short answer responses to the United States Citizenship Exam, which is then used to group the responses into clusters and subclusters for powergrading. The main difference between our technique and these approaches is that we use RNNs to learn a language model over token sequences, unlike the clustering-based machine learning approaches used by these techniques. Moreover, we focus on giving feedback on syntax errors, whereas these techniques focus on the semantic correctness of student solutions.
6 Limitations and Future Work
There are several limitations of the presented algorithm that we would like to address in future work. One limitation of our technique is that it currently handles only one syntax error in a student program. For example, consider the student submission in Figure 7. The SynFix algorithm is able to correctly fix the first indentation error in line 3 by inserting a tab token before the return token, but the updated program does not pass the compiler check because of another indentation error in line 5. We plan to extend our algorithm to handle multiple syntax errors by fixing the first syntax error found in the program using the SynFix algorithm and then calling the algorithm again on the next error found in the updated program.
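This proposed extension can be sketched as a driver loop that repeatedly locates the first syntax error and invokes the single-error fixer. Here fix_one stands in for SynFix, and the toy fixer used in the test (which just appends a missing colon) is purely illustrative:

```python
def first_syntax_error(src):
    """Return the line number of the first syntax error, or None if src parses."""
    try:
        compile(src, "<submission>", "exec")
        return None
    except SyntaxError as e:
        return e.lineno

def fix_all(src, fix_one, max_rounds=5):
    """Repeatedly apply a single-error fixer (e.g. SynFix) until the program parses."""
    for _ in range(max_rounds):
        lineno = first_syntax_error(src)
        if lineno is None:
            return src                     # fully repaired
        repaired = fix_one(src, lineno)
        if repaired is None or repaired == src:
            return None                    # no progress on this error
        src = repaired
    return None
```

Bounding the number of rounds matters: a fixer that keeps producing new errors (for instance, a PrevLine replacement that itself introduces a syntax error) would otherwise loop forever.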
Our system currently uses only the prefix token sequence for suggesting the repair token sequence. For the program shown in Figure 8, the algorithm suggests a fix corresponding to the expression exp==0. If the algorithm also took into account the token sequences following the error location, such as return base, it could have predicted the better fix exp == 1. Bidirectional RNNs are a class of RNNs that predict tokens based on both past and future context. In future work, we intend to investigate whether bidirectional RNNs can be trained efficiently in our setting and whether they improve the fix coverage.
Another limitation of our technique is that it only checks for syntactic correctness while finding a repair candidate. There are some cases where the suggested sequence fixes the syntax errors but is semantically incorrect. We can try to solve this issue by adding a semantic check in the SynFix algorithm in addition to the syntactic parser check, and by allowing the algorithm to query the learnt model for multiple token sequence predictions until we obtain one that is a semantically correct fix as well.
Finally, there is an interesting research question of how to best translate the repairs generated by our technique into good pedagogical feedback, especially in the cases where the suggested fix is not semantically correct. Some syntax errors are simple typos, such as mismatched parentheses or missing operators, for which the feedback generated by our technique should be sufficient. But some classes of syntax errors point to deeper misconceptions in the student's mind. Examples of such errors include assigning to the return keyword, e.g. return = exp, or performing an assignment inside a parameter of a function call, e.g. recurPower(base,exp=1). We would like to build a system on top of our technique that can first distinguish simple typos from deeper misconception errors, and then translate the suggested repair accordingly so that students can learn the high-level concepts needed to correctly understand the language syntax.
7 Conclusion
In this paper, we presented a technique that uses recurrent neural networks (RNNs) to learn token sequence models for finding repairs to syntax errors in student programs. For a programming assignment, our technique first uses the set of all syntactically correct student submissions to train an RNN that learns the token sequence model, and then uses the trained model to predict token sequences that repair student submissions with syntax errors. Our technique draws on two emerging research areas: 1) learning language models from big code, and 2) efficient learning techniques for recurrent neural networks. For our dataset of student attempts obtained from the edX platform, our technique can generate repairs for about 32% of submissions. We believe this technique can provide a basis for automated feedback on syntax errors to the hundreds of thousands of students learning from online introductory programming courses offered by edX, Coursera, and Udacity.
References
 [1] M. Allamanis, E. T. Barr, C. Bird, and C. A. Sutton. Learning natural coding conventions. In FSE, pages 281–293, 2014.
 [2] M. Allamanis, E. T. Barr, C. Bird, and C. A. Sutton. Suggesting accurate method and class names. In FSE, pages 38–49, 2015.
 [3] M. Allamanis and C. A. Sutton. Mining source code repositories at massive scale using language modeling. In MSR, pages 207–216, 2013.
 [4] M. Allamanis and C. A. Sutton. Mining idioms from source code. In FSE, pages 472–483, 2014.
 [5] S. Basu, C. Jacobs, and L. Vanderwende. Powergrading: a clustering approach to amplify human effort for short answer grading. TACL, 1:391–402, 2013.
 [6] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, Inc., New York, NY, USA, 1995.
 [7] S. Gulwani, I. Radicek, and F. Zuleger. Feedback generation for performance problems in introductory programming assignments. In FSE, pages 41–51, 2014.
 [8] A. Hindle, E. T. Barr, Z. Su, M. Gabel, and P. T. Devanbu. On the naturalness of software. In ICSE, pages 837–847, 2012.
 [9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, Nov. 1997.
 [10] J. Huang, C. Piech, A. Nguyen, and L. J. Guibas. Syntactic and functional variability of a million code submissions in a machine learning MOOC. In AIED, 2013.
 [11] C. E. Kulkarni, P. W. Wei, H. Le, D. J. hao Chia, K. Papadopoulos, J. Cheng, D. Koller, and S. R. Klemmer. Peer and self assessment in massive online classes. ACM Trans. Comput.-Hum. Interact., 20(6):33, 2013.
 [12] A. S. Lan, D. Vats, A. E. Waters, and R. G. Baraniuk. Mathematical language processing: Automatic grading and feedback for open response mathematical questions. In Learning@Scale, pages 167–176, 2015.
 [13] A. Nguyen, C. Piech, J. Huang, and L. J. Guibas. Codewebs: scalable homework search for massive open online programming courses. In WWW, pages 491–502, 2014.
 [14] T. T. Nguyen, A. T. Nguyen, H. A. Nguyen, and T. N. Nguyen. A statistical semantic language model for source code. In ESEC/FSE, 2013.
 [15] C. Piech, J. Huang, A. Nguyen, M. Phulsuksombati, M. Sahami, and L. J. Guibas. Learning program embeddings to propagate feedback on student code. In ICML, pages 1093–1102, 2015.
 [16] V. Raychev, M. T. Vechev, and A. Krause. Predicting program properties from "big code". In POPL, pages 111–124, 2015.
 [17] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1. chapter Learning Internal Representations by Error Propagation, pages 318–362. MIT Press, 1986.
 [18] R. Singh, S. Gulwani, and A. Solar-Lezama. Automated feedback generation for introductory programming assignments. In PLDI, pages 15–26, 2013.
 [19] A. Solar-Lezama, L. Tancau, R. Bodík, S. A. Seshia, and V. A. Saraswat. Combinatorial sketching for finite programs. In ASPLOS, pages 404–415, 2006.
 [20] P. J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, Oct 1990.