Deep Learning for Bug-Localization in Student Programs

05/28/2019
by Rahul Gupta, et al.
Indian Institute of Science

Providing feedback is an integral part of teaching. Most open online courses on programming make use of automated grading systems to support programming assignments and give real-time feedback. These systems usually rely on test results to quantify the programs' functional correctness. They return failing tests to the students as feedback. However, students may find it difficult to debug their programs if they receive no hints about where the bug is and how to fix it. In this work, we present the first deep learning based technique that can localize bugs in a faulty program w.r.t. a failing test, without even running the program. At the heart of our technique is a novel tree convolutional neural network which is trained to predict whether a program passes or fails a given test. To localize the bugs, we analyze the trained network using a state-of-the-art neural prediction attribution technique and see which lines of the programs make it predict the test outcomes. Our experiments show that the proposed technique is generally more accurate than two state-of-the-art program-spectrum based and one syntactic difference based bug-localization baselines.


1 Introduction

Automated grading systems for student programs both check the functional correctness of assignment submissions and provide real-time feedback to students. The feedback helps students learn from their mistakes, allowing them to revise and resubmit their work. In the current practice, automated grading systems rely on running the submissions against a test suite. The failing tests are returned to the students as feedback. However, students may find it difficult to debug their programs if they receive no hints about where the bug is and how to fix it. Although instructors may inspect the code and manually provide such hints in a traditional classroom setting, doing this in an online course with a large number of students is often infeasible. Therefore, our aim in this work is to develop an automated technique for generating feedback about the error locations corresponding to the failing tests. Such a technique benefits both instructors and students by allowing instructors to automatically generate hints for students without giving away the complete solution.

Figure 1: Overview of our technique. The inputs are (program, test_id) pairs and the outputs are pass/fail predictions. The buggy lines in the input programs are represented by dashed lines. We omit test ids from the input for brevity. The forward black arrows show the neural network prediction for each input. The thick gray arrows and ovals show the prediction attribution back to the buggy input programs, leading to bug-localization.
Figure 2: Example to illustrate our technique. The program shown has bugs at two lines (shown in bold). The top suspicious lines returned by our technique for this program are marked using a heat-map where a darker color indicates a higher suspiciousness score.

Towards this, we propose a deep learning based semantic bug-localization technique. While running a program against a test suite can detect the presence of bugs in the program, locating these bugs requires careful analysis of the program behavior. Our technique works in two phases. In the first phase, we train a novel tree convolutional neural network to predict whether or not a program passes a given test. The input to this network is a pair of a program and a test id. In the second phase, we query a state-of-the-art neural prediction attribution technique (Sundararajan et al., 2017) to find out which lines of a buggy program make the network predict the failure, thereby localizing the bugs. Figure 1 shows the overview of our technique, and Figure 2 shows our technique in action on a buggy student submission w.r.t. a failed test. For a given character c, the programming task for this submission requires as output the character obtained by reversing its case if c is a letter, a transformed digit if c is a digit, and c itself otherwise. The illustrated submission mishandles the second case and prints the same digit as the input.

Prediction attribution techniques are employed for attributing the prediction of a deep network to its input features. For example, for a multi-class image recognition network, a prediction attribution technique can identify the pixels associated with the given class in the given image, and thus can be used for object-localization in spite of being trained on image labels only. Our work introduces prediction attribution for semantic bug-localization in programs.

Bug-localization is an active field of research in software engineering (Wong et al., 2016). Spectrum-based bug-localization approaches (Jones et al., 2001; Abreu et al., 2006) instrument the programs to obtain program traces corresponding to both the failing and the passing tests. In order to locate the bugs in a program, they compare the program statements that are executed in failing test runs against those executed in passing test runs. While spectrum-based bug-localization exploits correlations between executions of the same program on multiple tests, our technique exploits similarities and differences between the code of multiple programs w.r.t. the same test. In this way, the former is a dynamic program analysis approach, whereas the latter is a static program analysis approach.

The existing static approaches for bug-localization in student programs compare a buggy program with a reference implementation (Kaleeswaran et al., 2016; Kim et al., 2016). However, the buggy program and the reference implementation can use different variable names, constant values, and data and control structures, making it extremely difficult to distinguish bug-inducing differences from benign ones. Doing this requires the use of sophisticated program analysis techniques along with heuristics, which may not work for a different programming language. In contrast, our technique does not require any heuristics and is therefore programming-language agnostic.

Use of machine learning in software engineering research is not new. Several recent works proposed deep learning based techniques for automated syntactic error repair in student programs (Gupta et al., 2017; Bhatia et al., 2018; Ahmed et al., 2018; Gupta et al., 2019). Bugram (Wang et al., 2016) is a language model based bug-detection technique. Pu et al. (2016) propose a deep learning based technique for both syntactic and semantic error repair in small student programs. Their technique uses a brute-force, enumerative search for detecting and localizing bugs. Another recent work (Vasic et al., 2019) proposed a multi-headed LSTM pointer network for joint localization and repair of variable-misuse bugs. In contrast, ours is a semantic bug-localization technique, which learns to find the location of the buggy statements in a program. Unlike these approaches, our technique neither requires explicit bug-localization information for training nor performs a brute-force search. Instead, it trains a neural network to predict whether or not a program passes a test and analyzes the gradients of the trained network for bug-localization. Moreover, our technique is more general and works for all kinds of semantic bugs. To the best of our knowledge, we are the first to propose a general deep learning technique for semantic bug-localization in programs w.r.t. failing tests.

We train and evaluate our technique on C programs written by students for different programming tasks in an introductory programming course. The dataset comes with instructor-written tests for these tasks; thus, programs for each task are tested against several tests on average. We compare our technique with three baselines: two state-of-the-art program-spectrum based techniques (Jones et al., 2001; Abreu et al., 2006) and one syntactic-difference based technique. Our experiments demonstrate that the proposed technique is more accurate than these baselines in most cases. The main contributions of this work are as follows:

  1. It proposes a novel encoding of program ASTs and a tree convolutional neural network that allow efficient batch training for arbitrarily shaped trees.

  2. It presents the first deep learning based general technique for semantic bug-localization in programs. It also introduces prediction attribution in the context of programs.

  3. The proposed technique is evaluated on thousands of erroneous C programs with encouraging results. It successfully localized a wide variety of semantic bugs, including wrong conditionals, assignments, output formatting and memory allocation, among others. We provide several concrete examples of these in the appendix.

Both the dataset and the implementation of our technique will be open sourced.

2 Background: Prediction Attribution

Prediction attribution techniques attribute the prediction of a deep network to its input features. For our task of bug-localization, we use a state-of-the-art prediction attribution technique called integrated gradients (Sundararajan et al., 2017). This technique has been shown to be effective in domains as diverse as object recognition, medical imaging, question classification, and neural machine translation, among others. In Section 3.2, we explain how we leverage integrated gradients for bug-localization in programs. Here we describe this technique briefly; for more details, we refer our readers to the work of Sundararajan et al. (2017).

When assigning credit for a prediction to a certain feature in the input, the absence of the feature is required as a baseline for comparing outcomes. This absence is modeled as a single baseline input on which the prediction of the neural network is “neutral”, i.e., conveys a complete absence of signal. For example, in object recognition networks, the black image can be considered a neutral baseline. The integrated gradients technique distributes the difference between the two outputs (corresponding to the input of interest and the baseline) to the individual input features.

More formally, consider a deep network representing a function $F : \mathbb{R}^n \rightarrow [0,1]$, an input $x \in \mathbb{R}^n$, and a baseline $x' \in \mathbb{R}^n$. Integrated gradients are defined as the path integral of the gradients along the straight-line path from the baseline $x'$ to the input $x$. The integrated gradient along the $i$-th dimension is defined as follows:

$$\mathrm{IntegratedGrads}_i(x) = (x_i - x'_i) \times \int_{0}^{1} \frac{\partial F\big(x' + \alpha (x - x')\big)}{\partial x_i}\, d\alpha$$

If $F$ is differentiable almost everywhere, then

$$\sum_{i=1}^{n} \mathrm{IntegratedGrads}_i(x) = F(x) - F(x')$$

If the baseline is chosen in a way such that the prediction at the baseline is near zero ($F(x') \approx 0$), then the resulting attributions have an interpretation that ignores the baseline and amounts to distributing the output to the individual input features. The integrated gradients can be efficiently approximated by summing the gradients at points occurring at sufficiently small intervals along the straight-line path from the baseline $x'$ to the input $x$:

$$\mathrm{IntegratedGrads}_i^{\mathrm{approx}}(x) = (x_i - x'_i) \times \sum_{k=1}^{m} \frac{\partial F\big(x' + \frac{k}{m}(x - x')\big)}{\partial x_i} \times \frac{1}{m}$$

where $m$ is the number of steps in the Riemann approximation of the integral of integrated gradients (Sundararajan et al., 2017).
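As an illustration, the Riemann approximation above can be sketched in a few lines of NumPy. The quadratic function f, its analytic gradient, and the step count below are illustrative stand-ins, not the paper's trained network:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=1000):
    """Riemann approximation of integrated gradients: average the
    gradients at points along the straight-line path from the
    baseline to the input, then scale by (x - baseline)."""
    diff = x - baseline
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * diff  # k/m of the way along the path
        total += grad_f(point)
    return diff * total / steps

def f(x):          # toy model: F(x) = sum(x_i^2), so F(baseline) = 0
    return float(np.sum(x ** 2))

def grad_f(x):     # its analytic gradient, standing in for backpropagation
    return 2.0 * x

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness: the attributions sum (approximately) to F(x) - F(baseline).
```

In the actual technique, grad_f would be the gradient of the network's failure prediction with respect to the program's embedded input.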

3 Technical Details

Figure 3: (a) AST of the code snippet int even=!(num%2);, with nodes such as Decl:even, TypeDecl:even, IdentifierType:int, UnaryOp:!, BinaryOp:%, ID:num, and Constant:int,2. For each node, its visiting order in the breadth-first traversal of the AST is also shown. (b) 2-d matrix representation of the AST shown in (a). The matrix shows node positions instead of the nodes themselves to avoid clutter. For example, the last row corresponds to the highlighted subtree from (a).

We divide our bug-localization approach into two phases. In the first phase, we train a neural network to predict whether or not a program passes the test corresponding to a given test id. This is essentially a classification problem with two inputs, program text and a test id, where we have multiple passing and failing programs (which map to different class labels) for each test id. Though different programs are used at test time, they share test ids with the training examples. In the second phase, we perform bug-localization by identifying the patterns that help the neural network in correct classification. Note that the neural network is only given the test id along with the program as input. It is not provided with the actual inputs and the expected outputs of the tests, nor does it execute the program. The learning is based only on the syntactic patterns present or absent in the programs.

3.1 Phase 1: Tree Convolutional Neural Network for Test Success/Failure Prediction

Use of machine learning in software engineering research is not new. Many works use machine learning algorithms on programs for different tasks such as code completion, automated repair of syntactic errors, and program synthesis, among others. Details of many such techniques can be found in a survey (Allamanis et al., 2018) and the references therein. Most of these works use recurrent neural networks (RNNs) (Gupta et al., 2017; Vasic et al., 2019) and convolutional neural networks (CNNs) (Mou et al., 2016). Our initial experiments with multiple variants of both RNNs and CNNs suggested the latter to be better suited for our task. CNNs are designed to capture spatial neighborhood information in data and are generally used with inputs having a grid-like structure such as images (Goodfellow et al., 2016). On their own, they may fail to capture the hierarchical structures present in programs. To address this, Mou et al. (2016) proposed tree-based CNNs. However, the design of their custom filter is difficult to implement and train as it does not allow batch computation over variable-sized programs and trees. Therefore, we propose a novel tree convolutional network which uses specialized program encoding and convolution filters to capture the tree-structural information present in programs, allowing us not only to batch variable-sized programs but also to use the well-optimized CNN implementations provided by the popular deep learning frameworks of the day.

3.1.1 Program Encoding

Programs have rich structural information, which is explicitly represented by their abstract syntax trees (ASTs). Figure 3(a) shows the AST of the following code snippet:

int even=!(num % 2);

Each node in an AST represents an abstract construct in the program source code. We encode programs in such a way that their explicit tree-structural information is captured by CNNs easily. To do this, we convert the AST of a program into an adjacency-list-like representation as follows. First, we flatten the tree in the breadth-first traversal. In the second step, each non-terminal node in this flattened tree is replaced by a list, with the first element of the list being the node itself and the rest of the elements being its direct children ordered from left to right. As terminal nodes do not hold any structure by themselves, we discard them at this step.

Next, we convert this representation into a 2-dimensional matrix for feeding it to a CNN. We do that by padding subtrees with dummy nodes to make them of equal size across all programs in our dataset. We also pad the programs with dummy subtrees to make each program have the same number of subtrees. This way each program is encoded into a 2-dimensional matrix whose two dimensions are the maximum number of subtrees and the maximum number of nodes in a depth-1 subtree across all programs in our dataset, respectively. Figure 3(b) shows the 2-dimensional matrix representation for the AST shown in Figure 3(a), where a dedicated symbol indicates padding. In this representation, each row of the encoded matrix corresponds to a depth-1 subtree in the program AST. Moreover, contiguous subsets of rows of an encoded matrix correspond to larger subtrees in the program AST. Note that this encoding ensures that the tree-structural information of a program is captured by the spatial neighborhood of elements within a row of its encoded matrix, allowing us to use CNNs with simple convolution filters which can extract features from complete subtrees at a time, and not just from any random subset of nodes.
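A minimal sketch of this encoding on a toy AST, represented as nested (label, children) tuples mirroring Figure 3(a); the padding token and the size caps max_rows/row_len below are illustrative stand-ins for the dataset-wide maxima:

```python
from collections import deque

# Hypothetical AST as (label, children) tuples, mirroring Figure 3(a).
ast = ("Decl:even",
       [("TypeDecl:even", [("IdentifierType:int", [])]),
        ("UnaryOp:!", [("BinaryOp:%", [("ID:num", []), ("Constant:int,2", [])])])])

PAD = "<pad>"  # assumed padding token

def encode(ast, max_rows, row_len):
    """Flatten the AST breadth-first; each non-terminal becomes one row:
    the node followed by its direct children (a depth-1 subtree).
    Terminals hold no structure and contribute no row of their own."""
    rows, queue = [], deque([ast])
    while queue:
        label, children = queue.popleft()
        if children:
            rows.append([label] + [child[0] for child in children])
            queue.extend(children)
    # Pad each row, then pad the program with dummy rows, to a fixed size.
    rows = [row + [PAD] * (row_len - len(row)) for row in rows]
    rows += [[PAD] * row_len for _ in range(max_rows - len(rows))]
    return rows

matrix = encode(ast, max_rows=6, row_len=4)
```

Each row of `matrix` is one depth-1 subtree, so a convolution window covering whole rows always sees complete subtrees.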

Next, we create a shared vocabulary across all program ASTs in our dataset. The vocabulary retains all the AST nodes such as non-terminals, keywords, and literals except for the identifiers (variable and function names) without any modification. Identifiers are included in the vocabulary after normalization. This is done by creating a small set of placeholders and mapping each distinct identifier in a program to a unique placeholder in our set. The size of the placeholder set is kept large enough to allow this normalization for every program in our dataset. This transformation prevents the identifiers from introducing rarely used tokens in the vocabulary without changing the semantics of the program they appear in.
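The identifier normalization can be sketched as follows; the placeholder naming scheme, pool size, and token list are illustrative assumptions:

```python
def normalize_identifiers(tokens, identifiers, pool_size=100):
    """Map each distinct identifier to a per-program placeholder so that
    variable/function names do not introduce rare vocabulary tokens."""
    placeholders = [f"ID_{i}" for i in range(pool_size)]  # assumed pool
    mapping = {}
    out = []
    for tok in tokens:
        if tok in identifiers:
            if tok not in mapping:                 # first occurrence gets
                mapping[tok] = placeholders[len(mapping)]  # the next slot
            out.append(mapping[tok])
        else:
            out.append(tok)                        # keywords, literals, etc.
    return out

# Tokens of: int even = !(num % 2);
tokens = ["int", "even", "=", "!", "(", "num", "%", "2", ")", ";"]
normalized = normalize_identifiers(tokens, identifiers={"even", "num"})
```

Because the mapping is rebuilt per program, the same placeholder consistently replaces the same identifier within a program without changing its semantics.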

3.1.2 Neural Network Architecture

Figure 4: Tree convolution over the encoded program AST input. The variable shown represents the maximum number of nodes in a depth-1 subtree across all programs in our dataset.

Given a pair of a program and a test id as input, our learning task is to predict the binary test result, i.e., failure or success of the input program on the test corresponding to the given test id. To do this, we first encode the input program into its 2-d matrix representation as discussed above. Each element (node) of the matrix is then replaced by its index in the shared vocabulary, which is then embedded into a dense vector using an embedding layer. The output 3-d matrix is then passed through a convolutional neural network to compute a dense representation of the input program, as shown in Figure 4. The first convolutional layer of our network extracts features from a single node at a time. The output is then passed through two independent convolutional layers. The first of these two layers applies filters overlapping one whole row at a time with a stride of one row. The filter for the second overlaps three rows at a time with a stride of three rows. As discussed earlier, each row of the program encoding matrix represents a depth-1 subtree of the program AST. This makes the last two convolutional layers detect features for one subtree and three subtrees at a time, respectively.

The resulting features from both these layers are then concatenated to get the program embedding. Next, we embed the test id into a dense vector using another embedding layer. It is then concatenated with the program embedding and passed through three fully connected non-linear layers to generate the binary result prediction. We call our model a tree convolutional neural network (TCNN).
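The two row-spanning convolutions can be sketched with NumPy, omitting the per-node first layer and the test-id branch; all sizes and the random weights below are toy values, not the trained model's:

```python
import numpy as np

rng = np.random.default_rng(0)
T1, T2, k = 9, 4, 8               # toy sizes: rows, nodes per row, embedding dim
x = rng.normal(size=(T1, T2, k))  # embedded program matrix

def conv_rows(x, span, stride, filters=16):
    """Apply a filter covering `span` whole rows, moving `stride` rows
    at a time, so each window sees only complete depth-1 subtrees."""
    T1, T2, k = x.shape
    W = 0.1 * rng.normal(size=(span * T2 * k, filters))  # random toy weights
    outs = []
    for i in range(0, T1 - span + 1, stride):
        window = x[i:i + span].reshape(-1)        # one or three complete subtrees
        outs.append(np.maximum(window @ W, 0.0))  # ReLU feature map
    return np.stack(outs)

one_row = conv_rows(x, span=1, stride=1)     # features per single subtree
three_rows = conv_rows(x, span=3, stride=3)  # features per group of three subtrees
# Max-pool over positions and concatenate to get the program embedding.
embedding = np.concatenate([one_row.max(axis=0), three_rows.max(axis=0)])
```

Because the filter width equals the full row, this is an ordinary 2-d convolution with a (span × row) kernel, which is why off-the-shelf CNN layers suffice.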

3.2 Phase 2: Prediction Attribution for Bug-Localization

For a pair of a buggy program and a test id, such that the program fails the test, our aim is to localize the buggy line(s) in the program that are responsible for this failure. If our trained model predicts the failure for such a pair correctly, then we can query a prediction attribution technique to assign the blame for the prediction to the input program features to find these buggy line(s). As discussed earlier, we use the integrated gradients (Sundararajan et al., 2017) technique for this purpose.

In order to attribute the prediction of a network to its input, this technique requires a neutral baseline which conveys the complete absence of signal. Sundararajan et al. suggest black images for object recognition networks and all-zero input embedding vectors for text based networks as baselines. However, using an all-zero input embedding vector as a baseline for all the buggy programs does not work for our task. Instead, we propose to use a correct program similar (both syntactically and semantically) to the input buggy program as a baseline for attribution. This works because the correct program does not have any patterns which cause bugs and hence conveys the complete absence of the signal required for the network to predict the failure. Furthermore, we are interested in capturing only those changes which introduce bugs in the input program and not the benign changes which do not introduce any bugs. This justifies the use of a similar program as a baseline. Using a very different correct program as a baseline would unnecessarily distribute the output difference to benign changes, which would lead to the undesirable outcome of localizing them as bugs.

For a buggy submission by a student, we find the baseline from the set of correct submissions by other students, as follows. First, we compute the embeddings of all the correct programs using our tree CNN. Then we compute the cosine distance of these embeddings from the buggy program embedding. The correct program with the minimum cosine distance is used as the baseline for attribution.
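This nearest-neighbor baseline search reduces to a cosine-distance argmin over embeddings; a sketch with toy vectors (in the technique, the embeddings come from the trained TCNN):

```python
import numpy as np

def nearest_correct_program(buggy_emb, correct_embs):
    """Index of the correct program whose embedding is at minimum
    cosine distance from the buggy program's embedding."""
    a = buggy_emb / np.linalg.norm(buggy_emb)
    B = correct_embs / np.linalg.norm(correct_embs, axis=1, keepdims=True)
    cosine_distances = 1.0 - B @ a   # 1 - cosine similarity
    return int(np.argmin(cosine_distances))

buggy = np.array([1.0, 0.0])
corrects = np.array([[0.0, 1.0],    # orthogonal embedding
                     [2.0, 0.1],    # nearly parallel: chosen as baseline
                     [-1.0, 0.0]])  # opposite direction
best = nearest_correct_program(buggy, corrects)
```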

The integrated gradients technique assigns credit values to each element of an embedded AST node, which are averaged to get the credit value for that node. As bug-localization techniques usually localize bugs to program lines, we further average the credit values of a line's nodes to get a credit value for each line in the program. The nodes corresponding to a line are identified using the parser used to generate the ASTs. We interpret the credit value of a line as the suspiciousness score for that line to be buggy. Finally, we return a ranked list of program lines sorted in decreasing order of their suspiciousness scores.
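The node-to-line averaging and ranking step can be sketched as follows; the node names, credits, and line mapping are illustrative:

```python
from collections import defaultdict
from statistics import mean

def rank_lines(node_credits, node_to_line):
    """Average per-node attribution credits into per-line suspiciousness
    scores and return the lines ranked most-suspicious first."""
    per_line = defaultdict(list)
    for node, credit in node_credits.items():
        per_line[node_to_line[node]].append(credit)
    scores = {line: mean(credits) for line, credits in per_line.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy attribution output: three nodes on line 3, one node on line 1.
node_credits = {"BinaryOp": 0.9, "ID": 0.7, "Constant": 0.2, "Decl": 0.1}
node_to_line = {"BinaryOp": 3, "ID": 3, "Constant": 3, "Decl": 1}
ranking = rank_lines(node_credits, node_to_line)
```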

4 Experiments

4.1 Dataset

Table 1: Dataset statistics, reporting the average numbers of correct and buggy programs per task, tests per task, submissions per student, and lines per submission.

For training and evaluation, we use student-written C programs for different programming tasks in an introductory programming course. The problem statements of these tasks are quite diverse, requiring students to implement concepts such as simple integer arithmetic, array and string operations, backtracking, and dynamic programming. Solving these requires various language constructs such as scalar and multi-dimensional array variables, conditionals, nested loops, recursion, and functions. We list the problem statements for some of the programming tasks in the appendix. Our dataset comes with the instructor-provided test suite for each programming task. Note that we work only with the tests written by the instructors of this course and do not write or generate any additional tests. A program is tested only against tests from the same programming task it is written for. This is assumed in the discussion henceforth.

Table 1 shows the dataset statistics. We have two classes of programs in our dataset: (1) programs which pass all the tests (henceforth, correct programs), and (2) programs which fail at least one test and pass at least one test (henceforth, buggy programs). We observed programs which do not pass even a single test to be almost entirely incorrect. Such programs do not benefit from bug-localization and hence we discard them. For each test, we then take up to a fixed maximum number of programs that pass it (including buggy programs that fail on other tests) and up to the same maximum number of programs that fail it. Next, we generate subtrees for each of these programs using pycparser (Bendersky). In order to remove unusually large programs, we discard the last one percentile of these programs arranged in the increasing order of their size. Pairing the remaining programs with their corresponding test ids results in our dataset of examples, of which we set aside a portion for validation and use the rest for training.

Evaluation Dataset

Evaluating bug-localization accuracy on a program requires the ground truth in the form of bug locations in that program. As the programs in our dataset come without this ground truth, we find it automatically by comparing the buggy programs to their corrected versions. For this, we use Python's difflib to find line-wise differences, the ‘diff’, between a buggy and a correct program. We do this for every pair of buggy and correct programs that are solutions to the same programming task and are written by the same student. Note that this is only done to find the ground truth for evaluation; our technique does not use the corrected version of an incorrect program written by the same student. We include a buggy program in our evaluation set only if we can find a correct program with which its diff is smaller than five lines. This gives us the buggy programs, along with their buggy lines, that constitute our evaluation set. Pairing these programs with their corresponding failing test ids yields the evaluation pairs. We ensure that these pairs do not overlap with the training data.
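A minimal difflib sketch of this ground-truth extraction; the marking rules (blame deleted/replaced lines, and the preceding line for pure insertions) follow the categorization described below, and the toy programs are illustrative:

```python
import difflib

def mark_buggy_lines(buggy, correct):
    """Return 1-indexed buggy-program lines marked from the line-wise diff:
    deleted/replaced lines are buggy; for a pure insertion of correct
    line(s), the line preceding the insertion point is blamed (falling
    back to line 1 for an insertion at the very top, an assumption here)."""
    marked = set()
    matcher = difflib.SequenceMatcher(None, buggy, correct)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "delete"):
            marked.update(range(i1 + 1, i2 + 1))  # 0-indexed -> 1-indexed
        elif op == "insert":
            marked.add(max(i1, 1))                # preceding line
    return sorted(marked)

# Toy pair: the buggy program is missing a scanf call after line 2.
buggy_prog = ["int main(){", "  int x;", "  printf(\"%d\", x);", "}"]
correct_prog = ["int main(){", "  int x;", "  scanf(\"%d\", &x);",
                "  printf(\"%d\", x);", "}"]
lines = mark_buggy_lines(buggy_prog, correct_prog)
```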

In order to identify the buggy lines from the diff, we first categorize each patch appearing in the diff into three categories: (1) insertion of correct line(s), (2) deletion of buggy line(s), and (3) replacement of buggy line(s) with correct line(s). Next, we mark all the lines appearing in the deletion and replacement categories as buggy. For the lines in the first category, we mark their preceding line as buggy. For a program with a single buggy line, it is obvious that all the failing tests are caused by that line. However, for the programs with multiple buggy lines, we need to figure out the buggy line(s) corresponding to each failing test. We do this as follows.

For a buggy program and its diff with the correct implementation, we first create all possible partially corrected versions of the buggy program by applying all non-trivial subsets of the diff-generated patches. Next, we run the partially corrected program versions against the test suite and, for each version, mark the buggy line(s) excluded from the partial fix as the potential cause of all the tests that the version fails. We then go over these partially fixed programs in increasing order of the number of buggy lines they have. For each program, we mark its buggy lines as the cause of a failing test if no program having a subset of those buggy lines also fails that test. This procedure is similar in spirit to the delta debugging approach (Zeller, 1999), which uses unit tests to narrow down bug-causing lines while removing lines that are not responsible for reproducing the bug.
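The enumeration of partially corrected versions can be sketched with itertools; the patch identifiers are placeholders, and actually applying each subset and running the test suite is elided:

```python
from itertools import combinations

def partial_fix_subsets(patches):
    """All non-trivial subsets of the diff-generated patches: neither
    empty (the original buggy program) nor the full set (the fully
    corrected program). Each subset yields one partially corrected
    version whose excluded patches are candidate bug causes."""
    subsets = []
    for r in range(1, len(patches)):
        subsets.extend(combinations(patches, r))
    return subsets

# Placeholder patch identifiers for a diff with three patches.
versions = partial_fix_subsets(["patch_a", "patch_b", "patch_c"])
```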

4.2 Training

We implement our technique in Keras (Chollet et al., 2015) using TensorFlow (Abadi et al., 2016) as the back-end. We find a suitable configuration of the tree convolutional neural network through experimentation. Our vocabulary is built after identifier-name normalization. We train our model using the ADAM optimizer (Kingma and Ba, 2015); training takes about an hour on an Intel(R) Xeon(R) Gold 6126 machine, clocked at 2.60GHz and equipped with an NVIDIA Tesla P100 GPU accelerator. We use a fixed number of steps for approximating the integrated gradients for bug-localization.

4.3 Evaluation

Table 2: Comparison of the proposed technique with three baselines. For each technique and configuration (the proposed technique, Tarantula-1, Ochiai-1, Tarantula-*, Ochiai-*, and the diff-based technique), the table reports the number of localization queries and the Top-10, Top-5, and Top-1 bug-localization results, measured in terms of pairs, lines, and programs. Top-k denotes the number of buggy lines reported in decreasing order of suspiciousness score.

In the first phase, we use the trained model to predict the success/failure of each example pair of a buggy program and a test id from the evaluation dataset. On these pairs, the classification accuracy of the trained model is much lower than its validation accuracy. The explanation for such a big difference lies in the way the two datasets are constructed. The pairs in the validation set are chosen randomly from the complete dataset and therefore their distribution is similar to that of the pairs in the training dataset. Also, both these datasets consist of pairs associated with both the success and the failure classes. On the other hand, recall that the evaluation set contains pairs associated only with the failure class. Furthermore, the buggy programs in these pairs are chosen because we could find their corrected versions with a reasonably small syntactic difference between them. Thus, the relatively lower accuracy of our model on the evaluation set stems from the fact that its distribution is different from that of the training and validation sets, and is not actually a limitation of the model. This is also evident from the fact that the test accuracy increases substantially if the evaluation set includes pairs associated with both the success and failure classes, instead of just the failure class, for the same programs in the evaluation set.

In the second phase, we query the attribution technique for bug-localization of those pairs of programs and tests for which the model prediction in the earlier phase is correct. We evaluate the bug-localization performance of our technique on the following three metrics: (1) the number of pairs for which at least one of the lines responsible for the program failing the test is localized, (2) the number of programs for which at least one buggy line is localized, and (3) the number of buggy lines localized across all programs. As shown in Table 2, of the programs for which the localization query is made, our technique is able to localize at least one bug for a substantial fraction of them when reporting the top-10 suspicious lines. It also proved to be effective in bug-localization for programs having multiple bugs, localizing more than one bug for many such programs in the evaluation set when reporting the top-10 suspicious lines.

Comparison with Baselines

In Table 2, we compare our technique with three baselines: two state-of-the-art program-spectrum based techniques, namely Tarantula (Jones et al., 2001) and Ochiai (Abreu et al., 2006), and one syntactic-difference based approach. This comparison is made only on those pairs in the evaluation set which our model classifies correctly. The metric used for this comparison is the number of programs for which at least one bug is localized. The other two metrics, namely the number of pairs and the number of buggy lines localized, also yield similar results.

A program spectrum records which components of a program are covered, and which are not, during an execution. Tarantula and Ochiai compare the program spectra corresponding to all the failing tests to those of all the passing tests. The only difference between them is that they use different formulae to calculate the suspiciousness scores of program statements. As our technique performs bug-localization w.r.t. one failing test at a time, we restrict these techniques to use only one failing test at a time for a fair comparison. We use them in two configurations: in the first, they are restricted to just one passing test, chosen randomly, and in the second, they use all the passing tests. These configurations are denoted by suffixing ‘-1’ and ‘-*’ to the names of the techniques, respectively. The syntactic-difference based approach is the same as the one described earlier for finding the actual bug locations (ground truth) for the programs in the evaluation set. The only difference is that now the reference implementation for a buggy program submitted by a student is searched within the set of correct programs submitted by other students. This is done for both this approach and our technique.
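For reference, the standard suspiciousness formulae of the two spectrum-based baselines can be written down directly; failed_s and passed_s count the failing and passing tests that execute statement s, and the example counts are illustrative:

```python
import math

def tarantula(failed_s, passed_s, total_failed, total_passed):
    """Tarantula suspiciousness (Jones et al., 2001):
    (failed%/ (failed% + passed%)) for statement s."""
    f = failed_s / total_failed
    p = passed_s / total_passed
    return f / (f + p) if (f + p) > 0 else 0.0

def ochiai(failed_s, passed_s, total_failed, total_passed):
    """Ochiai suspiciousness (Abreu et al., 2006):
    failed_s / sqrt(total_failed * (failed_s + passed_s))."""
    denom = math.sqrt(total_failed * (failed_s + passed_s))
    return failed_s / denom if denom > 0 else 0.0

# A statement executed by the single failing test and by 1 of 4 passing tests.
t = tarantula(failed_s=1, passed_s=1, total_failed=1, total_passed=4)
o = ochiai(failed_s=1, passed_s=1, total_failed=1, total_passed=4)
```

Statements executed mostly by failing runs score near 1 under both formulae, which is what ranks them as suspicious.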

It can be seen that our technique outperforms both Tarantula- and Ochiai- (when they use only one passing test) in top- results for all values of . However, with the extra benefit of using all passing tests, they both outperform our technique in top- results. Nevertheless, even in this scenario, our technique outperforms both of them in top- results. In top- results, our technique outperforms Tarantula-*, while matching the performance of Ochiai-*. Our technique also outperforms the syntactic difference based technique by a wide margin.

Qualitative Evaluation

In our analysis, we found that the proposed technique localized almost all kinds of bugs appearing in the evaluation set programs, including wrong assignments, conditions, for loops, memory allocations, output formatting, incorrect reading of program inputs, and missing code. We provide a number of concrete examples illustrating our bug-localization results in the appendix.

Our technique compares a buggy program to a closely similar correct program using neural attribution. This comparison is designed to search for the bug-causing differences in the buggy program while ignoring the benign ones. As our technique is not engineered to target a predefined set of patterns when searching for differences, in principle, it should be able to localize all kinds of bugs. Therefore, we call our technique a general semantic bug-localization technique.

4.4 Faster Search for Baseline Programs through Clustering

As discussed earlier, we calculate the cosine distance between the embedding of a given buggy program and those of all correct programs. When the number of correct programs is large, it can be expensive to search through all of them for each buggy program. To mitigate this, we first cluster all the programs by running the K-means clustering algorithm on their embeddings. For each buggy program, we then search for the baseline only within the set of correct programs present in its cluster. Note that both clustering and search are performed on programs from the same programming task. We set the number of clusters empirically. Our results show that clustering changes the bug-localization accuracy only marginally in every metric while substantially reducing the cost of the baseline search.

5 Related Work

We discussed program-spectrum and syntactic-difference based bug-localization approaches for student programs in Section 1 and compared our technique with them empirically in the previous section. Section 1 also gave an overview of recent developments in learning based software engineering research. In this section, we review two more approaches to feedback generation for student programs.

Program repair techniques are extensively used for feedback generation for logical errors in student programs. AutoGrader (Singh et al., 2013) takes as input a buggy student program, along with a reference solution and a set of potential corrections in the form of expression rewrite rules, and searches for a set of minimal corrections using program synthesis. Refazer (Rolim et al., 2017) learns program transformations from example code edits made by students using a hand-designed domain-specific language, and then uses these transformations to repair buggy student submissions. Unlike these approaches, our approach is completely automatic and requires no input from the instructor. Most program repair techniques first use an off-the-shelf bug-localization technique to get a list of potential buggy statements; the actual repair is then performed on these statements. We believe that our technique can also be fruitfully integrated into such program repair techniques.

Another common approach to feedback generation is program clustering, in which student submissions having similar features are grouped together in clusters. The clusters are typically used in the following two ways: (1) the feedback is generated manually for a representative program in each cluster and then customized to other members of the cluster automatically (Nguyen et al., 2014; Piech et al., 2015; Glassman et al., 2015), and (2) for a buggy program, a reference implementation is selected from the same cluster, which is then compared to the buggy program to generate a repair hint (Kaleeswaran et al., 2016; Gulwani et al., 2018; Wang et al., 2018; Sharma et al., 2018). The clusters are created either using heuristics based on program analysis techniques (Glassman et al., 2015; Kaleeswaran et al., 2016; Gulwani et al., 2018; Wang et al., 2018; Sharma et al., 2018) or using program execution on a set of inputs (Nguyen et al., 2014; Piech et al., 2015). Unlike these approaches, we cluster programs by running the K-means clustering algorithm on embeddings learned from program ASTs, which requires no heuristics and is therefore programming-language agnostic.

6 Conclusions and Future Work

We present the first deep learning based general technique for semantic bug-localization in student programs w.r.t. failing tests. At the heart of our technique is a novel tree convolutional neural network which is trained to predict whether or not a program passes a given test. Once the network is trained, we localize bugs by using a state-of-the-art neural prediction attribution technique to find out which lines of a program make the network predict a test failure. We compared our technique with three baselines, including one static and two state-of-the-art dynamic bug-localization techniques. Our experiments demonstrate that our technique outperforms all three baselines in most cases.

We evaluate our technique only on student programs. An interesting direction for future work is to apply it to arbitrary programs in the context of regression testing (Yoo and Harman, 2012), i.e., to localize bugs in a program w.r.t. failing tests that passed on earlier version(s) of that program. Our technique is programming language agnostic, although it has been evaluated only on C programs; in the future, we will experiment with other programming languages as well. We also plan to extend this work to neural program repair. While our bug-localization technique requires only a discriminative network, a neural program repair technique would require a generative model to predict patches for fixing bugs. It will be interesting to see if our tree convolutional neural network can be adapted for generative modeling of patches as well.

Acknowledgments

We thank Sonata Software Ltd. for partially funding this work. We also thank the anonymous reviewers for their helpful feedback on the first version of the paper.

References

  • Abadi et al. [2016] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, pages 265–283, 2016.
  • Abreu et al. [2006] Rui Abreu, Peter Zoeteweij, and Arjan J. C. van Gemund. An evaluation of similarity coefficients for software fault localization. In PRDC, pages 39–46, 2006.
  • Ahmed et al. [2018] Umair Z Ahmed, Pawan Kumar, Amey Karkare, Purushottam Kar, and Sumit Gulwani. Compilation error repair: for the student programs, from the student programs. In ICSE: SEET, pages 78–87, 2018.
  • Allamanis et al. [2018] Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. A survey of machine learning for big code and naturalness. ACM Computing Surveys, 51(4):81, 2018.
  • [5] Eli Bendersky. pycparser. https://github.com/eliben/pycparser.
  • Bhatia et al. [2018] Sahil Bhatia, Pushmeet Kohli, and Rishabh Singh. Neuro-symbolic program corrector for introductory programming assignments. In ICSE, pages 60–70, 2018.
  • Chollet et al. [2015] François Chollet et al. Keras. https://keras.io/, 2015.
  • Glassman et al. [2015] Elena L Glassman, Jeremy Scott, Rishabh Singh, Philip J Guo, and Robert C Miller. Overcode: Visualizing variation in student solutions to programming problems at scale. TOCHI, 22(2):7, 2015.
  • Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. Deep learning, volume 1. MIT press Cambridge, 2016.
  • Gulwani et al. [2018] Sumit Gulwani, Ivan Radiček, and Florian Zuleger. Automated clustering and program repair for introductory programming assignments. In PLDI, pages 465–480, 2018.
  • Gupta et al. [2017] Rahul Gupta, Soham Pal, Aditya Kanade, and Shirish Shevade. DeepFix: Fixing common C language errors by deep learning. In AAAI, pages 1345–1351, 2017.
  • Gupta et al. [2019] Rahul Gupta, Aditya Kanade, and Shirish Shevade. Deep reinforcement learning for programming language correction. In AAAI, 2019.
  • Jones et al. [2001] James A Jones, Mary Jean Harrold, and John T Stasko. Visualization for fault localization. In ICSE Workshop on Software Visualization, 2001.
  • Kaleeswaran et al. [2016] Shalini Kaleeswaran, Anirudh Santhiar, Aditya Kanade, and Sumit Gulwani. Semi-supervised verified feedback generation. In FSE, pages 739–750, 2016.
  • Kim et al. [2016] Dohyeong Kim, Yonghwi Kwon, Peng Liu, I Luk Kim, David Mitchel Perry, Xiangyu Zhang, and Gustavo Rodriguez-Rivera. Apex: automatic programming assignment error explanation. ACM SIGPLAN Notices, 51(10):311–327, 2016.
  • Kingma and Ba [2015] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • Mou et al. [2016] Lili Mou, Ge Li, Lu Zhang, Tao Wang, and Zhi Jin. Convolutional neural networks over tree structures for programming language processing. In AAAI, page 4, 2016.
  • Nguyen et al. [2014] Andy Nguyen, Christopher Piech, Jonathan Huang, and Leonidas Guibas. Codewebs: scalable homework search for massive open online programming courses. In WWW, pages 491–502, 2014.
  • Piech et al. [2015] Chris Piech, Jonathan Huang, Andy Nguyen, Mike Phulsuksombati, Mehran Sahami, and Leonidas Guibas. Learning program embeddings to propagate feedback on student code. In ICML, pages 1093–1102, 2015.
  • Pu et al. [2016] Yewen Pu, Karthik Narasimhan, Armando Solar-Lezama, and Regina Barzilay. sk_p: a neural program corrector for moocs. In SPLASH Companion, pages 39–40, 2016.
  • Rolim et al. [2017] Reudismam Rolim, Gustavo Soares, Loris D’Antoni, Oleksandr Polozov, Sumit Gulwani, Rohit Gheyi, Ryo Suzuki, and Björn Hartmann. Learning syntactic program transformations from examples. In ICSE, pages 404–415, 2017.
  • Sharma et al. [2018] Saksham Sharma, Pallav Agarwal, Parv Mor, and Amey Karkare. Tipsc: Tips and corrections for programming moocs. In AIED, pages 322–326, 2018.
  • Singh et al. [2013] Rishabh Singh, Sumit Gulwani, and Armando Solar-Lezama. Automated feedback generation for introductory programming assignments. ACM SIGPLAN Notices, 48(6):15–26, 2013.
  • Sundararajan et al. [2017] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In ICML, pages 3319–3328, 2017.
  • Vasic et al. [2019] Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, et al. Neural program repair by jointly learning to localize and repair. ICLR, 2019.
  • Wang et al. [2018] Ke Wang, Rishabh Singh, and Zhendong Su. Search, align, and repair: data-driven feedback generation for introductory programming exercises. In PLDI, pages 481–495, 2018.
  • Wang et al. [2016] Song Wang, Devin Chollak, Dana Movshovitz-Attias, and Lin Tan. Bugram: bug detection with n-gram language models. In ASE, pages 708–719, 2016.
  • Wong et al. [2016] W Eric Wong, Ruizhi Gao, Yihao Li, Rui Abreu, and Franz Wotawa. A survey on software fault localization. IEEE Transactions on Software Engineering, 42(8):707–740, 2016.
  • Yoo and Harman [2012] Shin Yoo and Mark Harman. Regression testing minimization, selection and prioritization: a survey. Software Testing, Verification and Reliability, 22(2):67–120, 2012.
  • Zeller [1999] Andreas Zeller. Yesterday, my program worked. today, it does not. why? In ESEC, pages 253–267, 1999.

Appendix A Appendix

a.1 Problem Statements for Some of the Programming Tasks

You want to create an intelligent machine which can perform linear algebra for you. In linear algebra, we often encounter identity matrices. Therefore, teaching computers to recognize whether a matrix is identity or not is one of the tasks that you must perform in your quest to build such a machine. In this problem, you’ll write a program to check whether a given matrix is identity or not.

In the first line, you’ll be given n, which will be the number of rows and number of columns in identity matrix. In the next n lines, you’ll be given entries of the matrix with each row in a new line. If the matrix is identity, then print GIVEN n x n matrix is an IDENTITY MATRIX. Otherwise, print GIVEN n x n matrix is NOT an IDENTITY MATRIX. Here, n is the dimension of the matrix.

Note: You are not allowed to use arrays to store the input.

Factors of a number are often required to know about its characteristics.

In this problem, you’ll print all prime factors of a given integer. (Prime numbers are the numbers which have exactly two factors i.e. 1 and itself). You have to print all prime factors of a number in a new line in descending order. If a number is itself prime, print -1.

Write a program to implement a rotation cipher as defined:

The program first reads three integers k1, k2 and k3 separated by white spaces. It then reads a character from the NEXT line. Change the character according to the following rules: (a) if it is a lower case character, it is rotated by k1 positions. For example, if k1 is 3 then ‘a’ becomes ‘d’, ‘b’ becomes ‘e’, …, ‘x’ becomes ‘a’, ‘y’ becomes ‘b’, ‘z’ becomes ‘c’. (b) if it is an upper case character, it is rotated by k2 positions. For example, if k2 is -3 then ‘A’ becomes ‘X’, ‘B’ becomes ‘Y’, …, ‘X’ becomes ‘U’, ‘Y’ becomes ‘V’, ‘Z’ becomes ‘W’. (c) if it is a digit, it is rotated by k3 positions. For example, if k3 is 4 then ‘3’ becomes ‘7’, ‘6’ becomes ‘0’, …, ‘0’ becomes ‘4’, ‘5’ becomes ‘9’ and so on. (d) Any other character remains the same.

The output is a single character obtained after above change.

Given two integer arrays (let’s say A1 and A2), check if A2 is a contiguous subarray of A1. A2 is a contiguous subarray if all elements of A2 are also present in A1 in the same order and continuously.

For ex. [12,42,67] is a contiguous subarray of [1,62,12,42,67,96] Whereas, [1,23,21] and [12,42,96] are not contiguous subarrays of [1,62,12,42,67,96]

Input: The first line contains the size N1 of first array. Next line contains N1 space separated integers giving the contents of first array. Next line contains the size N2 of second array. Next line contains N2 space separated integers giving the contents of second array.

Output: Either YES or NO (followed by a new line).

Variable Constraints: The array sizes are smaller than 20. Each array entry is an integer which fits an int data type.

You are given two integers n1 and n2 followed by two space separated strings str1 and str2 of length n1 and n2 respectively, each consisting of lowercase characters. The length of each of the strings is not more than 500.

Output the length of the initial segment of str1 which consists entirely of characters in str2.

You are given an array of ‘n’ numbers. You have to find out whether the array is a SuperArray or not. An array is a SuperArray if it satisfies the following constraints.

Every element A[i] of the array should occur A[i] times. For example if the array contains ‘2’, then there should be exactly two occurrences of the number ‘2’ in the array.

Find out whether or not a path exists through a given maze. The maze is a 2D matrix where ‘.’ denotes path and ‘X’ denotes wall. The path starts at (0,0) and ends at the bottom-right cell (both of which will always be ‘.’).

Input: Space separated integers m,n denoting size of matrix Next m lines contain a string of n characters(composed of ‘.’ and ‘X’)

Input Constraints: 1<=m, n<=15

Output: YES if path exists, NO otherwise

In this exercise, you need to implement GCD. However, the challenge is that you are not allowed to return any values. So, the modified GCD function takes two pointers as follows: void gcd(int *a, int *b) It modifies the values such that when the function returns, a will contain the final answer. You need to use the function signature from the initial template.

Write a program to find the kth largest element of an array.

Full points will only be awarded if your solution is based on repeated applications of the Partition function (which was introduced for QuickSort). You do not have to sort the whole array, as this will fetch you half of the total points.

Any other solution e.g. solutions based on sorting, etc. will at most fetch half of total points.

Please see the provided template for hints.

Input will have two lines - 1st line will have an integer n denoting number of elements of the array and k; next line will contain n space separated integers denoting the elements of the array.

Output: you have to return the kth largest element of the array

The professor of PHY101A has decided to catch all cheating cases. Since you have already done that course, you decide to help him in this task by automating his work.

You are going to calculate the ’proximity’ between any 2 documents by counting the longest common substring in the 2 documents. For example, - If one of the document is ‘ABA’ and the other document is ‘BAB’, the proximity is 2 since the longest common substring is ‘AB’ (or ‘BA’). - If one of the document is ‘doc1’ and the other document is ‘doc2’, the proximity is 3 since the longest common substring is ‘doc’.

Input: Two integers (‘n1’ and ‘n2’) denoting the length of first and second document. Content of first document (‘n1’ characters) Content of second document (‘n2’ characters)

Output: A single integer, the proximity between the documents

a.2 Concrete Examples Illustrating Our Bug-Localization Results

Wrong for Loop

 1  #include <stdio.h>
 2  int rot(int [],int,int);
 3  int main() {
 4      int n,d,i;
 5      scanf("%d\n",&n);
 6      int arr[n];
 7      for(i=0;i<n;i++) {
 8          scanf("%d ",&arr[i]); }
 9      scanf("\n%d",&d);
10      rot(arr,n,d);
11      return 0; }
12
13  int rot(int arr[],int n,int d) {
14      int j,k;
15      for(j=d+1;j<n;j++) {     // suspiciousness score: 0.0006181474
16          printf("%d ",arr[j]); }
17      for(k=0;k<=d;k++) {      // suspiciousness score: 0.0006690205
18          printf("%d ",arr[k]); }
19      return 0; }

This program is supposed to right shift a given array of ‘n’ numbers by a given number ‘d’. To correctly implement this, the programmer needs to change the two for loops at lines and to for(j=n-d;j<n;j++) and for(k=0;k<n-d;k++), respectively. Our technique ranks these two lines as its second and third most suspicious buggy lines, respectively.

Incorrect Input Reading and Output Formatting

 1  #include <stdio.h>
 2  int main(){
 3      int n,i;
 4      char c;
 5      scanf("%d",&n);     // suspiciousness score: 0.0007697232
 6      for(i=0;i<n;i++) {
 7          scanf("%c",&c);
 8          if (c=='a'|| c=='e' || c=='i' || c=='o'|| c=='u') {
 9              printf("Special");
10              printf("\n%d",i);   // suspiciousness score: 0.00045288168
11              break; } }
12      if(i==n)
13          printf("Normal");
14      return 0; }

This program is supposed to print ‘Special’ if the given input string contains a vowel, otherwise ‘Normal’. The input format is an integer ‘n’ and a string ‘s’ of length n, separated by a newline character. However, the scanf function in line reads the newline character following ‘n’ as the first character of the string. Therefore, if the input string is a vowel, that will not be read and the program will print the wrong output ‘Normal’. One way to fix it is to append the newline character after the “%d” format specifier in the scanf function call of line . Also, there is an additional print statement at line which prints spuriously, causing an output mismatch. Our technique ranks these two as its third and fourth most suspicious buggy lines, respectively.

Wrong Condition

 1  #include <stdio.h>
 2  int main() {
 3      int n,i;
 4      char a[100];
 5      char b;
 6      int flag=0;
 7      scanf ("%d",&n);
 8      for (i=0;i<n;i=i+1) {
 9          scanf ("%c",&b);
10          if((b=='a')||(b='e')||(b='i')||(b=='o')||(b=='u'))  // suspiciousness score: 0.0015987115
11              flag=1; }
12      if(flag==1) {
13          printf("Special"); }
14      else {
15          printf ("Normal"); }
16      return 0; }

The program shown above solves the same problem as the program last discussed. It twice uses the assignment operator instead of the comparison operator in line which causes the bug. Our technique localizes it in its top prediction.


 1  #include<stdio.h>
 2  int main() {
 3      int a,b,c;
 4      scanf("%d%d%d",&a,&b,&c);   // suspiciousness score: 0.00079781574
 5      if (a+b>c) {                // suspiciousness score: 0.00025660603
 6          if (a*a+b*b==c*c){printf("RIGHT");}
 7          else if(a*a+b*b<c*c||a*a>b*b+c*c||a*a+c*c<b*b){   // suspiciousness score: 0.0004023624
 8              printf("OBTUSE");}
 9          else if(a*a+b*b>c*c||a*a<b*b+c*c||a*a+c*c>b*b){   // suspiciousness score: 0.00045172646
10              printf("ACUTE");} }
11      else if(a+b==c) {printf("INVALID");}
12      return 0; }

The above program is supposed to check and print whether a triangle is invalid, acute, right or obtuse, given the lengths of its three sides. However, the conditions used in lines and are buggy. To fix the program, they should be replaced by (1) else if(a+b>c && a+c>b && b+c>a) { and (2) else {, respectively. Our technique ranks them as its third and fourth most suspicious buggy lines, respectively.

Insufficient Memory Allocation

 1  #include <stdio.h>
 2  int in(int k,int n,int l[100]){
 3      int i;
 4      for(i=0;i<n;i++){
 5          if(l[i]==k){
 6              return 1; } }
 7      return 0; }
 8  int main(){
 9      int n;
10      int ip[100];
11      int u[100];     // suspiciousness score: 0.0012746735
12      scanf("%d",&n);
13      int i;
14      for(i=0;i<n;i++){
15          scanf("%d",&ip[i]); }
16      int k=1,count=0;
17      i=0;
18      while(!in(k,n,u)){
19          u[i]=k;
20          k=ip[k-1];
21          i+=1;
22          count+=1; }
23      printf("%d ",count);
24      for(i=0;i>=0;i++){
25          if(u[i]==k){
26              printf("%d",count-i);
27              break; } }
28      return 0; }

The program shown declares a fixed-size array in line which fails on tests containing larger inputs. Our technique localizes the buggy statement in its top prediction. Note that the other fixed-size array declared in line is not considered buggy, as it does not cause any available test to fail.

Type Narrowing

 1  #include <stdio.h>
 2  int main(){
 3      int x1,y1,x2,y2;
 4      float slope;
 5      scanf("%d%d%d%d",&x1,&y1,&x2,&y2);
 6      if(x1==x2) {
 7          printf("inf");
 8          return 0; }
 9      else {
10          slope==(y2-y1)/(x2-x1);   // suspiciousness score: 0.0028934027
11          printf("%.2f\n", slope); }
12      return 0; }

This program calculates the slope of a line specified by two points whose coordinates are given as four integers. When calculating the slope in line , the division of two integers yields an integer value and not a floating point value. This is known as type narrowing and can be fixed by type-casting any variable or sub-expression on the RHS of the assignment to float before the division operation. The buggy line also mistakenly uses a comparison operator instead of the assignment operator. Our technique localizes the buggy line in its top prediction.

Wrong Assignment

 1  #include <stdio.h>
 2  #include<string.h>
 3  int main() {
 4      int i,j,c;
 5      char str1[10],str2[10];
 6      scanf("%s %s",str1,str2);
 7      c=strlen(str2);
 8      for(i=0;str1[i]!='\0';i++) {
 9          str1[i]=str1[i]+str2[i%c]-'a'+1; }   // suspiciousness score: 0.0011707128
10      printf("%s",str1);
11      return 0; }

The program shown above is supposed to shift a string by another pattern string (both given as input) and print the result. However, the RHS expression of the assignment at line is buggy. The correct statement is: str1[i]=(str1[i]+str2[i%c]-'a'-'a'+1)%26+'a';. Our technique localizes the buggy line in its top prediction.


 1  #include <stdio.h>
 2  int main(){
 3      int a,b,i,n,m;
 4      scanf("%d%d%d",&a,&b,&m);
 5      n=1;
 6      for (i=1;i<=b;i=i+1)
 7          n=n*a;     // suspiciousness score: 0.007239882
 8      printf("%d",n%m);
 9      return 0; }

The program shown above is supposed to compute a^b mod m. However, the RHS expression in line does not implement this logic correctly: the intermediate product can overflow before the final modulus is taken. The fix for this line is: n=(n*a)%m;. Our technique localizes the buggy line in its top prediction.

Missing Code

 1  #include <stdio.h>
 2  int main(){
 3      int n,max=0,sum,i,j=0;
 4      scanf("%d/n",&n);
 5      char s[n],ch;
 6      ch=getchar();
 7      for(i=0;i<n;i++)
 8      {   ch=getchar();
 9          s[i]=ch;}
10      for(i=0;i<n;i++)
11      {   sum=0;     // suspiciousness score: 0.0013130781
12          while(s[i]==s[i+j])
13          {   sum++;
14              j++; }
15          if(max<=sum)
16              max=sum; }
17      printf("%d",max);
18      return 0; }

This program finds the longest contiguous streak of a character in a given string. To correctly implement this, the programmer needs to insert j=0; at line . Our technique localizes this bug in its top prediction.