
Do People Prefer "Natural" code?

Natural code is known to be very repetitive (much more so than natural language corpora); furthermore, this repetitiveness persists, even after accounting for the simpler syntax of code. However, programming languages are very expressive, allowing a great many different ways (all clear and unambiguous) to express even very simple computations. So why is natural code repetitive? We hypothesize that the reasons for this lie in the fact that code is bimodal: it is executed by machines, but also read by humans. This bimodality, we argue, leads developers to write code in certain preferred ways that would be familiar to code readers. To test this theory, we 1) model familiarity using a language model estimated over a large training corpus and 2) run an experiment applying several meaning-preserving transformations to Java and Python expressions in a distinct test corpus to see if forms more familiar to readers (as predicted by the language models) are in fact the ones actually written. We find that these transformations generally produce program structures that are less common in practice, supporting the theory that the high repetitiveness in code is a matter of deliberate preference. Finally, 3) we use a human subject study to show alignment between language model score and human preference for the first time in code, providing support for using this measure to improve code.





1. Introduction and Background

Programming languages are highly flexible, and provide many different ways to express even the simplest computation. However, despite this flexibility, in practice code is highly repetitive, far more so than natural human language. This property of “naturalness” (Hindle et al., 2012) has led to the successful application of probabilistic methods from NLP and machine learning to problems in software engineering, including code completion (Tu et al., 2014; Hellendoorn and Devanbu, 2017; Nguyen and Nguyen, 2015), finding defects (Ray et al., 2016), and recovering variable names (Raychev et al., 2015; Vasilescu et al., 2017).

Here is a puzzle: why is code so repetitive? Ruling out the obvious: it’s not just syntax. There is strong evidence (Casalnuovo et al., 2018) that the simpler syntactic and lexical properties of programming language structure, per se, are not sufficient to explain this. Indeed, Casalnuovo et al. (Casalnuovo et al., 2018) report that when the markers of syntax are similarly elided from code and English, code becomes relatively even more repetitive than English. They also argue that this might arise from conscious choice, citing suggestive evidence of similar, strong, repetitive structure in other corpora that (like code) might entail more effort to read and write, such as legal documents, technical manuals, and English as a second language.

However, Casalnuovo et al. (Casalnuovo et al., 2018) didn’t control for meaning. Given a particular meaning, are there different ways to express it? Are some forms preferred? For instance, in English, people prefer to say bread and butter rather than butter and bread (Morgan and Levy, 2016). Likewise, one can code a simple increment operation different ways:

i = i+1;            (or)            i = 1+i;

The two forms above are trivially equivalent to a machine; if developers worked like machines, they should be indifferent to the form. However, most people would strongly prefer the first! This is noteworthy. Unlike natural languages (where the semantics are slippery, and depend in subtle ways on form), programming languages have precisely defined semantics; this means that even the simplest computations can be coded in many different, but entirely equivalent, ways! Programming languages thus actually provide programmers with great choice in forms of expression. So, the plot thickens!! Given the many expressive choices available, why is code so repetitive? We believe this is because coders have predictable preferences: even given many forms for coding a computation, developers still prefer it to be coded in more familiar forms.

We hypothesize that this preference for more familiar forms is manifest in large corpora, and is evidenced by human code readers. This preference, we believe, accounts for why code is so repetitive, despite the affordance provided by programming languages per se to code the same computation in many different ways.

This paper investigates this hypothesis by triangulating an observational study with a human-subject experiment. We model “familiarity” using the probability of occurrence in a large code corpus (the idea being that developers are more familiar with code they see more frequently), and check whether this measure can correctly select preferred forms in unseen code; we also use a controlled experiment to see if this measure can predict human preference.

We make the following contributions:

  • Using an adaptation of the UCL-Edinburgh bimodal model (Barr, 2018; Allamanis et al., 2018), we provide a theoretical framing of Naturalness as a preferential choice, where developers deliberately choose to express code in familiar forms, modeled via a language model.

  • We use transformations to generate alternate meaning equivalent forms of code in a held-out test corpus.

  • We evaluate the alternative forms using a language model, estimated on a training corpus, and find evidence that more familiar (less “surprising”) forms are preferentially deployed in the held-out test corpus.

  • Using Mechanical Turk, we find that human subjects do prefer forms of code (from the test corpus) that the language model indicates are more prevalent in the training corpus.

The contributions of this paper are primarily of a scientific (rather than Engineering) nature. However, we believe our work provides a new theoretical framework to contemplate a phenomenon of deep current interest to software engineering researchers, as well as a novel experimental approach, and strong connections to well-established theories of human cognition of language. Furthermore, to our knowledge, this is the first study to demonstrate that language model probability, used to guide many code applications, correlates with human preferences for code in a controlled experiment with human subjects.

2. Background & Theory

Natural language (NL) corpora exhibit a high level of repetitiveness, capturable via statistical modeling, which has been leveraged in modern tools for translation, speech recognition, text summarization, etc. More recently (Hindle et al., 2012) it has been noted that code is also quite repetitive and predictable, and many applications have been developed (see the survey by Allamanis et al. (Allamanis et al., 2018)).

Textual predictability (of code or NL) can be captured using language models (LMs), which assign probabilities to textual utterances (an utterance is a usable language fragment, e.g., a word in English, or a token in code). Probabilities are typically assigned contextually: the probability of utterance $u$ occurring in context $c$ is $p(u \mid c)$. The more likely an utterance is in context $c$, the higher the probability. LMs (e.g., $n$-gram models, PCFGs, RNNs) are trained on a large corpus of text, and then are evaluated by scoring against a test corpus. Their performance is measured using a normalized cross-entropy score (in units of bits), which is estimated by taking the average surprisal (as a reminder, the surprisal of an event $e$ in an event space $E$, with probability $p(e)$, is $-\log_2 p(e)$; entropy is the expectation of surprisal over all $e \in E$) of the utterances $u_1 \ldots u_N$ in the test corpus:

$$H = \frac{1}{N}\sum_{i=1}^{N} s(u_i), \qquad s(u_i) = -\log_2 p(u_i \mid c_i)$$

where the surprisal $s(u_i)$ is the negative log probability of utterance $u_i$ in its context $c_i$. A good model, presented with typical text, will find it highly probable, and thus score a low surprisal for most utterances (tokens) in the text, and have overall low entropy.
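The entropy computation above can be sketched in a few lines of Python (a toy illustration; in practice a language model supplies the per-token probabilities):

```python
import math

def surprisal(p):
    # Surprisal of an event with probability p, in bits.
    return -math.log2(p)

def avg_entropy(token_probs):
    # Average surprisal (cross-entropy, in bits) over a token sequence,
    # given the probability the model assigned each token in its context.
    return sum(surprisal(p) for p in token_probs) / len(token_probs)

# A model that finds the text probable scores low entropy:
familiar = avg_entropy([0.5, 0.25, 0.5])      # ~1.33 bits
surprising = avg_entropy([0.01, 0.02, 0.01])  # ~6.31 bits
```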

Various kinds of models have been used for both natural language and code. Using classical $n$-gram models of code, it has been found that the entropy of Java is around 3 to 4 bits lower than that of English. This is a surprisingly large difference, suggesting that code is 8 to 16 times more predictable than English (Hindle et al., 2012; Casalnuovo et al., 2018). This is even more noteworthy given that the vocabulary of code, when controlling for corpus size, is typically much larger than that of natural language, because programmers keep inventing new identifiers. A greater vocabulary could mean more word choices to spread probability mass over; but in fact it doesn’t! Why is code so predictable? Is it specific to Java? To English? To the language model? Is it just because code in general has a much simpler syntax than natural language? Or is it somehow the result of the cognitive load of reading and writing code?

A recent paper by Casalnuovo et al. (Casalnuovo et al., 2018) presents a detailed comparison of predictability in code and NL. They find that the greater predictability of code vs. NL is not specific to Java and English: a circa 2-5 bit (code-NL) difference persists across different programming languages (Java, C, Clojure, Ruby, Haskell), different natural languages (German, Spanish, English) and different language models ($n$-gram, cache-based, and LSTM (Hochreiter and Schmidhuber, 1997; Sundermeyer et al., 2012)). This persistent, robust difference between programming and natural languages suggests that something, perhaps the simpler syntax of code, is the cause. To deal with the question of syntax, they removed keywords, operators and delimiters from code, and analogous syntactic markers (prepositions, determiners, conjunctions, pronouns, etc.; in linguistics these are called closed category words, since new words in this category are very rarely coined) from NL. After the concomitant removal of these syntax markers, code is left with merely identifiers, and English with just nouns, verbs, adjectives and adverbs (this concomitant removal of syntax markers was done differently in (Rahman et al., 2019), which explains the different findings). They find that after the removal of syntax markers, the difference between programming languages and English increased to between 4-8 bits, depending on the language model used. Finally, they incorporated full parse tree structures into code (using ASTs) and English (using the Penn Tree Bank (Marcus et al., 1993)) and found that code still remains more predictable than English.

Why is this? It’s certainly not because code lacks expressive power! Alternative forms abound even for something very simple. Consider the wealth of equivalent alternatives for even the trivial iteration trope: for (i = 0; i < n; i = i+1) { }. One could pick a name other than i or n; flip the conditional; use a different incrementing form; start and end differently, all without changing the meaning. Indeed, a literal infinity of equivalent forms are possible! Still, we persist with such tropes. Why? To further clarify, we draw upon a formulation (Barr, 2018; Allamanis et al., 2018) (a talk and a long paper; §3 of the paper is most pertinent) that software is bimodal: it works on a human-machine channel, and a human-human channel. It is argued that programmers must write code with full awareness of two modes of eventual use: first, code has formal operational semantics, and executes on a machine; second, code is maintained by other programmers, who must understand it; thus code per se forms a vital communication channel between the developer who writes it and the maintainer who cares for it.

This bimodality argument implies that two distinct channels exist, and suggests an origin for the high repetitiveness of code. First, suppose that the human-human channel simply re-used the formal operational semantics channel. Consider a code reader Rick, who is examining a piece of code $c$, written in language $L$ by developer Doris. Rick desires to know its computational meaning $m$. Suppose Rick could directly, by himself, mentally calculate the meaning $m$ of $c$ via the operational semantics of $L$, quickly and easily. If readers can always and efficiently do this, there is certainly no constraint on developer Doris. She is free to choose from any of the choices that implement $m$, since her readers behave like infallible machines, and will reliably and efficiently find the meaning $m$, even if she chooses bizarre (but correct) ways to implement it. However, readers are not machines, and the human-human channel works differently. Prior work on natural language in psycholinguistics (Rayner and Well, 1996; Wells et al., 2009) indicates a robust association between greater statistical predictability (lower surprisal) and ease of production and comprehension. Speakers are more likely to choose more predictable utterances, even among meaning-equivalent options (Hudson Kam and Newport, 2005). These choices are likely driven in part both by audience design (Jaeger and Levy, 2007) (viz., choose utterances for ease of comprehension) and by ease of production (Bock, 1987; Ferreira and Dell, 2000).

On the human-human channel, we might expect code to behave like natural language. Given a meaning, some implementation forms may be more familiar, such as the for loop above, and will be more easily recognized. Indeed, an experienced developer will know this, and would prefer to use familiar forms whenever possible, both for her own and her reader’s convenience.

We can state this formally as follows. Assume $c_1, c_2, \ldots, c_k$ are viable implementation choices for a given computation $m$. Although these are all semantically equivalent, for human convenience, there would be a tendency to prefer one over the others. If $p(\cdot)$ refers to probability of occurrence in a corpus, we should observe the following for some preferred choice $c_i$:

$$p(c_i \mid m) \gg p(c_j \mid m) \quad \text{for } j \neq i$$

We are overstating things here: it’s possible that a few of the possible $c_i$ are more preferred, viz., the probability mass is not uniformly spread over all the choices. Another way to state this would be that the entropy of this conditional distribution $p(c \mid m)$ is less than the possible maximum, which would be obtained in the case of a uniform distribution among implementation choices. However, it is difficult to know in general the number of implementation choices. Given the intractability of computing such a maximum, we formulate the central question a bit more informally:

For a given computation, do developers prefer some implementations over others, and to what degree?

We now describe our experimental decisions and the specifics of our approach. First, in order to model the developer “preference” above, we use language models. A modern language model, well-trained over a large, diverse corpus, can reliably capture the frequency of occurrence of textual elements in a corpus (the low cross-entropy that modern models achieve over unseen corpora is evidence of their power), thus capturing the preferences of the programmers who created that corpus.

Second, we use meaning-preserving transforms to model a range of possible implementations for a meaning $m$. While many are possible, we focus on 3 types of transforms: expression rewriting, adding and removing nonessential parentheses, and variable name shuffling (for details see Section 3.3). These transforms are performed on code fragments from a corpus “unseen” by the language model, which is then used to score the surprisal of the original and the transformed versions. These transforms generally make changes of a scope confined enough to be reasonably captured by our 6-gram language models, and by the additional LSTM language models we use to validate the effects seen with the $n$-gram models.

Our first RQ investigates whether developer preferences for restricted forms of expression are observable in Java:

RQ1. Using a language model trained on Java code, if we perform meaning preserving transformations on unseen Java code, to what degree does the model find the transformed code more improbable (higher surprisal)?

Next, to ensure that this is not simply an effect of the choice of programming language, we also choose a secondary language, Python, which is quite different from Java, and ask:

RQ2. Is the Language Model’s preference for the original code also observable in Python?

Beyond simple $n$-gram models, we would also like to explore how local style affects the consistency of choice. Prior research has shown locality to be a strongly distinguishing factor of source code over natural language (Tu et al., 2014; Hellendoorn and Devanbu, 2017), so we theorize that cache models would prefer the original code even more strongly. Additionally, we would like to consider whether these preference patterns are retained in the underlying structure of the code, i.e., when identifiers and literals are abstracted. Thus we ask:

RQ3. Do cache-based models that incorporate local style discriminate the original code more strongly? Is the preference for the original code retained even when abstracting identifiers and literals?

We expect that some transformations will disrupt the code less than others, or even make it more probable to our models. In particular, we would like to see how the surprisal of the original code relates to the effect of the transformation. We would expect highly improbable code to have greater potential to become more “typical” after transformation; such code is likely associated with fewer restrictions on developer choice. Thus, we ask:

RQ4. How does the “surprisal” of the original code relate to the effect of the transformation? Do high-surprisal and low-surprisal code behave differently?

Finally, to compare language model judgements with human preference and validate model surprisal as a measure of preference in the context of code, we run a human subject study and ask:

RQ5. Do the preferences of Java programmers align with language model surprisal? How do these preferences vary by transformation?

In summary: why is code so repetitive? Prior research clearly indicates it’s not just syntax. We hypothesize that coding behavior, because of the human-human channel, is susceptible to some of the same production and/or comprehension pressures as natural language. Thus, we expect that, for convenience, developers strongly prefer certain forms of expression, despite the great variety of options provided by programming languages. Moreover, these forms are predictable, and can be captured via language models.

3. Methodology

We adopt a triangulation approach, combining a natural experiment on a large code corpus with a human subject study. A large code corpus embodies numerous choices made by programmers, and thus is a representative sample of these choices. Within this corpus, we can examine the occurrence frequency of different implementation choices of the same computation $m$, and determine if some choices dominate. For triangulation (Section 3.4), we use human subjects; we ask them to preferentially select from two alternative implementations $c_1$ and $c_2$ with the same meaning $m$, chosen such that the language model assigns quite different surprisal scores to $c_1$ and $c_2$. We test if the surprisal scores predict human preference.

3.1. Code Corpora

Our experimental dataset is chosen to help control for potential confounds, while also affording enough opportunities for transformations. Since our main focus is Java, we use a larger Java corpus and replicate with a smaller corpus in Python (see RQ 2).

We cloned the top 1000 most starred projects on GitHub for Python and Java. We use a subsample of these projects due to computational constraints: we select the 30 projects from each of Java and Python with the highest count of possible transformations. These projects are then randomly divided, by project, into a 70-30 training/test split. In Python, due to the lack of static typing and limitations of the Abstract Syntax Tree (AST), we replicate only one type of transformation, swaps over relational operators. These limitations also required us to normalize the original Python files with astor, which can slightly change parentheses. As we do not perform parentheses transformations for Python, this should only minimally impact results.

Duplication can be a potentially confounding effect when training and testing language models on code (Allamanis, 2018). Since our focus is on programmer preferences for certain coding forms, it would be inappropriate to remove all clones. Still, to avoid large-scale duplicated code, we do a lightweight removal of fully duplicated files, with additional filtering during testing for stability. This lightweight process compares the name of every file and its parent directory (e.g., main/), keeps the first one seen of an equivalent set, and removes the others from our training/test data.
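This lightweight dedup step can be sketched as follows (a simplification keyed on parent directory plus filename, as described; `dedup_files` is our illustrative helper, not the paper’s implementation):

```python
import os

def dedup_files(paths):
    # Keep the first file seen for each (parent-directory, filename) key;
    # later files with the same key are treated as duplicates and dropped.
    seen, kept = set(), []
    for path in paths:
        parent = os.path.basename(os.path.dirname(path))
        key = (parent, os.path.basename(path))
        if key not in seen:
            seen.add(key)
            kept.append(path)
    return kept
```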

Language | Files  | Unique Files | Training Tokens
Java     | 204489 | 184093       | 118.5 M
Python   | 27315  | 23105        | 18.2 M

Table 1. Summary of Java and Python datasets.

Table 1 shows the file counts and approximate token counts in the training set for each corpus. Duplication filtering removes 6.1% and 10.7% of files in Java and Python respectively. Despite sampling the same number of projects, the Java corpus is much larger; but as Java is our main focus, and Python serves only to check whether the results replicate across languages, this is arguably adequate.

Our test data was chosen to be distinct from the training data. In addition, we removed lines commonly associated with generated code, coming from equals and hashCode functions (we dropped lines containing the strings ’hashCode’ or ’other’, which we observed manually to be contributing to this repetitiveness; for our identifier shuffling transformations, which operate at the method rather than expression level (see Section 3.3), we instead removed all equals and hashCode methods). These lines are generated by IDEs, and arguably do not accurately represent human-written choices (or at least, they represent human style choices so codified that they have been automated). We also remove from the test data identical lines of code appearing more than 100 times (a threshold of 10 gave similar results, suggesting robustness), as these may also be at risk of copy-pasting. We believe that it would not be correct to simply filter out all duplicated expressions, as it is perfectly valid for developers to rewrite the same code. Since our study is largely at the expression level, it’s difficult to precisely find and account for copy-pasting; we argue our approach gives a reasonable middle ground, removing the extreme cases while still retaining most of the natural repetition of code. Finally, we note that we did not remove repeated or generated code fragments from the training data, so as to properly reflect the code that programmers would read (and learn preferences from). Our test set pruning was intended to avoid overly weighting repeated and generated code, and to emphasize the individual, independent choices made when writing code.
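A minimal sketch of this test-set pruning (the marker strings and repetition threshold come from the text; the helper name is ours):

```python
from collections import Counter

def prune_test_lines(lines, max_repeats=100, markers=("hashCode", "other")):
    # Drop lines containing generated-code marker strings, then drop any
    # line whose identical text appears more than max_repeats times.
    lines = [ln for ln in lines if not any(m in ln for m in markers)]
    counts = Counter(lines)
    return [ln for ln in lines if counts[ln] <= max_repeats]
```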

3.2. Language Models

We estimate a language model over a large corpus, and then use the surprisal from the language model as an indication of developer preference. Specifically, we use the surprisal of the model with respect to a fragment $c$, which is precisely $-\log_2 p(c)$: lower surprisal indicates higher developer preference. This use of surprisal is not unprecedented; abundant experimental evidence from psycholinguistics indicates surprisal is strongly associated with cognitive effort in language comprehension (Hale, 2001; Levy, 2008; Demberg and Keller, 2008; Frank, 2013; Levy, 2013).

We use 4 $n$-gram language model variants to capture various aspects of possible developer preference. First, we use a basic 6-gram model with Jelinek-Mercer smoothing, using the best order and smoothing recommended by past research (Hellendoorn and Devanbu, 2017), denoted the global model. To answer the two parts of RQ 3, we first use an $n$-gram-cache model (henceforth abbreviated as cache model), as originally described by Tu et al. (Tu et al., 2014), to capture local patterns. Then, we build an alternate training and testing corpus in which we use the Pygments syntax highlighter to replace all identifiers and types with generic token types, and literals with a simplified type (for example, we keep 1, 2, 3 and replace higher numbers with labels like <int> and <float>; for strings, we keep the empty string and single-character strings, and replace everything else with <str>). These models are implemented in the SLP-Core framework by Hellendoorn et al. (Hellendoorn and Devanbu, 2017). To assess preference, we compare the average surprisal of the tokens that appear in both the original and the transformed version of the expression; tokens not involved in the changed expression are not considered. Finally, we validate the robustness of our $n$-gram results with a 1-layer LSTM implemented in TensorFlow, trained for 10 epochs with 0.5 dropout, on the corpus results of the 4 transformations used in the human study (we only considered these 4, as training and testing in context is computationally expensive, and the goal is just to validate the transformations we focused on in both studies).
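The shared-token comparison can be sketched like this (each version’s tokens are scored in context by the language model; the function name and the pair-list representation are ours):

```python
def delta_avg_surprisal(orig_scored, trans_scored):
    # Each argument is a list of (token, surprisal-in-bits) pairs, scored
    # in context by a language model. Average only over tokens appearing in
    # both versions; a positive delta means the transformed code is more
    # surprising (less preferred) than the original.
    shared = {t for t, _ in orig_scored} & {t for t, _ in trans_scored}
    def avg(scored):
        vals = [s for t, s in scored if t in shared]
        return sum(vals) / len(vals)
    return avg(trans_scored) - avg(orig_scored)
```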

3.3. Meaning Preserving Transformations

We choose not to use existing transformation tools, as they are either not meaning-preserving (e.g., mutation testing (Madeyski and Radyk, 2010; Just, 2014)), or operate at the wrong scale of code object, as compiler optimizations do. We use source-level transformations that are both meaning-preserving and small enough in scope to be captured by language models. Our focus is primarily on transformations of source code expressions, which we implement via the Java and Python ASTs. For Java, we use the AST parser from the Eclipse Java development tools (JDT), and for Python, the ast module from Python 3.7.

Category | Subcategory               | Before         | After
Swap     | Arithmetic (*)            | a * b          | b * a
Swap     | Arithmetic (+)            | a + b          | b + a
Swap     | Relational (==, !=)       | a != b         | b != a
Swap     | Relational (<, <=, >, >=) | a <= b         | b >= a
Paren.   | Adding                    | a + b * c      | a + (b * c)
Paren.   | Removing                  | a + (b * c)    | a + b * c
Rename   | Within Variable Types     | int a; int b   | int b; int a
Rename   | Between Variable Types    | int a; float b | int b; float a

Table 2. Pseudocode examples for the transformations.

We implement 12 different kinds of transformations, summarized in Table 2, grouped roughly into 3 categories: 1) swapping transformations, 2) parenthesis transformations, and 3) renaming transformations. There are 6 non-overlapping sub-groups: arithmetic and relational swaps, parenthesis adding and removing, and shuffling identifiers within and between types.

Swapping transformations: We have 8 kinds of transformations involving swapping operands and inverting operators, divided into 2 subcategories. The first subcategory swaps arithmetic operands around the commutative operators + and *. We swapped very conservatively: we limit the types of the variables and literals in the expression to doubles, floats, ints, and longs. Infix expressions with more than two operands are only transformed if the operands are of type int or long, to avoid accuracy errors due to floating-point precision limitations. We also exclude expressions that contain function calls, since these could have side effects that alter the other variables evaluated in the expression.

The second subcategory of operator swapping involves the 6 relational operators: ==, !=, <, <=, >, >=. We flip the subexpressions that make up the operands of each of these, either retaining the operator if it is symmetric (!=, ==), or mirroring it if it is asymmetric (e.g., > becomes <). While we do not limit the operand types in these expressions, as these swaps are exact and don’t risk floating-point issues, expressions with function calls are again excluded to avoid side effects.
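Since the Python replication performs exactly this transformation via the ast module, a sketch of the two-operand relational swap might look like this (`swap_comparison` is our illustrative helper; ast.unparse requires Python 3.9+):

```python
import ast

# Mirrored operator for each of the six relational operators.
MIRROR = {ast.Eq: ast.Eq, ast.NotEq: ast.NotEq, ast.Lt: ast.Gt,
          ast.LtE: ast.GtE, ast.Gt: ast.Lt, ast.GtE: ast.LtE}

def swap_comparison(expr):
    # Flip the operands of a single two-operand comparison, mirroring the
    # operator when it is asymmetric; expressions containing calls are
    # skipped, since calls could have side effects.
    tree = ast.parse(expr, mode="eval")
    node = tree.body
    if (not isinstance(node, ast.Compare) or len(node.ops) != 1
            or any(isinstance(n, ast.Call) for n in ast.walk(tree))):
        return expr  # leave untransformable expressions unchanged
    node.left, node.comparators[0] = node.comparators[0], node.left
    node.ops[0] = MIRROR[type(node.ops[0])]()
    return ast.unparse(tree)
```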

Parenthesis transformations: The next category involves manipulation of extraneous parentheses in source code. Programming languages have well-defined operator precedence, but programmers can still (and often do) freely choose to include extraneous parentheses for readability. For instance, in cases where less common operators are used (such as bit shifts), the parentheses may make comprehension easier, leading to a preferred style.

Therefore, we can transform expressions by adding or removing extraneous parentheses. The adding-parentheses transformation relies on the tree structure of the AST to insert parentheses while preserving the order of operations. Parentheses are not added to expressions whose parent is a parenthesized expression, to avoid creating double parentheses, and are never added around the entire expression.

For parentheses removal, we select each parenthesized expression. Each of these is passed to the NecessaryParenthesesChecker from the Eclipse JDT Language Server to check whether removing it would violate the order of operations. Any subexpressions that pass this check are then considered candidates for removal. This checker is used by the same algorithm supporting the “Clean Up” feature within the Eclipse IDE.
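We do not reproduce the checker’s internals here, but the core precedence reasoning can be sketched (a simplified, conservative stand-in covering only binary arithmetic, not the actual NecessaryParenthesesChecker):

```python
# Precedence table: higher binds tighter. A toy subset of Java operators.
PREC = {"*": 2, "/": 2, "+": 1, "-": 1}

def parens_removable(child_op, parent_op, is_right_operand):
    # Parentheses around the child in `a <parent_op> (x <child_op> y)` are
    # removable when the child binds more tightly than the parent. At equal
    # precedence they are removable on the left; on the right only when
    # parent and child are the same associative operator (a + (b + c)),
    # since a - (b - c) or a * (b / c) must keep their parentheses.
    if PREC[child_op] > PREC[parent_op]:
        return True
    if PREC[child_op] == PREC[parent_op]:
        if not is_right_operand:
            return True
        return parent_op == child_op and parent_op in ("+", "*")
    return False
```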

Variable shuffling transformations: Finally, we consider transformations that shuffle the names of identifiers. To avoid changing meaning, we swap only within a method, using the key bindings of the AST to maintain scoping rules. If a variable name is declared more than once in a function (e.g., multiple loops each declaring the same iteration variable), it is excluded, to avoid assigning two variables the same name within the same scope. Methods containing lambda expressions are also ignored, because their variable bindings are not available in the AST.

We separately consider renaming within types and between types. As an example, consider a function with two int and two String variables. In the within-types case, we only consider replacing one integer’s name with the other’s, and likewise for the Strings. In the between-types case, all four variable names can be assigned to any of the integers or Strings other than their original variable. We expect that names given to the same types will be used more similarly than names given to different types; thus, we would expect between-types transformations to result in code relatively more improbable than that produced by within-types transforms.
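The two shuffling modes can be sketched as a name permutation (a simplification: the paper’s between-types variant additionally avoids mapping a name back to its original variable, which we omit here):

```python
import random

def shuffle_names(var_types, between_types=False, rng=random):
    # var_types maps each local variable name to its declared type.
    # Returns a renaming map: in within-types mode, names are permuted only
    # inside each type group; in between-types mode, across all variables.
    if between_types:
        groups = [list(var_types)]
    else:
        groups = [[v for v in var_types if var_types[v] == t]
                  for t in set(var_types.values())]
    renaming = {}
    for group in groups:
        shuffled = group[:]
        rng.shuffle(shuffled)
        renaming.update(zip(group, shuffled))
    return renaming
```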

3.3.1. Transformation Selection

As expressions grow in size, the number of possible transformations grows exponentially. Generating all of these transformations is neither feasible nor desirable, so we select a random subsample. For the operand swapping and parentheses modification cases, we randomly sample up to $n$ transformations, where $n$ is the number of possible locations to transform in the expression. For variable renaming, we consider only functions with up to 10 local variables that can be shuffled.
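The sampling step might be sketched as follows (the helper and its arguments are illustrative; `variants` would come from the transformation generators above):

```python
import random

def sample_transformations(variants, n_sites, rng=random):
    # Rather than keeping the exponentially many possible rewrites of an
    # expression, sample at most n_sites of them, where n_sites is the
    # number of transformable locations in the expression.
    return rng.sample(variants, min(n_sites, len(variants)))
```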

3.4. Human Subject Study

The corpus study can tell us if some forms are preferentially used in a large corpus. We triangulate with a human subject study, checking if human preferences over different implementations of the same computation align with different language model surprisal scores. While lower surprisal has been linked to easier processing in natural language (Hale, 2001; Levy, 2008; Demberg and Keller, 2008; Frank, 2013; Levy, 2013), we don’t know of similar findings for code, despite the metric being used as a stand-in for preference or comprehensibility in guiding many tools (e.g. (Hellendoorn and Devanbu, 2017; Liu et al., 2017; Allamanis et al., 2014)).

We use Amazon's Mechanical Turk (AMT) for this investigation into the alignment of surprisal with human-subject preference. AMT has been used in several other programming studies to recruit subjects (Prana et al., 2019; Chen et al., 2019; Alqaimi et al., 2019). Since anyone can sign up with AMT, we selectively filter out a sample that can reasonably represent Java programmers. First, we follow recommended guidelines (Data, 2018) for avoiding bots and poorly qualified workers; we require a 99% HIT acceptance rate, 1000 or more completed HITs, and restrict workers to those in the US and Canada. We also used Unique Turker along with AMT's own internal reporting to remove any repeat users. Second, we deploy a short qualification test which requires subjects to read some Java code and answer 3 comprehension questions; all 3 must be answered correctly. We tuned our comprehension questions with 3 pilot surveys. MTurkers were paid at a minimum wage rate ($12/hour) for tasks they completed; the qualification test was estimated to take 5 minutes, and the main survey 20 minutes.

Our survey asked forced-choice preferences of this form: Please select which of the two following code segments you prefer: outPacket = new byte[10 + length]; vs. outPacket = new byte[length + 10];

The alternative segments were selected from the relational & arithmetic swaps, and parenthesis adding & removing. Using the global ngram model, after some filtering (excluding code with hashing and bit shift keywords, and lines over 80 characters), we selected the top 20 single-line transformations that most increased and decreased the average line-level surprisal over shared tokens, replacing some from the top 40 when cases were too similar, or when the transformation obviously disrupted symmetry (for example, adding parentheses to only one side of a == b || b == c). We use line- instead of expression-level averages from the corpus study because the subjects judged the entire lines. From these 160 pairs, we presented 80 to each user, randomizing both the questions and the order of the choices.

To measure subject attention in a 20-minute survey, we included an unidentified attention check: a question like the others, except more obvious and incontrovertible (we asked whether "for(int i = 0; i < length; i++) {" was preferable to "for(int i = 0; length > i; i++) {"). We do not exclude those failing the attention check (it was only one question of many), instead using it as a measure of how attentive the subjects were overall. As long as failing the check is not common, we can be confident of reasonably attentive subjects.

3.5. Modeling

To compare surprisal before and after the transformation, we use paired non-parametric Wilcoxon tests (Hollander and Wolfe, 1999) and associated 95% confidence intervals, which measure the expected difference in medians between the original and transformed code. We also widen the intervals using the conservative family-wise Bonferroni (Weisstein, 2004) adjustment, to account for the tests on each model and transformation.
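A minimal sketch of this analysis in Python (the study used R; this version uses the normal approximation of the signed-rank test, without tie corrections):

```python
import math

def wilcoxon_two_sided(x, y):
    """Minimal paired two-sided Wilcoxon signed-rank test via the normal
    approximation (no tie/zero-inflation corrections) - a sketch of the
    analysis, not the R implementation the study used."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # rank the absolute differences, averaging ranks over ties
    ordered = sorted(abs(d) for d in diffs)
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j + 1 < n and ordered[j + 1] == ordered[i]:
            j += 1
        rank_of[ordered[i]] = (i + j + 2) / 2  # average of ranks i+1..j+1
        i = j + 1
    w_plus = sum(rank_of[abs(d)] for d in diffs if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return w_plus, math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def bonferroni(pvals):
    # family-wise adjustment: scale each p-value by the number of tests
    return [min(1.0, p * len(pvals)) for p in pvals]
```

If the transformed code consistently scores higher surprisal than the original, the test statistic collapses toward zero and the (adjusted) p-value is small.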

To answer RQ 3, we turn to regression modeling. Recall that we theorize that expressions that are more improbable to the language models should be more amenable to becoming more probable after transformation, whereas low surprisal would be associated with stronger norms and thus more harmful transformations. We measure this effect with ordinary linear regression; we use controls for the size of the line, the type of AST node that is the parent of the expression, and a summary of the operators involved in the expression (in the case of multiple operators, we selected the most common one from the training projects to represent the expression). Our regressions are limited to single transformations for ease of interpretation, and we filtered out rare parent and child types (< 100 occurrences). (Limitations in the Python transforms prevent an accurate count of the number of transformations, so the first filter was not applied there; we examined the comparable Java models without this filter as well, but found little difference in the coefficients.) We identified influential outliers using Cook's method (Cook and Weisberg, 1982) and removed those with values above the conventional cutoff. We examined residual diagnostic plots for violations of model assumptions, and made sure multicollinearity was not an issue by checking that variance inflation factor (VIF) scores were acceptably low (Cohen et al., 2003).
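A simplified sketch of this kind of regression (simulated data; the paper's actual model also includes controls for expression size and parent/operator types, omitted here, and the -0.279 slope used to generate the data is the coefficient reported for the global arithmetic-swap model):

```python
import random

def ols_fit(x, y):
    """Closed-form simple OLS for y ~ x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

# Synthetic data mimicking the reported relationship: higher original
# surprisal leaves more room for a transformation to reduce surprisal.
rng = random.Random(42)
original = [rng.uniform(1, 10) for _ in range(1000)]           # bits
change = [3.0 - 0.279 * s + rng.gauss(0, 0.5) for s in original]
intercept, slope = ols_fit(original, change)
```

Fitting recovers a negative slope near -0.279: the more surprising the original expression, the smaller (or more negative) the surprisal change after transformation.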

For our human subject study, we use a mixed effects logistic regression. As the complexity of the random effects structure of our model caused the frequentist estimate to not converge, we estimate the model via Bayesian regression through the R package brms (Bürkner, 2017). Our presented model used the default priors of the package, but we validated convergence and alternative priors using the guidelines included in the WAMBS checklist (Depaoli and Van de Schoot, 2017). Further details on these models are omitted for space but can be found in the R Notebooks in our replication package (see intro of Sec. 4).

4. Results

Our results from the corpus study are in § 4.1 and the human subject study § 4.2. Due to space limits, our corpus study focuses on the swap transformations, and highlights differences in the other transformations. All our data, R notebooks, and results can be anonymously accessed at

4.1. Corpus Study

Global Cache Global Abstracted Cache Abstracted LSTM
Arithmetic Swap
Relational Swap (Java)
Relational Swap (Python)
Add Parentheses
Remove Parentheses
Variable Shuffle (Within Types)
Variable Shuffle (Between Types)
Table 3. Two-sided paired Wilcoxon signed-rank tests and 95% confidence intervals of the surprisal difference (original source minus transformed source). A 1-bit negative difference indicates the original code is twice as probable as the transformed code. Intervals are Bonferroni corrected.

4.1.1. Swapping Expressions

Figure 1. Average surprisal change for swaps: Java arithmetic and relational, and Python relational. Java relational cache is omitted as it is consistent with arithmetic. Positive values indicate the transformation is less predictable.

We have 20,829 instances of transformable arithmetic swaps in our Java data, and 133,845/32,219 instances for relational swaps in Java and Python respectively. Figure 1 shows the difference in ngram surprisal (transformed minus original) for all of the concrete swap transformations, except for the cache for the Java relational swaps, which is similar in effect to the Java arithmetic swaps. Rows 1, 2, and 3 in Table 3 show the associated Wilcoxon tests and confidence intervals around the median for all model variants.

In general, the data supports our theory that the original code would be preferred by the language model (LM) over the swaps, to varying degrees. For the Java arithmetic swaps, the global LM finds the original code 1.68 times more probable (0.75 bits of surprisal less) than the transformed version, and the cache model finds the original 8 times more probable (3 bits less) than the transformed version (a difference of d bits corresponds to a probability ratio of 2^d, e.g. 2^0.75 ≈ 1.68). The cache LM does discriminate better across all the Java transformations, both with concrete and abstracted identifiers; but not, however, in Python, perhaps due to a global "Pythonic" culture that transcends project boundaries. The LSTM models we ran to validate the robustness of the ngram models on Java swaps also indicate this effect, showing preference for the original code. Like the global ngram models, the effect is stronger for relational swaps.
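The bits-to-probability conversion is simply a power of two, and the quoted figures can be checked directly:

```python
# A surprisal difference of d bits corresponds to a probability ratio of 2**d.
def bits_to_ratio(d_bits):
    return 2 ** d_bits

# The Java arithmetic-swap figures quoted above:
global_model_ratio = bits_to_ratio(0.75)  # ~1.68x (global model)
cache_model_ratio = bits_to_ratio(3)      # 8x (cache model)
```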

Now, to answer RQ 3, we will describe one regression model in depth: the one for the global arithmetic swaps. We model Surprisal Change ~ Original Surprisal + log(NumTokens) + ParentOperator + Operator, the change in surprisal as predicted by the original surprisal, controlled by both the log size of the expression, and the parent and operator types of the expression. For every bit increase in the surprisal of the original expression (meaning the expression is twice as difficult for the model to predict), the change decreases by 0.279 bits. This effect is quite strong, explaining nearly 35% of the variance in the difference. We can conclude that less predictable expressions exhibit less strong norms, and the effect of a transformation is more variable. This negative correlation between original surprisal and the change also holds in the regressions for all the other language models on this transformation. Among the controls, longer expressions are also more likely to be amenable to transformations, and while most parent nodes in the AST are similar to the '==' baseline, return statements and array accesses tend to have less strict style. Finally, swaps that occur on a '+' instead of a '*' are 55% less probable to the language model, likely attributable to addition being much more common than multiplication.

This effect of higher original surprisal leading to greater opportunity to make code more predictable persists across all the regressions for the Java relational swaps, and for the Python swap using the concrete code. However, in the abstracted Python code, this effect is reversed, but explains only a small amount of the variance of the change (<2.5%). Further study is required to understand this counterintuitive behavior in the abstracted code.

So for Java swapping transformations, RQs 1, 2, and 3 are answered affirmatively, albeit to various degrees. The models prefer the original source code regardless of model, cache models more strongly discriminate between the original and transformed code, and less probable expressions are associated with smaller increases, or sometimes even reductions, in surprisal. Our results for RQ 2 provide additional support for the overall theory, but suggest complications in the details. The ability of the locality to discriminate may be language specific, and the relationship between the original expression surprisal and the change in surprisal in the abstracted models (different in direction from all other results) may suggest Python norms are more closely tied to identifiers.

4.1.2. Other Transformations

Original Transformed Surprisal Change
double seconds = time / (1000.0); double seconds = time / 1000.0; -13.878
return ((dividend + divisor) - 1) / divisor; return (dividend + divisor - 1) / divisor; -7.991
int elementHash = (int)(element ^(element >>>32)); int elementHash = (int)(element ^element >>>32); 11.321
c1 |= (c2 >>4) & 0x0f; c1 |= c2 >>4 & 0x0f; 11.053
Table 4. Sample transformations with the largest surprisal changes for parenthesis removal with the global model.

For parentheses, we have 63,625 additions and 9,717 removals, with the results shown in rows 4 & 5 in Table 3. The results for surprisal change, the effect of the cache, and the regression models built to answer RQ 3 are similar to those for the swaps, with one major exception: the difference between the original and transformed code in the global model is not significant! We delve into this unexpected result more closely using examples in Table 4. These are fairly intuitive; the biggest improvement in predictability comes from removing parentheses unnecessary to clarify the order of operations from around a literal denominator. In contrast, a large increase in difficulty of prediction occurs with rarely used bit shift operators, suggesting that developers may prefer parentheses around rare operations to clarify order of operations. The LSTM models, by contrast, agree with our theory, although the change is smaller relative to the swaps. Thus for parentheses we answer RQs 1, 2, and 3 affirmatively, except for the global ngram models of parenthesis removal. We speculate that this may be the result of less consistent style around the usage of parentheses, similar to what Gopstein et al. (Gopstein et al., 2017, 2018) found with bracket usage (see § 6.1). We further discuss the influence of style guidelines in § 5.

Finally, we consider variable renaming transformations, measuring mean surprisal change across all affected expressions within the same method. There are 17,930 methods with shuffling within types and 48,160 with shuffling between types (unconstrained shuffles are possible in more methods), with results in rows 6 and 7 of Table 3 (the shuffles operate on concrete identifiers, so the abstracted models do not apply). As expected, shuffling variable names within a method increases surprisal. Variable names matter for program comprehension, and obscuring these names is one of the most common and simple forms of program obfuscation (Collberg et al., 1997). Moreover, we confirm that swapping variable names across types is more disruptive to predictability; the difference is about twice as large. Cache effects are still present, but diluted, possibly because the shuffle pulls its vocabulary from very similar contexts. As with all other Java transformations, the regression models show that variable names scoring higher in surprisal have less of a surprisal increase after renaming. In conclusion, the renaming shuffle transformations answer RQs 1, 2, and 3 as expected, with stronger results when shuffling between rather than within types.

Figure 2. Breakdown of the fraction of agreement for each question by transformation, ordered from questions with the least agreement with the model to the most. Majority-vote agreements: ArithmeticSwap (65%), RelationalSwap (80%), AddParen (50%), RemoveParen (67.5%).

4.2. Surprisal and Human Preference

Our survey netted a total of 180 attempts across 3 batches, with 60 non-duplicate MTurkers fully completing the survey. Of the 60, 50 passed the attention check, though there is little difference in overall agreement with the language model between these groups. Demographically, our group had a median of 4.5 years of Java experience and 9 years of general programming experience, and were primarily developers, students, and hobbyists who coded at least a few times a week. All but one had some college education, and most used AMT for extra income.

Overall, our subjects agreed with the global ngram model 61.9% of the time (65.6% with majority vote; 62.8% and 66.9% respectively for those passing the attention check). Figure 2 groups the results by each question, broken down by transformation type. Each quadrant has 40 questions arranged by rank in increasing order of fraction of human agreement with the language model, with bars pointing downwards indicating more disagreement than agreement. Red examples show where the language model preferred the transformed code, and blue examples show where it preferred the original.

Humans overall agree with the language model on swaps much more frequently than on the parentheses changes. Relational swaps have the highest agreement (indicated by values above 0.5). All of the disagreements are cases where the raters agreed with the original code (but the language model disagreed), suggesting a limitation in the language model rather than disagreement among coders. For parentheses, we also see a pattern: our group tends to prefer variants with more parentheses (indicated by the reversed red/blue patterns in AddParen and RemoveParen in Figure 2), regardless of the language model preference. Moreover, the language model poorly predicts human majority-vote preference for adding parentheses, agreeing only half the time.

Estimate Error l-95% CI u-95% CI
Intercept -0.60 0.15 -0.90 -0.30
LM_Out 1.90 0.23 1.45 2.37
AddParen -2.03 0.58 -3.19 -0.95
Arithmetic 0.49 0.25 -0.01 0.96
Relational 0.15 0.26 -0.39 0.68
RMParen* 1.39
LM_Out:AddParen -0.79 0.42 -1.63 0.04
LM_Out:Arithmetic -1.03 0.37 -1.74 -0.32
LM_Out:Relational 1.64 0.45 0.77 2.50
LM_Out:RMParen* 0.18
Table 5. Fixed effects for the Bayesian mixed effects logistic regression on our human subject study. *Parenthesis removal (RMParen) does not get an independent coefficient estimate under deviation coding, so we calculate the implied coefficient.

Examining the results in greater detail, we present our Bayesian mixed effects logistic regression in Table 5. The model formula is: Outcome ~ LM_Out * TransType + (1 + LM_Out * TransType | ResponseId) + (1 | Question). The Outcome is 1 if the human subject selected the original code, 0 for the transformation. The fixed effects are a binary predictor (LM_Out), which is 1 if the language model selected the original code and 0 otherwise, along with the type of transformation and their interaction term. We use the maximal random effects structure justified by the design (Barr et al., 2013): a random intercept by question, and by-subject random intercepts and slopes for LM_Out, transformation type, and their interaction. The transformation types are deviation coded, meaning the intercept value is the grand mean over all transformations. Thus, the coefficient for the parenthesis removal transformation is the negative sum of the other 3 type coefficients, which we provide in the table for convenience. Finally, Bayesian estimates do not have p-values and confidence intervals in the same way as frequentist approaches. Instead, we report the equivalent 95% credible interval: an interval that contains the regression coefficient with 95% probability. If this range is entirely above 0, it indicates a positive effect, and vice versa for a range entirely below 0.
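The implied RMParen coefficients follow from the zero-sum constraint of deviation coding, and can be verified directly from the other Table 5 estimates:

```python
def implied_deviation_coef(estimated_coefs):
    # Deviation-coded category effects sum to zero, so the omitted
    # category's coefficient is the negative sum of the estimated ones.
    return -sum(estimated_coefs)

# Baseline effects for AddParen, Arithmetic, Relational from Table 5:
rm_paren = implied_deviation_coef([-2.03, 0.49, 0.15])
# Interaction terms LM_Out:AddParen, LM_Out:Arithmetic, LM_Out:Relational:
lm_out_rm_paren = implied_deviation_coef([-0.79, -1.03, 1.64])
```

These recover the starred values in Table 5 (1.39 and 0.18).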

As this is a logistic regression, the coefficients are log odds ratios. The odds ratio of the intercept by itself, exp(-0.60) ≈ 0.55, shows that when the language model prefers the transformation, our subjects are about 1.8 times more likely to also prefer the transformed code. By contrast, when the language model prefers the original code, the odds become exp(-0.60 + 1.90) ≈ 3.7, i.e. our subjects are about 3.7 times more likely to also prefer the original code. Importantly, the crucial predictor, LM_Out, has a positive effect with 0 well outside its credible interval. Thus, on average, not only do humans agree with the language model more often than not, they agree with the language model almost twice as strongly in its judgements on the original code. This effect could pose risks to tools that use surprisal to guide transformations, as the new code may not be judged as reliably by the models. Finally, some but not all of the specific transformations also differ significantly from the grand mean in either their baseline effects or their interaction terms. In summary, we can say that the model confirms what was seen in Figure 2: humans agree with the language model, except when it prefers the original code in an adding-parentheses transformation.
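Exponentiating the Table 5 log-odds estimates recovers these odds (a quick check, not part of the original analysis):

```python
import math

intercept, lm_out = -0.60, 1.90  # fixed effects from Table 5

# LM prefers the transformed code (LM_Out = 0): odds of picking the original
odds_lm_transformed = math.exp(intercept)        # ~0.55, i.e. ~1.8x for transformed
# LM prefers the original code (LM_Out = 1): odds of picking the original
odds_lm_original = math.exp(intercept + lm_out)  # ~3.7x for the original
```

The ratio of the two strengths (3.7 vs. 1.8) is what makes agreement with the model "almost twice as strong" when the model favors the original code.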

Therefore, our data supports RQ 4. In most cases, humans tend to agree with the language model. Adding/removing parentheses behaves differently, again suggesting that human preferences are more variable for this transformation than for the swaps. In particular, perhaps a more powerful language model could better capture preferences for adding parentheses.

5. Discussion

Our finding that developers prefer more predictable code forms is consistent with results from psycholinguistics, supporting the idea that the repetitiveness of code arises on the human-human channel. The question remains: why is code even more repetitive than natural language? We propose some theories to be tested in future work.

One possible reason for the greater repetitiveness of code is that (unlike with code) natural language utterances are very rarely entirely meaning-equivalent, since NL has connotative as well as propositional meaning. For instance, while "bread and butter" is prima facie synonymous with "butter and bread", one might infer from a description of eating "butter and bread" that butter was unusually dominant in this situation. These subtle differences of meaning may drive speakers to choose different forms in different situations. However, i = 1+i and i = i+1 are semantically equivalent on the human-machine channel. Because developers are trained to understand the true operational semantics of code, different connotations for these two forms seem unlikely to evolve, even on the human-human channel. It is thus difficult to imagine how i = 1+i might carry a subtly different and useful connotation, and hence difficult to find a situation (analogous to "butter and bread") where a developer might consciously feel the need to use that construction.

Another possibility is that repetitiveness is particularly beneficial in situations with increased cognitive load. Indeed, it has been proposed that children resort to repetitiveness in language learning more so than adults specifically because they have reduced cognitive capacity (Hudson Kam and Newport, 2005; Schwab et al., 2018). Because code comprehension is challenging, repetitiveness may be extra beneficial for code compared to natural language. Finally, repetitiveness may arise from pedagogical choices, or from coding standards.

Threats. We acknowledge potential threats. Internal validity threats might arise from a few sources. First, our transforms must be sound. We carefully reviewed the code of our transformations and hand-checked a large sample of the results to ensure correctness and diminish the possibility of error. Second, while we primarily used lexical ngram-based models, we validated the robustness of our corpus study with LSTM models on several examples. Moreover, for the small-scope localized transforms we use, we believe the ngram models are adequate, as such models have been used in prior work of this nature (Casalnuovo et al., 2018).

Regarding external validity, one issue is our choice of projects. We have chosen a reasonable sample of projects in two widely-used languages, and our results largely hold up. We believe it is likely that our results will generalize to languages similar to Java and Python. It is possible that other languages (e.g. Haskell) with tightly-knit, highly-skilled user-bases may behave differently. We have also only focused on small-scale transformations to expressions. It is possible that larger transforms may have different effects, and may require different modeling techniques as mentioned above. Finally, coding style guides can influence how code is written. We searched our projects for references to style guides and found several variants. Some projects had explicit style checks, and others were much looser (for instance, Apache Tomcat). Virtually all the guidelines were largely unrelated to our transformations, and were more focused on naming & whitespace. We did find a few limited references in Java to using parentheses as needed for clarity, and just one project specified that null values should come second, but otherwise nothing that would affect our transforms. Construct validity: we measure prevalence using surprisal from language models. Language models are highly-refined methods for estimating occurrence frequency, and have proven value in natural language processing, so this is well-justified.

Actionability and Future Work: We acknowledge that our work thus far is more science than engineering, but it does have practical implications. While sound, meaning-preserving transforms aren’t realistic for natural language, they are for code! We confirm that surprisal of code is strongly associated with human preference, thus providing theoretical support for a tool that aims to rewrite code into a meaning-preserving form preferred by programmers. We plan to see if better neural models, such as the Transformer (Vaswani et al., 2017), align with human preference more strongly. Psycholinguistic experiments have demonstrated that natural language comprehension and production are robustly sensitive to surprisal across a wide range of measures (e.g. forced-choice preferences, reading times, production times, comprehension accuracy, neural measures, etc.) (Rayner and Well, 1996; Wells et al., 2009; Morgan and Levy, 2016; Oldfield and Wingfield, 1965; Kutas and Hillyard, 1984). In future work we plan to extend the human subjects work here to test whether ease of code comprehension and production similarly relates to surprisal.

6. Related Work

First, we briefly note the recent work by Rahman et al. claiming the greater repetitiveness of source code over English is diminished once syntactic tokens are removed (Rahman et al., 2019). They compared a fixed baseline of English without syntactic markers against code with and without these tokens. However, Casalnuovo et al. performed two pairwise English-Java corpus comparisons, with and without syntactic (closed-category) tokens. This more balanced comparison reveals that without the markers of syntax, the gap of predictability between code and English actually increases (Casalnuovo et al., 2018).

6.1. Program Understanding

While we draw theoretical inspiration from the "bimodality" of software (Barr, 2018; Allamanis et al., 2018), we also note that this theory is only a recent re-formulation of a much older idea. The idea of programs serving a dual purpose, for machine and human, is decades old (Brooks, 1978). In the study of program understanding, this connects to the ideas of top-down and bottom-up comprehension. Top-down comprehension arguably relates to the human-human channel, where past experience guides a reader to seek out expected cues called beacons that help her decipher the program's meaning (Brooks, 1978, 1983). In contrast, bottom-up comprehension involves processing individual pieces of code, storing them in memory as semantic chunks, and constructing the meaning out of these pieces (Pennington, 1987; Shneiderman and Mayer, 1979). This in some ways resembles the way a machine would process code, where understanding arises out of precise operational semantics.

Program understanding using fMRI (functional magnetic resonance imaging) and eye-tracking as humans read programs has seen recent focus. A study by Siegmund et al. used fMRI to study both top-down and bottom-up comprehension in a programming environment (Siegmund et al., 2017). Using brain activation to measure "neural efficiency" (which associates lower brain activation with greater cognitive ease (Neubauer and Fink, 2009; Siegmund et al., 2017)), they find that top-down comprehension is more efficient than bottom-up comprehension. This supports the theory that the availability of highly probable "beacons" expected by humans facilitates code reading, compared to approaches that rely on bottom-up construction of semantics.

Meanwhile, eye-tracking studies (see the survey by Obaidellah et al. (Obaidellah et al., 2018)) also help explain how humans understand code and how they do so differently from natural language. For instance, studies show that while natural language follows a linear reading pattern (left to right in English), code readers jump around quite a bit, e.g. from a variable or function use to its declaration (Busjahn et al., 2015; Jbara and Feitelson, 2017). Indeed, expert readers tend to show more non-linear eye traces than novices (Jbara and Feitelson, 2017). These techniques have also been used together: Fritz et al. used eye-tracking in combination with EEG-based measures of electrical brain activity to predict the difficulty of programming tasks (Fritz et al., 2014), and recent calls for similar studies combining eye and brain methods highlight their potential for understanding program comprehension (Peitek et al., 2018). Fakhoury et al. used fNIRS (similar to fMRI) and eye tracking (Fakhoury et al., 2019) to relate cognitive load to lexical, but not structural, anti-patterns in code, though both led to worse task performance. Finally, in natural language, surprisal is known to relate to eye movement and comprehension (Hale, 2001; Levy, 2008; Demberg and Keller, 2008; Frank, 2013; Levy, 2013); our work suggests that lower surprisal in code will also ease reading & comprehension, which we hope to pursue in future work.

Finally, we note that Gopstein et al. (Gopstein et al., 2017, 2018) found that style guides advocating using only necessary curly braces are not empirically well founded; sometimes superfluous braces aid program understanding. This is consistent with our finding that parentheses are preferentially included to indicate evaluation order (even when not needed), serving a similar role in segmenting expressions as curly braces do in control flow. Although the predictability of source code and human understanding are different metrics, our results on parentheses removal suggest a similar phenomenon: developers sometimes use them to make code easier to read.

6.2. Generated vs. Stored Language

As we focused on repeated patterns used to express the same meaning, we highlight work on natural language exemplar theories. These exemplars are examples (at the word or phrase level) that are learned from usage and then generalized (Bybee, 2003; Hay and Bresnan, 2006). Importantly, some phrases are neither stored entirely in memory nor strictly generated from grammar, forming a category in between the two. Recent work by Morgan et al. examined this via order preferences in binomial expressions, such as bread and butter vs. butter and bread, to see how frequency of usage and abstract linguistic preferences (as an analogy, an abstract linguistic preference in code might be that variables go before constants, e.g., i+1 rather than 1+i) determine what humans prefer. They found that both effects play a role in preference, but frequency of expression overwhelms the effects of underlying preferences and codifies a norm (Morgan and Levy, 2016). We use a similar human subject study, but leverage code's structure to combine it with a natural experiment on code corpora.

We also briefly note a specific type of stored language, idioms, as they have been explored in software before. Idioms can quickly and efficiently convey meaning to those who know them (Schmitt and Carter, 2004). Idiomatic language has been mined from source code by Allamanis et al., finding syntactic fragments across programs that possess the same meaning (Allamanis and Sutton, 2014), though they focus on extracting idioms rather than studying controlled preferences as we do.

Finally, we briefly note recent work on generating program variants in mutation testing and program obfuscation. Mutation testing seeks to create semantically different programs to expose deficits in test suites (Papadakis et al., 2019). One relevant work has tried using language models to find natural mutants, finding that the mutants tended to be less natural (more improbable), but did not have success using the metric to guide mutation selection (Jimenez et al., 2018). Additionally, obfuscation transformations generally retain meaning, though they can produce different error behavior and run much slower (Collberg et al., 1997, 1998). Recent work (Liu et al., 2017) has used naturalness to combine obfuscation operators in a way that minimizes the effectiveness of deobfuscation techniques that learn patterns from software (Raychev et al., 2015; Vasilescu et al., 2017). These approaches, however, are more focused on applications than on understanding the decisions that inform source code choices.

7. Conclusion

Why is code so repetitive? Previous work strongly suggests that it is not merely the restricted syntax of programming languages; and it is most definitely not because programming languages restrict the possible ways to express computations. In this study, we hypothesize that programmers prefer certain ways of writing code. We model familiarity using a language model estimated over a large corpus, and measure the "familiarity" of different ways of writing code, while controlling for meaning. We find that "familiar" forms, as scored by language model surprisal, are indeed preferred by code writers, and also align with the preferences of human readers in a controlled experiment on Mechanical Turk. Finally, we draw connections between our work and well-established theories from psycholinguistics.


  • M. Allamanis, E. T. Barr, P. Devanbu, and C. Sutton (2018) A survey of machine learning for big code and naturalness. ACM Computing Surveys. Cited by: 1st item, §2, §2, §6.1.
  • M. Allamanis, E. T. Barr, C. Bird, and C. Sutton (2014) Learning natural coding conventions. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, New York, NY, USA, pp. 281–293. External Links: ISBN 978-1-4503-3056-5, Link, Document Cited by: §3.4.
  • M. Allamanis and C. Sutton (2014) Mining idioms from source code. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, New York, NY, USA, pp. 472–483. External Links: ISBN 978-1-4503-3056-5, Link, Document Cited by: §6.2.
  • M. Allamanis (2018) The adverse effects of code duplication in machine learning models of code. CoRR abs/1812.06469. External Links: Link, 1812.06469 Cited by: §3.1.
  • A. Alqaimi, P. Thongtanunam, and C. Treude (2019) Automatically generating documentation for lambda expressions in java. In Proceedings of the 16th International Conference on Mining Software Repositories, MSR ’19, Piscataway, NJ, USA, pp. 310–320. External Links: Link, Document Cited by: §3.4.
  • D. J. Barr, R. Levy, C. Scheepers, and H. J. Tily (2013) Random effects structure for confirmatory hypothesis testing: keep it maximal. Journal of Memory and Language 68 (3), pp. 255 – 278. External Links: ISSN 0749-596X, Document, Link Cited by: §4.2.
  • E. Barr (2018) Bimodal software engineering. Note: Machine Learning for Programming Workshop, Federated Logic Conference (FLOC) 2018 External Links: Link Cited by: 1st item, §2, §6.1.
  • K. Bock (1987) An effect of the accessibility of word forms on sentence structures. Journal of memory and language 26 (2), pp. 119–137. Cited by: §2.
  • R. Brooks (1978) Using a behavioral theory of program comprehension in software engineering. In Proceedings of the 3rd international conference on Software engineering, pp. 196–201. Cited by: §6.1.
  • R. Brooks (1983) Towards a theory of the comprehension of computer programs. International Journal of Man-Machine Studies 18 (6), pp. 543 – 554. External Links: ISSN 0020-7373, Document, Link Cited by: §6.1.
  • T. Busjahn, R. Bednarik, A. Begel, M. Crosby, J. H. Paterson, C. Schulte, B. Sharif, and S. Tamm (2015) Eye movements in code reading: relaxing the linear order. In Program Comprehension (ICPC), 2015 IEEE 23rd International Conference on, pp. 255–265. Cited by: §6.1.
  • J. Bybee (2003) Phonology and language use. Vol. 94, Cambridge University Press. Cited by: §6.2.
  • C. Casalnuovo, K. Sagae, and P. Devanbu (2018) Studying the difference between natural and programming language corpora. Empirical Software Engineering, pp. 1–46. Cited by: §1, §1, §2, §2, §5, §6.
  • D. Chen, K. T. Stolee, and T. Menzies (2019) Replication can improve prior results: a github study of pull request acceptance. In Proceedings of the 27th International Conference on Program Comprehension, ICPC ’19, Piscataway, NJ, USA, pp. 179–190. External Links: Link, Document Cited by: §3.4.
  • J. Cohen, P. Cohen, S. G. West, and L. S. Aiken (2003) Applied multiple regression/correlation analysis for the behavioral sciences. UK: Taylor & Francis. Cited by: §3.5.
  • C. Collberg, C. Thomborson, and D. Low (1997) A taxonomy of obfuscating transformations. Technical report Department of Computer Science, The University of Auckland, New Zealand. Cited by: §4.1.2, §6.2.
  • C. Collberg, C. Thomborson, and D. Low (1998) Manufacturing cheap, resilient, and stealthy opaque constructs. In Proceedings of the 25th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pp. 184–196. Cited by: §6.2.
  • R. D. Cook and S. Weisberg (1982) Residuals and influence in regression. New York: Chapman and Hall. Cited by: §3.5.
  • M. Data (2018) The bot problem on MTurk. Note: Accessed May 2019 Cited by: §3.4.
  • V. Demberg and F. Keller (2008) Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition 109 (2), pp. 193 – 210. External Links: ISSN 0010-0277, Document, Link Cited by: §3.2, §3.4, §6.1.
  • S. Depaoli and R. Van de Schoot (2017) Improving transparency and replication in Bayesian statistics: the WAMBS-checklist. Psychological Methods 22 (2), pp. 240. Cited by: §3.5.
  • S. Fakhoury, D. Roy, Y. Ma, V. Arnaoudova, and O. Adesope (2019) Measuring the impact of lexical and structural inconsistencies on developers’ cognitive load during bug localization. Empirical Software Engineering. External Links: ISSN 1573-7616, Document, Link Cited by: §6.1.
  • V. S. Ferreira and G. S. Dell (2000) Effect of ambiguity and lexical availability on syntactic and lexical production. Cognitive psychology 40 (4), pp. 296–340. Cited by: §2.
  • S. Frank (2013) Uncertainty reduction as a measure of cognitive load in sentence comprehension. Topics in Cognitive Science 5 (3), pp. 475–494. Cited by: §3.2, §3.4, §6.1.
  • T. Fritz, A. Begel, S. C. Müller, S. Yigit-Elliott, and M. Züger (2014) Using psycho-physiological measures to assess task difficulty in software development. In Proceedings of the 36th International Conference on Software Engineering, pp. 402–413. Cited by: §6.1.
  • D. Gopstein, J. Iannacone, Y. Yan, L. DeLong, Y. Zhuang, M. K. Yeh, and J. Cappos (2017) Understanding misunderstandings in source code. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, pp. 129–139. Cited by: §4.1.2, §6.1.
  • D. Gopstein, H. H. Zhou, P. Frankl, and J. Cappos (2018) Prevalence of confusing code in software projects: atoms of confusion in the wild. In MSR ’18: 15th International Conference on Mining Software Repositories, May 28–29, 2018, Gothenburg, Sweden, pp. 11 pages. External Links: Document Cited by: §4.1.2, §6.1.
  • J. Hale (2001) A probabilistic earley parser as a psycholinguistic model. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pp. 1–8. Cited by: §3.2, §3.4, §6.1.
  • J. Hay and J. Bresnan (2006) Spoken syntax: the phonetics of giving a hand in New Zealand English. The Linguistic Review 23 (3), pp. 321–349. Cited by: §6.2.
  • V. J. Hellendoorn and P. Devanbu (2017) Are deep neural networks the best choice for modeling source code? In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, ser. ESEC/FSE, pp. 763–773. Cited by: §1, §2, §3.2, §3.4.
  • A. Hindle, E. T. Barr, Z. Su, M. Gabel, and P. Devanbu (2012) On the naturalness of software. In Proceedings of the 34th International Conference on Software Engineering, ICSE ’12, Piscataway, NJ, USA, pp. 837–847. External Links: ISBN 978-1-4673-1067-3, Link Cited by: §1, §2, §2.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §2.
  • M. Hollander and D. A. Wolfe (1999) Nonparametric statistical methods. Cited by: §3.5.
  • C. L. Hudson Kam and E. L. Newport (2005) Regularizing unpredictable variation: the roles of adult and child learners in language formation and change. Language learning and development 1 (2), pp. 151–195. Cited by: §2, §5.
  • T. F. Jaeger and R. P. Levy (2007) Speakers optimize information density through syntactic reduction. In Advances in neural information processing systems, pp. 849–856. Cited by: §2.
  • A. Jbara and D. G. Feitelson (2017) How programmers read regular code: a controlled experiment using eye tracking. Empirical Software Engineering 22 (3), pp. 1440–1477. Cited by: §6.1.
  • M. Jimenez, T. T. Chekam, M. Cordy, M. Papadakis, M. Kintis, Y. L. Traon, and M. Harman (2018) Are mutants really natural?: a study on how naturalness helps mutant selection. In Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, pp. 3. Cited by: §6.2.
  • R. Just (2014) The major mutation framework: efficient and scalable mutation analysis for java. In Proceedings of the 2014 International Symposium on Software Testing and Analysis, ISSTA 2014, New York, NY, USA, pp. 433–436. External Links: ISBN 978-1-4503-2645-2, Link, Document Cited by: §3.3.
  • M. Kutas and S. A. Hillyard (1984) Brain potentials during reading reflect word expectancy and semantic association. Nature 307 (5947), pp. 161. Cited by: §5.
  • R. Levy (2008) Expectation-based syntactic comprehension. Cognition 106 (3), pp. 1126 – 1177. External Links: ISSN 0010-0277, Document, Link Cited by: §3.2, §3.4, §6.1.
  • R. Levy (2013) Memory and surprisal in human sentence comprehension. In Sentence processing, pp. 90–126. Cited by: §3.2, §3.4, §6.1.
  • H. Liu, C. Sun, Z. Su, Y. Jiang, M. Gu, and J. Sun (2017) Stochastic optimization of program obfuscation. In Software Engineering (ICSE), 2017 IEEE/ACM 39th International Conference on, pp. 221–231. Cited by: §3.4, §6.2.
  • L. Madeyski and N. Radyk (2010) Judy–a mutation testing tool for java. IET software 4 (1), pp. 32–42. Cited by: §3.3.
  • M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini (1993) Building a large annotated corpus of English: the Penn Treebank. Comput. Linguist. 19 (2), pp. 313–330. External Links: ISSN 0891-2017, Link Cited by: §2.
  • E. Morgan and R. Levy (2016) Abstract knowledge versus direct experience in processing of binomial expressions. Cognition 157, pp. 384–402. Cited by: §1, §5, §6.2.
  • A. C. Neubauer and A. Fink (2009) Intelligence and neural efficiency. Neuroscience & Biobehavioral Reviews 33 (7), pp. 1004–1023. Cited by: §6.1.
  • A. T. Nguyen and T. N. Nguyen (2015) Graph-based statistical language model for code. In Proceedings of the 37th International Conference on Software Engineering - Volume 1, ICSE ’15, Piscataway, NJ, USA, pp. 858–868. External Links: ISBN 978-1-4799-1934-5, Link Cited by: §1.
  • U. Obaidellah, M. Al Haek, and P. C. Cheng (2018) A survey on the usage of eye-tracking in computer programming. ACM Computing Surveys (CSUR) 51 (1), pp. 5. Cited by: §6.1.
  • R. C. Oldfield and A. Wingfield (1965) Response latencies in naming objects. Quarterly Journal of Experimental Psychology 17 (4), pp. 273–281. Cited by: §5.
  • M. Papadakis, M. Kintis, J. Zhang, Y. Jia, Y. Le Traon, and M. Harman (2019) Mutation testing advances: an analysis and survey. In Advances in Computers, Vol. 112, pp. 275–378. Cited by: §6.2.
  • N. Peitek, J. Siegmund, C. Parnin, S. Apel, and A. Brechmann (2018) Toward conjoint analysis of simultaneous eye-tracking and fmri data for program-comprehension studies. In Proceedings of the Workshop on Eye Movements in Programming, pp. 1. Cited by: §6.1.
  • N. Pennington (1987) Stimulus structures and mental representations in expert comprehension of computer programs. Cognitive Psychology 19 (3), pp. 295 – 341. External Links: ISSN 0010-0285, Document, Link Cited by: §6.1.
  • G. A. A. Prana, C. Treude, F. Thung, T. Atapattu, and D. Lo (2019) Categorizing the content of github readme files. Empirical Software Engineering 24 (3), pp. 1296–1327. External Links: ISSN 1573-7616, Document, Link Cited by: §3.4.
  • M. Rahman, D. Palani, and P. Rigby (2019) Natural software revisited. In Proceedings, ICSE, Cited by: §6, footnote 3.
  • B. Ray, V. Hellendoorn, S. Godhane, Z. Tu, A. Bacchelli, and P. Devanbu (2016) On the ”naturalness” of buggy code. In Proceedings of the 38th International Conference on Software Engineering, ICSE ’16, New York, NY, USA, pp. 428–439. External Links: ISBN 978-1-4503-3900-1, Link, Document Cited by: §1.
  • V. Raychev, M. Vechev, and A. Krause (2015) Predicting program properties from ”big code”. In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’15, New York, NY, USA, pp. 111–124. External Links: ISBN 978-1-4503-3300-9, Link, Document Cited by: §1, §6.2.
  • K. Rayner and A. D. Well (1996) Effects of contextual constraint on eye movements in reading: a further examination. Psychonomic Bulletin & Review 3 (4), pp. 504–509. Cited by: §2, §5.
  • P. Bürkner (2017) brms: an R package for Bayesian multilevel models using Stan. Journal of Statistical Software 80 (1), pp. 1–28. External Links: Document Cited by: §3.5.
  • N. Schmitt and R. Carter (2004) Formulaic sequences in action. Formulaic sequences: Acquisition, processing and use, pp. 1–22. Cited by: §6.2.
  • J. F. Schwab, L. Casey, and A. E. Goldberg (2018) When regularization gets it wrong: children over-simplify language input only in production. Journal of child language 45 (5), pp. 1054–1072. Cited by: §5.
  • B. Shneiderman and R. Mayer (1979) Syntactic/semantic interactions in programmer behavior: a model and experimental results. International Journal of Computer & Information Sciences 8 (3), pp. 219–238. Cited by: §6.1.
  • J. Siegmund, N. Peitek, C. Parnin, S. Apel, J. Hofmeister, C. Kästner, A. Begel, A. Bethmann, and A. Brechmann (2017) Measuring neural efficiency of program comprehension. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, pp. 140–150. Cited by: §6.1.
  • M. Sundermeyer, R. Schlüter, and H. Ney (2012) LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association, Cited by: §2.
  • Z. Tu, Z. Su, and P. Devanbu (2014) On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, New York, NY, USA, pp. 269–280. External Links: ISBN 978-1-4503-3056-5, Link, Document Cited by: §1, §2, §3.2.
  • B. Vasilescu, C. Casalnuovo, and P. Devanbu (2017) Recovering clear, natural identifiers from obfuscated js names. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, pp. 683–693. Cited by: §1, §6.2.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §5.
  • E. W. Weisstein (2004) Bonferroni correction. Cited by: §3.5.
  • J. B. Wells, M. H. Christiansen, D. S. Race, D. J. Acheson, and M. C. MacDonald (2009) Experience and sentence processing: statistical learning and relative clause comprehension. Cognitive psychology 58 (2), pp. 250–271. Cited by: §2, §5.