Analyzing and Supporting Adaptation of Online Code Examples

05/28/2019
by   Tianyi Zhang, et al.
University of California, Irvine

Developers often resort to online Q&A forums such as Stack Overflow (SO) to fulfill their programming needs. Although code examples on those forums are good starting points, they are often incomplete and inadequate for developers' local program contexts; adaptation of those examples is necessary to integrate them into production code. As a consequence, the process of adapting online code examples is done over and over again, by multiple developers independently. Our work extensively studies these adaptations and variations, serving as the basis for a tool that helps integrate these online code examples into a target context in an interactive manner. We perform a large-scale empirical study of the nature and extent of adaptations and variations of SO snippets. We construct a comprehensive dataset linking SO posts to GitHub counterparts based on clone detection, timestamp analysis, and explicit URL references. We then qualitatively inspect 400 SO examples and their GitHub counterparts and develop a taxonomy of 24 adaptation types. Using this taxonomy, we build an automated adaptation analysis technique on top of GumTree to classify the entire dataset into these types. We build a Chrome extension called ExampleStack that automatically lifts an adaptation-aware template from each SO example and its GitHub counterparts to identify hot spots where most changes happen. A user study with sixteen programmers shows that seeing the commonalities and variations in similar GitHub counterparts increases their confidence about the given SO example, and helps them grasp a more comprehensive view of how to reuse the example differently and avoid common pitfalls.


I Introduction

Nowadays, a common way of quickly accomplishing programming tasks is to search for and reuse code examples in online Q&A forums such as Stack Overflow (SO) [1, 2, 3]. A case study at Google shows that developers issue an average of twelve code search queries per weekday [4]. As of July 2018, Stack Overflow has accumulated 26M answers to 16M programming questions. Copying code examples from Stack Overflow is common [5], and adapting them to fit a target program is recognized as a top barrier when reusing code from Stack Overflow [6]. SO examples are created for illustration purposes and can serve as a good starting point. However, they may not be ready to be ported to a production environment as-is: previous studies find that SO examples may suffer from API usage violations [7], insecure coding practices [8], unchecked obsolete usage [9], and incomplete code fragments [10]. Hence, developers may have to manually adapt code examples when importing them into their own projects.

Our goal is to investigate the common adaptation types and their frequencies in online code examples, such as those found in Stack Overflow, which are used by a large number of software developers around the world. To study how they are adopted and adapted in real projects, we contrast them against similar code fragments in GitHub projects. The insights gained from this study could inform the design of tools for helping developers adapt code snippets they find in Q&A sites. In this paper, we describe one such tool we developed, ExampleStack, which works as a Chrome extension.

In broad strokes, the design and main results of our study are as follows. We link SO examples to GitHub counterparts using multiple complementary filters. First, we quality-control GitHub data by removing forked projects and selecting projects with at least five stars. Second, we perform clone detection [11] between 312K SO posts and 51K non-forked GitHub projects to ensure that SO examples are similar to GitHub counterparts. Third, we perform timestamp analysis to ensure that GitHub counterparts are created later than the SO examples. Fourth, we look for explicit URL references from GitHub counterparts to SO examples by matching the post ID. As a result, we construct a comprehensive dataset of variations and adaptations.

When we use all four filters above, we find only 629 SO examples with GitHub counterparts. Recent studies find that very few developers explicitly attribute reused code to the original SO post [5, 12, 6]. Therefore, we use this resulting set of 629 SO examples as an under-approximation of SO code reuse and call it the adaptation dataset. If we apply only the first three filters above, we find 14,124 SO examples with GitHub counterparts that represent potential code reuse from SO to GitHub. While this set does not necessarily imply any causality or intentional code reuse, it still demonstrates the kinds of common variations between SO examples and their GitHub counterparts, which developers might want to consider during code reuse. Therefore, we consider this second dataset an over-approximation of SO code reuse and call it the variation dataset.

We randomly select 200 clone pairs from each dataset and manually examine the program differences between SO examples and their GitHub counterparts. Based on the manual inspection insights, we construct an adaptation taxonomy with 6 high-level categories and 24 specialized types. We then develop an automated adaptation analysis technique built on top of GumTree [13] to categorize syntactic program differences into different adaptation types. The precision and recall of this technique are 98% and 96%, respectively. This technique allows us to quantify the extent of common adaptations and variations in each dataset. The analysis shows that the adaptations and variations between SO examples and their GitHub counterparts are prevalent and non-trivial. It also highlights several adaptation types such as type conversion, handling potential exceptions, and adding if checks, which are frequently performed yet not automated by existing code integration techniques [14, 15].

Building on this adaptation analysis technique, we develop a Chrome extension called ExampleStack to guide developers in adapting and customizing online code examples to their own contexts. For a given SO example, ExampleStack shows a list of similar code snippets in GitHub and also lifts an adaptation-aware template from those snippets by identifying common, unchanged code as well as the hot spots where most changes happen. Developers can interact with and customize these lifted templates by selecting desired options to fill in the hot spots. We conduct a user study with sixteen developers to investigate whether ExampleStack inspires them with new adaptations that they may otherwise ignore during code reuse. Our key finding is that participants using ExampleStack focus more on adaptations about code safety (e.g., adding an if check) and logic customization, while participants without ExampleStack make more shallow adaptations such as variable renaming. In the post survey, participants find that ExampleStack helps them easily reach consensus on how to reuse a code example, by seeing the commonalities and variations between the example and its GitHub counterparts. Participants also feel more confident after seeing how other GitHub developers use similar code in different contexts, which one participant describes as “asynchronous pair programming.”

In summary, this work makes the following contributions:

  • It makes publicly available a comprehensive dataset of adaptations and variations between SO and GitHub. (Our dataset and tool are available at https://github.com/tianyi-zhang/ExampleStack-ICSE-Artifact.) The adaptation dataset includes 629 groups of GitHub counterparts with explicit references to SO posts, and the variation dataset includes 14,124 groups. These datasets are created with care using multiple complementary methods for quality control: clone detection, timestamp analysis, and explicit references.

  • It puts forward an adaptation taxonomy of online code examples and an automated technique for classifying adaptations. This taxonomy differs from existing change-type taxonomies in refactoring [16] and software evolution [17, 18]: it captures the particular kinds of adaptations applied to online code examples.

  • It provides browser-based tool support, called ExampleStack, that displays the commonalities and variations between a SO example and its GitHub counterparts, along with their adaptation types and frequencies. Participants find that seeing GitHub counterparts increases their confidence in reusing a SO example and helps them understand different reuse scenarios and corner cases.

The rest of the paper is organized as follows. Section II describes the data collection pipeline and compares the characteristics of the two datasets. Section III describes the adaptation taxonomy development and an automated adaptation analysis technique. Section IV describes the quantitative analysis of adaptations and variations. Section V explains the design and implementation of ExampleStack. Section VI describes a user study that evaluates the usefulness of ExampleStack. Section VII discusses threats to validity. Section VIII discusses related work, and Section IX concludes the paper.

II Linking Stack Overflow to GitHub

This section describes the data collection pipeline. Due to the large portion of unattributed SO examples in GitHub [5, 6, 12], it is challenging to construct a complete set of reused code from SO to GitHub. To overcome this limitation, we apply four quality-control filters to underapproximate and overapproximate code examples reused from SO to GitHub, resulting in two complementary datasets.

Fig. 1: Comparison between SO examples in the adaptation dataset and the variation dataset: (a) examples with different numbers of clones, (b) examples with different code sizes, (c) examples with different vote scores

GitHub project selection and deduplication. Since GitHub has many toy projects that do not adequately reflect software engineering practices [19], we only consider GitHub projects that have at least five stars. To account for internal duplication in GitHub [20], we choose non-fork projects only and further remove duplicated GitHub files using the same file hashing method as in [20], since such file duplication may skew our analysis. As a result, we download 50,826 non-forked Java repositories with at least five stars from GHTorrent [21]. After deduplication, 5,825,727 distinct Java files remain.

Detecting GitHub candidates for SO snippets. From the SO dump taken in October 2016 [22], we extract 312,219 answer posts that have java or android tags and also contain code snippets in <code> markup. We consider code snippets in answer posts only, since snippets in question posts are rarely used as examples. Then we use a token-based clone detector, SourcererCC (SCC) [11], to find similar code between 5.8M distinct Java files and 312K SO posts. We choose SCC because it has high precision and recall and also scales to a large code corpus. Since SO snippets are often free-standing statements [23, 24], we parse and tokenize them using a customized Java parser [25]. Prior work finds that larger SO snippets have more meaningful clones in GitHub [26]. Hence, we choose to study SO examples with at least 50 tokens, not counting code comments, Java keywords, and delimiters. We set the similarity threshold to 70% since it yields the best precision and recall on multiple clone benchmarks [11]. We cannot set it to 100%, since SCC would then only retain exact copies and exclude adapted code. We run SCC on a server machine with 116 cores and 256G RAM. It takes 24 hours to complete, resulting in 21,207 SO methods that have one or more similar code fragments (i.e., clones) in GitHub.
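
To make the filtering concrete, the sketch below shows a token-bag similarity check in the spirit of SourcererCC, together with the 50-token and 70% thresholds used above. It is a simplification for illustration only: SCC uses a real Java lexer and index-based candidate filtering, whereas the regex tokenizer, keyword list, and class names here are our own assumptions.

```java
import java.util.*;
import java.util.regex.*;

// Minimal sketch of a token-bag similarity check between a SO snippet and a
// GitHub snippet, mirroring the study's filters (>= 50 tokens, >= 70% similarity).
public class TokenSimilarity {
    // Partial keyword list; a real lexer would exclude all Java keywords.
    private static final Set<String> JAVA_KEYWORDS = Set.of(
        "public", "private", "protected", "static", "final", "void", "int",
        "return", "if", "else", "for", "while", "try", "catch", "new", "class");

    // Tokenize into identifiers and numbers, dropping keywords and delimiters.
    static Map<String, Integer> tokenBag(String code) {
        Map<String, Integer> bag = new HashMap<>();
        Matcher m = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*|\\d+").matcher(code);
        while (m.find()) {
            String tok = m.group();
            if (!JAVA_KEYWORDS.contains(tok)) bag.merge(tok, 1, Integer::sum);
        }
        return bag;
    }

    // Overlap similarity: shared token count divided by the larger bag size.
    static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
        int shared = 0;
        int sizeA = a.values().stream().mapToInt(Integer::intValue).sum();
        int sizeB = b.values().stream().mapToInt(Integer::intValue).sum();
        for (Map.Entry<String, Integer> e : a.entrySet())
            shared += Math.min(e.getValue(), b.getOrDefault(e.getKey(), 0));
        return (double) shared / Math.max(sizeA, sizeB);
    }

    static boolean isClonePair(String soSnippet, String ghSnippet) {
        Map<String, Integer> so = tokenBag(soSnippet), gh = tokenBag(ghSnippet);
        int soSize = so.values().stream().mapToInt(Integer::intValue).sum();
        return soSize >= 50 && similarity(so, gh) >= 0.7;
    }
}
```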

Timestamp analysis. If the GitHub clone of a SO example is created before the SO post, we consider it unlikely to be reused from SO and remove it from our dataset. To identify the creation date of a GitHub clone, we write a script to retrieve the Git commit history of the file and match the clone snippet against each file revision. We use the timestamp of the earliest matched file revision as the creation time of a GitHub clone. As a result, 7,083 SO examples (33%) are excluded since all their GitHub clones are committed before the SO posts.
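
The following sketch illustrates one way such a script could work, shelling out to git to enumerate the revisions of a file and taking the earliest revision that still contains the snippet. The helper names and the whitespace-normalized containment check are our simplifications; the study matched clone snippets at the token level.

```java
import java.io.*;
import java.nio.file.Path;
import java.time.Instant;
import java.util.*;

// Sketch of the timestamp analysis: the creation time of a GitHub clone is the
// commit time of the earliest file revision that contains the snippet.
public class CloneTimestamp {
    static Optional<Instant> creationTime(Path repo, String file, String snippet)
            throws IOException, InterruptedException {
        Instant earliest = null;
        // git log lists commits newest first, so the last match seen below is
        // the oldest revision containing the snippet. Format: "<sha> <unix-time>".
        for (String line : run(repo, "git", "log", "--format=%H %ct", "--", file)) {
            String[] parts = line.trim().split("\\s+");
            String content = String.join("\n",
                run(repo, "git", "show", parts[0] + ":" + file));
            if (matchesSnippet(content, snippet)) {
                earliest = Instant.ofEpochSecond(Long.parseLong(parts[1]));
            }
        }
        return Optional.ofNullable(earliest);
    }

    // Simplified containment check over normalized whitespace.
    static boolean matchesSnippet(String fileContent, String snippet) {
        return fileContent.replaceAll("\\s+", " ")
                          .contains(snippet.replaceAll("\\s+", " "));
    }

    // Run a command in the repository directory and collect stdout lines.
    static List<String> run(Path dir, String... cmd)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).directory(dir.toFile()).start();
        List<String> out = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            for (String l; (l = r.readLine()) != null; ) out.add(l);
        }
        p.waitFor();
        return out;
    }
}
```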

Scanning explicitly attributed SO examples. Despite the large portion of unattributed SO examples, we can still scan GitHub clones for explicit references, such as SO links in code comments, to confirm whether a clone is copied from SO. If the SO link in a GitHub clone points to a question post instead of an answer post, we check whether the corresponding SO example comes from any of its answer posts by matching the post ID. We find 629 explicitly referenced SO examples.
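
A minimal version of this scan can be written as a regular-expression pass over file contents, as sketched below. The exact URL patterns the study handled are not spelled out here, so the pattern and class names are illustrative assumptions.

```java
import java.util.*;
import java.util.regex.*;

// Sketch of the attribution scan: extract Stack Overflow post IDs referenced
// in a GitHub file. Question links (/questions/<id>, /q/<id>) are later
// resolved to their answers by matching the post ID, as described above.
public class AttributionScanner {
    private static final Pattern SO_LINK = Pattern.compile(
        "stackoverflow\\.com/(?:questions/(\\d+)|a/(\\d+)|q/(\\d+))");

    static Set<Integer> referencedPostIds(String fileContent) {
        Set<Integer> ids = new HashSet<>();
        Matcher m = SO_LINK.matcher(fileContent);
        while (m.find()) {
            // Exactly one capture group is non-null per match.
            for (int g = 1; g <= m.groupCount(); g++) {
                if (m.group(g) != null) ids.add(Integer.parseInt(m.group(g)));
            }
        }
        return ids;
    }
}
```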

Overapproximating and underapproximating reused code. We use the set of 629 explicitly attributed SO examples as an underapproximation of reused code from SO to GitHub, which we call the adaptation dataset. We consider the 14,124 SO examples remaining after timestamp analysis as an overapproximation of potentially reused code, which we call the variation dataset. Figure 1 compares the characteristics of these two datasets of SO examples in terms of the number of GitHub clones, code size, and vote score (i.e., upvotes minus downvotes). Since developers do not often attribute SO code examples, explicitly referenced SO examples have a median of only one GitHub clone, while SO examples in the variation dataset have a median of two clones. Both sets of SO examples have similar length, with medians of 26 vs. 25 lines of code. However, SO examples in the adaptation dataset have significantly more upvotes than those in the variation dataset: a median of 16 vs. 1. In the following sections, we inspect, analyze, and quantify the adaptations and variations evidenced by both datasets.

Category: Code Hardening
  - Add a conditional: Insert(u, p, k) ∧ NodeType(u, IfStatement)
  - Insert a final modifier: Insert(u, p, k) ∧ NodeType(u, Modifier) ∧ NodeValue(u, final)
  - Handle a new exception type: Exception(e, GH) ∧ ¬Exception(e, SO)
  - Clean up unmanaged resources (e.g., close a stream): (LocalCall(m, GH) ∨ InstanceCall(m, GH)) ∧ ¬LocalCall(m, SO) ∧ ¬InstanceCall(m, SO) ∧ IsCleanMethod(m)

Category: Resolve Compilation Errors
  - Declare an undeclared variable: Insert(u, p, k) ∧ NodeType(u, VariableDeclaration) ∧ NodeValue(u, v) ∧ Use(v, SO) ∧ ¬Def(v, SO)
  - Specify a target of method invocation: InstanceCall(m, GH) ∧ LocalCall(m, SO)
  - Remove undeclared variables or local method calls: (Use(v, SO) ∧ ¬Def(v, SO) ∧ ¬Use(v, GH)) ∨ (LocalCall(m, SO) ∧ ¬LocalCall(m, GH) ∧ ¬InstanceCall(m, GH))

Category: Exception Handling
  - Insert/delete a try-catch block: (Insert(u, p, k) ∨ Delete(u)) ∧ NodeType(u, TryStatement)
  - Insert/delete a thrown exception in a method header: Changed(u) ∧ NodeType(u, Type) ∧ Parent(p, u) ∧ NodeType(p, MethodDeclaration) ∧ NodeValue(u, t) ∧ IsExceptionType(t)
  - Update the exception type: Update(u, v) ∧ NodeType(u, SimpleType) ∧ NodeType(v, SimpleType) ∧ NodeValue(u, t1) ∧ IsExceptionType(t1) ∧ NodeValue(v, t2) ∧ IsExceptionType(t2)
  - Change statements in a catch block: Changed(u) ∧ Ancestor(p, u) ∧ NodeType(p, CatchClause)
  - Change statements in a finally block: Changed(u) ∧ Ancestor(p, u) ∧ NodeType(p, FinallyBlock)

Category: Logic Customization
  - Change a method call: Changed(u) ∧ Ancestor(p, u) ∧ NodeType(p, MethodInvocation)
  - Update a constant value: Update(u, v) ∧ NodeType(u, Literal) ∧ NodeType(v, Literal)
  - Change a conditional expression: Changed(u) ∧ Ancestor(p, u) ∧ (NodeType(p, IfCondition) ∨ NodeType(p, LoopCondition) ∨ NodeType(p, SwitchCase))
  - Change the type of a variable: Update(u, v) ∧ NodeType(u, Type) ∧ NodeType(v, Type)

Category: Refactoring
  - Rename a variable/field/method: Update(u, v) ∧ NodeType(u, Name)
  - Replace hardcoded constant values with variables: Delete(u) ∧ NodeType(u, Literal) ∧ Insert(v, p, k) ∧ NodeType(v, Name) ∧ Match(u, v)
  - Inline a field: Delete(u) ∧ NodeType(u, Name) ∧ Insert(v, p, k) ∧ NodeType(v, Literal) ∧ Match(u, v)

Category: Miscellaneous
  - Change access modifiers: Changed(u) ∧ NodeType(u, Modifier) ∧ NodeValue(u, v) ∧ v ∈ {private, public, protected, static}
  - Change a log/print statement: Changed(u) ∧ NodeType(u, MethodInvocation) ∧ NodeValue(u, m) ∧ IsLogMethod(m)
  - Style reformatting (i.e., inserting/deleting curly braces): Changed(u) ∧ NodeType(u, Block) ∧ Parent(p, u) ∧ ¬Changed(p) ∧ (∀v. Child(v, u) → ¬Changed(v))
  - Change Java annotations: Changed(u) ∧ NodeType(u, Annotation)
  - Change code comments: Changed(u) ∧ NodeType(u, Comment)

GumTree edit operations:
  - Insert(u, p, k): inserts a new tree node u as the k-th child of p in the AST of a GitHub snippet.
  - Delete(u): removes the tree node u from the AST of a SO example.
  - Update(u, v): updates the tree node u in a SO example with v in the GitHub counterpart.
  - Move(u, p, k): moves an existing node u in the AST of a SO example to be the k-th child of p in the GitHub counterpart.
  - Changed(u): shorthand for Insert(u, p, k) ∨ Delete(u) ∨ Update(u, v) ∨ Move(u, p, k), which checks for any edit operation on u.

Syntactic predicates:
  - NodeType(u, t): checks if the node type of u is t.
  - NodeValue(u, s): checks if the corresponding source code of node u is s.
  - Match(u, v): checks if u and v are matched based on surrounding nodes, regardless of node types.
  - Parent(u, v): checks if u is the parent of v in the AST.
  - Ancestor(u, v): checks if u is the ancestor of v in the AST.
  - Child(u, v): checks if u is the child of v.

Semantic predicates:
  - Exception(e, P): checks if e is an exception caught in a catch clause or thrown in a method header in program P.
  - LocalCall(m, P): checks if m is a local method call in program P.
  - InstanceCall(m, P): checks if m is an instance call in program P.
  - Def(v, P): checks if variable v is defined in program P.
  - Use(v, P): checks if variable v is used in program P.
  - IsExceptionType(t): checks if t contains “Exception”.
  - IsLogMethod(m): checks if m is one of the predefined log methods, e.g., log, println, error, etc.
  - IsCleanMethod(m): checks if m is one of the predefined resource clean-up methods, e.g., close, recycle, dispose, etc.

TABLE I: Common adaptation types, categorization, and implementation. In the rules, u, v, and p denote AST nodes (p a parent node), k a child index, e an exception, m a method, t a type name, and SO/GH the SO example and its GitHub counterpart.

III Adaptation Type Analysis

III-A Manual Inspection

To get insights into adaptations and variations of SO examples, we randomly sample SO examples and their GitHub counterparts from each dataset and inspect their program differences using GumTree [13]. Below, we use “adaptations” to refer to both adaptations and variations for simplicity.

The first and the last authors jointly labeled these SO examples with adaptation descriptions and grouped the edits with similar descriptions to identify common adaptation types. We initially inspected 90 samples from each dataset and had already observed convergent adaptation types. We continued to inspect more and stopped after inspecting 200 samples from each dataset, since the list of adaptation types was converging. This is a typical procedure in qualitative analysis [27]. The two authors then discussed with the other authors to refine the adaptation types. Finally, we built a taxonomy of 24 adaptation types in 6 high-level categories, as shown in Table I.

Code Hardening. This category includes four adaptation types that strengthen SO examples in a target project. Insert a conditional adds an if statement that checks for corner cases or protects code from invalid input data such as null or an out-of-bound index. Insert a final modifier enforces that a variable is only initialized once and the assigned value or reference is never changed, which is generally recommended for clear design and better performance due to static inlining. Handle a new exception improves the reliability of a code example by handling any missing exceptions, since exception handling is often omitted in examples in SO [7]. Clean up unmanaged resources helps release unneeded resources such as file streams and web sockets to avoid resource leaks [28].
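
The hypothetical before/after pair below illustrates all four hardening types on a small file-reading snippet; it is our own example, not one drawn from the dataset.

```java
import java.io.*;

// Hypothetical SO-style snippet and a hardened GitHub-style counterpart.
public class ReadFirstLine {
    // SO-style original: no null check, generic exception, reader never closed.
    static String original(File f) throws Exception {
        BufferedReader reader = new BufferedReader(new FileReader(f));
        return reader.readLine();
    }

    // GitHub-style hardened version.
    static String hardened(final File f) {            // insert a final modifier
        if (f == null || !f.exists()) {               // add a conditional
            return null;
        }
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(f));
            return reader.readLine();
        } catch (IOException e) {                     // handle a new exception type
            return null;
        } finally {
            if (reader != null) {
                // clean up unmanaged resources
                try { reader.close(); } catch (IOException ignored) { }
            }
        }
    }
}
```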

Resolve Compilation Errors. SO examples are often incomplete, with undefined variables and method calls [29, 24]. Declare an undeclared variable inserts a statement to declare an unknown variable. Specify a target of method invocation resolves an undefined method call by specifying the receiver of that call. In an example about getting CPU usage [30], one comment complains that the example does not compile due to an unknown method call, getOperatingSystemMXBean. Another suggests prefacing the method call with ManagementFactory, which is also evidenced by its GitHub counterpart [31]. Sometimes, statements that use undefined variables and method calls are simply deleted.
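
For reference, the suggested fix amounts to qualifying the static call with the ManagementFactory class, roughly as in this minimal reconstruction of ours:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Illustration of "specify a target of method invocation": calling
// getOperatingSystemMXBean() as a local method does not compile; the GitHub
// counterpart qualifies the call with ManagementFactory.
public class CpuInfo {
    static double systemLoad() {
        // Before (does not compile): getOperatingSystemMXBean().getSystemLoadAverage();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        return os.getSystemLoadAverage();
    }
}
```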

Exception Handling. This category represents changes to the exception handling logic in catch/finally blocks and throws clauses. One common change is to customize the actions in a catch block, e.g., printing a short error message instead of the entire stack trace. Some developers handle exceptions locally rather than throwing them in method headers. For example, while the SO example [32] throws a generic Exception in the addLibraryPath method, its GitHub clone [33] enumerates all possible exceptions such as SecurityException and IllegalArgumentException in a try-catch block. By contrast, propagating exceptions upstream by adding a throws clause to the method header is another way to handle them.
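
The two strategies look roughly as follows on a simplified version of the addLibraryPath example; this is our own sketch, and the real SO snippet and GitHub clone contain more logic.

```java
import java.lang.reflect.Field;

// Sketch of the two exception-handling strategies discussed above.
public class LibraryPaths {
    // SO-style: propagate a generic Exception to the caller.
    static void addLibraryPath(String path) throws Exception {
        Field usrPaths = ClassLoader.class.getDeclaredField("usr_paths");
        usrPaths.setAccessible(true);
        // ... append path to the usr_paths array ...
    }

    // GitHub-style: enumerate and handle the specific exception types locally.
    static void addLibraryPathLocal(String path) {
        try {
            Field usrPaths = ClassLoader.class.getDeclaredField("usr_paths");
            usrPaths.setAccessible(true);
            // ... append path to the usr_paths array ...
        } catch (NoSuchFieldException | SecurityException
                 | IllegalArgumentException e) {
            throw new IllegalStateException("cannot extend library path", e);
        }
    }
}
```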

Logic Customization. Customizing the functionality of a code example to fit a target project is a common and broad category. We categorize logic changes into four basic types. Change a method call includes any edits in a method call, e.g., adding or removing a method call, changing its arguments or receiver, etc. Update a constant value changes a constant value such as the thread sleep time to another value. Change a conditional expression includes any edits on the condition expression of an if statement, a loop, or a switch case.

Update a type name replaces a variable type or a method return type with another type. For example, String and StringBuffer appear in multiple SO examples, and a faster type, StringBuilder, is used in their GitHub clones instead. Such type replacement often involves extra changes, such as updating method calls to fit the replaced type or adding method calls to convert one type to another. For example, instead of returning InetAddress as in a SO example [34], its GitHub clone [35] returns String and thus converts the IP address object to its string format with a Formatter [34, 35].
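
The sketch below shows the general shape of this adaptation on a simplified example: narrowing the return type from InetAddress to String forces an explicit conversion call. For brevity it uses getHostAddress() rather than the Formatter-based conversion in the actual GitHub clone.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of "change the type of a variable": the return type change requires
// an extra conversion call in the GitHub-style version.
public class LocalAddress {
    // SO-style: return the address object itself.
    static InetAddress localAddress() throws UnknownHostException {
        return InetAddress.getLocalHost();
    }

    // GitHub-style: return the textual form, converting explicitly.
    static String localAddressString() throws UnknownHostException {
        return InetAddress.getLocalHost().getHostAddress();
    }
}
```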

Fig. 2: Effect of code size (LOC) and vote score on the number of AST edits in a SO example: (a) distribution of AST edits, (b) code size vs. AST edits, (c) vote score vs. AST edits

Refactoring. 31% of inspected GitHub counterparts use a method or variable name different from the SO example. Instead of slider in a SO example [36], timeSlider is used in one GitHub counterpart [37] and volumnSlider is used in another counterpart [38]. Because SO examples often use hardcoded constant values for illustration purposes, GitHub counterparts may use variables instead of hardcoded constants. However, sometimes, a GitHub counterpart such as [39] does the opposite by inlining the values of two constant fields, BUFFER_SIZE and KB, since these fields do not appear along with the copied method, downloadWithHttpClient [40].

Miscellaneous. Adaptation types in this category do not have a significant impact on the reliability and functionality of a SO example. However, several interesting cases are still worth noting. In 91 inspected examples, GitHub counterparts include comments to explain the reused code. Sometimes, annotations such as @NotNull or @DroidSafe appear in GitHub counterparts to document the constraints of code.

III-B Automated Adaptation Categorization

Based on the manual inspection, we build a rule-based classification technique that automatically categorizes AST edit operations generated by GumTree into different adaptation types. GumTree supports four edit operations: insert, delete, update, and move, described in the GumTree edit operations legend of Table I. Given a set of AST edits, our technique leverages both syntactic and semantic rules to categorize the edits into 24 adaptation types. The rules in Table I describe the implementation logic for categorizing each adaptation type.

Syntactic-based Rules. 16 adaptation types are detected based on syntactic information such as edit operation types and AST node types and values. The syntactic predicates in Table I define such information, which is obtained using the built-in functions provided by GumTree. For example, the rule insert a final modifier checks for an edit operation that inserts a Modifier node whose value is final in a GitHub clone.

Semantic-based Rules. Eight adaptation types require semantic information to be detected (the semantic predicates in Table I). For example, the rule declare an undeclared variable checks for an edit operation that inserts a VariableDeclaration node in the GitHub counterpart, where the variable name is used but not defined in the SO example. Our technique traverses ASTs to gather such semantic information. For example, our AST visitor keeps track of all declared variables when visiting a VariableDeclaration AST node, and all used variables when visiting a Name node.
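
A def/use visitor of this kind could look like the following sketch, written against the Eclipse JDT AST (which GumTree's Java front end builds on). The class name is ours, and the visitor is deliberately coarse: it treats every non-declaring identifier as a potential use.

```java
import java.util.HashSet;
import java.util.Set;
import org.eclipse.jdt.core.dom.ASTVisitor;
import org.eclipse.jdt.core.dom.SimpleName;
import org.eclipse.jdt.core.dom.VariableDeclarationFragment;

// Sketch of def/use collection feeding the semantic predicates: defs supports
// Def(v, P), uses supports Use(v, P); "declare an undeclared variable" then
// checks Use(v, SO) && !Def(v, SO).
public class DefUseVisitor extends ASTVisitor {
    final Set<String> defs = new HashSet<>();
    final Set<String> uses = new HashSet<>();

    @Override
    public boolean visit(VariableDeclarationFragment node) {
        defs.add(node.getName().getIdentifier()); // variable definition
        return true;
    }

    @Override
    public boolean visit(SimpleName node) {
        if (!node.isDeclaration()) {
            uses.add(node.getIdentifier()); // identifier use (coarse)
        }
        return true;
    }
}
```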

III-C Accuracy of Adaptation Categorization

We randomly sampled another 100 SO examples and their GitHub clones to evaluate our automated categorization technique. To reduce bias, the second author who was not involved in the previous manual inspection labeled the adaptation types in this validation set. The ground truth contains 449 manually labeled adaptation types in 100 examples. Overall, our technique infers 440 adaptation types with 98% precision and 96% recall. In 80% of SO examples, our technique infers all adaptation types correctly. In another 20% of SO examples, it infers some but not all expected adaptation types.

Our technique infers incorrect or missing adaptation types for two main reasons. First, our technique only considers 24 common adaptation types in Table I but does not handle infrequent ones such as refactoring using lambda expressions and rewriting ++i to i++. Second, GumTree may generate sub-optimal edit scripts with unnecessary edit operations in about 5% of file pairs, according to [13]. In such cases, our technique may mistakenly report incorrect adaptation types.

Fig. 3: Frequencies of categorized adaptation types in the two datasets: (a) adaptations, from the 629 explicitly attributed SO examples; (b) variations, from the 14,124 potentially reused SO examples

IV Empirical Study

IV-A How many edits are potentially required to adapt a SO example?

We apply the adaptation categorization technique to quantify the extent of adaptations and variations in the two datasets. We measure AST edits between a SO example and its GitHub counterpart. If a SO code example has multiple GitHub counterparts, we use the average number. Overall, 13,595 SO examples (96%) in the variation dataset differ from their counterparts by a median of 39 AST edits (mean 47), and 556 SO examples (88%) in the adaptation dataset by a median of 23 AST edits (mean 33). Figure 2(a) compares the distribution of AST edits in these two datasets. In both datasets, most SO examples have variations from their counterparts, indicating that integrating them into production code may require some type of adaptation.

Figure 2(b) shows the median number of AST edits in SO examples with different lines of code. We perform a non-parametric local regression [41] on the example size and the number of AST edits. As shown by the two lines in Figure 2(b), there is a strong positive correlation between the number of AST edits and the SO example size in both datasets: longer SO examples require more adaptations than shorter ones.

Stack Overflow users can vote on a post to indicate its applicability and usefulness. Therefore, votes are often considered the main quality metric of SO examples [42]. Figure 2(c) shows the median number of AST edits in SO examples with different vote scores. Although the adaptation dataset has significantly higher votes than the variation dataset (Figure 1(c)), there is no strong positive or negative correlation between the number of AST edits and the vote score in either dataset. This implies that highly voted SO examples do not necessarily require fewer adaptations than those with low vote scores.

IV-B What are common adaptation and variation types?

Figure 3 compares the frequencies of the 24 categorized adaptation types (the adaptation types in Table I) for the adaptation and variation datasets. If a SO code example has multiple GitHub counterparts, we only consider the distinct types among all GitHub counterparts to avoid the inflation caused by repetitive variations among different counterparts. The frequency distribution is consistent across most adaptation types between the two datasets, indicating that variation patterns resemble adaptation patterns. Participants in the user study (Section VI) also appreciate being able to see variations in similar GitHub code, since “it highlights the best practices followed by the community and prioritizes the changes that I should make first,” as P5 explained.

Fig. 4: In the lifted template, common unchanged code is retained, while adapted regions are abstracted with hot spots.

In both datasets, the most frequent adaptation type is change a method call in the logic customization category. Other logic customization types also occur frequently. This is because SO examples are often designed for illustration purposes with contrived usage scenarios and input data, and thus require further logic customization. Rename is the second most common adaptation type; it is frequently performed to make variable and method names more readable in the specific context of a GitHub counterpart. 35% and 14% of SO examples in the variation dataset and the adaptation dataset, respectively, include undefined variables or local method calls, leading to compilation errors. The majority of these compilation errors (60% and 61%, respectively) could be resolved by simply removing the statements that use the undefined variables or method calls. For 34% and 22% of SO examples in the two datasets, respectively, GitHub counterparts add new conditionals (e.g., an if check) to handle corner cases or reject invalid input data.

To understand whether the same type of adaptation appears repetitively on the same SO example, we count the number of adaptation types shared by different GitHub counterparts. Multiple clones of the same SO example share at least one adaptation type in 70% of the adaptation dataset and 74% of the variation dataset. In other words, the same type of adaptation recurs among different GitHub counterparts.

V Tool Support and Implementation

Based on the insights from the adaptation analysis, we build a Chrome extension called ExampleStack that visualizes similar GitHub code fragments alongside a SO code example and allows a user to explore variations of the SO example in an adaptation-aware code template.

V-A ExampleStack Tool Features

Suppose Alice is new to Android and she wants to read some json data from the asset folder of her Android application. Alice finds a SO code example [43] that reads geometric data from a specific file, locations.json (① in Figure 4). ExampleStack helps Alice by detecting other similar snippets in real-world Android projects and by visualizing the hot spots where adaptations and variations occur.

Browse GitHub counterparts with differences. Given the SO example, ExampleStack displays five similar GitHub snippets and highlights their variations to the SO example (③ in Figure 4). It also surfaces the GitHub link and reputation metrics of the GitHub repository, including the number of stars, contributors, and watches (④ in Figure 4). By default, it ranks GitHub counterparts by the number of stars.

View hot spots with code options. ExampleStack lifts a code template that retains unchanged code parts while abstracting modified code as hot spots to be filled in (② in Figure 4). The lifted template provides a bird’s-eye view and serves as a navigation model to explore a variety of code options used to customize the code example. In Figure 5, Alice can click on each hot spot and view the code options along with their frequencies in a drop-down menu. Code options are highlighted in six distinct colors according to their underlying adaptation intent (⑦ in Figure 4). For example, the second drop-down menu in Figure 5 indicates that two GitHub snippets replace locations.json with languages.json to read the language asset resources for supporting multiple languages. This variation is categorized as update a constant value in the logic customization category.
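
To make this concrete, below is our own reconstruction of what an asset-reading snippet and its lifted template might look like; the /* HOT SPOT */ comments mark where the template would abstract the code, with the options mentioned above (locations.json, languages.json, or a generalized jsonFileName). The class and method names are illustrative, not taken from the actual SO post.

```java
import android.content.Context;
import java.io.IOException;
import java.io.InputStream;

// Reconstructed asset-reading snippet in the spirit of the example Alice found.
public class AssetReader {
    static String loadJsonFromAsset(Context context
            /* HOT SPOT: some clones add a String jsonFileName parameter */) {
        try {
            InputStream is = context.getAssets()
                .open("locations.json" /* HOT SPOT: "languages.json" | jsonFileName */);
            byte[] buffer = new byte[is.available()];
            is.read(buffer);
            is.close();
            return new String(buffer, "UTF-8");
        } catch (IOException e) {
            return null;  // HOT SPOT: exception handling also varies across clones
        }
    }
}
```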

Fig. 5: Alice can click on a hot spot and view potential code options colored based on their underlying adaptation type.

Fill in hot spots with auto-selection. Instead of hardcoding the asset file name, Alice wants to make her program more general—being able to read asset files with any given file name. Therefore, Alice selects the code option, jsonFileName, in the second drop-down menu in Figure 5, which generalizes the hardcoded file name to a variable. ExampleStack automatically selects another code option, String jsonFileName, in the first drop-down menu in Figure 5, since this code option declares the jsonFileName variable as the method parameter. This auto-selection feature is enabled by def-use analysis, which correlates code options based on the definitions and uses of variables (Section V-B). By automatically relating code options in a template, Alice does not have to manually click through multiple drop-down menus to figure out how to avoid compilation errors. Figure 6 shows the customized template based on the selected jsonFileName option. The list of GitHub counterparts and the frequencies of other code options are also updated accordingly based on user selection. Alice can undo the previous selection (⑤ in Figure 4) or copy the customized template to her clipboard (⑥ in Figure 4).

Fig. 6: ExampleStack automatically updates the code template based on user selection.

V-B Template Construction

Diff generating and pruning. To lift an adaptation-aware code template of a SO code example, ExampleStack first computes the AST differences between the SO example and each GitHub clone using GumTree. ExampleStack prunes the edit operations by filtering out inner operations that modify the children of other modified nodes. For example, if an insert operation inserts an AST node whose parent is also inserted by another insert, the first inner insert will be removed, since its edit is entailed by the second outer insert. Given the resulting tree edits, ExampleStack keeps track of the change regions in the SO example and how each region is changed.
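
The pruning step can be sketched as follows, with Node and Edit as simplified stand-ins for GumTree's tree and edit-action types:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of edit pruning: an operation on a node is dropped when the node of
// another operation is an ancestor of it, since the outer edit entails it.
public class EditPruner {
    static class Node {
        Node parent;
        boolean isAncestorOf(Node other) {
            for (Node n = other.parent; n != null; n = n.parent)
                if (n == this) return true;
            return false;
        }
    }

    static class Edit {
        final Node target;  // node inserted, deleted, updated, or moved
        Edit(Node target) { this.target = target; }
    }

    static List<Edit> prune(List<Edit> edits) {
        List<Edit> kept = new ArrayList<>();
        for (Edit e : edits) {
            boolean entailed = false;
            for (Edit other : edits) {
                if (other != e && other.target.isAncestorOf(e.target)) {
                    entailed = true;  // inner edit shadowed by an outer edit
                    break;
                }
            }
            if (!entailed) kept.add(e);
        }
        return kept;
    }
}
```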

Diff grouping. ExampleStack groups change regions to decide where to place hot spots in a SO example and what code options to display in each hot spot. If two change regions are the same, they are grouped together. If two change regions overlap, ExampleStack merges the overlapping change locations into a bigger region enclosing both and groups them together. For example, consider a diff that changes a=b to a=b+c (change region: b) and another diff that completely changes a=b to o.foo() (change region: a=b). Simply abstracting the changed code in these two diffs without any alignment would overlay two hot spots in the template, and the smaller diff would be shadowed by the bigger diff in the visualization. ExampleStack avoids this conflict by re-calibrating the first change region from b to a=b, so both diffs share a single hot spot.

Option generating and highlighting. For each group of change regions, ExampleStack replaces the corresponding location in the SO example with a hot spot and attaches a drop-down menu. ExampleStack displays both the original content of the SO example and the contents of the matched GitHub snippet regions as options in each drop-down menu. It then uses the adaptation categorization technique to detect the underlying adaptation types of the code options. We use six distinct background colors to indicate the categories in Table I, which makes it easier for developers to recognize different intents. The color scheme is generated using ColorBrewer [44] to ensure clear visual differences between categories in the template.

ExampleStack successfully lifts code templates for all 14,124 SO examples. On average, a lifted template has 81 lines of code (median 41) with 13 hot spots (median 12) to fill in. On average, 4 code options (median 2) are displayed in the drop-down menu of each hot spot.

Task I: Calculate the geographic distance between two GPS coordinates [45] (12 LOC, 2 clones)
  Control: P5-A refactor(5), logic(1), 458s | P7-A refactor(1), logic(2), misc(1), 900s | P12-B refactor(2), harden(1), 900s | P16-B refactor(7), 727s
  Experiment: P2-A harden(1), logic(1), misc(2), 870s | P3-B refactor(6), logic(4), misc(3), 900s | P10-B refactor(5), logic(2), misc(1), 366s | P15-A refactor(10), logic(14), misc(3), 842s

Task II: Get the relative path between two files [46] (74 LOC, 2 clones)
  Control: P3-A refactor(5), logic(1), exception(2), misc(3), 900s | P8-A harden(1), 900s | P11-B none, 621s | P15-B refactor(13), harden(1), logic(5), exception(1), misc(1), 863s
  Experiment: P1-B refactor(3), harden(1), logic(2), 640s | P6-A harden(4), logic(3), 900s | P9-A harden(4), logic(2), 900s | P13-B refactor(3), logic(2), exception(1), misc(1), 900s

Task III: Encode a byte array to a hexadecimal string [47] (12 LOC, 17 clones)
  Control: P1-A refactor(5), harden(1), 652s | P6-B refactor(1), misc(1), 900s | P9-B harden(1), logic(1), 635s | P13-A refactor(3), misc(1), 900s
  Experiment: P4-A refactor(5), harden(1), misc(1), 667s | P8-B refactor(2), harden(1), misc(2), 548s | P12-A refactor(3), harden(2), misc(1), 748s | P14-B refactor(3), harden(1), misc(1), 700s

Task IV: Add animation to an Android view [48] (29 LOC, 4 clones)
  Control: P2-B refactor(3), logic(1), 441s | P4-B refactor(1), compile(1), misc(1), 900s | P10-A refactor(3), logic(5), 900s | P14-A refactor(2), logic(4), 862s
  Experiment: P5-B refactor(1), logic(3), 478s | P7-B refactor(2), compile(3), logic(3), 887s | P11-A refactor(1), logic(3), 617s | P16-A refactor(6), logic(4), misc(1), 773s

TABLE II: Code reuse tasks and user study results. Each entry gives the participant assignment (ID and task order), the adaptation types and counts made, and the task completion time in seconds.

VI User Study

We conducted a within-subjects user study with sixteen Java programmers to evaluate the usefulness of ExampleStack. We emailed students in a graduate-level Software Engineering class and research labs in the CS department at UCLA. We ran a background survey and excluded volunteers with no Java experience, since our study tasks required reading code examples in Java. Fourteen participants were graduate students and two were undergraduate students. Eleven participants had two to five years of Java experience, while the other five were novice programmers with one year of Java experience, giving a good mix of Java programming experience levels.

In each study session, we first gave a fifteen-minute tutorial on our tool. Participants then did two code reuse tasks, one with and one without ExampleStack. When not using our tool (i.e., the control condition), participants were allowed to search online for other code examples, which is common in real-world programming workflows [3]. To mitigate learning effects, the order of assigned conditions and tasks was counterbalanced across participants through random assignment. In each task, we asked participants to mark which parts of a SO code example they would like to change and to explain how they would change them. We did not require participants to fully integrate a code example into a target program or make it compile, since our goal was to investigate whether ExampleStack could inspire developers with new adaptations that they might otherwise ignore, rather than to automate code integration. Each task was stopped after fifteen minutes. At the end, we conducted a post survey to solicit feedback.

Table II describes the four code reuse tasks and the user study results. The Assignment entries in each condition show the participant ID and the task order; “P5-A” means the task was done by the fifth participant as her first task. The Adaptation entries show the number of adaptations of each type that each participant made. Overall, participants using ExampleStack made three times as many code hardening adaptations (15 vs. 5) and about twice as many logic customization adaptations (43 vs. 20), considering more edge cases and different usage scenarios. For instance, in Task III, all users in the experimental group added a null check for the input byte array after seeing other GitHub examples, while only one user in the control group did so. P14 wrote, “I would have completely forgotten about the null check without seeing it in a couple of examples.” On average, participants using ExampleStack made more adaptations (8.0 vs. 5.5) in more diverse categories (2.8 vs. 2.2). Wilcoxon signed-rank tests indicate that the differences in adaptation numbers and categories are both statistically significant (p=0.042 and p=0.009). We do not argue that making more adaptations is always better. Instead, we want to emphasize that, by seeing commonalities and variations in similar GitHub code, participants focus more on code safety and logic customization, instead of making only shallow adaptations such as variable renaming. The average task completion time is 725 seconds (SD=186) with ExampleStack and 770 seconds (SD=185) without. We do not claim that ExampleStack saves code reuse time, since it is designed as an informative tool for developers browsing online code examples, rather than as direct code integration support in an IDE. Figure 7 shows the code templates generated by ExampleStack, not including the one in Task II due to its length (79 lines).

How do you like or dislike viewing similar GitHub code alongside a SO example? In the post survey, all participants found it very useful to see similar GitHub code, for three main reasons. First, viewing the commonality among similar code examples helped users quickly understand the essence of a code example. P6 described this as “the fast path to reach consensus on a particular operation.” Second, the GitHub variants reminded users of some points they might otherwise miss. Third, participants felt more confident about a SO example after seeing how similar code was used in GitHub repositories. P9 stated that it is “reassuring to know that the same code is used in production systems and to know the common pitfalls.”

How do you like or dislike interacting with a code template? Participants liked the code template, since it showed the essence of a code example and made it easier to see subtle changes, especially in lengthy code examples. Participants also found displaying the frequency count of different adaptations very useful. P5 explained, “it highlights the best practices followed by the community and also prioritizes the changes that I should make first.” However, we also observed that, when there were only a few GitHub counterparts, some participants inspected individual GitHub counterparts directly rather than interacting with the code template.

How do you like or dislike color-coding different adaptation types? Though the majority of participants confirmed the usefulness of this feature, six participants felt confused or distracted by the color scheme, since it was difficult to remember the colors during navigation. Three of them considered some adaptations (e.g., renaming) trivial and suggested allowing users to hide adaptations of no interest to avoid distraction.

When would you use ExampleStack? Six participants would like to use ExampleStack when learning APIs, since it provides multiple GitHub code fragments that use the same API in different contexts with critical safety checks and exception handling. Five participants mentioned that ExampleStack would be most useful for lengthy examples. P4 wrote, “the tool is very useful when the code is longer and hard to spot what to change at a glance.” Two participants wanted to use ExampleStack to identify missing points and assess different solutions when writing large-scale, robust projects.

In addition, P15 and P16 suggested displaying similar code based on semantic rather than purely syntactic similarity, in order to find alternative implementations and potential optimization opportunities. P13 suggested adding an indicator of whether a SO example is compilable.

Fig. 7: ExampleStack code template examples: (a) compute distance between two coordinates [45], (b) encode byte array to a hex string [47], (c) add animation to an Android view [48]

VII Threats to Validity

In terms of internal validity, our variation dataset may include coincidental clones, since GitHub developers may independently write code with functionality similar to a SO example. To mitigate this issue, we compare timestamps and remove GitHub clones created before the corresponding SO examples. We further create an adaptation dataset with explicitly attributed SO examples and compare the analysis results of both datasets for cross-validation. Figure 3 shows that the distribution of common adaptation patterns is similar between these two datasets. Even for clones coming from independent but similar implementations, it would still be valuable to guide code adaptation by identifying the commonalities and variations between similar code.

In terms of external validity, when identifying common adaptation types, we follow the standard qualitative analysis procedure [27] and continuously inspect more samples until the insights converge. However, we may still miss some adaptation types due to the limited sample size. To mitigate this issue, the second author, who was not involved in the initial manual inspection, manually labeled 100 more samples to validate the adaptation taxonomy (Section III-C). In addition, user study participants may not be representative of real Stack Overflow users. To mitigate this issue, we recruited both novice and experienced developers who use Stack Overflow on a regular basis. Further studies with professional developers are needed to generalize our findings to industrial settings.

In terms of construct validity, in the user study, we only measure whether ExampleStack inspires participants to identify and describe adaptation opportunities. We do not ask participants to fully integrate a SO example to a target program nor make it compile. Therefore, our finding does not imply time reduction in code integration.

VIII Related Work

Quality assessment of SO examples. Our work is inspired by previous studies that find SO examples are incomplete and inadequate [29, 23, 24, 9, 8, 12, 7]. Subramanian and Holmes find that the majority of SO snippets are free standing statements with no class or method headers [23]. Zhou et al. find that 86 of 200 accepted SO posts use deprecated APIs but only 3 of them are reported by other programmers [9]. Fischer et al. find that 29% of security-related code in SO is insecure and could potentially be copied to one million Android apps [8]. Zhang et al. contrast SO examples with API usage patterns mined from GitHub and detect potential API misuse in 31% of SO posts [7]. These findings motivate our investigation of adaptations and variations of SO examples.

Stack Overflow usage and attribution. Our work is motivated by the finding that developers often resort to online Q&A forums such as Stack Overflow [4, 3, 6, 5]. Despite the wide usage of SO, most developers are neither aware of the SO licensing terms nor attribute the code they reuse from SO [12, 5, 6]. Only 1.8% of GitHub repositories containing code from SO follow the licensing policy properly [5]. Almost half of developers admit copying code from SO without attribution, and two thirds are not aware of the SO licensing implications. Based on these findings, we carefully construct a comprehensive dataset of reused code, including both explicitly attributed SO examples and potentially reused ones, using clone detection, timestamp analysis, and URL references. Origin analysis can also be applied to match SO snippets with GitHub files [49, 50, 51, 52].

SO snippet retrieval and code integration. Previous support for reusing code from SO mostly focuses on helping developers locate relevant posts or snippets from the IDE [53, 54, 55, 15]. For example, Prompter retrieves related SO discussions based on the program context in Eclipse. SnipMatch supports light-weight code integration by renaming variables in a SO snippet based on corresponding variables in a target program [15]. Code correspondence techniques [56, 14] match code elements (e.g., variables, methods) to decide which code to copy, rename, or delete during copying and pasting. Our work differs by focusing on analysis of common adaptations and variations of SO examples.

Change types and taxonomy. There is a large body of literature on source code changes during software evolution [57, 58, 59]. Fluri et al. present a fine-grained taxonomy of code changes, such as changing the return type and renaming a field, based on differences in abstract syntax trees [18]. Kim et al. analyze changes on “micro patterns” [60] in Java using software evolution data [17]. These studies investigate general change types in software evolution, while we quantify common adaptation and variation types using SO and GitHub code.

Program differencing and change template. Diff tools compute program differences between two programs [61, 62, 63, 64, 13]. However, they do not support analysis of one example with respect to multiple counterparts simultaneously. Lin et al. align multiple programs and visualize their variations [65]. However, they do not lift a code template to summarize the commonalities and variations between similar code. Several techniques construct code templates for the purpose of code search [66] or code transformation [67]. Glassman et al. design an interactive visualization called Examplore to help developers comprehend hundreds of similar but different API usages in GitHub [68]. Given an API method of interest, Examplore instantiates a pre-defined API usage skeleton and fills in details such as various guard conditions and succeeding API calls. ExampleStack is not limited to API usage and does not require a pre-defined skeleton.

IX Conclusion

This paper provides a comprehensive analysis of common adaptation and variation patterns of online code examples by both overapproximating and underapproximating reused code from Stack Overflow to GitHub. Our quantitative analysis shows that the same type of adaptations and variations appears repetitively among different GitHub clones of the same SO example, and variation patterns resemble adaptation patterns. This implies that different GitHub developers may apply similar adaptations to the same example over and over again independently. This further motivates the design of ExampleStack, a Chrome extension that guides developers in adapting online code examples by unveiling the commonalities and variations of similar past adaptations. A user study with sixteen developers demonstrates that ExampleStack helps developers focus more on code safety and logic customization during code reuse, resulting in more complete and robust code.

Currently, ExampleStack only visualizes potential adaptations of a SO example within a web browser. As future work, we plan to build an Eclipse plugin that enables semi-automated integration of online code examples. It would be worthwhile to investigate how such a tool fits developer workflow and to compare it with other code integration techniques [14, 15].

Acknowledgment

Thanks to anonymous participants for the user study and anonymous reviewers for their valuable feedback. This work is supported by NSF grants CCF-1764077, CCF-1527923, CCF-1460325, CCF-1723773, ONR grant N00014-18-1-2037, Intel CAPA grant, and DARPA MUSE program.

References

  • [1] M. Umarji, S. E. Sim, and C. Lopes, “Archetypal internet-scale source code searching,” in IFIP International Conference on Open Source Systems.   Springer, 2008, pp. 257–263.
  • [2] R. E. Gallardo-Valencia and S. Elliott Sim, “Internet-scale code search,” in Proceedings of the 2009 ICSE Workshop on Search-Driven Development-Users, Infrastructure, Tools and Evaluation.   IEEE Computer Society, 2009, pp. 49–52.
  • [3] J. Brandt, P. J. Guo, J. Lewenstein, M. Dontcheva, and S. R. Klemmer, “Two studies of opportunistic programming: interleaving web foraging, learning, and writing code,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.   ACM, 2009, pp. 1589–1598.
  • [4] C. Sadowski, K. T. Stolee, and S. Elbaum, “How developers search for code: a case study,” in Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering.   ACM, 2015, pp. 191–201.
  • [5] S. Baltes and S. Diehl, “Usage and attribution of stack overflow code snippets in github projects,” arXiv preprint arXiv:1802.02938, 2018.
  • [6] Y. Wu, S. Wang, C.-P. Bezemer, and K. Inoue, “How do developers utilize source code from stack overflow?” Empirical Software Engineering, pp. 1–37, 2018.
  • [7] T. Zhang, G. Upadhyaya, A. Reinhardt, H. Rajan, and M. Kim, “Are code examples on an online q&a forum reliable?: a study of api misuse on stack overflow,” in Proceedings of the 40th International Conference on Software Engineering.   ACM, 2018, pp. 886–896.
  • [8] F. Fischer, K. Böttinger, H. Xiao, C. Stransky, Y. Acar, M. Backes, and S. Fahl, “Stack overflow considered harmful? the impact of copy&paste on android application security,” in Security and Privacy (SP), 2017 IEEE Symposium on.   IEEE, 2017, pp. 121–136.
  • [9] J. Zhou and R. J. Walker, “Api deprecation: a retrospective analysis and detection method for code examples on the web,” in Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering.   ACM, 2016, pp. 266–277.
  • [10] C. Treude and M. P. Robillard, “Understanding stack overflow code fragments,” in Proceedings of the 33rd International Conference on Software Maintenance and Evolution.   IEEE, 2017.
  • [11] H. Sajnani, V. Saini, J. Svajlenko, C. K. Roy, and C. V. Lopes, “Sourcerercc: Scaling code clone detection to big-code,” in Software Engineering (ICSE), 2016 IEEE/ACM 38th International Conference on.   IEEE, 2016, pp. 1157–1168.
  • [12] L. An, O. Mlouki, F. Khomh, and G. Antoniol, “Stack overflow: a code laundering platform?” in Software Analysis, Evolution and Reengineering (SANER), 2017 IEEE 24th International Conference on.   IEEE, 2017, pp. 283–293.
  • [13] J.-R. Falleri, F. Morandat, X. Blanc, M. Martinez, and M. Monperrus, “Fine-grained and accurate source code differencing,” in Proceedings of the 29th ACM/IEEE international conference on Automated software engineering.   ACM, 2014, pp. 313–324.
  • [14] R. Cottrell, R. J. Walker, and J. Denzinger, “Semi-automating small-scale source code reuse via structural correspondence,” in Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering.   ACM, 2008, pp. 214–225.
  • [15] D. Wightman, Z. Ye, J. Brandt, and R. Vertegaal, “Snipmatch: Using source code context to enhance snippet retrieval and parameterization,” in Proceedings of the 25th annual ACM symposium on User interface software and technology.   ACM, 2012, pp. 219–228.
  • [16] M. Fowler, Refactoring: Improving the Design of Existing Code.   Addison-Wesley Professional, 2000.
• [17] S. Kim, K. Pan, and E. J. Whitehead Jr, “Micro pattern evolution,” in Proceedings of the 2006 International Workshop on Mining Software Repositories.   ACM, 2006, pp. 40–46.
• [18] B. Fluri and H. C. Gall, “Classifying change types for qualifying change couplings,” in Proceedings of the 14th IEEE International Conference on Program Comprehension (ICPC).   IEEE Computer Society, 2006, pp. 35–45.
• [19] E. Kalliamvakou, G. Gousios, K. Blincoe, L. Singer, D. M. German, and D. Damian, “The promises and perils of mining GitHub,” in Proceedings of the 11th Working Conference on Mining Software Repositories.   ACM, 2014, pp. 92–101.
• [20] C. V. Lopes, P. Maj, P. Martins, V. Saini, D. Yang, J. Zitny, H. Sajnani, and J. Vitek, “DéjàVu: a map of code duplicates on GitHub,” Proceedings of the ACM on Programming Languages, vol. 1, p. 28, 2017.
• [21] G. Gousios and D. Spinellis, “GHTorrent: GitHub’s data from a firehose,” in 2012 9th IEEE Working Conference on Mining Software Repositories (MSR).   IEEE, 2012, pp. 12–21.
  • [22] Stack Overflow data dump, 2016, https://archive.org/details/stackexchange, accessed on Oct 17, 2016.
  • [23] S. Subramanian and R. Holmes, “Making sense of online code snippets,” in Proceedings of the 10th Working Conference on Mining Software Repositories.   IEEE Press, 2013, pp. 85–88.
• [24] D. Yang, A. Hussain, and C. V. Lopes, “From query to usable code: an analysis of Stack Overflow code snippets,” in Proceedings of the 13th International Workshop on Mining Software Repositories.   ACM, 2016, pp. 391–402.
• [25] S. Subramanian, L. Inozemtseva, and R. Holmes, “Live API documentation,” in Proceedings of the 36th International Conference on Software Engineering.   ACM, 2014, pp. 643–652.
• [26] D. Yang, P. Martins, V. Saini, and C. Lopes, “Stack Overflow in GitHub: any snippets there?” in 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR).   IEEE, 2017, pp. 280–290.
• [27] B. L. Berg and H. Lune, Qualitative Research Methods for the Social Sciences.   Pearson Boston, MA, 2004, vol. 5.
  • [28] E. Torlak and S. Chandra, “Effective interprocedural resource leak detection,” in Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering-Volume 1.   ACM, 2010, pp. 535–544.
• [29] B. Dagenais and M. P. Robillard, “Recovering traceability links between an API and its learning resources,” in 2012 34th International Conference on Software Engineering (ICSE).   IEEE, 2012, pp. 47–57.
  • [30] Get OS-level system information, 2008, https://stackoverflow.com/questions/61727.
  • [31] A GitHub clone that gets the CPU usage, 2014, https://github.com/jomis/nomads/blob/master/nomads-framework/src/main/java/at/ac/tuwien/dsg/utilities/PerformanceMonitor.java#L44-L63.
  • [32] Adding new paths for native libraries at runtime in Java, 2013, https://stackoverflow.com/questions/15409446.
  • [33] A GitHub clone that adds new paths for native libraries at runtime in Java, 2014, https://github.com/armint/firesight-java/blob/master/src/main/java/org/firepick/firesight/utils/SharedLibLoader.java#L131-L153.
  • [34] How to get IP address of the device from code?, 2012, https://stackoverflow.com/questions/7899226.
• [35] A GitHub clone showing how to get the IP address of an Android device, 2014, https://github.com/kalpeshp0310/GoogleNews/blob/master/app/src/main/java/com/kalpesh/googlenews/utils/Utils.java#L24-L39.
  • [36] JSlider question: Position after leftclick, 2009, https://stackoverflow.com/questions/518672.
• [37] A GitHub clone about JSlider, 2014, https://github.com/changkon/Pluripartite/tree/master/src/se206/a03/MediaPanel.java#L329-L339.
• [38] Another GitHub clone about JSlider, 2014, https://github.com/changkon/Pluripartite/tree/master/src/se206/a03/MediaPanel.java#L343-L353.
  • [39] A GitHub clone that downloads videos from YouTube, 2014, https://github.com/instance01/YoutubeDownloaderScript/blob/master/IYoutubeDownloader.java#L148-L193.
• [40] Youtube data API: Get access to media stream and play (JAVA), 2011, https://stackoverflow.com/questions/4834369.
  • [41] W. M. Shyu, E. Grosse, and W. S. Cleveland, “Local regression models,” in Statistical models in S.   Routledge, 2017, pp. 309–376.
• [42] S. M. Nasehi, J. Sillito, F. Maurer, and C. Burns, “What makes a good code example? A study of programming Q&A in Stack Overflow,” in 2012 28th IEEE International Conference on Software Maintenance (ICSM).   IEEE, 2012, pp. 25–34.
  • [43] Add Lat and Long to ArrayList, 2016, https://stackoverflow.com/questions/37273871.
  • [44] ColorBrewer: Color Advice for Maps, 2018, http://colorbrewer2.org.
  • [45] Calculate distance in meters when you know longitude and latitude in java, 2017, https://stackoverflow.com/questions/837957.
  • [46] Construct a relative path in Java from two absolute paths, 2015, https://stackoverflow.com/questions/3054692.
  • [47] How to use SHA-256 with Android, 2014, https://stackoverflow.com/questions/25803281.
  • [48] How can I add animations to existing UI components?, 2015, https://stackoverflow.com/questions/33464536.
• [49] Q. Tu and M. W. Godfrey, “An integrated approach for studying architectural evolution,” in Proceedings of the 10th International Workshop on Program Comprehension.   IEEE, 2002, pp. 127–136.
• [50] M. Godfrey and Q. Tu, “Tracking structural evolution using origin analysis,” in Proceedings of the International Workshop on Principles of Software Evolution.   ACM, 2002, pp. 117–119.
• [51] L. Zou and M. W. Godfrey, “Detecting merging and splitting using origin analysis,” in Proceedings of the 10th Working Conference on Reverse Engineering (WCRE).   IEEE, 2003, p. 146.
  • [52] M. W. Godfrey and L. Zou, “Using origin analysis to detect merging and splitting of source code entities,” IEEE Transactions on Software Engineering, vol. 31, no. 2, pp. 166–181, 2005.
• [53] A. Bacchelli, L. Ponzanelli, and M. Lanza, “Harnessing Stack Overflow for the IDE,” in Proceedings of the Third International Workshop on Recommendation Systems for Software Engineering.   IEEE Press, 2012, pp. 26–30.
• [54] L. Ponzanelli, A. Bacchelli, and M. Lanza, “Seahawk: Stack Overflow in the IDE,” in Proceedings of the 2013 International Conference on Software Engineering.   IEEE Press, 2013, pp. 1295–1298.
• [55] L. Ponzanelli, G. Bavota, M. Di Penta, R. Oliveto, and M. Lanza, “Mining Stack Overflow to turn the IDE into a self-confident programming prompter,” in Proceedings of the 11th Working Conference on Mining Software Repositories.   ACM, 2014, pp. 102–111.
• [56] R. Holmes and R. J. Walker, “Supporting the investigation and planning of pragmatic reuse tasks,” in Proceedings of the 29th International Conference on Software Engineering.   IEEE Computer Society, 2007, pp. 447–457.
• [57] S. Kim, K. Pan, and E. J. Whitehead Jr, “Memories of bug fixes,” in Proceedings of the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT ’06/FSE-14).   ACM, 2006, pp. 35–45. [Online]. Available: http://doi.acm.org/10.1145/1181775.1181781
• [58] M. Fischer, J. Oberleitner, J. Ratzinger, and H. Gall, “Mining evolution data of a product family,” in Proceedings of the 2005 International Workshop on Mining Software Repositories (MSR).   ACM, 2005, pp. 1–5.
• [59] D. Dig and R. Johnson, “How do APIs evolve? A story of refactoring,” Journal of Software Maintenance and Evolution: Research and Practice, vol. 18, no. 2, pp. 83–107, 2006.
• [60] J. Y. Gil and I. Maman, “Micro patterns in Java code,” ACM SIGPLAN Notices, vol. 40, no. 10, pp. 97–116, 2005.
  • [61] W. Miller and E. W. Myers, “A file comparison program,” Software: Practice and Experience, vol. 15, no. 11, pp. 1025–1040, 1985.
• [62] W. Yang, “Identifying syntactic differences between two programs,” Software: Practice and Experience, vol. 21, no. 7, pp. 739–755, 1991. [Online]. Available: citeseer.ist.psu.edu/yang91identifying.html
• [63] S. S. Chawathe, A. Rajaraman, H. Garcia-Molina, and J. Widom, “Change detection in hierarchically structured information,” in Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data (SIGMOD ’96).   ACM, 1996, pp. 493–504.
• [64] B. Fluri, M. Wuersch, M. Pinzger, and H. Gall, “Change distilling: tree differencing for fine-grained source code change extraction,” IEEE Transactions on Software Engineering, vol. 33, no. 11, 2007.
• [65] Y. Lin, Z. Xing, Y. Xue, Y. Liu, X. Peng, J. Sun, and W. Zhao, “Detecting differences across multiple instances of code clones,” in Proceedings of the 36th International Conference on Software Engineering (ICSE), 2014, pp. 164–174.
  • [66] T. Zhang, M. Song, J. Pinedo, and M. Kim, “Interactive code review for systematic changes,” in Proceedings of the 37th International Conference on Software Engineering-Volume 1.   IEEE Press, 2015, pp. 111–122.
• [67] N. Meng, M. Kim, and K. S. McKinley, “LASE: locating and applying systematic edits by learning from examples,” in Proceedings of the 2013 International Conference on Software Engineering.   IEEE Press, 2013, pp. 502–511.
• [68] E. L. Glassman, T. Zhang, B. Hartmann, and M. Kim, “Visualizing API usage examples at scale,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.   ACM, 2018, p. 580.