Find, Understand, and Extend Development Screencasts on YouTube

07/27/2017 ∙ by Mathias Ellmann, et al. ∙ University of Hamburg

A software development screencast is a video that captures the screen of a developer working on a particular task while explaining its implementation details. Due to the increased popularity of software development screencasts (e.g., available on YouTube), we study how and to what extent they can be used as an additional source of knowledge to answer developers' questions about, for example, the use of a specific API. We first differentiate between development and other types of screencasts using video frame analysis. By using the Cosine algorithm, developers can expect ten development screencasts in the top 20 out of 100 different YouTube videos. We then extracted popular development topics on which screencasts on YouTube report: database operations, system set-up, plug-in development, game development, and testing. Besides, we found six recurring tasks performed in development screencasts, such as object usage and UI operations. Finally, we conducted a similarity analysis by considering only the spoken words (i.e., the screencast transcripts but not the text that might appear in a scene) to link API documents, such as the Javadoc, to the appropriate screencasts. By using Cosine similarity, we identified 38 relevant documents in the top 20 out of 9,455 API documents.




1. Introduction

Software development is a knowledge-intensive work (Maalej:TOSEM:2014, ; Sillito:TSE:2008, ; Ko:TSE:2006, ; Fritz:ICSE:2010, ) in which developers spend a substantial amount of their time looking for information (Ko:TSE:2006, )—e.g., how to fix a bug or how to use an API. They access and share knowledge through various media and sources, including API documentation (Maalej:TSE:2013, ), Q&A sites, wikis, or tutorials (treude2016augmenting, ; WhatIsSc21:online, ; Maalej:TOSEM:2014, ). Regardless of how rich or popular a single knowledge source might be, it barely satisfies all the information needs of a specific developer within a certain context (Maalej:ASE:2009, ; Fritz:ICSE:2010, ; Maalej:TOSEM:2014, ).

Nowadays, there is a growing trend to use videos instead of text to capture and share knowledge (lethbridge2003software, ). Video content, from movies to webinars and screencasts, accounts for more than half of the internet traffic. Software developers are concerned with this trend as they are using more and more video resources in their job (macleod2015code, ; tiarks2014does, ). In particular, development screencasts are getting popular among technical bloggers and on general-purpose video-sharing platforms such as YouTube.

A screencast is a “digital movie in which the setting is partly or wholly a computer screen, and in which audio narration describes the on-screen action” (WhatIsSc21:online, ). In particular, a development screencast is created by a developer to describe and visualize a certain development task (macleod2015code, ). Screencasts are more comprehensive than plain text since they capture, in the form of video and audio, human interaction(laptev2008learning, )—e.g., following the instruction of a developer.

YouTube does not yet offer the possibility to explicitly search for a development screencast that explains how to accomplish a specific development task (Maalej:JSS:2016, ; treude2016augmenting, ) in a certain development context (Maalej:CSD:2015, ; Maalej:JSS:2016, ).

Moreover, there is a lack of understanding about the different types of videos—i.e., development screencasts (macleod2015code, ) cannot yet be automatically distinguished from other types of videos.

In a development screencast, a software developer performs a task which can be assigned to a topic and to a specific context (Maalej:CSD:2015, ; macleod2015code, ), such as an IDE, a web browser, or a virtual machine. There are recurring tasks performed in several development screencasts and in different development contexts that require the consultation of API documents (treude2015tasknav, ; Maalej:TSE:2013, ). The text transcript of the screencast audio contains searchable and indexable technical terms that can refer to other artifacts, such as an API or a tool. For example, the screencast presented in Figure 1 can be extended with an API document—as shown in Figure 2—since it contains references to classes, methods, and other units.

Based on the mentioned observations, we will tackle the following research questions in this paper:

  1. RQ1: Is it possible to reliably distinguish a development screencast from other video types based on a frame analysis?

  2. RQ2: Which development tasks are performed in software development screencasts?

  3. RQ3: Can a development screencast be extended with relevant API reference documents by considering only the spoken words?

In particular, we first evaluate different algorithms (Jaccard, Cosine, and LSI) and their performance in identifying development screencasts by simply considering video frames. In Section 2, we use the visualization techniques introduced by Sievert et al. (sievert2014ldavis, ) and Chuang et al. (chuang2012termite, ) to identify the software development topics and the recurring development tasks present in development screencasts. In Section 3, we analyze the similarity between a task performed in the screencast and the relevant API documents using the TaskNav tool (treude2015tasknav, ). Section 5 discusses the results, while Section 6 concludes the paper and describes future work.

Figure 1. Example of a development screencast on YouTube. It contains a video (screencast), a title describing the software development task, and a transcript.

A development screencast is a special type of video which cannot be directly searched on YouTube due to the lack of a pre-defined category. Nonetheless, a development screencast is characterized by a small number of scenes, a typical length, and the specific actions (e.g., inputting text) performed by a developer (macleod2015code, ). Moreover, in their screencasts, developers use several tools (e.g., an IDE or code editor, a terminal, a browser) to perform a development task. In this section, we present how we used the information available in the video frames to distinguish a development screencast from other types of videos.

We sampled a set of frames (i.e., rectangular rasters of pixels) from different videos and compared their stability and variation. We define the similarity between two frames as sim(f_i, f_{i+1}), where frame f_{i+1} is the direct successor of frame f_i and 1 ≤ i < n. The frame similarity of a video is calculated as S = 1/(n-1) · Σ_{i=1}^{n-1} sim(f_i, f_{i+1}), with n as the number of analyzed frames. For each video, we sampled a frame every 10 seconds.
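The sampling-and-averaging scheme can be sketched as follows. This is an illustrative reconstruction, not our actual implementation: the tiny four-bin lists stand in for the real per-frame color information, and the frames are assumed to be sampled 10 seconds apart.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length frame vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def video_similarity(frames):
    # S = 1/(n-1) * sum of sim(f_i, f_{i+1}) over successive frame pairs.
    pairs = list(zip(frames, frames[1:]))
    return sum(cosine(f, g) for f, g in pairs) / len(pairs)

# Toy example: "frames" sampled every 10 seconds, reduced to tiny
# color histograms (purely illustrative values).
frames = [
    [10, 2, 0, 5],  # t = 0s
    [10, 2, 1, 5],  # t = 10s: nearly identical (static screencast)
    [9, 3, 1, 5],   # t = 20s: small change (e.g., cursor movement)
]
print(round(video_similarity(frames), 3))
```

For a static development screencast, successive frames barely change, so the average similarity stays close to 1.0.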

Figure 2. Example of an API reference document. It contains class and method definitions as well as the descriptions of it.

We randomly selected 100 YouTube videos, each associated with one of the following types:

  • Development screencast (n=20): videos showing software development activities in several programming languages, such as PHP, Python, Java, SQL, and C#. Different tools (e.g., an IDE, code editor, or simply a web browser) are used to perform a task.

  • Non-development screencasts (n=20): videos showing the desktop of a user solving problems unrelated to software development, including mathematical problems, game tutorials, or software utilization.

  • Non-development, non-screencast (n=20): videos showing how to perform a task not related to software development (e.g., learning Spanish, or how to change a phone screen) in which a computer screen is not recorded.

  • Others (n=40): videos in none of the above categories (e.g., a music video). This set contains 40 videos because most of them had a short length (2-3 minutes).

The sample contains approximately 2,000 frames for each video type. Every frame contains a particular amount of color information that changes across the different scenes of a development screencast, for example, when switching between an IDE, a web browser, or a terminal.

Figure 3. Frame similarity of development screencasts compared to other video types (using cosine similarity values).

The similarity between two frames was calculated using the Jaccard coefficient, Cosine similarity, and LSI. The color information of each frame is treated as a bag of words (wang2008spatial, ). The Jaccard coefficient measures the similarity between two sets of data: the cardinality of the intersection is divided by the cardinality of the union of the sets (huang2008similarity, ). The Jaccard coefficient ranges between 0 and 1. If the documents d1 and d2 contain the same set of objects, the coefficient is one (or zero in case the documents have no objects in common). The similarity between two documents d1 and d2 is J(d1, d2) = |d1 ∩ d2| / |d1 ∪ d2|.
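The Jaccard coefficient is straightforward to sketch in code. The color names below are purely illustrative stand-ins for the pixel color information of two frames:

```python
def jaccard(d1, d2):
    # J(d1, d2) = |d1 ∩ d2| / |d1 ∪ d2| over the objects in each document.
    s1, s2 = set(d1), set(d2)
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 0.0

# Two "frames" described by the set of colors they contain.
frame_a = {"white", "blue", "grey", "black"}
frame_b = {"white", "blue", "grey", "green"}
print(jaccard(frame_a, frame_b))  # 3 shared colors out of 5 -> 0.6
```

Because it operates on plain sets, Jaccard reacts to every appearing or disappearing object, which explains its sensitivity to small on-screen movements noted below.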

The Cosine approach is commonly used for similarity comparisons of documents (ahasanuzzaman2016mining, ; conrad2003online, ; park2002analysis, ). Documents are converted into term vectors to compute their Cosine similarity, which is quantified as the angle between these vectors and ranges between 0 and 1 (huang2008similarity, ). Finally, the LSI similarity ranges between -1 and +1; LSI uses term frequencies for dimensionality reduction, followed by a singular value decomposition (SVD) (mihalcea2006corpus, ). We use the Cosine and LSI algorithms to evaluate the frequency of scene switches in a video. The Jaccard algorithm is more sensitive than Cosine and LSI, as the latter two only recognize a low number of scene switches and moving objects (mouse, keyboard, etc.) in development screencasts.

The analysis of 2,127 frames from 20 development screencasts shows that the values of Cosine and LSI are close to 1.0, indicating that, in a development screencast, there is only a small number of scene switches. The Jaccard similarity has an average value of 0.768, showing that small objects are moved. We analyzed the similarity distributions of the four sets of videos using each algorithm.

The Cosine algorithm yields the highest concentration of similarity values for development screencasts (see Figure 3). For the Jaccard algorithm, the characteristics of the distribution vary a lot, making it difficult to distinguish a development screencast from other types of videos. The LSI algorithm produces similar distributions for all video types. The Cosine algorithm, on the other hand, shows a higher concentration, particularly for development screencasts. Thus, it is better suited to distinguish development screencasts from other video types.

We identified the 20 development screencasts among the other video types (n = 100) by using the Cosine algorithm. We calculated the frame similarity of every video and ranked the videos by their Cosine similarity in descending order. All development screencasts are correctly predicted within the first 45 recommendations due to the high concentration of similarity values for development screencasts with respect to other video types. Within a list of 20 videos, we could identify 55% of the development screencasts. In other words, developers can expect to correctly identify over ten development screencasts in a list of 20 different YouTube videos with the support of the Cosine algorithm (precision = 0.028, recall = 0.55, and F-measure = 0.052 (robillard2014recommendation, )).
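The evaluation measures used throughout this paper can be sketched with their standard set-based definitions; the toy numbers below are illustrative and do not reproduce the paper's denominators:

```python
def precision_recall_f(relevant, retrieved):
    # Standard set-based definitions; F is the harmonic mean of P and R.
    rel, ret = set(relevant), set(retrieved)
    tp = len(rel & ret)
    p = tp / len(ret) if ret else 0.0
    r = tp / len(rel) if rel else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy example: 4 relevant videos, a ranked list of 4 with 2 hits.
p, r, f = precision_recall_f({1, 2, 3, 4}, {1, 2, 9, 10})
print(p, r, f)  # 0.5 0.5 0.5
```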

To answer RQ1, we found that the Cosine algorithm is best suited to distinguish development screencasts from other types of videos due to its capability of better concentrating similarity values.

  • Development screencasts are different from other types of videos. Development screencasts seem to be more static—i.e., they have fewer scenes and objects.

  • The Cosine algorithm is the best, among the studied algorithms, at identifying a development screencast from other video types (highest concentration of similarity values).

  • All development screencasts could be identified within the first third of the retrieved items.

2. Topics of Software Development Screencasts

In this section, we analyze the software development topics of the tasks performed during development screencasts. To this end, we analyzed the titles of the screencasts as well as their audio transcripts and assigned the tasks performed on screen to different software development topics, such as implementation or system set-up.

Topic label: most relevant terms

6 topics of development screencasts (tasks performed in SD according to their titles):
  • database operation with Java: netbean, database, create, mysql
  • database operation with Android: class, table, key, android
  • system set-up: run, make, Window, JDK
  • plug-in development: connect, jframe, constructor, jbutton
  • game development: game, develop, object, implement
  • testing: selenium, use, program, file, write, learn

6 topics within development screencasts (repeatable tasks performed in SD according to the transcripts):
  • API usage (objects/classes): use, create, class, code, method, click, type
  • Files: file, create, call, time, program
  • Lists: list, move, get, create
  • UI operations: box, file, slider, inputs
  • Methods: property, get, input, statement
  • System operations: program, time, system, get

Table 1. Topics of and within Java screencasts.

Due to its popularity, we focused on development tasks performed using the Java programming language. In particular, we searched for "how-to" development screencasts, as how-tos are also among the most requested content on Stack Overflow (treude2011programmers, ). Therefore, the search string used to retrieve relevant videos from YouTube was "Java + How to". Our dataset includes 431 Java development screencasts; a transcript is available for all videos. We used the Python toolkit pyLDAvis (pyLDA2014, ; sievert2014ldavis, ; chuang2012termite, ) to identify the software development topics in which the tasks are performed. Using the toolkit, it is possible to visualize different software development topics and to cluster them according to a varying number of LDA topics.

Figure 4. Topics of Java software development screencasts in relation to each other.

We started by removing from the text all special characters, numbers, and the term "Java", which interfered with our analysis. We tuned the number of LDA topics until we reached a set of non-overlapping clusters with enough distance between each other (see Figure 4). We also adjusted the relevance metric until we found the most relevant terms for each software development topic.

Figure 5. Distribution of terms for a software development screencast topic.

We performed two different analyses of the software development topics found in software development screencasts. In the first analysis, we considered only the titles of the screencasts to understand which software development topic is associated with the task performed in the screencast. The second analysis considered the textual transcripts of the development screencasts. We only considered the nouns—extracted using the NLTK library (bird2009natural, )—since including verbs as well caused the algorithm (LDA) to overfit. The extraction was not perfectly accurate, as some verbs were still included in addition to nouns. We nevertheless listed them in Table 1 because we believe that they might be useful for interpreting the overall tasks performed during the screencast.
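The noun-filtering step can be sketched as follows. To keep the example self-contained we use a tiny hand-made noun lexicon as a stand-in for NLTK's part-of-speech tagger; the transcript and the resulting term ranking are purely illustrative:

```python
from collections import Counter

# Stand-in for NLTK's pos_tag: a toy noun lexicon (illustrative only).
NOUNS = {"list", "file", "method", "class", "object", "program", "slider"}

def top_terms(transcript, k=3):
    # Keep only nouns, then rank by frequency -- a rough proxy for the
    # per-topic term relevance reported in Table 1.
    tokens = [w.strip(".,").lower() for w in transcript.split()]
    counts = Counter(t for t in tokens if t in NOUNS)
    return [t for t, _ in counts.most_common(k)]

transcript = ("Now we create a list and add each file to the list. "
              "The method returns the list, and the program writes the file.")
print(top_terms(transcript))  # most frequent nouns first
```

In the actual analysis, the noun filter is NLTK's tagger and the ranked terms feed the LDA model rather than a plain frequency count.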

Table 1 summarizes the topics we found in the titles and in the transcripts of the development screencasts. Figure 4 shows the clusters of all the topics within the chosen software development screencasts. We stopped searching for the best number of topics when the topic clusters did not overlap anymore or when the topics were no longer visible. The size of the clusters represents the importance of the topic within the overall set of topics. Figure 5 shows the distribution of terms used to derive the topic of a task.

Database-related operations are among the most popular topics discussed in development screencasts. Similarly, the database management system MySQL is one of the most popular topics discussed on Stack Overflow. This observation could indicate the need for a system to support database operations in the IDE. Tutorials (tiarks2014does, ) as well as FAQs (timmann2015, ) provide a first entry point to start developing a certain system.

Plug-in installation is also discussed in development screencasts. Such screencasts can extend traditional tutorials as they provide knowledge for similar development tasks. We observed some niche topics, such as game development, discussed in development screencasts. Software testing, a frequent software engineering activity (singer2010examination, ), is also covered in software development screencasts. This might reveal the need for screencasts that teach how to test software (shepard2001more, ). The use of a method, object, or class in Java is a frequently occurring topic that could be augmented by API reference documents. In particular, list operations are among the most commonly occurring tasks, showing the importance of this data type with respect to similar ones, such as hash-maps. Finally, UI operations are also shown to be one of the main activities.

Database operations are popular development tasks performed in development screencasts. Testing—a conventional task in software development—is also performed in development screencasts. Software development tasks, such as method and class usage, are taught in software development screencasts. An advanced search that considers the transcripts of software development screencasts can help find tasks that match the development context.

3. Analysis of Similarity to API Documentation

Figure 6. Method to identify relevant API documents

Documents  Retrieved  Precision  Recall
    3        18/65     0.0514    0.30
    5        22/65     0.062     0.367
   10        33/65     0.094     0.524
   20        38/65     0.0542    0.605

Table 2. Prediction results for 65 relevant documents (pages). The search space includes 9,455 API documents.

Our dataset contains 35 randomly selected Java development screencasts with high-quality transcripts (e.g., no misspelled words). We identified 1-3 relevant API reference documents for every development task that was performed in a development screencast (see the research method in Figure 6). Initially, we used TaskNav—a popular tool for identifying relevant API documents based on natural language (treude2015tasknav, )—to find the relevant API documents for a development task. The input parameter for this tool is a phrase (i.e., the title of the screencast) describing a certain development task. On several occasions, we could not find more than one relevant document because the screencast titles were not self-explanatory (for example, "Java How To: Dialog Boxes" or "How to make a Tic Tac Toe game in Java"), and a deeper look into the development screencast and its transcript was often required. Therefore, we qualitatively evaluated the recommended API documents, defining documents as relevant if they contain the same classes (e.g., ArrayList) or method signatures (e.g., boolean contains(Object o)) mentioned in the screencasts, as well as additional useful information needed when repeating the development tasks (e.g., implementing ArrayList or LinkedList). We could identify 65 relevant documents from 9,455 potential candidates.

For the automatic identification of the relevant development screencasts, we used the Cosine algorithm. We calculated the Cosine similarity value between each transcript of a development screencast and each of the 9,455 Java API documents in the dataset, which resulted in a ranked list of API documents ordered by their similarity values. For the evaluation of the recommendations, we calculated precision and recall (robillard2014recommendation, ) against the relevant documents (as identified with TaskNav and manual checking) within the top three, five, 10, and 20 Cosine positions (see Table 2). Precision shows the percentage of relevant documents identified within a predefined list, whereas recall shows how many relevant documents were identified from all the relevant ones within the same list.
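The transcript-to-documentation ranking can be sketched as follows. The two API "pages", their contents, and the transcript are hypothetical miniature examples, not actual Javadoc text:

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two term-frequency Counters.
    dot = sum(a[t] * b[t] for t in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_documents(transcript, docs):
    # Rank API documents by Cosine similarity to the transcript (descending).
    tv = Counter(transcript.lower().split())
    scored = [(name, cosine(tv, Counter(text.lower().split())))
              for name, text in docs.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

docs = {  # hypothetical API documentation pages
    "ArrayList": "arraylist add remove contains list",
    "HashMap": "hashmap put get key value",
}
transcript = "we create a list and call add and contains on the arraylist"
print(rank_documents(transcript, docs)[0][0])  # ArrayList
```

A real implementation would additionally apply stop-word removal and weighting (e.g., TF-IDF), but the ranking principle is the same.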

For the best three retrieved results, we found that the transcripts frequently and clearly mention technical terms, such as class and method names contained in an API documentation page. Precision varies between 5 and 10%, with the best result being yielded by the top-10 retrieved pages. Table 2 shows that more than 50% of the relevant documentation pages were found in the top-10 retrieved positions. The percentage increases to more than 60% when the top-20 positions are considered. Overall, we could find 38 out of 65 relevant documents within the top-20 positions in a set of 9,455 potential candidates by just analyzing the screencast transcript and ignoring the text that might appear in a scene (e.g., the source code in the IDE).

Moreover, we found that 98.8% of all API documents are below a similarity threshold of 0.12, while 55% of the relevant API documents are above this threshold. Applying this threshold when searching for relevant API documents can help developers to find 55% of the relevant API documents in a list of 114 potential candidates from the overall corpus of 9,455 documents. Based on these results, we believe that development screencasts can be extended with API documents by considering only their transcripts.
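The threshold-based filtering described above amounts to a one-line cut-off over the ranked similarity scores; the document names and scores below are hypothetical:

```python
def above_threshold(scores, threshold=0.12):
    # Keep only documents whose Cosine similarity to the transcript
    # exceeds the cut-off (0.12 is the threshold observed in our data).
    return [name for name, s in scores if s > threshold]

scores = [("ArrayList", 0.31), ("HashMap", 0.05), ("LinkedList", 0.14)]
print(above_threshold(scores))  # ['ArrayList', 'LinkedList']
```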

  • By comparing only the audio transcript of a development screencast (but not the text that might appear in a scene, e.g., in an IDE) with the API documentation, we could identify 38 out of the 65 relevant API documents in the first 20 positions.

  • There is a similarity threshold for relevant API documents. A large share of the relevant API documents can be found above this threshold.

4. Related Work

MacLeod et al. (macleod2015code, ) report on the structure and content of development screencasts, as well as the different types of knowledge located in such screencasts. They studied how knowledge was presented and used axial coding to extract higher-level topics. In our study, we associated Java screencasts with high-level topics, conducted a frame and a similarity analysis, and discussed how screencasts can be used to enrich API reference documentation.

Treude et al. (treude2016augmenting, ) discuss how to link Stack Overflow answers to the Java 6 SE SDK API. They use the Cosine approach to measure similarity and LexRank to evaluate the relevance of the API documents. We extend their work by linking screencasts with API documents and by showing how similar they are.

Ponzanelli et al. (ponzanelli2016codetube, ) developed a recommender system to predict relevant YouTube videos for Android development. In addition to the audio transcripts, they used an OCR tool to transfer the actual video information (e.g., slides or subtitles) into text. They focus on showing relevant Stack Overflow posts for random YouTube videos.

A technique for linking API documentation to code examples is introduced by Subramanian et al. (subramanian2014live, ) and Chen et al. (chen2014asked, ). They treat written code as a development task for which API reference documentation is needed to gain insights about the implementation.

5. Discussion and Limitations

Based on the similarity analysis, we found that frames in a development screencast are much alike, in contrast to other types of videos. Therefore, an identification of screencasts should be possible by using algorithms such as Cosine similarity or LSI without knowing the actual title, tags, or transcript of the video. However, some other types of videos (e.g., recorded interviews or slow motion) are also very static. We acknowledge that such an approach, based on frame comparison, might mistakenly retrieve these other types of static videos.

The analysis of the development topics showed that development screencasts contain knowledge provided in API reference documentation. Thus, API reference documents can extend a development screencast to provide additional implementation details, making it an attractive medium for those developers who do not read documentation (lethbridge2003software, ). By leveraging our results, a simple tool—e.g., based on a Cosine similarity calculation—can suggest relevant documentation pages from a large corpus, like the Java SDK documentation, with a 61% recall for a list of 20 items.

This preliminary study focuses on screencasts related to a specific programming language. However, there is a broad range of other development screencasts which tackle the same topics but in a different manner, or which use different programming languages with different syntax, semantics or specific tools. Therefore, development screencasts might differ according to the tools used, or to the software engineering activities and phenomena.

The selection of the dataset might thus have influenced the study results. We used the title to understand the tasks performed in a development screencast, and the transcripts to understand its subtasks. Those two elements (i.e., titles and transcripts) complement each other. For example, if a developer wants to know how to use lists, files and methods in a programming language like Java she might search them through an algorithm that considers the transcripts. In this way, the developers can find tasks that match the development context of interest, such as specific IDEs or libraries.

We found that UI operations—one of the most important activities performed when comprehending software (maalej2014comprehension, )—are also largely performed in development screencasts. By watching screencasts, developers can understand how other developers debugged and solved similar problems.

The transcripts we obtained might miss important terms, or include misspelled ones. This can impact the comparison of those transcripts with the API documentation pages, leading to poor results. We studied and manually inspected 35 screencasts and their transcripts.

Building a large dataset using the YouTube API poses some limitations, since it only returns a limited number of search results. Thus, multiple searches with different search terms need to be performed. Moreover, the persistence of retrieved data is not guaranteed due to the possible deletion of the videos included in our sample.

The library we used could not completely identify and remove the verbs or stop words from the titles or the transcripts. Therefore, a replication of this study could lead to different results. We recommend using the NLTK and pyLDAvis libraries to pre-process the titles and the transcripts as well as to summarize the topics of the tasks. Although people from different countries create development screencasts, we did not evaluate the language quality of the screencasts, which might also influence our results.

When performing a development task, there is often the need to gather additional information—for example, from Stack Overflow, YouTube, or API documentation. Combining all of them means using different types of information to perform a development task.

We conclude that software development screencasts can help developers to search for recurring development tasks in a specific context (e.g., within an IDE) independently from the topic of software development.

6. Conclusion and Future Work

We analyzed different development screencasts on YouTube and found six main topics for the Java programming language.

A software development screencast is a particular type of video in which developers perform a task by focusing on relevant tools. Development screencasts are not much different from other types of screencasts. We found that frame similarity can be used to detect a development screencast on YouTube. Development screencasts can be extended with API documents to better support software developers. We found that more than half of the relevant API documents could be provided within a list of 20 items. A Cosine comparison between a screencast and a large API documentation corpus is only a preliminary, simple approach to offer developers the most relevant documents.

This paper provided a first insight on how to categorize and identify development screencasts, and how to enrich them with API documentation. A further extension of our approach will focus on extracting the content of the development screencast—e.g., the code shown on the screen when using an IDE—to reach a higher precision/recall when identifying development screencasts.

There is also further work needed to determine the different types of knowledge (macleod2015code, ; Maalej:TSE:2013, ) located in screencasts to achieve a more fine-grained and precise mapping between the API reference documentation and the API elements within the IDE. This approach might require labeling every unique piece of knowledge within a screencast and use video and image features. We believe that the community needs to study which types of screencasts are useful for which developers in which situations.


  • (1) W. Maalej, R. Tiarks, T. Roehm, and R. Koschke, “On the comprehension of program comprehension,” ACM Transactions on Software Engineering and Methodology, vol. 23, pp. 31:1–31:37, Sept. 2014.
  • (2) J. Sillito, G. C. Murphy, and K. De Volder, “Asking and answering questions during a programming change task,” IEEE Transactions on Software Engineering, vol. 34, no. 4, pp. 434–451, 2008.
  • (3) A. J. Ko, B. A. Myers, M. J. Coblenz, and H. H. Aung, “An exploratory study of how developers seek, relate, and collect relevant information during software maintenance tasks,” IEEE Transactions on software engineering, vol. 32, no. 12, 2006.
  • (4) T. Fritz and G. C. Murphy, “Using information fragments to answer the questions developers ask,” in Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering-Volume 1, pp. 175–184, ACM, 2010.
  • (5) W. Maalej and M. P. Robillard, “Patterns of knowledge in api reference documentation,” IEEE Transactions on Software Engineering, vol. 39, no. 9, pp. 1264–1282, 2013.
  • (6) C. Treude and M. P. Robillard, “Augmenting api documentation with insights from stack overflow,” in Proceedings of the 38th International Conference on Software Engineering, pp. 392–403, ACM, 2016.
  • (7) J. Udell, “What is screencasting,” O’Reilly Media, November 2005. (Accessed on 11/01/2016).
  • (8) W. Maalej, “Task-first or context-first? tool integration revisited,” in Proceedings of the 2009 IEEE/ACM International Conference on Automated Software Engineering, pp. 344–355, IEEE Computer Society, 2009.
  • (9) T. C. Lethbridge, J. Singer, and A. Forward, “How software engineers use documentation: The state of the practice,” IEEE software, vol. 20, no. 6, pp. 35–39, 2003.
  • (10) L. MacLeod, M.-A. Storey, and A. Bergen, “Code, camera, action: How software developers document and share program knowledge using youtube,” in Proceedings of the 2015 IEEE 23rd International Conference on Program Comprehension, pp. 104–114, IEEE Press, 2015.
  • (11) R. Tiarks and W. Maalej, “How does a typical tutorial for mobile development look like?,” in Proceedings of the 11th Working Conference on Mining Software Repositories, pp. 272–281, ACM, 2014.
  • (12) I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” in Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1–8, IEEE, 2008.
  • (13) W. Maalej, M. Ellmann, and R. Robbes, “Using contexts similarity to predict relationships between tasks,” Journal of Systems and Software, 2016.
  • (14) W. Maalej and M. Ellmann, “On the similarity of task contexts,” in Proceedings of the Second International Workshop on Context for Software Development, pp. 8–12, IEEE Press, 2015.
  • (15) C. Treude, M. Sicard, M. Klocke, and M. Robillard, “Tasknav: Task-based navigation of software documentation,” in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 2, pp. 649–652, IEEE, 2015.
  • (16) C. Sievert and K. E. Shirley, “Ldavis: A method for visualizing and interpreting topics,” in Proceedings of the workshop on interactive language learning, visualization, and interfaces, pp. 63–70, 2014.
  • (17) J. Chuang, C. D. Manning, and J. Heer, “Termite: Visualization techniques for assessing textual topic models,” in Proceedings of the International Working Conference on Advanced Visual Interfaces, pp. 74–77, ACM, 2012.
  • (18) X. Wang and E. Grimson, “Spatial latent dirichlet allocation,” in Advances in neural information processing systems, pp. 1577–1584, 2008.
  • (19) A. Huang, “Similarity measures for text document clustering,” in Proceedings of the sixth new zealand computer science research student conference (NZCSRSC2008), Christchurch, New Zealand, pp. 49–56, 2008.
  • (20) M. Ahasanuzzaman, M. Asaduzzaman, C. K. Roy, and K. A. Schneider, “Mining duplicate questions in stack overflow,” in Proceedings of the 13th International Conference on Mining Software Repositories, pp. 402–412, ACM, 2016.
  • (21) J. G. Conrad, X. S. Guo, and C. P. Schriber, “Online duplicate document detection: signature reliability in a dynamic retrieval environment,” in Proceedings of the twelfth international conference on Information and knowledge management, pp. 443–452, ACM, 2003.
  • (22) S.-T. Park, D. M. Pennock, C. L. Giles, and R. Krovetz, “Analysis of lexical signatures for finding lost or related documents,” in Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 11–18, ACM, 2002.
  • (23) R. Mihalcea, C. Corley, and C. Strapparava, “Corpus-based and knowledge-based measures of text semantic similarity,” in AAAI, vol. 6, pp. 775–780, 2006.
  • (24) M. P. Robillard, W. Maalej, R. J. Walker, and T. Zimmermann, Recommendation systems in software engineering. Springer, 2014.
  • (25) C. Treude, O. Barzilay, and M.-A. Storey, “How do programmers ask and answer questions on the web?: Nier track,” in Software Engineering (ICSE), 2011 33rd International Conference on, pp. 804–807, IEEE, 2011.
  • (26) pyLDAvis, “Python library for interactive topic model visualization,” 2014.
  • (27) S. Bird, E. Klein, and E. Loper, Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O’Reilly Media, Inc., 2009.
  • (28) I. Timmann, “An empirical study towards a quality model for faqs in software development,” tech. rep., 2015.
  • (29) J. Singer, T. Lethbridge, N. Vinson, and N. Anquetil, “An examination of software engineering work practices,” in CASCON First Decade High Impact Papers, pp. 174–188, IBM Corp., 2010.
  • (30) T. Shepard, M. Lamb, and D. Kelly, “More testing should be taught,” Communications of the ACM, vol. 44, no. 6, pp. 103–108, 2001.
  • (31) L. Ponzanelli, G. Bavota, A. Mocci, M. Di Penta, R. Oliveto, B. Russo, S. Haiduc, and M. Lanza, “Codetube: extracting relevant fragments from software development video tutorials,” in Proceedings of the 38th International Conference on Software Engineering Companion, pp. 645–648, ACM, 2016.
  • (32) S. Subramanian, L. Inozemtseva, and R. Holmes, “Live api documentation,” in Proceedings of the 36th International Conference on Software Engineering, pp. 643–652, ACM, 2014.
  • (33) C. Chen and K. Zhang, “Who asked what: Integrating crowdsourced faqs into api documentation,” in Companion Proceedings of the 36th International Conference on Software Engineering, pp. 456–459, ACM, 2014.
  • (34) W. Maalej, R. Tiarks, T. Roehm, and R. Koschke, “On the comprehension of program comprehension,” ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 23, no. 4, p. 31, 2014.