Why Software Projects need Heroes (Lessons Learned from 1100+ Projects)

04/22/2019, by Suvodeep Majumder, et al.

A "hero" project is one where 80 the 20 since they might cause bottlenecks in development and communication. However, there is little empirical evidence on this matter. Further, recent studies show that such hero projects are very prevalent. Accordingly, this paper explores the effect of having heroes in project, from a code quality perspective. We identify the heroes developer communities in 1100+ open source GitHub projects. Based on the analysis, we find that (a) hero projects are majorly all projects; and (b) the commits from "hero developers" (who contribute most to the code) result in far fewer bugs than other developers. That is, contrary to the literature, heroes are standard and very useful part of modern open source projects.

1 Introduction

A “hero” project is one where 80% or more of the contributions come from 20% of the developers. In the literature, such hero projects are deprecated since, it is said, they are bottlenecks that slow project development and cause information loss [1, 2, 3, 4, 5, 6].

Recent studies have motivated a re-examination of the implications of heroes. In 2018, Agrawal et al. [7] studied 661 open source projects and 171 in-house proprietary projects. In that sample, over 89% of all projects were hero-based (this text uses “hero” for women and men, since recent publications use it to denote admired people of all genders; see bit.ly/2UhJCek). Only in small open source projects (with under 15 core developers) were non-hero projects more prevalent.

Fig. 1: An example of a social interaction graph generated from our data. The number of nodes equals the number of unique people participating in issue conversations about a commit. The existence and width of each edge represents the frequency of conversation between pairs of developers. Hero programmers are those nodes with very high node degree (i.e., those who have participated in many unique conversations). Note that, in this example data, these hero programmers are few in number.

To say the least, this widespread prevalence of heroes is at odds with established wisdom in the SE literature [1, 2, 3, 4, 5, 8, 9, 10]. Hence, it is now an open and pressing issue to understand why so many projects are hero-based. To that end, this paper checks the Agrawal et al. [7] result. All project data was recollected from scratch, from double the number of open source projects (over 1100) used by Agrawal et al. Also, we use a different method for recognizing a hero project. Agrawal et al. simply counted the number of commits made by each developer. In this study, we say heroes are those who participate in 80% (or more) of the communications associated with commits.

Despite our different way of recognizing “heroes” and despite our much larger sample, we come to similar conclusions as Agrawal et al. We find that 85% of our projects contain heroes, which is very similar to the Agrawal et al. result. More importantly, we can explain why heroes are so useful. As shown below, the commit patterns of our “heroes” (those that talk the most to other developers) are associated with dramatically fewer defects than the commits from non-heroes (who talk to fewer people prior to pushing a commit).

This is not the first paper to commend the use of hero developers. For example, in 1975 Brooks [11] proposed basing programming teams around a small number of “chief programmers” (whom we would call “heroes”) supported by a large number of support staff (Brooks’s analogy was the operating theater, where one surgeon is supported by one or two anesthetists, several nurses, clerical staff, etc.). The Agile Alliance [12] and Bach et al. [13] believed that heroes are the core ingredient of successful software projects, saying “… the central issue is the human processor - the hero who steps up and solves the problems that lie between a need expressed and a need fulfilled.” In 2002, Mockus et al. [14] analyzed the Apache and Mozilla projects to show the presence of heroes and reported, surprisingly, their positive influence on those projects.

That said, this article differs from the above since:

  1. We clearly demonstrate the benefits of hero-based development, which is contrary to much prior pessimism [1, 2, 3, 4, 5, 6].

  2. Our conclusions come from over 1100 projects, whereas prior work commented on heroes using data from just a handful of projects [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25].

  3. Our conclusions come from very recent projects instead of decades-old data [26, 27, 24, 28, 21, 29].

  4. We show curves that precisely illustrate the effects on code quality for different levels of communication. This is different from prior work that only offered general qualitative principles [30, 31, 32, 33, 34, 35].

  5. As discussed in Section 2.2, this paper makes its conclusions using more metrics than prior work. Not only do we observe an effect (using process and resource metrics to report the frequency of developer contribution), but we also report the consequence of that effect (by joining to product metrics that reveal software quality).

  6. Instead of just reporting an effect (that heroes are common, as done by Agrawal et al. [7]) we can explain that effect (heroes are those that communicate more and that communication leads to fewer bugs).

  7. As a service to other researchers, all the scripts and data of this study can be downloaded from tiny.cc/git_mine.

Before beginning, we make some definitional points. Firstly, when we say 1100+ projects, that is shorthand for the following: our results use the intersection of code interaction graphs (who writes what code) from 1327 projects with social interaction graphs (who discusses what commits) from 1173 projects.

Secondly, by code interaction graphs and social interaction graphs, we mean the following. Each graph has its own nodes and edges. For code interaction graphs:

  • Each individual developer has their own node;

  • An edge connects two nodes and indicates that one developer has ever changed the other developer’s code.

For social interaction graphs like Figure 1:

  • A node is created for each individual who has created or commented on an issue.

  • An edge indicates a communication between two individuals (as recorded in the issue tracking system). If two individuals communicate n times, then the edge weight is n.

Thirdly, our definition of “hero” is not “writes 80% of the software” since such a definition is hard to operationalize for modern agile projects (where many people might lend a hand to much of the code). Instead, we say heroes are those that “participate in 80% of the discussions prior to the commits”. In social interaction graphs like Figure 1, those heroes can be visualized as the vertices with the most edges. As can be seen in that figure, most developers communicate infrequently while a small number of heroes communicate extensively with the rest of the community.

The rest of the paper is organized as follows. Section 2 provides background information that directly relates to our research questions and lays out the motivation behind our work. Section 3.1 explains the data collection process, and Section 3.2 gives a detailed description of our experimental setup, data, and evaluation criteria. Section 4 details the results of the experiments and answers our research questions. Section 5 discusses the implications of those results. Section 6 discusses threats to validity. Finally, Section 7 concludes the paper.

2 Background And Prior Work

2.1 Heroism in Software Development

Heroism in software development is a widely studied topic. Various researchers have found the presence of heroes in software projects. For example:

  • Peterson analyzed the software development process on GitHub and found that most development is done by a small group of developers [36]. He stated that for most GitHub projects, 95-100% of commits come from very few developers.

  • In 2002, Koch et al. [37] studied the GNOME project and showed the presence of heroes throughout the project history. They conjectured (without proof) that a small number of hero developers may allow easy communication and collaboration. Interestingly, they also showed there is no relation between a developer’s time in the project and being a hero developer.

  • In 2005, Krishnamurthy [38] studied 100 open source projects and found that, in most cases, a few individuals are responsible for the main contributions to the project.

  • In 2006 and 2009, Robles et al. [39, 9] explored the presence and evolution of heroes in the open source software community.

  • In 2018, Agrawal et al. [7] stated that hero projects are very common. In fact, as software projects grow in size, nearly all projects become hero projects.

Most prior researchers deprecate heroism in software projects. They argue that

  • Having most of the work depend on a small number of heroes can create a bottleneck that slows down project development [1, 4, 3, 2, 5].

  • In hero projects, there is less collaboration between team members since there are few active team members. Hence, heroes reduce the collaboration that is considered essential [40, 41].

This second point is problematic since, in the literature, studies that analyze distributed software development on social coding platforms like GitHub and Bitbucket [42, 43] remark on how social collaboration can reduce the cost and effort of software development without degrading the quality of the software. Distributed coding effort is beneficial for agile community-based programming practices, which in turn can yield higher customer satisfaction, lower defect rates, and faster development times [44, 45]. Customer satisfaction, it is argued, is increased when faster development leads to:

  • Increasing the number of issues/bugs/enhancements being resolved [14, 46, 47, 48, 49, 50].

  • Lowering the issues/bugs/enhancements resolution times [46].

Even more specifically, regarding issues related to heroes, Bier et al. warn that when a project becomes complicated, it is always better to have a community of experts rather than a very few hero developers [1]. Williams et al. have shown that hero programmers are often responsible for poorly documented software systems, since they stay busy coding rather than writing code-related documentation [3]. Also, Wood et al. [5] caution that heroes are often code-focused, but software development needs workers acting as more than just coders (testers, documentation authors, user-experience analysts).

Our summary of the above is as follows: with only isolated exceptions, most of the literature deprecates heroes. Yet as discussed in the introduction, many studies indicate that heroic projects are quite common. This mismatch between established theory and a widely observed empirical effect prompted the analysis discussed in this paper.

Ref   Year   Citations   No. of Projects   Product Metric   Process Metric   Personnel Metric
[15] 1996 1994 8
[14] 2002 1961 2
[16] 1993 1268 2
[51] 2000 779 1
[52] 2006 772 5
[53] 2002 711 1
[54] 2007 636 3
[2] 2006 667 0
[55] 2005 622 2
[17] 2009 466 12
[38] 2002 466 100
[18] 2001 445 1
[56] 2001 406 1
[57] 2000 400 1
[58] 2008 398 4
[19] 1999 346 1
[37] 2002 305 1
[59] 1999 300 3
[41] 2007 298 0
[60] 2009 271 1
[61] 2010 256 10
[62] 2011 256 17
[20] 2011 233 2
[63] 2010 229 38
[64] 2004 223 30
[65] 2008 223 1
[66] 2008 218 1
[67] 2009 197 1
[68] 2005 186 SourceForge
[69] 2009 177 6
[70] 1998 172 2
[71] 2008 163 3
[21] 2012 163 11
[72] 2014 159 9
[73] 2006 131 1
[74] 2015 106 10
[75] 2012 103 905,470
[76] 2008 102 5
[9] 2006 99 21
[77] 2016 92 3
[29] 2014 87 1,398
[78] 2002 85 39,000
[79] 2015 85 0
[80] 2015 76 18
[44] 2013 68 0
[39] 2009 65 1
[81] 2014 61 GitHub
[82] 2010 59 6
[47] 2013 58 100,000
[83] 2009 54 1
[84] 2011 48 2
[22] 2013 37 3
[5] 2005 36 0
[85] 2010 30 2
[86] 2011 27 2
[46] 2014 24 2,000
[87] 2007 22 4
[88] 2016 19 235,000
[36] 2013 14 1,000
[89] 2015 12 1
[23] 2017 11 10
[24] 2018 11 15
[7] 2018 6 832
[27] 2017 5 12
[1] 2011 3 0
[26] 2018 2 4
[3] 2002 2 0
[4] 2012 2 0
[90] 2018 0 5
[91] 2018 0 1
[92] 2018 0 2
[25] 2018 0 1
[93] 2017 0 50
[94] 2018 0 0
TABLE I: Some results from the Google Scholar query (software heroes) or ((software metrics) and (code quality)). Hero-related publications have a colored background. Rows colored in gray denote hero-related publications that offer no metrics in support of their arguments.

2.2 Software Quality Metrics

As shown by the No. of Projects column in Table I, our sample size (1100+ projects) is orders of magnitude larger than that of the typical paper in this arena. This table was generated as follows. Firstly, using Google Scholar, we searched for “(software heroes) or ((software metrics) and (code quality))”. Secondly, for papers more than two years old, we pruned “non-influential papers”, which we define as having fewer than ten citations per year. Thirdly, we read the papers to determine what kind of metrics they used. When presenting these results (in Table I), hero-related publications have a colored background, while rows colored in gray denote hero-related publications that offer no metrics in support of their arguments.

Table I also shows that most papers do not use a wide range of metrics. Xenos [95] distinguishes these kinds of metrics as follows. Product metrics are metrics directly related to the product itself (such as code statements, delivered executables, and manuals) and strive to measure product quality, or attributes of the product that can be related to product quality. Process metrics focus on the process of software development and measure process characteristics, aiming to detect problems or to push forward successful practices. Lastly, personnel metrics (a.k.a. resource metrics) are those related to the resources required for software development and their performance; the capability and experience of each programmer, and the communication among all the programmers, are related to product quality [30, 31, 34, 35]. In our work:

  • The code interaction graph is a process metric;

  • The social interaction graph is a personnel metric;

  • Defect counts are product metrics.

(Aside: In this text we use “resource” and “personnel” interchangeably since, according to the Center for Systems and Software Engineering [95], resource metrics relating to programmer quality or communication are also called personnel metrics.)

Fig. 2: Summary of Table I

This paper combines all three kinds of metrics and applies the combination to exploring the effects of heroism on software development. There are many previous studies that explore one or two of these types of metrics. Fig. 2 summarizes Table I and shows that, in that sample, very few papers on software metrics and code quality combine insights from product, process, and personnel metrics. To the best of our knowledge, this is the first paper in this arena to discuss heroism using product, process, and personnel metrics.

Having worked with that data, we think we know why other publications do not report results using a wide range of metrics: such reports require extensive and elaborate queries. The analysis of this paper required months of struggling with the GitHub API (and its queries-per-hour limits), followed by much scripting, followed by many tedious manual checks that our automatic tools were behaving sensibly. In all, we estimate that this paper required nine weeks of coding (40 hours per week) to join across process, product, and personnel metrics.

Fig. 3: Distribution of projects depending on number of releases, duration of project, number of stars, forks, watchers, and developers. Box plots show the min to max range. Central boxes show the 25th, 50th, and 75th percentiles.

3 Methodology

3.1 Data Collection

To perform our experiments, we used open-source projects collected from GitHub. We recreated all committed files to identify the code changes in each commit and to identify developers. Using the GitHub API, we downloaded issue comments and events for each project, then used the git log command to mine the commits added to the project throughout its lifetime.

Using the information in each commit’s message, we used a buggy commit identifier to label commits as buggy or not. First, we identified the commits which were used to fix bugs in the code. Next, we used git blame to identify the last commit which introduced each bug.
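
To make the mining step concrete, here is a minimal sketch of commit extraction from a locally cloned repository; it uses only the Python standard library, and the repo_path argument and output format are illustrative choices rather than the study’s actual tooling.

```python
import subprocess

def mine_commits(repo_path):
    """Return a list of {sha, author, message} dicts for every commit in a cloned repo."""
    # %H = commit hash, %an = author name, %s = subject line; %x1f inserts a unit-separator byte
    fmt = "%H%x1f%an%x1f%s"
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--all", f"--pretty=format:{fmt}"],
        capture_output=True, text=True, check=True).stdout
    commits = []
    for line in out.splitlines():
        sha, author, message = line.split("\x1f")
        commits.append({"sha": sha, "author": author, "message": message})
    return commits
```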

GitHub has many repositories (over 67 million projects as of October 2018). Many of these projects have very short development cycles, are used for personal purposes, or are not related to software development. Such projects may bias research findings. Accordingly, we used established wisdom [96, 97] and some of our own engineering judgement to filter our data as follows (a small filtering sketch appears after the list):

  • Collaboration: refers to the number of pull requests. This is indicative of how many other peripheral developers work on this project. We required all projects to have at least one pull request.

  • Commits: The project must contain more than 20 commits.

  • Duration: The project must contain software development activity of at least 50 weeks.

  • Issues: The project must contain more than 10 issues.

  • Personal Purpose: The project must not be used and maintained by one person. The project must have at least eight contributors.

  • Software Development: The project must only be a placeholder for software development source code.

  • Project Documentation Followed: The projects should follow proper documentation standard to log proper commit comment and issue events to allow commit issue linkage.

  • Social network validation: The social network being built should have at least 8 connected nodes in both the communication and code interaction graphs (this point is discussed further in Sections 3.2.2 and 3.2.3).
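
For illustration only, the sketch below applies the above criteria to a dictionary of per-project statistics; the field names (pull_requests, duration_weeks, etc.) are our own placeholders and assume the counts have already been gathered via the GitHub API.

```python
def passes_filters(p):
    """Check one project's statistics dict against the selection criteria listed above."""
    return (p["pull_requests"] >= 1              # Collaboration
            and p["commits"] > 20                # Commits
            and p["duration_weeks"] >= 50        # Duration
            and p["issues"] > 10                 # Issues
            and p["contributors"] >= 8           # Personal purpose
            and p["connected_nodes_code"] >= 8   # Social network validation (code graph)
            and p["connected_nodes_social"] >= 8)  # Social network validation (communication graph)

# usage: selected = [p for p in candidate_projects if passes_filters(p)]
```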

To select our target projects, we used the “GitHub showcase project” list, favoring projects near the top of that list. The resulting projects have the ranges shown in Figure 3 and the languages shown in Figure 4. To understand these figures, we offer the following definitions:

  • Release: Releases (based on Git tags) mark specific points in a repository’s history. The number of releases counts the different versions published, each of which signifies a considerable amount of change since the previous version.

  • Duration: the length of the project from its inception to the current date (or the project archive date); it signifies how long a project has been running and in active development.

  • Stars: the number of people who like a project or bookmark it so they can follow what is going on with the project later.

  • Forks: a fork is a copy of a repository. Forking a repository allows a developer to freely experiment with changes without affecting the original project. The number of forks signifies how many people are interested in the repository and actively thinking of modifying the original version.

  • Watchers: GitHub users who have asked to be notified of activity in a repository but have not become collaborators. This represents people actively monitoring the project, because of possible interest or dependency.

  • Developers: the contributors to a project, who work on the code and submit it to the codebase via commits. The number of developers signifies the interest of developers in actively participating in the project and the volume of the work.

Language Projects
Shell 517  
JavaScript 467  
Ruby 460  
HTML 393  
CSS 306  
Python 269  
C 215  
C++ 150  
Java 148  
Perl 132  
PHP 122  
Batchfile 100  
Objective-C 76  
M4 53  
Roff 50  
CoffeeScript 47  
Vim script 46  
C-Sharp 46  
Dockerfile 43  
CMake 42  
Emacs Lisp 39  
Gherkin 39  
Perl 6 38  
Fig. 4: Distribution of projects depending on language. Many projects use combinations of languages to achieve their results. Here, we show the majority language used in each project.

Fig. 5: Example of creating a social interaction graph between GitHub developers. Step 1 (LHS): Ananya, Maria, Chen and David are four developers in a GitHub project. Step 2: Ananya creates one issue on which Maria, Chen and David comment. So, we join Ananya-Maria, Ananya-Chen, and Ananya-David with edges of weight 1. Step 3: We iterate for each developer, so all of them become connected and the weight becomes 2 for all the edges. Step 4 (RHS): A new developer, Vadim, creates one new issue on which Ananya and David comment. So, two new edges are introduced (Ananya-Vadim(2), David-Vadim(2)) and the weight of Ananya-David increases to 3.

3.2 Metric Extraction

3.2.1 Process Metrics

Recall that the developer code interaction graph records who touched what code, where a developer is defined as a person who has ever committed any code into the codebase. We create that graph as follows (a short sketch appears after this list):

  • Project commits were extracted from each branch in the git history.

  • Commits are extracted from the git log and stored in a file system.

  • To access the file changes in each commit, we recreated the files that were modified in each commit by continuously moving the git head chronologically along each branch. Changes were then identified using git diff on two consecutive git commits.

  • The graph is created by going through each commit and adding a node for the committer. Then we use git blame on the changed lines to find the previous commits, following a process similar to the SZZ algorithm [98]. We identify the developers of those previous commits from git blame and add them as nodes as well.

  • After the nodes are created, edges are drawn between the developer who changed the code and the developer whose code was changed. Those edges are weighted by the size of the change between the two developers.
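
The sketch below shows one plausible way to assemble this graph with networkx. It assumes that, for each commit, the set of previously blamed authors and the number of changed lines per author have already been extracted with git diff and git blame as described above; those inputs are placeholders, not the study’s exact data format.

```python
import networkx as nx

def build_code_interaction_graph(commits):
    """commits: iterable of dicts such as
    {"committer": "alice", "blamed": {"bob": 12, "carol": 3}}
    where 'blamed' maps previous authors (from git blame) to lines changed."""
    g = nx.Graph()
    for c in commits:
        committer = c["committer"]
        g.add_node(committer)
        for prev_author, lines_changed in c["blamed"].items():
            g.add_node(prev_author)
            if prev_author == committer:
                continue  # editing one's own code adds no edge
            # accumulate the change size between the two developers as the edge weight
            old = g.get_edge_data(committer, prev_author, default={"weight": 0})["weight"]
            g.add_edge(committer, prev_author, weight=old + lines_changed)
    return g
```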

3.2.2 Personnel Metrics

Recall that the developer social interaction graph records who talked to each other in issue discussions. We create that graph as follows (a short sketch appears after this list):

  • A node is created for the person who has created the issue, and further nodes are created for each person who has commented on the issue. So, in the social interaction graph, each node is any person (developer or non-developer) who has ever created an issue or commented on one.

  • The nodes are connected by edges, which are created by (a) connecting the person who created the issue to all the persons who commented on that issue and (b) creating edges between all the persons who commented on the issue, including the person who created it.

  • The edges are weighted by the number of comments between two persons.

  • The weights are updated using the entire history of the project. The creation and weight updates are illustrated in Figure 5.
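
Below is a minimal networkx sketch of one plausible construction, assuming each issue is represented as its creator plus a list of commenters; the exact weight-update rule used in the study (illustrated in Figure 5) may differ in detail.

```python
import networkx as nx

def build_social_interaction_graph(issues):
    """issues: iterable of dicts {"creator": str, "commenters": [str, ...]}."""
    g = nx.Graph()
    for issue in issues:
        participants = list(dict.fromkeys([issue["creator"]] + issue["commenters"]))
        g.add_nodes_from(participants)
        # connect the creator to every commenter, and commenters to each other
        for i, a in enumerate(participants):
            for b in participants[i + 1:]:
                old = g.get_edge_data(a, b, default={"weight": 0})["weight"]
                g.add_edge(a, b, weight=old + 1)
    return g

# Step 2 of Figure 5: Ananya's issue connects her with Maria, Chen and David.
g = build_social_interaction_graph(
    [{"creator": "Ananya", "commenters": ["Maria", "Chen", "David"]}])
```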

3.2.3 Product Metrics

This study explores the effects of social and code communication on code quality by measuring the introduction of buggy commits. In order to do so, we need to identify, from the historic project data, the commits that introduced bugs into the code. This is a challenging task since there is no direct way to find the commits, or the person, responsible for a bug/issue introduction. Hence, we proceeded as follows (a condensed sketch appears below):

  • We start with all the commits from the git log and examine the commit messages, as these are often an excellent source of information regarding what the commit is about.

  • To use the commit messages for labeling, we apply natural language processing, including stemming and other nltk preprocessors, to normalize the commit messages.

  • To identify the commit messages that represent bug/issue fixing commits, we use a list of words and phrases extracted from previous studies of 1000+ projects (open source and enterprise). The system checks for these words and phrases in the commit messages and, if found, marks the commits as ones which fixed bugs.

  • As a sanity check, a portion of the commits was manually verified using random sampling from different projects.

  • These labeled commits are then processed to extract the file changes, using the process described in Section 3.2.1.

  • Next, git blame is used to go back in the git history, for each changed line in each file, to identify the responsible commit where that line was created or last changed.

By this process, the commits responsible for introducing bugs into the system/project can be found. We label these commits as buggy commits and label the author of each such commit as the person responsible for introducing the bug.
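
A compressed sketch of this pipeline is shown below. The keyword list is a simplified stand-in for the phrase list and nltk preprocessing described above, and a production-quality SZZ implementation needs extra care (merge commits, whitespace-only changes, renamed files).

```python
import re
import subprocess

FIX_KEYWORDS = {"bug", "bugs", "fix", "fixes", "fixed", "error", "fault", "defect", "patch"}

def is_fix_commit(message):
    """Rough keyword check standing in for the nltk-based labeler described above."""
    tokens = re.findall(r"[a-z]+", message.lower())
    return any(t in FIX_KEYWORDS for t in tokens)

def bug_introducing_candidates(repo, fix_sha, path, start_line, num_lines):
    """Blame the version *before* the fix to find the commits that last touched
    the fixed lines -- these are the bug-introducing candidates."""
    out = subprocess.run(
        ["git", "-C", repo, "blame", "-l",
         "-L", f"{start_line},+{num_lines}", f"{fix_sha}^", "--", path],
        capture_output=True, text=True, check=True).stdout
    return {line.split()[0] for line in out.splitlines() if line.strip()}
```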

3.2.4 Joining Across the Metrics

This study tries to answer the question: what is the relevance of heroes in software projects? To answer this question, we join across all the metrics shown above. Specifically, using the two graphs, we calculate the node degree (the number of edges touching a vertex) of each node. Note that higher degrees represent more communication or interaction. Next, we compare results from those developers whose interactions place them above the 95th percentile (inferred from the social/code interaction graphs) versus all the others. Finally, top contributors (or heroes) and non-heroes were defined as:

  D_i = Σ_j 1[A_ij > 0]                                   (1)
  Heroes = { i : D_i ≥ Percentile_95(D_1, …, D_N) }        (2)
  Non-heroes = { i : D_i < Percentile_95(D_1, …, D_N) }    (3)

where:

  • N is the number of developers;
  • Percentile_95 is the 95th percentile; the percentile rank of a score is the percentage of scores in its frequency distribution that are equal to or lower than it;
  • A is the adjacency matrix of the graph, where A_ij > 0 denotes a connection between developers i and j;
  • D_i is the node degree of developer i.

Using these data and applying the hero definition from formulas (2) and (3) (i.e., looking at the top 5%), we can find the developers who are responsible for 95% of the work, i.e., the hero developers.

Following this, to find the effect of heroism, we compared the percentage of buggy commits introduced by each developer. For that purpose, we categorized the developers into two groups:

  • The hero developers: the core group of developers of a project who make regular changes to the codebase. In this study, these are the developers whose node degree is above the 95th percentile of node degrees (in the developer communication or code interaction graph of the system).

  • The non-hero developers: all other developers, i.e., developers associated with nodes whose degree is below the 95th percentile.

This study compares the performance of these two sets of developers using the percentage of bugs they introduced into the codebase.
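
Given either interaction graph, the split implied by formulas (2) and (3) reduces to a percentile cut over node degrees. A minimal sketch (assuming networkx and numpy):

```python
import numpy as np
import networkx as nx  # the graphs built in Sections 3.2.1 and 3.2.2

def split_heroes(graph, percentile=95):
    """Return (heroes, non_heroes): developers whose node degree is at or above
    the given percentile versus all the others."""
    degrees = dict(graph.degree())            # number of edges touching each vertex
    threshold = np.percentile(list(degrees.values()), percentile)
    heroes = {dev for dev, deg in degrees.items() if deg >= threshold}
    return heroes, set(degrees) - heroes
```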

4 Results

Our results are structured around three research questions:

RQ1:

How common are hero projects?

RQ2:

What impact does heroism have on code quality?

RQ3:

Does team size alter the above results?

We ask the third question since, when we discuss this work with our colleagues, a common comment is that heroes are better in projects of a certain size. Here, what “certain size” means can vary from person to person – some think heroes work best for small projects and others think heroes are an essential part of large projects. In any case, it is a common enough question to prompt its own particular investigation.

4.1 RQ1: How common are hero projects?

We say a project is a “hero project” if, when we isolate the developers who handle 95% of the interactions (or more), we see only 5% (or less) of the developers. By “interaction”, we mean the weighted in-degree count of each vertex. The top 95% group comprises all vertices with a count above min + 0.95 × (max − min), where min and max come from the smallest and largest counts.

This definition can be applied to either the code interaction graph or the social interaction graph. Regardless of the source, the observed pattern is the same: as shown in Figure 6 and Figure 7, measured in terms of either code or social interaction, hero projects comprise over 80% of our sample.
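
A sketch of that check is shown below; it assumes a weighted networkx graph and uses the weighted degree as a stand-in for the in-degree count mentioned above.

```python
def is_hero_project(graph, top=0.95, max_fraction=0.05):
    """A project is a 'hero project' if the developers above the interaction
    threshold make up at most max_fraction of all developers."""
    counts = dict(graph.degree(weight="weight"))   # weighted interaction count per vertex
    lo, hi = min(counts.values()), max(counts.values())
    cutoff = lo + top * (hi - lo)                  # min + 0.95 * (max - min)
    top_group = [dev for dev, c in counts.items() if c > cutoff]
    return len(top_group) / len(counts) <= max_fraction
```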

Fig. 6: RQ1 Results (Code Interaction): Distribution of hero projects and non-hero projects converted to percentage from developer code interaction perspective.
Fig. 7: RQ1 Results (Social Interaction): Distribution of hero projects and non-hero projects converted to percentage from developer social communication perspective.

4.2 RQ2: What impact does heroism have on code quality?

RQ2 explores what kind of effect heroism has on code quality. To explore this, we created the developer social interaction graph and the developer code interaction graph, and then identified the developers responsible for introducing bugs into the codebase. We then computed the percentage of buggy commits introduced by each developer by checking (a) the number of buggy commits introduced by that developer and (b) their total number of commits.

Fig. 8 and Fig. 9 compare the performance of hero and non-hero developers (where the latter are the developers whose interaction scores fall below the 95th percentile); the results are summarized in Table II. In both figures, the y-axis is the median bug introduction percentage across the hero and non-hero developers of each project, and the x-axis ranges over the projects used in this study. Each x-point shows the hero and non-hero results from the same project (and projects are sorted by the non-hero observations). A small sketch of this computation appears after the following list. In those charts we note that:

  • There exists a large number of non-heroes that always produce buggy commits, 100% of the time (evidence: the flat right-hand-side regions of the non-hero plots in both figures). The size of this “always buggy” population is around a third in Fig. 8 and a fourth in Fig. 9.

  • To say the least, heroes nearly always have fewer buggy commits than non-heroes. The 25th, 50th, and 75th percentiles for both groups are shown in Table II. This table clearly shows why heroes are so prevalent: they generate commits that are dramatically less buggy than those of non-heroes.
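
Concretely, for each project the comparison reduces to a median of per-developer buggy-commit ratios within each group. A minimal sketch, assuming per-developer commit counts and the hero set from Section 3.2.4 are available:

```python
import statistics

def bug_ratio(dev):
    """dev: dict with 'buggy_commits' and 'total_commits' for one developer."""
    return dev["buggy_commits"] / dev["total_commits"]

def project_bug_medians(dev_stats, heroes):
    """dev_stats: {developer_name: dict}; heroes: set of developer names.
    Returns the median buggy-commit ratio for heroes and for non-heroes."""
    hero_ratios = [bug_ratio(d) for name, d in dev_stats.items() if name in heroes]
    other_ratios = [bug_ratio(d) for name, d in dev_stats.items() if name not in heroes]
    return statistics.median(hero_ratios), statistics.median(other_ratios)
```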

Fig. 8: RQ2 results (Code Interaction): Percentage of bugs introduced by hero and non-hero developers, from the developer code interaction perspective, in hero projects.
Fig. 9: RQ2 results (Social Interaction): Percentage of bugs introduced by hero and non-hero developers, from the developer social interaction perspective, in hero projects.
Metric   Team Size   Group   25th   50th   75th Percentile
Code Interaction Small Hero 0.32 0.42 0.50
Non-Hero 0.46 0.55 0.75
Medium Hero 0.33 0.41 0.49
Non-Hero 0.50 0.60 1.00
Large Hero 0.36 0.42 0.50
Non-Hero 0.56 0.80 1.00
All Hero 0.35 0.42 0.5
Non-Hero 0.5 0.69 1.0
Social Interaction Small Hero 0.27 0.36 0.48
Non-Hero 0.39 0.50 0.71
Medium Hero 0.26 0.36 0.46
Non-Hero 0.40 0.56 0.82
Large Hero 0.27 0.37 0.46
Non-Hero 0.50 0.65 0.90
All Hero 0.27 0.37 0.46
Non-Hero 0.46 0.6 0.83
TABLE II: The “All” rows of this table show the RQ2 results. The other rows show the RQ3 results. The table summarizes Fig. 8, Fig. 9, Fig. 10 and Fig. 11. The key feature of this table is that the bug injection distribution barely changes after stratifying the data according to project size.
Project Size   Developer Code Interaction Graph   Developer Social Interaction Graph   Common in Both   (Number of Projects Collected)
Small 308 203 203
Medium 367 329 329
Large 652 641 639
TABLE III: Project counts for different project sizes, for the developer code interaction graph and the developer social interaction graph.
Fig. 10: RQ3 results (Code Interaction): Percentage of bugs introduced by hero and non-hero developers when projects are divided by team size. A visual comparison of this chart with Figure 8 shows a very similar pattern.
Fig. 11: RQ3 results (Social Interaction): Percentage of bugs introduced by hero and non-hero developers when projects are divided by team size. A visual comparison of this chart with Figure 9 shows a very similar pattern.

4.3 RQ3: Does team size alter the above results?

Recall that we ask this question since, when discussing this work with colleagues, we are often asked if heroes are less/more important to smaller/larger projects. In order to study the effect of team size, we apply the advice of Gautam et al. [99] who divided projects into three categories:

  • Small: A project is considered small if the number of developers is greater than 8 but less than 15.

  • Medium: A project is considered medium if the number of developers is greater than 15 but less than 30.

  • Large: A project is considered large if the number of developers is greater than 30.

As shown in Figure 10 and Figure 11 and summarized in Table II, the bug injection distributions of heroes and non-heroes are barely changed after stratifying the data according to project size. Hence, when discussing the external validity of these conclusions, we need not explore issues of team size and hero prevalence.

5 Discussion

5.1 The Herbsleb Hypothesis (and Analogs)

We find it insightful to consider the above results in the context of the Herbsleb hypothesis [100]. In his ICSE’14 keynote, Herbsleb described coding as a socio-technical process in which code and humans interact. According to what we call the Herbsleb hypothesis, the following anti-pattern is a strong predictor for defects (a small sketch of this check appears below):

  • If two code sections communicate…

  • But the programmers of those two sections do not…

  • Then those code sections are more likely to be buggy.

To say that another way, coding is a social process and better code arises from better social interactions.
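
The anti-pattern can be phrased as a simple query over the two graphs of Section 3.2: find pairs of developers whose code interacts although they never communicate. A minimal sketch, assuming developer identities are aligned across both graphs:

```python
def socio_technical_gaps(code_graph, social_graph):
    """Pairs of developers whose code interacts (an edge in the code interaction
    graph) but who never communicate (no edge in the social interaction graph) --
    the anti-pattern the Herbsleb hypothesis associates with defects."""
    return [(a, b) for a, b in code_graph.edges()
            if not social_graph.has_edge(a, b)]
```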

Many other researchers offer conclusions analogous to the Herbsleb hypothesis. Developer communication/interaction is often cited as one of the most important factors for successful software development [101, 102, 103]. Many researchers have shown that successful communication between developers, and adequate knowledge about the system, play a key role in successful software development [104, 105, 106]. As reported as early as 1975 by Brooks in “The Mythical Man-Month” [107], communication failure can lead to coordination problems and a lack of system change knowledge within projects.

The usual response to the above argument is to improve communication by “smoothing it out”, i.e. by deprecating heroes since, it is argued, that encourages more communication across an entire project [1, 2, 3, 4, 5].

The results of the last section suggest that it is time to explore another response: the best way to reduce communication overhead and to decrease defects is to centralize the communicators. In our data, commits with lower defects come from the small number of hero developers who have learned how to talk to more people. Hence, we would encourage more research into better methods for rapid, high-volume, communication in a one-to-many setting (where the “one” is the hero and the “many” are everyone else).

5.2 Chief Programmer

One strange feature of our results is that what is old is now new: our results (that heroes are important) echo a decades-old concept. In 1975, Fred Brooks wrote of “surgical teams” and the “chief programmer” [108]. He argued that:

  • Much as a surgical team during surgery is led by one surgeon performing the most critical work, while directing the team to assist with less critical parts,

  • so too should software projects be led by one “chief programmer” who develops the critical system components while the rest of the team provides what is needed at the right time.

Brooks conjectured that “good” programmers are generally much more productive than mediocre ones. This can be seen in our results: hero programmers are much more productive and less likely to introduce bugs into the codebase. Heroes are born when developers become so skilled at what they do that they assume a central position in a project. In our view, organizations need to acknowledge their dependency on such heroes, perhaps altering their human resource policies to manage these people more effectively and retain them.

6 Threats to Validity

6.1 Sampling Bias

Our conclusions are based on the 1100+ open source GitHub projects with which we started this analysis. It is possible that different initial projects would have led to different conclusions. That said, our initial sample is very large, so we have some confidence that this sample represents an interesting range of projects.

6.2 Evaluation Bias

In RQ1, RQ2 and RQ3, we said that heroes are prevalent and are responsible for far fewer bug introductions than non-hero developers. It is possible that, using other metrics (e.g., whether heroes reduce productivity by becoming bottlenecks), a difference would appear between these different kinds of projects. But measuring personnel only by how fast releases are made or issues are fixed may not be a good indicator of the effects of having heroes on a team. This is a matter that needs to be explored in future research.

6.3 Construct Validity

At various places in this report, we made engineering decisions about (e.g.) team size; and (e.g.) what constitutes a “hero” project. While those decisions were made using advice from the literature (e.g. [99]), we acknowledge that other constructs might lead to different conclusions.

6.4 External Validity

We have relied on a natural language processor to analyze commit messages and mark them as buggy commits. These commit messages are created by the developers and may or may not properly indicate whether they were used to fix bugs. There is also a possibility that the team of a given project uses a different syntax in its commit messages.

Similarly, we have used GitHub issues and comments to create the communication graph. It is possible that some communication was not made through these online forums but through some other medium. To reduce the impact of these problems, we took precautionary steps, e.g., including various tag identifiers of bug fixing commits and spot checking projects regarding communication.

7 Conclusion

The established wisdom in the literature is to deprecate “heroes”, i.e., a small percentage of the staff who are responsible for most of the progress on a project. But, based on a study of 1100+ open source GitHub projects, we assert:

  • Overwhelmingly, most projects are hero projects. This result holds true for small, medium, and large projects.

  • Hero developers are far less likely to introduce bugs into the codebase than their non-hero counterparts. Thus, having heroes in projects significantly affects code quality.

Our empirical results call for a revision of a long-held truism in software engineering. Software heroes are far more common and valuable than suggested by the literature, particularly from a code quality perspective. Organizations should reflect on better ways to find and retain more of these software heroes.

More generally, we would comment that it is time to reflect more on long-held truisms in our field. Heroes are widely deprecated in the literature, yet empirically they are quite beneficial. What other statements in the literature need to be reviewed and revised?

References

  • [1] N. Bier, M. Lovett, and R. Seacord, “An online learning approach to information systems security education,” in Proceedings of the 15th Colloquium for Information Systems Security Education, 2011.
  • [2] B. Boehm, “A view of 20th and 21st century software engineering,” in Proceedings of the 28th international conference on Software engineering, pp. 12–29, ACM, 2006.
  • [3] G. W. Hislop, M. J. Lutz, J. F. Naveda, W. M. McCracken, N. R. Mead, and L. A. Williams, “Integrating agile practices into software engineering courses,” Computer science education, vol. 12, no. 3, pp. 169–185, 2002.
  • [4] S. Morcov, “Complex it projects in education: The challenge,” International Journal of Computer Science Research and Application, vol. 2, pp. 115–125, 2012.
  • [5] T. Wood-Harper and B. Wood, “Multiview as social informatics in action: past, present and future,” Information Technology & People, vol. 18, no. 1, pp. 26–32, 2005.
  • [6] B. Fitzgerald and D. L. Parnas, “Making free/open-source software (f/oss) work better,” in Proceedings do Workshop da Conferência XP2003, Genova, Citeseer, 2003.
  • [7] A. Agrawal, A. Rahman, R. Krishna, A. Sobran, and T. Menzies, “We don’t need another hero?,” Proceedings of the 40th International Conference on Software Engineering Software Engineering in Practice - ICSE-SEIP ’18, 2018.
  • [8] F. Ricca and A. Marchetto, “Are heroes common in floss projects?,” in Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, p. 55, ACM, 2010.
  • [9] G. Robles and J. M. Gonzalez-Barahona, “Contributor turnover in libre software projects,” in IFIP International Conference on Open Source Systems, pp. 273–286, Springer, 2006.
  • [10] A. Capiluppi, J. M. Gonzalez Barahona, and I. Herraiz, “Adapting the “staged model for software evolution” to floss,” 2007.
  • [11] F. P. Brooks, Jr., “The mythical man-month,” SIGPLAN Not., vol. 10, pp. 193–, Apr. 1975.
  • [12] A. Cockburn, Agile software development: the cooperative game. Pearson Education, 2006.
  • [13] J. Bach, “Enough about process: what we need are heroes,” IEEE Software, vol. 12, no. 2, pp. 96–98, 1995.
  • [14] A. Mockus, R. T. Fielding, and J. D. Herbsleb, “Two case studies of open source software development: Apache and mozilla,” ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 11, no. 3, pp. 309–346, 2002.
  • [15] V. R. Basili, L. C. Briand, and W. L. Melo, “A validation of object-oriented design metrics as quality indicators,” IEEE Transactions on Software Engineering, vol. 22, pp. 751–761, Oct 1996.
  • [16] W. Li and S. Henry, “Object-oriented metrics that predict maintainability,” Journal of Systems and Software, vol. 23, no. 2, pp. 111 – 122, 1993. Object-Oriented Software.
  • [17] T. Zimmermann, N. Nagappan, H. Gall, E. Giger, and B. Murphy, “Cross-project defect prediction: A large scale experiment on data vs. domain vs. process,” in Proceedings of the the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering, ESEC/FSE ’09, (New York, NY, USA), pp. 91–100, ACM, 2009.
  • [18] K. El Emam, S. Benlarbi, N. Goel, and S. N. Rai, “The confounding effect of class size on the validity of object-oriented metrics,” IEEE Transactions on Software Engineering, vol. 27, pp. 630–650, July 2001.
  • [19] L. C. Briand, J. Wust, S. V. Ikonomovski, and H. Lounis, “Investigating quality factors in object-oriented designs: an industrial case study,” in Proceedings of the 1999 International Conference on Software Engineering (IEEE Cat. No.99CB37002), pp. 345–354, May 1999.
  • [20] C. Bird, N. Nagappan, B. Murphy, H. Gall, and P. Devanbu, “Don’t touch my code!: Examining the effects of ownership on software quality,” in Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering, ESEC/FSE ’11, (New York, NY, USA), pp. 4–14, ACM, 2011.
  • [21] P. Bhattacharya, M. Iliofotou, I. Neamtiu, and M. Faloutsos, “Graph-based analysis and prediction for software evolution,” in Proceedings of the 34th International Conference on Software Engineering, ICSE ’12, (Piscataway, NJ, USA), pp. 419–429, IEEE Press, 2012.
  • [22] R. M. Bell, T. J. Ostrand, and E. J. Weyuker, “The limited impact of individual developer data on software defect prediction,” Empirical Software Engineering, vol. 18, pp. 478–505, Jun 2013.
  • [23] C. Kumar and D. K. Yadav, “Software defects estimation using metrics of early phases of software development life cycle,” International Journal of System Assurance Engineering and Management, vol. 8, pp. 2109–2117, Dec 2017.
  • [24] J. Jiarpakdee, C. Tantithamthavorn, and A. E. Hassan, “The impact of correlated metrics on defect models,” 2018.
  • [25] J. Ludwig, S. Xu, and F. Webber, “Static software metrics for reliability and maintainability,” in 2018 IEEE/ACM International Conference on Technical Debt (TechDebt), pp. 53–54, May 2018.
  • [26] R. Jayanthi and L. Florence, “Software defect prediction techniques using metrics based on neural network classifier,” Cluster Computing, Feb 2018.
  • [27] D. L. Gupta and K. Saxena, “Software bug prediction using object-oriented metrics,” Sādhanā, vol. 42, pp. 655–669, May 2017.
  • [28] C. F. Kemerer and M. C. Paulk, “The impact of design and code reviews on software quality: An empirical study based on psp data,” IEEE transactions on software engineering, vol. 35, no. 4, pp. 534–550, 2009.
  • [29] F. Zhang, A. Mockus, I. Keivanloo, and Y. Zou, “Towards building a universal defect prediction model,” in Proceedings of the 11th Working Conference on Mining Software Repositories, MSR 2014, (New York, NY, USA), pp. 182–191, ACM, 2014.
  • [30] T. Wolf, A. Schroter, D. Damian, and T. Nguyen, “Predicting build failures using social network analysis on developer communication,” in Proceedings of the 31st International Conference on Software Engineering, pp. 1–11, IEEE Computer Society, 2009.
  • [31] C. R. de Souza, D. Redmiles, L.-T. Cheng, D. Millen, and J. Patterson, “Sometimes you need to see through walls: a field study of application programming interfaces,” in Proceedings of the 2004 ACM conference on Computer supported cooperative work, pp. 63–71, ACM, 2004.
  • [32] R. E. Grinter, J. D. Herbsleb, and D. E. Perry, “The geography of coordination: Dealing with distance in r&d work,” in Proceedings of the international ACM SIGGROUP conference on Supporting group work, pp. 306–315, ACM, 1999.
  • [33] J. D. Herbsleb and R. E. Grinter, “Splitting the organization and integrating the code: Conway’s law revisited,” in Proceedings of the 1999 International Conference on Software Engineering (IEEE Cat. No. 99CB37002), pp. 85–95, IEEE, 1999.
  • [34] M. Cataldo and J. D. Herbsleb, “Coordination breakdowns and their impact on development productivity and software failures,” IEEE Transactions on Software Engineering, vol. 39, no. 3, pp. 343–360, 2013.
  • [35] M. Cataldo, M. Bass, J. D. Herbsleb, and L. Bass, “On coordination mechanisms in global software development,” in International Conference on Global Software Engineering (ICGSE 2007), pp. 71–80, IEEE, 2007.
  • [36] K. Peterson, “The github open source development process,” 12 2013.
  • [37] S. Koch and G. Schneider, “Effort, co-operation and co-ordination in an open source software project: Gnome,” Information Systems Journal, vol. 12, no. 1, pp. 27–42, 2002.
  • [38] S. Krishnamurthy, “Cave or community? an empirical examination of 100 mature open source projects (originally published in volume 7, number 6, june 2002),” First Monday, vol. 0, no. 0, 2005.
  • [39] G. Robles, J. M. Gonzalez-Barahona, and I. Herraiz, “Evolution of the core team of developers in libre software projects,” in Mining Software Repositories, 2009. MSR’09. 6th IEEE International Working Conference on, pp. 167–170, IEEE, 2009.
  • [40] L. Augustin, D. Bressler, and G. Smith, “Accelerating software development through collaboration,” in Proceedings of the 24th International Conference on Software Engineering. ICSE 2002, pp. 559–563, May 2002.
  • [41] J. Whitehead, “Collaboration in software engineering: A roadmap,” in Future of Software Engineering (FOSE ’07), pp. 214–225, May 2007.
  • [42] L. F. Dias, I. Steinmacher, G. Pinto, D. A. da Costa, and M. Gerosa, “How does the shift to github impact project collaboration?,” in Software Maintenance and Evolution (ICSME), 2016 IEEE International Conference on, pp. 473–477, IEEE, 2016.
  • [43] V. Cosentino, J. L. C. Izquierdo, and J. Cabot, “A systematic mapping study of software development with github,” IEEE Access, vol. 5, pp. 7173–7192, 2017.
  • [44] A. Moniruzzaman and D. S. A. Hossain, “Comparative study on agile software development methodologies,” arXiv preprint arXiv:1307.3356, 2013.
  • [45] A. Rastogi, N. Nagappan, and P. Jalote, Empirical analyses of software contributor productivity. PhD thesis, IIIT-Delhi, 2017.
  • [46] O. Jarczyk, B. Gruszka, S. Jaroszewicz, L. Bukowski, and A. Wierzbicki, “Github projects. quality analysis of open-source software,” in International Conference on Social Informatics, pp. 80–94, Springer, 2014.
  • [47] T. F. Bissyandé, D. Lo, L. Jiang, L. Réveillere, J. Klein, and Y. Le Traon, “Got issues? who cares about it? a large scale investigation of issue trackers from github,” in Software Reliability Engineering (ISSRE), 2013 IEEE 24th International Symposium on, pp. 188–197, IEEE, 2013.
  • [48] D. Athanasiou, A. Nugroho, J. Visser, and A. Zaidman, “Test code quality and its relation to issue handling performance,” IEEE Transactions on Software Engineering, vol. 40, no. 11, pp. 1100–1125, 2014.
  • [49] M. Gupta, A. Sureka, and S. Padmanabhuni, “Process mining multiple repositories for software defect resolution from control and organizational perspective,” in Proceedings of the 11th Working Conference on Mining Software Repositories, pp. 122–131, ACM, 2014.
  • [50] A. Reyes López, “Analyzing github as a collaborative software development platform: A systematic review,” 2017.
  • [51] L. C. Briand, J. Wüst, J. W. Daly, and D. V. Porter, “Exploring the relationships between design measures and software quality in object-oriented systems,” Journal of Systems and Software, vol. 51, no. 3, pp. 245 – 273, 2000.
  • [52] N. Nagappan, T. Ball, and A. Zeller, “Mining metrics to predict component failures,” in Proceedings of the 28th International Conference on Software Engineering, ICSE ’06, (New York, NY, USA), pp. 452–461, ACM, 2006.
  • [53] R. Subramanyam and M. S. Krishnan, “Empirical analysis of ck metrics for object-oriented design complexity: implications for software defects,” IEEE Transactions on Software Engineering, vol. 29, pp. 297–310, April 2003.
  • [54] T. Zimmermann, R. Premraj, and A. Zeller, “Predicting defects for eclipse,” in Proceedings of the Third International Workshop on Predictor Models in Software Engineering, PROMISE ’07, (Washington, DC, USA), pp. 9–, IEEE Computer Society, 2007.
  • [55] T. J. Ostrand, E. J. Weyuker, and R. M. Bell, “Predicting the location and number of faults in large software systems,” IEEE Transactions on Software Engineering, vol. 31, pp. 340–355, April 2005.
  • [56] K. E. Emam, W. Melo, and J. C. Machado, “The prediction of faulty classes using object-oriented design metrics,” Journal of Systems and Software, vol. 56, no. 1, pp. 63 – 75, 2001.
  • [57] M. Cartwright and M. Shepperd, “An empirical investigation of an object-oriented software system.,” Software Engineering, IEEE Transactions on, vol. 26, pp. 786 – 796, 09 2000.
  • [58] K. O. Elish and M. O. Elish, “Predicting defect-prone software modules using support vector machines,” Journal of Systems and Software, vol. 81, no. 5, pp. 649–660, 2008.
  • [59] “An empirical study on object-oriented metrics,” in Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403), pp. 242–249, Nov 1999.
  • [60] Y. Shin, A. Meneely, L. Williams, and J. A. Osborne, “Evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities,” IEEE Transactions on Software Engineering, vol. 37, pp. 772–787, Nov 2011.
  • [61] T. Menzies, Z. Milton, B. Turhan, B. Cukic, Y. Jiang, and A. Bener, “Defect prediction from static code features: current results, limitations, new approaches,” Automated Software Engineering, vol. 17, pp. 375–407, Dec 2010.
  • [62] Q. Song, Z. Jia, M. Shepperd, S. Ying, and J. Liu, “A general software defect-proneness prediction framework,” IEEE Transactions on Software Engineering, vol. 37, pp. 356–370, May 2011.
  • [63] M. Jureczko and L. Madeyski, “Towards identifying software project clusters with regard to defect prediction,” in Proceedings of the 6th International Conference on Predictive Models in Software Engineering, PROMISE ’10, (New York, NY, USA), pp. 9:1–9:10, ACM, 2010.
  • [64] N. Fenton, M. Neil, W. Marsh, P. Hearty, D. Marquez, P. Krause, and R. Mishra, “Predicting software defects in varying development lifecycles using bayesian nets,” Information and Software Technology, vol. 49, no. 1, pp. 32 – 43, 2007. Most Cited Journal Articles in Software Engineering - 2000.
  • [65] M. Pinzger, N. Nagappan, and B. Murphy, “Can developer-module networks predict failures?,” in SIGSOFT FSE, 2008.
  • [66] A. Meneely, L. A. Williams, W. Snipes, and J. A. Osborne, “Predicting failures with developer networks and social network analysis,” in SIGSOFT FSE, 2008.
  • [67] T. Wolf, A. Schroter, D. Damian, and T. Nguyen, “Predicting build failures using social network analysis on developer communication,” in Proceedings of the 31st International Conference on Software Engineering, ICSE ’09, (Washington, DC, USA), pp. 1–11, IEEE Computer Society, 2009.
  • [68] S. Christley and G. Madey, “A topological analysis of the open source software development community,” in Proceedings of the 38th Annual Hawaii International Conference on System Sciences, pp. 198a–198a, Jan 2005.
  • [69] C. Bird, N. Nagappan, H. Gall, B. Murphy, and P. Devanbu, “Putting it all together: Using socio-technical networks to predict failures,” in 2009 20th International Symposium on Software Reliability Engineering, pp. 109–119, Nov 2009.
  • [70] A. B. Binkley and S. R. Schach, “Validation of the coupling dependency metric as a predictor of run-time failures and maintenance measures,” in Proceedings of the 20th International Conference on Software Engineering, pp. 452–455, April 1998.
  • [71] E. J. Weyuker, T. J. Ostrand, and R. M. Bell, “Do too many cooks spoil the broth? using the number of developers to enhance defect prediction models,” Empirical Software Engineering, vol. 13, pp. 539–559, Oct 2008.
  • [72] A. Okutan and O. T. Yıldız, “Software defect prediction using bayesian networks,” Empirical Software Engineering, vol. 19, pp. 154–181, Feb 2014.
  • [73] P. Knab, M. Pinzger, and A. Bernstein, “Predicting defect densities in source code files with decision tree learners,” in Proceedings of the 2006 International Workshop on Mining Software Repositories, MSR ’06, (New York, NY, USA), pp. 119–125, ACM, 2006.
  • [74] P. He, B. Li, X. Liu, J. Chen, and Y. Ma, “An empirical study on software defect prediction with a simplified metric set,” Information and Software Technology, vol. 59, pp. 170 – 190, 2015.
  • [75] A. Majumder, S. Datta, and K. Naidu, “Capacitated team formation problem on social networks,” Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’12, 2012.
  • [76] J. Ratzinger, T. Sigmund, and H. C. Gall, “On the relation of refactorings and software defect prediction,” in Proceedings of the 2008 International Working Conference on Mining Software Repositories, MSR ’08, (New York, NY, USA), pp. 35–38, ACM, 2008.
  • [77] S. McIntosh, Y. Kamei, B. Adams, and A. E. Hassan, “An empirical study of the impact of modern code review practices on software quality,” Empirical Software Engineering, vol. 21, pp. 2146–2189, Oct 2016.
  • [78] G. Madey, V. Freeh, and R. Tynan, “The open source software development phenomenon: An analysis based on social network theory,” Americas Conference on Information Systems, 2002.
  • [79] E. Kupiainen, M. V. Mäntylä, and J. Itkonen, “Using metrics in agile and lean software development – a systematic literature review of industrial studies,” Information and Software Technology, vol. 62, pp. 143 – 163, 2015.
  • [80] L. Madeyski and M. Jureczko, “Which process metrics can significantly improve defect prediction models? an empirical study,” Software Quality Journal, vol. 23, pp. 393–422, Sep 2015.
  • [81] A. Lima, L. Rossi, and M. Musolesi, “Coding together at scale: Github as a collaborative social network,” 2014.
  • [82] T. J. Ostrand, E. J. Weyuker, and R. M. Bell, “Programmer-based fault prediction,” in Proceedings of the 6th International Conference on Predictive Models in Software Engineering, PROMISE ’10, (New York, NY, USA), pp. 19:1–19:10, ACM, 2010.
  • [83] R. Abreu and R. Premraj, “How developer communication frequency relates to bug introducing changes,” in Proceedings of the Joint International and Annual ERCIM Workshops on Principles of Software Evolution (IWPSE) and Software Evolution (Evol) Workshops, IWPSE-Evol ’09, (New York, NY, USA), pp. 153–158, ACM, 2009.
  • [84] A. Jermakovics, A. Sillitti, and G. Succi, “Mining and visualizing developer networks from version control systems,” in Proceedings of the 4th International Workshop on Cooperative and Human Aspects of Software Engineering, CHASE ’11, (New York, NY, USA), pp. 24–31, ACM, 2011.
  • [85] G. Concas, M. Marchesi, A. Murgia, and R. Tonelli, “An empirical study of social networks metrics in object-oriented software,” Adv. Soft. Eng., vol. 2010, pp. 4:1–4:21, Jan. 2010.
  • [86] S. Biçer, A. B. Bener, and B. Çağlayan, “Defect prediction using social network analysis on issue repositories,” in Proceedings of the 2011 International Conference on Software and Systems Process, ICSSP ’11, (New York, NY, USA), pp. 63–71, ACM, 2011.
  • [87] V. Udaya B. Challagulla, F. B. Bastani, I.-L. Yen, and R. A. Paul, “Empirical assessment of machine learning based software defect prediction techniques,” International Journal on Artificial Intelligence Tools, vol. 17, pp. 389–400, 04 2008.
  • [88] F. Zhang, A. E. Hassan, S. McIntosh, and Y. Zou, “The use of summation to aggregate software metrics hinders the performance of defect prediction models,” IEEE Transactions on Software Engineering, vol. 43, pp. 476–491, May 2017.
  • [89] M. Prasad, L. Florence, and A. Arya, “A study on software metrics based software defect prediction using data mining and machine learning techniques,” International Journal of Database Theory and Application, vol. 7, pp. 179–190, 06 2015.
  • [90] T. Ravi Kumar, T. Srinivasa Rao, and S. Bathini, “A predictive approach to estimate software defects density using weighted artificial neural networks for the given software metrics,” in Smart Intelligent Computing and Applications (S. C. Satapathy, V. Bhateja, and S. Das, eds.), (Singapore), pp. 449–457, Springer Singapore, 2019.
  • [91] K. Prasad, M. Divya, and N. Mangala, “Statistical analysis of metrics for software quality improvement,” 2018.
  • [92] S. Dahab, E. F. S. Balocchi, S. Maag, A. R. Cavalli, and W. Mallouli, “Enhancing software development process quality based on metrics correlation and suggestion,” in ICSOFT 2018: 13th International Conference on Software Technologies, pp. 120–131, Scitepress, 2018.
  • [93] T. J. Vijay, D. M. G. Chand, and D. H. Done, “Software quality metrics in quality assurance to study the impact of external factors related to time,” International Journal of Advanced Research in Computer Science and Software Engineering, vol. 7, no. 1, 2017.
  • [94] J. Suzuki and E. D. Canedo, “Interaction design process oriented by metrics,” in HCI International 2018 – Posters’ Extended Abstracts (C. Stephanidis, ed.), (Cham), pp. 290–297, Springer International Publishing, 2018.
  • [95] M. Xenos, “Software metrics and measurements,” Encyclopedia of E-Commerce, E-Government and Mobile Commerce, pp. 1029–1036, 01 2006.
  • [96] E. Kalliamvakou, G. Gousios, K. Blincoe, L. Singer, D. M. German, and D. Damian, “The promises and perils of mining github,” in Proceedings of the 11th Working Conference on Mining Software Repositories, MSR 2014, (New York, NY, USA), pp. 92–101, ACM, 2014.
  • [97] N. Munaiah, S. Kroh, C. Cabrey, and M. Nagappan, “Curating github for engineered software projects,” Empirical Software Engineering, vol. 22, pp. 3219–3253, Dec 2017.
  • [98] C. Williams and J. Spacco, “Szz revisited: verifying when changes induce fixes,” in Proceedings of the 2008 workshop on Defects in large software systems, pp. 32–36, ACM, 2008.
  • [99] A. Gautam, S. Vishwasrao, and F. Servant, “An empirical study of activity, popularity, size, testing, and stability in continuous integration,” in Proceedings of the 14th International Conference on Mining Software Repositories, pp. 495–498, IEEE Press, 2017.
  • [100] J. Herbsleb, “Socio-technical coordination (keynote),” in Companion Proceedings of the 36th International Conference on Software Engineering, ICSE Companion 2014, (New York, NY, USA), pp. 1–1, ACM, 2014.
  • [101] A. Cockburn and J. Highsmith, “Agile software development, the people factor,” Computer, vol. 34, pp. 131–133, Nov 2001.
  • [102] R. E. Kraut and L. A. Streeter, “Coordination in software development,” Commun. ACM, vol. 38, pp. 69–81, Mar. 1995.
  • [103] J. D. Herbsleb and A. Mockus, “An empirical study of speed and communication in globally distributed software development,” IEEE Transactions on Software Engineering, vol. 29, pp. 481–494, June 2003.
  • [104] D. Tesch, M. G. Sobol, G. Klein, and J. J. Jiang, “User and developer common knowledge: Effect on the success of information system development projects,” International Journal of Project Management, vol. 27, no. 7, pp. 657 – 664, 2009.
  • [105] T. Girba, A. Kuhn, M. Seeberger, and S. Ducasse, “How developers drive software evolution,” vol. 2005, pp. 113– 122, 10 2005.
  • [106] T. C. Lethbridge, “What knowledge is important to a software professional?,” Computer, vol. 33, pp. 44–50, May 2000.
  • [107] F. P. Brooks Jr, The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition, 2/E. Pearson Education India, 1995.
  • [108] F. P. Brooks, “The mythical man-month,” Datamation, vol. 20, no. 12, pp. 44–52, 1974.