Continuous integration (CI) tools integrate code changes by automatically compiling, building, and executing test cases upon submission of code changes (Duvall et al., 2007). In recent years, usage of CI tools has become increasingly popular both for open source software (OSS) projects (Beller et al., 2017; Hilton et al., 2016; Vasilescu et al., 2015) and for proprietary projects (Stahl et al., 2017).
Our industrial partner adopted CI to improve their software development process. Their expectation was that, similar to OSS projects (Zhao et al., 2017; Vasilescu et al., 2015), CI would positively influence the resolution of bugs and issues for their projects. They also expected collaboration to increase upon adoption of CI. As one of the primary Extreme Programming (XP) practices (Beck, 2000), CI is expected to benefit collaboration amongst team members (Sharp and Robinson, 2008).
We conduct an empirical study to investigate whether our industrial partner's expectations were fulfilled. Such a study can be beneficial in the following ways: (i) to quantify whether CI benefits projects with respect to bug and issue resolution, along with collaboration; and (ii) to derive lessons that industry practitioners should keep in mind when using CI. Our study uses 150 OSS and 123 proprietary projects to quantify the influence of CI on bug resolution, collaboration, and issue resolution. We answer the following research questions:
RQ1: Does adoption of continuous integration influence commit patterns? Commit frequency and commit size significantly increase for OSS projects after CI adoption, but not for our set of proprietary projects.
RQ2: How does adoption of continuous integration influence collaboration amongst team members? After adopting CI, collaboration significantly increases for both OSS and our set of proprietary projects. The increase in collaboration is more observable for OSS projects than for proprietary projects.
RQ3: How does adoption of continuous integration influence bug and issue resolution? Significantly more bugs and issues are resolved after adoption of CI for OSS projects, but not for our set of proprietary projects.
In summary, we observe usage of CI to be beneficial for OSS projects but not for our set of proprietary projects. For proprietary projects, we acknowledge that there may be benefits to CI which are not captured by our study, for example, cultural benefits in adopting CI tools. Findings from our paper can help industry practitioners revise their expectations about the benefits of CI. Our paper may also help to identify possible strategies to fully reap the benefits of CI.
We first provide a brief background on CI, then describe prior research work related to CI.
2.1. About Continuous Integration (CI)
CI is identified as one of the primary practices to implement XP (Beck, 2000). According to Duvall et al. (Duvall et al., 2007), CI originated from the imperatives of agility, in order to respond to customer requests quickly. When building the source code, CI tools can execute unit and integration tests to ensure the quality of the integrated source code. If the tests do not pass, CI tools can be customized to give feedback on the submitted code changes. Even though the concept of CI was introduced in 2006, initial usage of CI was not popular amongst practitioners (Deshpande and Riehle, 2008). However, since 2011, with the advent of CI tools such as Travis CI (CI, 2017), usage of CI has increased (Hilton et al., 2016).
When a software team adopts CI, the team has to follow a set of practices (Duvall et al., 2007). According to the CI methodology, all programmers have to check in their code daily, and these changes are integrated daily (Duvall et al., 2007). Unlike traditional methodologies such as waterfall, in CI programmers get instant feedback on their code via build results. To implement CI, the team must maintain its source code in a version control system (VCS), and integrate the VCS with the CI tool so that builds are triggered upon submission of each commit (Duvall et al., 2007). Figure 1 provides an example of how a typical CI process works. Programmers make commits in a repository maintained by a VCS such as GitHub; these commits trigger CI jobs on a CI tool such as Travis CI, which builds the code, executes tests, and produces build results. These build results are provided to the programmers as feedback on their submitted code changes, for example through e-mails or phone alerts (Duvall et al., 2007). Based on the build results, programmers make necessary changes to their code and repeat the CI process.
2.2. Related Work
Our paper is closely related to prior research that has investigated usage of CI tools. We briefly describe this prior work below.
Adoption: Hilton et al. (Hilton et al., 2016) mined OSS projects hosted on GitHub. They observed that most popular projects use CI, and reported that the median time of CI adoption is one year. They also advocated for wide-spread adoption of CI, as CI correlates with several positive outcomes. However, adoption of CI is non-trivial, as suggested by other prior work; for example, Olsson et al. (Olsson et al., 2012) identified lack of automated testing frameworks as a key barrier to transitioning from a traditional software process to a CI-based one. Hilton et al. (Hilton et al., 2017) surveyed industrial practitioners and identified three trade-offs in adopting CI: assurance, flexibility, and security. Rahman et al. (Rahman et al., 2017) observed that adoption of CI is not wide-spread amongst practitioners. They investigated which diffusion of innovation (DOI) factors influence adoption of CI tools, and reported four: relative advantage, compatibility, complexity, and education.
Usage: Beller et al. (Beller et al., 2017) collected and analyzed Java and Ruby-based projects from GitHub, and synthesized the nature of build and test attributes exhibited amongst OSS projects that use CI. Vasilescu et al. (Vasilescu et al., 2015) analyzed OSS GitHub projects that use Travis CI, and reported that adoption of CI increases productivity for OSS projects. Zhao et al. (Zhao et al., 2017) mined OSS GitHub projects, and investigated whether software development practices, such as commit frequency, commit size, and pull request handling, change after adoption of CI.
The above-mentioned findings highlight the community’s interest in how CI is being used in software projects. From the above-mentioned prior work, we can list the following as exemplars of the expected benefits of adopting CI:
Note that all of these findings are derived from OSS projects. With respect to development process, structure, and complexity, proprietary projects differ from OSS projects (Paulson et al., 2004; Robinson and Francis, 2010), which motivates our research study. Hence, for the rest of this paper, we compare the influence of adopting CI within OSS and our set of proprietary projects. We consider the following attributes of software development: bug resolution, collaboration amongst team members, commit patterns, and issue resolution.
In this section, we describe our methodology to filter datasets, followed by metrics and statistical measurements that we use to answer our research questions.
Projects hosted on GitHub provide researchers the opportunity to extract necessary project information, such as commits and issues (Kalliamvakou et al., 2014; Bird et al., 2009). Unfortunately, these projects can have short development activity, or may not be related to software development at all (Kalliamvakou et al., 2014; Bird et al., 2009). Hence, we need to curate a set of projects that contain sufficient software development data for analysis. We apply a filtering strategy that can be described as follows:
Filter-1 (General): As the first step of filtering, we identify projects that contain sufficient software development information, using the criteria of prior research (Agrawal et al., 2018; Krishna et al., 2018). By applying these filtering criteria, we mitigate the limitations of mining GitHub projects stated by prior researchers (Kalliamvakou et al., 2014; Bird et al., 2009).
Filter-2 (CI): We use the second filter to identify projects that have adopted CI tools.
CI Tool Usage: The project must use at least one of the following tools: Circle CI, Jenkins, or Travis CI. We select these tools as they are frequently used in GitHub projects (Hilton et al., 2016). We determine whether a project uses Circle CI, Jenkins, or Travis CI by inspecting the existence of 'circle.yml', 'jenkins.yml', and 'travis.yml', respectively, in the root directory of the project.
Start Date: The project must start on or after January 2014. From our initial exploration, we observe that 90% of the collected proprietary projects start on or after 2014.
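The CI-tool detection step above can be sketched as follows. This is a minimal sketch rather than the authors' actual script: the function and constant names are ours, and the configuration-file names simply mirror those listed above (note that, in practice, Travis CI reads '.travis.yml' and Jenkins pipelines typically use a 'Jenkinsfile').

```python
import os

# Config-file names per CI tool, as listed in the text; each is checked
# in the root directory of the cloned repository.
CI_CONFIG_FILES = {
    "Circle CI": "circle.yml",
    "Jenkins": "jenkins.yml",
    "Travis CI": "travis.yml",
}

def detect_ci_tools(repo_root):
    """Return the CI tools whose configuration file exists in repo_root."""
    return sorted(
        tool
        for tool, config in CI_CONFIG_FILES.items()
        if os.path.isfile(os.path.join(repo_root, config))
    )
```

A project passes the CI filter when `detect_ci_tools` returns a non-empty list for its root directory.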
We use the metrics presented in Table 1 to answer our research questions. The 'Metric Name' column presents the metrics, and the 'Equation' column presents the corresponding equation for each metric.
|Metric Name|Equation|Brief Description|
|Proportion of Closed Issues| |Count of closed issues per month|
|Normalized Proportion of Closed Issues| |Proportion of Closed Issues, normalized by time|
|Proportion of Closed Bugs| |Count of closed bugs per month|
|Normalized Proportion of Closed Bugs| |Proportion of Closed Bugs, normalized by time|
|Count of Non-Merge Commits| |Count of non-merge commits per month, normalized by the number of programmers|
|Normalized Count of Commits| |Count of Non-Merge Commits, normalized by time|
|Commit Size| |Total lines of code added and deleted per commit within a month|
|Normalized Commit Size| |Commit Size, normalized by time|
|Median In-degree| |In-degree corresponds to collaboration between the programmers; the higher the median in-degree, the higher the connection between the nodes (Bhattacharya et al., 2012), indicating more collaboration between the programmers|
|Normalized Median In-degree| |Median In-degree, normalized by time|
According to Table 1, the metrics Normalized Proportion of Closed Issues, Normalized Proportion of Closed Bugs, Normalized Count of Commits, Normalized Commit Size, and Normalized Median In-degree are normalized by the count of months before or after adoption of CI for a project. For example, if the number of months before and after adoption of CI is, respectively, 20 and 30, then we divide by 20 to calculate the project's normalized metric before adoption of CI, and by 30 to calculate it after adoption of CI.
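This normalization can be sketched with a short function (the function name is ours; we assume the normalized metric is the aggregate of the raw per-month values divided by the length of the observation window in months):

```python
def normalize_by_time(monthly_values, months):
    """Divide the aggregate of a metric by the observation window length.

    monthly_values: per-month raw values of a metric (e.g., closed issues
    per month) observed before or after CI adoption.
    months: the count of months before or after adoption of CI.
    """
    if months <= 0:
        raise ValueError("observation window must be non-empty")
    return sum(monthly_values) / months

# Example mirroring the text: a 20-month window before CI adoption and a
# 30-month window after it are normalized independently.
before = normalize_by_time([3, 5, 4], 20)
after = normalize_by_time([6, 7, 8], 30)
```

The before- and after-adoption windows are normalized separately, so metrics from windows of different lengths become comparable.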
Figure 2 provides a hypothetical example of calculating the 'Median In-degree' metric. We observe a list of programmers who author and modify two files. Using the modification information, we construct a graph, as shown in Figure 2b. The constructed graph has three nodes (P1, P2, and P3) and three edges, and the in-degree for each of P1, P2, and P3 is one. Therefore, the median in-degree for the collaboration graph is one.
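The hypothetical example can be reproduced with a short sketch (names are ours; we represent the collaboration graph as directed edges, assuming an edge from one programmer to another denotes that the first modified a file authored by the second):

```python
from statistics import median

def median_in_degree(nodes, edges):
    """Median in-degree of a directed collaboration graph.

    nodes: iterable of programmer identifiers.
    edges: (src, dst) pairs; an edge means src modified a file
    authored by dst, i.e., the two programmers collaborated.
    """
    in_degree = {node: 0 for node in nodes}
    for _, dst in edges:
        in_degree[dst] += 1
    return median(in_degree.values())

# Figure 2's hypothetical graph: three nodes, three edges, each node
# with in-degree one, so the median in-degree is one.
nodes = ["P1", "P2", "P3"]
edges = [("P1", "P2"), ("P2", "P3"), ("P3", "P1")]
print(median_in_degree(nodes, edges))  # 1
```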
3.3. Statistical Measurements
We use three statistical measures to compare the metrics of interest before and after adoption of CI: effect size using Cliff's Delta (Cliff, 1993), the Mann-Whitney U test (Mann and Whitney, 1947), and the 'delta' measure. Both the Mann-Whitney U test and Cliff's Delta are non-parametric. The Mann-Whitney U test states whether one distribution is significantly larger/smaller than the other, whereas effect size using Cliff's Delta measures how large the difference is. Following convention, we report a distribution to be significantly larger than the other if p < 0.05. We use Romano et al.'s (Romano et al., 2006) recommendations to interpret the observed Cliff's Delta values: the difference between two groups is 'large' if Cliff's Delta is greater than 0.47, 'medium' if between 0.33 and 0.47, 'small' if between 0.14 and 0.33, and 'negligible' if less than 0.14.
We also report 'delta', which is the difference between the median values before and after adoption of CI, expressed as a proportion. The 'delta' measurement quantifies the proportion of increase or decrease after adoption of CI relative to before. As a hypothetical example, for OSS projects, if the median is 10.0 after and 8.5 before adoption of CI, then 'delta' is (10.0 - 8.5)/8.5 ≈ +0.18.
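The effect size and the 'delta' measure can be sketched as follows (a minimal sketch with our own function names; the Mann-Whitney U test itself is available in standard statistics packages, e.g. scipy.stats.mannwhitneyu, and is omitted here; the interpretation thresholds follow the values stated above):

```python
from statistics import median

def cliffs_delta(xs, ys):
    """Cliff's Delta: P(x > y) - P(x < y) over all pairs (Cliff, 1993)."""
    greater = sum(1 for x in xs for y in ys if x > y)
    lesser = sum(1 for x in xs for y in ys if x < y)
    return (greater - lesser) / (len(xs) * len(ys))

def interpret_cliffs_delta(d):
    """Interpretation per Romano et al. (2006), as stated in the text."""
    d = abs(d)
    if d > 0.47:
        return "large"
    if d > 0.33:
        return "medium"
    if d > 0.14:
        return "small"
    return "negligible"

def delta_measure(after, before):
    """Proportional change of medians, after vs. before CI adoption."""
    return (median(after) - median(before)) / median(before)
```

For the hypothetical medians 10.0 (after) and 8.5 (before), `delta_measure` yields roughly +0.18, a 18% increase after adoption of CI.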
Before providing the answers to the research questions, we present summary statistics of the studied projects. Initially, we started with 1,108 OSS projects and 538 proprietary projects. Upon applying Filter-1, we are left with 661 OSS and 171 proprietary projects. As shown in Table 2, after applying Filter-2, we are finally left with 150 OSS and 123 proprietary projects, which we use to answer the three research questions. A brief summary of the filtered projects is presented in Table 3. The commit count per programmer is 24.2 and 46.7, respectively, for OSS and proprietary projects. On average, a programmer changes 141 and 345 files, respectively, for OSS and proprietary projects.
|Filter|OSS|Proprietary|
|CI Tool Usage|448|46|
|Start Date (must start on or after 2014)|63|2|
|Project count after filtering|150|123|
|Attribute|OSS|Proprietary|
|Total Changed Files|1,122,352|728,733|
|Total LOC Added|48,424,888|44,003,385|
|Total LOC Deleted|30,225,543|26,614,230|
4.1. Answer to RQ1: Does adoption of continuous integration influence commit patterns?
Zhao et al. (Zhao et al., 2017) mined OSS GitHub projects, and reported that the frequency of commits increases after adoption of CI. We expect our answers to RQ1 for OSS projects to be consistent with Zhao et al.'s (Zhao et al., 2017) findings. We answer RQ1 by first reporting the frequency of commits before and after adoption of CI. We report the results of the three statistical measures in Table 4, and the box-plots in Figure 3. The 'delta' measure is reported in its own row of Table 4; the 'delta' value for which we observe no significant difference is highlighted in grey.
Our findings indicate that for proprietary projects, programmers are not making more frequent commits after adoption of CI. On the contrary, for OSS projects, programmers are making significantly more commits, confirming findings from prior research (Zhao et al., 2017).
Commit size is another measure we use to answer RQ1. As shown in Table 4, we observe commit size, i.e., churned lines of code per commit, to significantly increase for OSS projects, but not for proprietary projects.
Answer to RQ1: After adoption of CI, normalized commit frequency and commit size significantly increase for our set of OSS projects. For our set of proprietary projects, we do not observe CI to have an influence on normalized commit frequency or commit size.
| |Commit Count (OSS)|Commit Count (Proprietary)|Commit Size (OSS)|Commit Size (Proprietary)|
|Median (A = after, B = before)|A:2.2, B:0.9|A:0.7, B:1.1|A:25.2, B:10.5|A:14.6, B:23.8|
4.2. Answer to RQ2: How does adoption of continuous integration influence collaboration amongst team members?
As described in Section 3.2, we report the normalized median in-degree (NMID) to answer RQ2. We report the summary statistics in Table 5, and the box-plots in Figure 4. For both OSS and proprietary projects, the median in-degree significantly increases after adoption of CI. The effect size for OSS and proprietary projects is 0.2, which is small according to Romano et al. (Romano et al., 2006). Based on the 'delta' measure in Table 5, we observe that the increase in collaboration is not as high for proprietary projects as it is for OSS projects.
Answer to RQ2: After adoption of CI, normalized collaboration amongst programmers significantly increases for our set of OSS and proprietary projects. That said, the increase in collaboration is larger for OSS projects than for proprietary projects.
| |NMID (OSS)|NMID (Proprietary)|
|Median (A = after, B = before)|A:0.09, B:0.05|A:0.11, B:0.07|
4.3. Answer to RQ3: How does adoption of continuous integration influence bug and issue resolution?
We answer RQ3 by reporting the summary statistics of the number of closed issues and the number of closed bugs, before and after adoption of CI. In Figures 5a and 5b, we respectively report the values for our set of OSS and proprietary projects.
In Table 6, we report the results of the three statistical measures: the Mann-Whitney U test, effect size, and the 'delta' measure. The 'delta' value for which we observe no significant difference is highlighted in grey. According to Table 6, for OSS projects, significantly more issues are closed after adoption of CI. On the contrary, for proprietary projects, the influence of CI on issue resolution is not observable. For OSS projects, considering the median, the normalized count of closed issues increases by a factor of 2.4 after adoption of CI, whereas it remains almost the same for proprietary projects. Our OSS-related findings are consistent with Zhao et al. (Zhao et al., 2017).
We report the normalized count of closed bugs in Figures 5c and 5d, respectively, for our set of OSS and proprietary projects, and the results of the three statistical measures in Table 6. According to Table 6, for OSS projects, significantly more bugs are closed after adoption of CI. From Figures 5c and 5d, we observe the median to be 0.15 and 0.03, respectively, after and before adoption of CI. Hence, we can state that for OSS projects, five times more bugs are closed after adoption of CI. Similar to issue resolution, our OSS-related findings for bug resolution are somewhat consistent with prior research (Vasilescu et al., 2015). We do not observe CI to influence bug resolution for proprietary projects.
Answer to RQ3: For OSS projects, significantly more normalized issues and bugs are resolved after adoption of CI. For our set of proprietary projects, adoption of CI has no influence on issue and bug resolution.
| |Closed Issues (OSS)|Closed Issues (Proprietary)|Closed Bugs (OSS)|Closed Bugs (Proprietary)|
|Median (A = after, B = before)|A:0.31, B:0.13|A:0.06, B:0.7|A:0.15, B:0.03|A:0.03, B:0.04|
Summary of the Empirical Study: We do not observe the expected benefits of CI for proprietary projects. Unlike OSS projects, bug and issue resolution does not increase for our set of proprietary projects after adoption of CI. Based on our findings, we advise industry practitioners to revise their expectations about the benefits of CI, as adoption of CI alone may not be enough to fully reap its benefits.
In this section, we discuss our findings with possible implications:
The Practice of Making Frequent Commits: Our findings suggest that adoption of CI tools alone may not be enough to reap the benefits of CI. As described in Section 4, we observe that CI has no influence on bug and issue resolution for proprietary projects. We caution industry practitioners to be wary of the expected benefits of CI adoption, as only adopting and using CI may not be enough to fulfill their expectations. One possible explanation can be attributed to programmers' practice of making less frequent commits, which we explain below.
Standard practice in CI is to use a version control system (e.g., Git). When a programmer makes a commit, the CI tool fetches the code changes and triggers a build that includes inspection checks and/or tests (Duvall et al., 2007). If the build fails, the CI tool provides rapid feedback on which code changes are not passing the inspection checks and/or test cases (Duvall et al., 2007). In this manner, the CI process provides rapid feedback about code changes to the programmer (Duvall et al., 2007). The programmer utilizes this feedback to fix the code changes by making more commits, eventually leading to more bug fixes and issue completions. Hence, by making more commits, programmers might resolve more bugs and issues. Our explanation related to feedback is congruent with Duvall et al. (Duvall et al., 2007), who stated that "rapid feedback is at the heart of CI" and "without feedback, none of the other aspects of CI is useful".
Contrary to OSS projects, we have observed that in proprietary projects, the change in commit frequency, number of closed bugs, and number of closed issues after CI adoption is non-significant. Based on the above-mentioned explanation, we conjecture that for the proprietary projects, programmers are not relying on CI for feedback, and as a result, the commit frequency does not increase significantly, nor does the count of closed bugs and issues. We make the following suggestion: practitioners might benefit from seeking feedback on submitted code changes from the CI process, by committing frequently.
Observed Benefits of CI and 'Hero Projects': Another possible explanation can be derived from the 'hero' concept observed in proprietary projects by Agrawal et al. (Agrawal et al., 2018). They identified projects where one or a few programmers work in silos and do 80% or more of the total programming effort as 'hero projects'. Agrawal et al. (Agrawal et al., 2018) reported the prevalence of hero projects amongst proprietary projects, which indicates that regardless of what tool, technique, or methodology is used, the majority of the work will be conducted by a few programmers. For these projects, even if CI results in increased collaboration, the resolution of bugs and issues will still depend on the programmers who do the majority of the work, i.e., the 'hero' programmers. Based on our discussion, we suggest: for proprietary projects, the benefits of adopting CI depend on which practices practitioners follow, for example, the practice of making frequent commits.
Changing Perceptions on CI Adoption: Practitioners often follow the 'diffusion of innovation' rule, which states that practitioners prefer to learn from other practitioners who have already adopted the tool of interest (Rahman et al., 2015; Rogers, 2010). Our empirical study can help practitioners re-visit their perceptions about CI adoption and use. For example, by reading a success story of CI adoption for an OSS project, a practitioner might be convinced that CI adoption is a good choice for their team. In such cases, the practitioner's perceptions can be checked and contrasted against empirical evidence. For CI adoption, learning from other practitioners can be a starting point, but practitioners also need to (i) consider their teams' development context factors, and (ii) assess to what extent other practitioners' experiences hold.
6. Threats to Validity
We acknowledge that our results can be influenced by other factors that we did not capture in our empirical study, for example, the prevalence of hero projects. Other limitations of our paper include:
Spurious Correlations: In any large-scale empirical study where multiple factors are explored, some findings are susceptible to spurious correlations. To increase the odds that our findings do not suffer from such correlations, we have:
applied normalization on metrics that we used to answer our research questions.
applied two statistical measures, effect size (Cliff's Delta) and the Mann-Whitney U test, to perform statistically sound comparisons. For OSS projects, we compare and contrast our findings with prior research.
discussed our findings with industry practitioners working for our industrial partner. The practitioners agreed with the general direction of our findings: they stated that many teams within their company use a wide range of tools and techniques, which do not work optimally for all teams. The practitioners also agreed that there are significant differences between OSS and proprietary software development, and that we should not assume these tools and techniques will yield similar benefits.
Generalizability: We acknowledge that the proprietary projects come from our industrial partner. Whether or not our findings are generalizable for other IT organizations remains an open question. We hope to address this limitation in future work.
Detection of CI Usage: In our paper, we have adopted a heuristic-driven approach to detect the use of CI in a project. We acknowledge that our heuristic is limited to the three CI tools, and we plan to improve our heuristics by exploring the possibility of adding more CI tools.
Bug Resolution: We have relied on issues marked as a 'bug' to count bugs and bug resolution time. In GitHub, a bug might not be marked in an issue but in commits. We plan to investigate how bugs can be inferred from commits, and update our findings accordingly.
After mining 150 OSS and 123 proprietary projects, we have quantified the influence of CI on software development for OSS and proprietary projects. We have observed that closed bugs, closed issues, and frequency of commits significantly increase after adoption of CI for OSS projects, but not for proprietary projects. Our findings suggest that to reap the benefits of CI usage, practitioners should also apply the best practices of CI, such as making frequent commits. We also caution that it may be unwise to hype the usage of CI by promising that CI usage will always increase collaboration, along with bug and issue resolution. While our findings can be biased by our sample of projects, to the best of our knowledge, there exists no large-scale research study that reports the opposite of our conclusions. At the very least, our results raise the issue of the benefits of CI tools for proprietary projects, an issue that, we hope, will be addressed by other researchers in future studies.
- Agrawal et al. (2018) Amritanshu Agrawal, Akond Rahman, Rahul Krishna, Alexander Sobran, and Tim Menzies. 2018. We Don't Need Another Hero?: The Impact of "Heroes" on Software Development. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP '18). ACM, New York, NY, USA, 245–253. https://doi.org/10.1145/3183519.3183549
- Beck (2000) Kent Beck. 2000. Extreme Programming Explained: Embrace Change. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
- Beller et al. (2017) Moritz Beller, Georgios Gousios, and Andy Zaidman. 2017. Oops, My Tests Broke the Build: An Explorative Analysis of Travis CI with GitHub. In Proceedings of the 14th International Conference on Mining Software Repositories (MSR ’17). IEEE Press, Piscataway, NJ, USA, 356–367. https://doi.org/10.1109/MSR.2017.62
- Bhattacharya et al. (2012) Pamela Bhattacharya, Marios Iliofotou, Iulian Neamtiu, and Michalis Faloutsos. 2012. Graph-based Analysis and Prediction for Software Evolution. In Proceedings of the 34th International Conference on Software Engineering (ICSE ’12). IEEE Press, Piscataway, NJ, USA, 419–429. http://dl.acm.org/citation.cfm?id=2337223.2337273
- Bird et al. (2009) Christian Bird, Peter C Rigby, Earl T Barr, David J Hamilton, Daniel M German, and Prem Devanbu. 2009. The promises and perils of mining git. In Mining Software Repositories, 2009. MSR’09. 6th IEEE International Working Conference on. IEEE, 1–10.
- CI (2017) Travis CI. 2017. Travis CI. https://travis-ci.org/. (2017). [Online; accessed 15-October-2017].
- Cliff (1993) Norman Cliff. 1993. Dominance statistics: Ordinal analyses to answer ordinal questions. Psychological Bulletin 114, 3 (Nov. 1993), 494–509.
- Deshpande and Riehle (2008) Amit Deshpande and Dirk Riehle. 2008. Continuous Integration in Open Source Software Development. Springer US, Boston, MA, 273–280. https://doi.org/10.1007/978-0-387-09684-1_23
- Duvall et al. (2007) Paul Duvall, Stephen M. Matyas, and Andrew Glover. 2007. Continuous Integration: Improving Software Quality and Reducing Risk (The Addison-Wesley Signature Series). Addison-Wesley Professional.
- Github (2017) Github. 2017. Github Showcases. https://github.com/showcases. (2017). [Online; accessed 13-October-2017].
- Hilton et al. (2017) Michael Hilton, Nicholas Nelson, Timothy Tunnell, Darko Marinov, and Danny Dig. 2017. Trade-offs in Continuous Integration: Assurance, Security, and Flexibility. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017). ACM, New York, NY, USA, 197–207. https://doi.org/10.1145/3106237.3106270
- Hilton et al. (2016) M. Hilton, T. Tunnell, K. Huang, D. Marinov, and D. Dig. 2016. Usage, costs, and benefits of continuous integration in open-source projects. In 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE). 426–437.
- Kalliamvakou et al. (2014) Eirini Kalliamvakou, Georgios Gousios, Kelly Blincoe, Leif Singer, Daniel M German, and Daniela Damian. 2014. The promises and perils of mining github. In Proceedings of the 11th working conference on mining software repositories. ACM, 92–101.
- Krishna et al. (2018) Rahul Krishna, Amritanshu Agrawal, Akond Rahman, Alexander Sobran, and Tim Menzies. 2018. What is the Connection Between Issues, Bugs, and Enhancements?: Lessons Learned from 800+ Software Projects. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP ’18). ACM, New York, NY, USA, 306–315. https://doi.org/10.1145/3183519.3183548
- Mann and Whitney (1947) H. B. Mann and D. R. Whitney. 1947. On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other. The Annals of Mathematical Statistics 18, 1 (1947), 50–60. http://www.jstor.org/stable/2236101
- Olsson et al. (2012) Helena Holmstrom Olsson, Hiva Alahyari, and Jan Bosch. 2012. Climbing the "Stairway to Heaven" – A Multiple-Case Study Exploring Barriers in the Transition from Agile Development Towards Continuous Deployment of Software. In Proceedings of the 2012 38th Euromicro Conference on Software Engineering and Advanced Applications (SEAA '12). IEEE Computer Society, Washington, DC, USA, 392–399. https://doi.org/10.1109/SEAA.2012.54
- Paulson et al. (2004) James W. Paulson, Giancarlo Succi, and Armin Eberlein. 2004. An Empirical Study of Open-Source and Closed-Source Software Products. IEEE Trans. Softw. Eng. 30, 4 (April 2004), 246–256. https://doi.org/10.1109/TSE.2004.1274044
- Rahman et al. (2017) Akond Rahman, Asif Partho, David Meder, and Laurie Williams. 2017. Which Factors Influence Practitioners' Usage of Build Automation Tools?. In Proceedings of the 3rd International Workshop on Rapid Continuous Software Engineering (RCoSE '17). IEEE Press, Piscataway, NJ, USA, 20–26. https://doi.org/10.1109/RCoSE.2017.8
- Rahman et al. (2015) Akond Ashfaque Ur Rahman, Eric Helms, Laurie Williams, and Chris Parnin. 2015. Synthesizing Continuous Deployment Practices Used in Software Development. In Proceedings of the 2015 Agile Conference (AGILE ’15). IEEE Computer Society, Washington, DC, USA, 1–10. https://doi.org/10.1109/Agile.2015.12
- Robinson and Francis (2010) Brian Robinson and Patrick Francis. 2010. Improving Industrial Adoption of Software Engineering Research: A Comparison of Open and Closed Source Software. In Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM ’10). ACM, New York, NY, USA, Article 21, 10 pages. https://doi.org/10.1145/1852786.1852814
- Rogers (2010) Everett M Rogers. 2010. Diffusion of innovations. Simon and Schuster.
- Romano et al. (2006) J. Romano, J.D. Kromrey, J. Coraggio, and J. Skowronek. 2006. Appropriate statistics for ordinal level data: Should we really be using t-test and Cohen's d for evaluating group differences on the NSSE and other surveys?. In Annual meeting of the Florida Association of Institutional Research. 1–3.
- Sharp and Robinson (2008) Helen Sharp and Hugh Robinson. 2008. Collaboration and Co-ordination in Mature eXtreme Programming Teams. Int. J. Hum.-Comput. Stud. 66, 7 (July 2008), 506–518. https://doi.org/10.1016/j.ijhcs.2007.10.004
- Stahl et al. (2017) Daniel Stahl, Torvald Martensson, and Jan Bosch. 2017. The continuity of continuous integration: Correlations and consequences. Journal of Systems and Software 127, Supplement C (2017), 150 – 167. https://doi.org/10.1016/j.jss.2017.02.003
- Vasilescu et al. (2015) Bogdan Vasilescu, Yue Yu, Huaimin Wang, Premkumar Devanbu, and Vladimir Filkov. 2015. Quality and productivity outcomes relating to continuous integration in GitHub. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. ACM, 805–816.
- Zampetti et al. (2017) Fiorella Zampetti, Simone Scalabrino, Rocco Oliveto, Gerardo Canfora, and Massimiliano Di Penta. 2017. How Open Source Projects Use Static Code Analysis Tools in Continuous Integration Pipelines. In Proceedings of the 14th International Conference on Mining Software Repositories (MSR ’17). IEEE Press, Piscataway, NJ, USA, 334–344. https://doi.org/10.1109/MSR.2017.2
- Zhao et al. (2017) Yangyang Zhao, Alexander Serebrenik, Yuming Zhou, Vladimir Filkov, and Bogdan Vasilescu. 2017. The Impact of Continuous Integration on Other Software Development Practices: A Large-Scale Empirical Study. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE 2017). ACM, New York, NY, USA.
- Zolfagharinia et al. (2017) M. Zolfagharinia, B. Adams, and Y. G. Guehenuc. 2017. Do Not Trust Build Results at Face Value - An Empirical Study of 30 Million CPAN Builds. In 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR). 312–322. https://doi.org/10.1109/MSR.2017.7