On the Fault Proneness of SonarQube Technical Debt Violations: A comparison of eight Machine Learning Techniques

06/30/2019
by Valentina Lenarduzzi, et al.

Background. The popularity of tools for analyzing Technical Debt, and particularly that of SonarQube, is increasing rapidly. SonarQube proposes a set of coding rules, which represent something wrong in the code that will soon be reflected in a fault or will increase maintenance effort. However, while the management of some companies is encouraging developers not to violate these rules in the first place and to produce code below a certain technical debt threshold, developers are skeptical of their importance.

Objective. In order to understand which SonarQube violations are actually fault-prone and to analyze the accuracy of the fault-prediction model, we designed and conducted an empirical study on 21 well-known mature open-source projects.

Method. We applied the SZZ algorithm to label the fault-inducing commits. We compared the classification power of eight Machine Learning models (Logistic Regression, Decision Tree, Random Forest, Extremely Randomized Trees, AdaBoost, Gradient Boosting, XGBoost) to obtain a set of violations that are correlated with fault-inducing commits. Finally, we calculated the percentage of violations introduced in the fault-inducing commit and removed in the fault-fixing commit, so as to reduce the risk of spurious correlations.

Results. Among the 202 violations defined for Java by SonarQube, only 26 have a relatively low fault-proneness. Moreover, violations classified as "bugs" by SonarQube hardly ever become a failure. Consequently, the accuracy of the fault-prediction power proposed by SonarQube is extremely low.

Conclusion. The rules applied by SonarQube for calculating technical debt should be thoroughly investigated, and their harmfulness needs to be further confirmed. Therefore, companies should carefully consider which rules they really need to apply, especially if their goal is to reduce fault-proneness.
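To make the model comparison in the Method concrete, the sketch below trains the listed classifiers on a hypothetical per-commit feature matrix of SonarQube violation counts with SZZ-derived fault-inducing labels. The dataset, rule-key column names, and cross-validation setup are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (assumed setup): compare several classifiers on a
# per-commit matrix of SonarQube violation counts, labelled by SZZ as
# fault-inducing (1) or not (0). The data below is a random toy stand-in.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (
    RandomForestClassifier,
    ExtraTreesClassifier,
    AdaBoostClassifier,
    GradientBoostingClassifier,
)
from xgboost import XGBClassifier

# Rows = commits, columns = counts of each (hypothetical) SonarQube rule
# violated in that commit, y = SZZ fault-inducing label.
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.poisson(1.0, size=(500, 20)),
                 columns=[f"squid_S{1000 + i}" for i in range(20)])
y = rng.integers(0, 2, size=500)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "Extremely Randomized Trees": ExtraTreesClassifier(n_estimators=100),
    "AdaBoost": AdaBoostClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name:28s} AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```

On real data, the per-model AUC (or a similar classification metric) would indicate which violation features carry signal about fault-inducing commits; with the random labels above, all models hover around 0.5 by construction.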


Related research

Fault Prediction based on Software Metrics and SonarQube Rules. Machine or Deep Learning? (03/21/2021)
Background. Developers spend more time fixing bugs and refactoring the c...

Does Code Quality Affect Pull Request Acceptance? An empirical study (08/25/2019)
Background. Pull requests are a common practice for contributing and rev...

The Technical Debt Dataset (08/02/2019)
Technical Debt analysis is increasing in popularity as nowadays research...

Poster: Identification of Methods with Low Fault Risk (05/03/2018)
Test resources are usually limited and therefore it is often not possibl...

Predicting Crash Fault Residence via Simplified Deep Forest Based on A Reduced Feature Set (04/05/2021)
The software inevitably encounters the crash, which will take developers...

Some SonarQube Issues have a Significant but Small Effect on Faults and Changes. A large-scale empirical study (08/30/2019)
Context. Companies commonly invest effort to remove technical issues bel...

Too Trivial To Test? An Inverse View on Defect Prediction to Identify Methods with Low Fault Risk (11/02/2018)
Background. Test resources are usually limited and therefore it is often...
