PHANTOM: Curating GitHub for engineered software projects using time-series clustering

04/25/2019
by Peter Pickerill, et al.

Context: Within the field of Mining Software Repositories, numerous methods are employed to filter datasets so that low-quality projects are not analysed. Unfortunately, the existing filtering methods have not kept pace with the growth of data sources such as GitHub, and researchers often rely on quick and dirty techniques to curate datasets.

Objective: The objective of this study is to develop a method capable of filtering large quantities of software projects in a time-efficient way.

Method: This study follows the Design Science Research (DSR) methodology. The proposed method, PHANTOM, extracts five measures from Git logs. Each measure is transformed into a time-series, which is represented as a feature vector for clustering using the k-means algorithm.

Results: Using the ground truth from a previous study, PHANTOM was able to rediscover that ground truth with up to 0.87 Precision or 0.94 Recall, and to identify "well-engineered" projects with up to 0.87 Precision and 0.94 Recall on the validation dataset. PHANTOM downloaded and processed the metadata of 1,786,601 GitHub repositories in 21.5 days, over 33% faster than a similar study that used a computer cluster of 200 nodes.

Conclusions: It is possible to use an unsupervised approach to identify well-engineered projects. PHANTOM was shown to be competitive with existing supervised approaches while reducing the hardware requirements by two orders of magnitude.
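To make the Method section concrete, the sketch below illustrates the general shape of the pipeline the abstract describes: derive a time-series measure from a Git log, represent it as a fixed-length feature vector, and cluster repositories with k-means. This is not the authors' implementation; the specific measure (weekly commit counts), the vector length, the normalisation, and k=2 are assumptions made purely for illustration.

```python
# Minimal sketch of a Git-log -> time-series -> k-means pipeline.
# Assumptions (not from the paper): one measure (commits per week),
# a 104-week window, L2 normalisation, and k=2 clusters.

import subprocess
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def weekly_commit_series(repo_path: str, n_weeks: int = 104) -> np.ndarray:
    """Return a fixed-length vector of commits per week for one repository."""
    # Read commit timestamps (Unix epoch seconds) from the Git log.
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%ct"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    timestamps = sorted(int(t) for t in out)
    if not timestamps:
        return np.zeros(n_weeks)

    # Bin commits into weeks counted from the first commit.
    first = timestamps[0]
    weeks = Counter((t - first) // (7 * 24 * 3600) for t in timestamps)

    # Truncate/pad to a fixed length so every repository yields a
    # feature vector of the same dimensionality.
    series = np.zeros(n_weeks)
    for week, count in weeks.items():
        if week < n_weeks:
            series[week] = count
    return series


def cluster_repositories(repo_paths: list[str], k: int = 2) -> np.ndarray:
    """Cluster repositories by the shape of their commit-frequency series."""
    features = np.vstack([weekly_commit_series(p) for p in repo_paths])
    # Normalise each series so clustering reflects activity shape, not size.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    features = features / np.where(norms == 0, 1, norms)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)


if __name__ == "__main__":
    labels = cluster_repositories(["./repo-a", "./repo-b", "./repo-c"])
    print(labels)  # e.g. [0 1 0]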
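```

In the paper's setting, five such measures are extracted per repository rather than one, and the resulting cluster assignments are used to separate well-engineered projects from the rest; the label-to-quality mapping would have to be established against a ground-truth dataset, as the study does.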
