A Comparison of Similarity Based Instance Selection Methods for Cross Project Defect Prediction

04/02/2021
by Seyedrebvar Hosseini, et al.

Context: Previous studies have shown that training data instance selection based on nearest neighborhood (NN) information can lead to better performance in cross project defect prediction (CPDP) by reducing the heterogeneity of training datasets. However, neighborhood calculation is computationally expensive, and approximate methods such as Locality Sensitive Hashing (LSH) can be as effective as exact methods.

Aim: We aim to compare instance selection methods for CPDP, namely LSH, NN-filter, and Genetic Instance Selection (GIS).

Method: We conduct experiments with five base learners, optimizing their hyperparameters, on 13 datasets from the PROMISE repository in order to compare the performance of LSH with the benchmark instance selection methods NN-filter and GIS.

Results: The statistical tests show six distinct groups for F-measure performance. The top two groups contain only LSH and GIS benchmarks, whereas the bottom two groups contain only NN-filter variants. LSH and GIS favor recall over precision. In fact, for precision performance only three significantly distinct groups are detected by the tests, and the top group consists of NN-filter variants only. Recall-wise, 16 different groups are identified, where the top three groups contain only LSH methods, four of the next six are GIS-only, and the bottom five contain only NN-filter variants. Finally, the NN-filter benchmarks never outperform their LSH counterparts with the same base learner, tuned or not. Further, they never even belong to the same rank group, meaning that LSH is always significantly better than NN-filter with the same learner and settings.

Conclusions: The increase in performance and the decrease in computational overhead and runtime make LSH a promising approach. However, the performance of LSH rests on high recall, and in environments where precision is considered more important, NN-filter should be considered.
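For reference, the performance measures above follow the standard definitions: precision P = TP / (TP + FP), recall R = TP / (TP + FN), and the F-measure F1 = 2PR / (P + R), the harmonic mean of the two. This is why a recall-heavy selector such as LSH can still dominate the F-measure ranking, provided its precision does not collapse.

To make the contrast between exact and approximate selection concrete, the minimal sketch below is illustrative only, not the authors' implementation; the function and parameter names (nn_filter, lsh_filter, k, n_planes) are hypothetical. It keeps the training instances an exact k-NN filter would select, and compares that with a random-hyperplane (cosine) LSH filter that keeps only training instances landing in a hash bucket also occupied by the target project:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_filter(train_X, target_X, k=10):
    # Exact NN-filter: keep the union of the k nearest training
    # instances of every target-project instance.
    nn = NearestNeighbors(n_neighbors=k).fit(train_X)
    _, idx = nn.kneighbors(target_X)       # indices, shape (n_target, k)
    return np.unique(idx.ravel())          # selected training rows

def lsh_filter(train_X, target_X, n_planes=8, seed=0):
    # Approximate selection: hash every instance with random hyperplanes
    # (sign of the projection = one bit of the hash) and keep training
    # instances whose bucket also contains a target instance. Practical
    # LSH schemes use several hash tables to tune this trade-off.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((train_X.shape[1], n_planes))
    train_codes = train_X @ planes > 0
    target_buckets = {tuple(code) for code in (target_X @ planes > 0)}
    keep = np.array([tuple(code) in target_buckets for code in train_codes])
    return np.flatnonzero(keep)

# Toy usage with synthetic data standing in for PROMISE projects.
rng = np.random.default_rng(42)
train_X = rng.standard_normal((500, 20))   # cross-project training pool
target_X = rng.standard_normal((100, 20))  # target-project instances
print(nn_filter(train_X, target_X).size, "instances kept by exact NN-filter")
print(lsh_filter(train_X, target_X).size, "instances kept by LSH")

The appeal of LSH in this setting is that each instance is hashed once and selection reduces to constant-time bucket lookups, avoiding the pairwise distance computations behind the exact filter; this is the computational saving the Conclusions refer to.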

Related research:

- Dynamic Ensemble Selection VS K-NN: why and when Dynamic Selection obtains higher classification performance? (04/21/2018)
  Multiple classifier systems focus on the combination of classifiers to o...

- Iterative versus Exhaustive Data Selection for Cross Project Defect Prediction: An Extended Replication Study (09/11/2019)
  Context: The effectiveness of data selection approaches in improving the...

- An Empirical Study on the Effectiveness of Data Resampling Approaches for Cross-Project Software Defect Prediction (06/16/2022)
  Cross-project defect prediction (CPDP), where data from different softwa...

- The NN-Stacking: Feature weighted linear stacking through neural networks (06/24/2019)
  Stacking methods improve the prediction performance of regression models...

- Approximate Selection with Guarantees using Proxies (04/02/2020)
  Due to the falling costs of data acquisition and storage, researchers an...

- Scalable k-NN graph construction (07/30/2013)
  The k-NN graph has played a central role in increasingly popular data-dr...

- Multi Stage Screening: Enforcing Fairness and Maximizing Efficiency in a Pre-Existing Pipeline (03/14/2022)
  Consider an actor making selection decisions using a series of classifie...
