When Less is More: On the Value of "Co-training" for Semi-Supervised Software Defect Predictors

11/10/2022
by   Suvodeep Majumder, et al.

Labeling a module defective or non-defective is an expensive task. Hence, there are often limits on how much labeled data is available for training. Semi-supervised classifiers use far fewer labels for training models, but there are numerous semi-supervised methods, including self-labeling, co-training, maximal-margin, and graph-based methods, to name a few. Only a handful of these methods have been tested in SE for (e.g.) predicting defects, and even then, those tests have covered just a handful of projects. This paper takes a wide range of 55 semi-supervised learners and applies them to over 714 projects. We find that semi-supervised "co-training methods" work significantly better than other approaches. However, co-training needs to be used with caution, since the specific co-training method must be carefully selected based on a user's specific goals. Also, we warn that a commonly-used co-training method ("multi-view", where different learners get different sets of columns) does not improve predictions while adding considerably to the runtime costs (11 hours vs. 1.8 hours). Those cautions stated, we find that using these "co-trainers", we can label just 2.5% of the data and still make predictions comparable to those made using 100% of the labels. It is an open question, worthy of future work, to test if these reductions can be seen in other areas of software analytics. All the code used and datasets analyzed during the current study are available at https://GitHub.com/Suvodeep90/Semi_Supervised_Methods.
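To make the "co-training" idea concrete: two learners start from the same small labeled pool, and each round every learner pseudo-labels its most confident unlabeled examples for the other. Below is a minimal sketch assuming scikit-learn-style learners; the learner pair, round counts, and the `co_train` function name are illustrative choices, not the paper's actual implementation (see the linked GitHub repository for that).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

def co_train(X_lab, y_lab, X_unlab, rounds=10, per_round=20):
    """Illustrative co-training loop: each learner pseudo-labels its
    most confident unlabeled rows to grow the *other* learner's pool."""
    # Illustrative learner pair; the paper benchmarks many combinations.
    learners = [RandomForestClassifier(random_state=0), GaussianNB()]
    pools = [(X_lab.copy(), y_lab.copy()) for _ in learners]
    U = X_unlab.copy()
    for _ in range(rounds):
        for i, learner in enumerate(learners):
            if len(U) == 0:
                return learners
            Xi, yi = pools[i]
            learner.fit(Xi, yi)
            proba = learner.predict_proba(U)
            # Pick the unlabeled rows this learner is most sure about.
            picks = np.argsort(-proba.max(axis=1))[:per_round]
            pseudo = learner.classes_[proba[picks].argmax(axis=1)]
            j = 1 - i  # the confident guesses teach the other learner
            Xj, yj = pools[j]
            pools[j] = (np.vstack([Xj, U[picks]]),
                        np.concatenate([yj, pseudo]))
            U = np.delete(U, picks, axis=0)
    return learners
```

In the "multi-view" variant the abstract warns about, each learner would instead be fit on a different subset of columns (e.g. `Xi[:, view_i]`); per the paper's results, that split adds considerable runtime (11 hours vs. 1.8 hours) without improving predictions.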



Related research

02/03/2023
Less, but Stronger: On the Value of Strong Heuristics in Semi-supervised Learning for Software Analytics
In many domains, there are many examples and far fewer labels for those ...

07/31/2022
Analysis of Semi-Supervised Methods for Facial Expression Recognition
Training deep neural networks for image recognition often requires large...

06/16/2020
Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance
Reaching the performance of fully supervised learning with unlabeled dat...

08/17/2022
Semi-supervised Learning with Deterministic Labeling and Large Margin Projection
The centrality and diversity of the labeled data are very influential to...

07/19/2022
SS-MFAR : Semi-supervised Multi-task Facial Affect Recognition
Automatic affect recognition has applications in many areas such as educ...

06/01/2021
Semi-supervised Models are Strong Unsupervised Domain Adaptation Learners
Unsupervised domain adaptation (UDA) and semi-supervised learning (SSL) ...

08/22/2021
FRUGAL: Unlocking SSL for Software Analytics
Standard software analytics often involves having a large amount of data...
