Mitigating ML Model Decay in Continuous Integration with Data Drift Detection: An Empirical Study

05/22/2023
by Ali Kazemi Arani, et al.

Background: Machine Learning (ML) methods are increasingly used to automate Continuous Integration (CI) activities such as Test Case Prioritization (TCP). However, these models require frequent retraining as the CI environment changes, a phenomenon known as data drift, and continuously retraining them consumes considerable time and effort. There is therefore a pressing need to identify and evaluate approaches that reduce the retraining effort and time for ML models used for TCP in CI environments.

Aims: This study investigates how well data drift detection techniques can automatically identify retraining points for TCP models in CI environments, without requiring detailed knowledge of the software projects.

Method: We employed the Hellinger distance to identify changes in both the values and the distribution of the input data, and used these changes as retraining points for the ML model. We evaluated the efficacy of this method on multiple datasets, comparing it against regularly retrained models using the cost-cognizant Average Percentage of Faults Detected (APFDc) and the Normalized Average Percentage of Faults Detected (NAPFD) metrics, with appropriate statistical analysis.

Results: Our experimental evaluation shows that the Hellinger distance-based method is effective and efficient at detecting retraining points and reduces the associated retraining costs, although its performance varies across datasets.

Conclusions: Our findings suggest that data drift detection methods can identify retraining points for ML models in CI environments while significantly reducing the required retraining time. They can help practitioners who lack specialized knowledge of the software projects to maintain ML model accuracy.
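To make the idea concrete, the sketch below shows one way to implement Hellinger distance-based drift detection in Python. It is a minimal illustration, not the authors' implementation: the windowing scheme, bin count, and drift threshold are assumptions for the example rather than values from the paper.

```python
import numpy as np

def hellinger_distance(p_counts, q_counts):
    """Hellinger distance between two empirical distributions given as
    histogram counts; 0 means identical, 1 means disjoint support."""
    p = np.asarray(p_counts, dtype=float)
    q = np.asarray(q_counts, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def needs_retraining(reference, current, bins=20, threshold=0.1):
    """Histogram a feature over the reference window and the current CI
    window on a shared range, then flag a retraining point when the
    Hellinger distance exceeds the threshold. The bin count and threshold
    here are illustrative, not taken from the study."""
    lo = min(reference.min(), current.min())
    hi = max(reference.max(), current.max())
    ref_hist, _ = np.histogram(reference, bins=bins, range=(lo, hi))
    cur_hist, _ = np.histogram(current, bins=bins, range=(lo, hi))
    # Add-one smoothing so empty bins do not distort the distance
    return hellinger_distance(ref_hist + 1, cur_hist + 1) > threshold

# Example: a shift in a hypothetical feature (e.g., test durations)
rng = np.random.default_rng(0)
reference = rng.normal(10.0, 2.0, size=1000)  # training-time window
current = rng.normal(13.0, 2.5, size=1000)    # later CI builds
if needs_retraining(reference, current):
    print("Data drift detected: schedule model retraining")
```

In a CI setting, the reference window would hold the feature values the model was last trained on, and the check would run as data from new builds arrives, so retraining is triggered only when the input distribution has actually shifted.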

Related research

Detecting Concept Drift in the Presence of Sparsity – A Case Study of Automated Change Risk Assessment System (07/27/2022)
Missing values, widely called as sparsity in literature, is a common cha...

Detecting Concept Drift for the reliability prediction of Software Defects using Instance Interpretation (05/06/2023)
In the context of Just-In-Time Software Defect Prediction (JIT-SDP), Con...

On The Reliability Of Machine Learning Applications In Manufacturing Environments (12/13/2021)
The increasing deployment of advanced digital technologies such as Inter...

Automatically detecting data drift in machine learning classifiers (11/10/2021)
Classifiers and other statistics-based machine learning (ML) techniques ...

Evaluating Model Performance in Medical Datasets Over Time (05/22/2023)
Machine learning (ML) models deployed in healthcare systems must face da...

ODIN: Automated Drift Detection and Recovery in Video Analytics (09/09/2020)
Recent advances in computer vision have led to a resurgence of interest ...

Towards Lifelong Learning for Software Analytics Models: Empirical Study on Brown Build and Risk Prediction (05/16/2023)
Nowadays, software analytics tools using machine learning (ML) models to...
