Exploring ML testing in practice – Lessons learned from an interactive rapid review with Axis Communications

03/30/2022
by Qunying Song, et al.

There is a growing interest in industry and academia in machine learning (ML) testing. We believe that industry and academia need to learn together to produce rigorous and relevant knowledge. In this study, we initiate a collaboration between stakeholders from one case company, one research institute, and one university. To establish a common view of the problem domain, we applied an interactive rapid review of the state of the art. Four researchers from Lund University and RISE Research Institutes and four practitioners from Axis Communications reviewed a set of 180 primary studies on ML testing. We developed a taxonomy for communicating about ML testing challenges and results, and identified a list of 12 review questions relevant for Axis Communications. The three most important questions (data testing, metrics for assessment, and test generation) were mapped to the literature, and an in-depth analysis was made of the 35 primary studies matching the most important question (data testing). A final set of the five best matches was analysed, and we reflect on the criteria for applicability and relevance for industry. The taxonomies are helpful for communication but not final. Furthermore, there was no perfect match to the case company's investigated review question (data testing). However, we extracted relevant approaches from the five studies on a conceptual level to support later context-specific improvements. We found the interactive rapid review approach useful for triggering and aligning communication between the different stakeholders.


