Lessons learned from replicating a study on information-retrieval based test case prioritization

by   Nasir Mehmood Minhas, et al.

Objective: In this study, we aim to replicate an artefact-based study on software testing to address the gap. We focus on (a) providing a step-by-step guide of the replication, reflecting on challenges when replicating artefact-based testing research, and (b) evaluating the replicated study concerning the validity and robustness of its findings. Method: We replicate a test case prioritization technique by Kwon et al. We replicated the original study using four programs, two from the original study and two new programs. The replication study was implemented in Python to support future replications. Results: We identify various general factors facilitating replications, such as: (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning needed software dependencies); and (4) the availability of scripts. We also raise several observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. Conclusion: We conclude that the study by Kwon et al. is replicable for small and medium programs and could be automated to support software practitioners, given the availability of the required information.



1 Introduction

Replications help in evaluating the results, limitations, and validity of studies in different contexts [53]. They also help establish or expand the boundaries of a theory [10, 53].

During the previous four decades, software engineering researchers have built new knowledge and proposed new solutions. Many of these lack consolidation [35]; replication studies can help establish the solutions and expand the knowledge. Software engineering researchers have been working on replication studies since the 1990s. Still, the number of replicated studies is small, and an even more neglected area is the replication of software testing experiments [9, 51, 10, 35]. Most software engineering replication studies are conducted for experiments involving human participants; few replications exist for artefact-based experiments [10].

In artefact-based software engineering experiments, the majority of authors use artefacts from the software infrastructure repository (SIR) [58]. Do et al. [13] introduced SIR in 2005 to facilitate experimentation and evaluation of testing techniques, and to promote replication of experiments and aggregation of findings.

Researchers have proposed different techniques to support regression testing practice, and there are various industrial evaluations of regression testing techniques. Adopting these techniques in practice is challenging because the results are inaccessible to practitioners [5]. Replications of existing regression testing solutions can be helpful in this regard, provided that the data and automation scripts of these replications are available.

Attempts have been made to replicate regression testing techniques. The majority of these replications were done by the same groups of authors who originally proposed the techniques [15, 16, 14]. There is a need for more independent replications in software engineering [13]; however, evidence of independent replications in regression testing is scarce [10].

Overall, we would highlight the following research gaps concerning replications:

  • Gap 1: Only a small portion of studies are replications: Among the reasons for the low number of replications in software engineering is the lack of standardized concepts, terminologies, and guidelines [35]. Software engineering researchers need to make an effort to replicate more studies.

  • Gap 2: Lack of replication guidelines: There is a need to work on the guidelines and methodologies to support replicating the studies [12].

  • Gap 3: Lack of replications in specific subject areas: Software testing has been highlighted as an area lacking replication studies [10]. According to Da Silva et al. [10], the majority of replication studies focus on software construction and software requirements. Although software testing is a well-researched area, it has the lowest number of replication studies of all software engineering research areas according to Magalhães et al. [12].

  • Gap 4: Lack of studies on artefact-based investigations: Only a few replicated studies focused on artefact-based investigations [10]; that is, the majority of studies focused on experiments and case studies involving human subjects. Artefact-based replications are of particular interest as they require building and running scripts for data collection (e.g., solution implementation and logging) and, at the same time, compiling and running the software systems that are the subject of study.

Considering the gaps stated above, we formulate the following research goal:

Goal: To replicate an artefact-based study in the area of software testing, with a focus on reflecting on the replication process and the ability to replicate the findings of the original study.

To achieve our research goal, we present the results of our replication of an IR-based test case prioritization technique proposed by Kwon et al. [36]. The authors introduced a linear regression model to prioritize test cases targeting infrequently tested code. The inputs for the model are calculated using term frequency (TF), inverse document frequency (IDF), and code coverage information [36]. TF and IDF are weighting schemes used with information retrieval methods [48]. The original study’s authors used open-source data sets (including SIR artefacts) to evaluate the proposed technique. We attempted to evaluate the technique using four programs to see if the replication confirms the original study’s findings. We selected two programs from the original study and two new cases to test the technique’s applicability to different programs.

Our research goal is achieved through the following:

  1. Objective 1: Studying the extent to which the technique is replicable. Studying the extent to which the technique is replicable and documenting the details of all steps will help draw valuable lessons, hence contributing guidance for future artefact-based replications (Gap 2, Gap 4).

  2. Objective 2: Evaluating the results of the original study [36]. Evaluating the results through the replication provides an assessment of the validity and the robustness of the results of the original study. Overall, we contribute to the limited number of replication studies in general (Gap 1), and to replication studies focused on software testing in particular (Gap 3).

The rest of the paper is organized as follows: Section 2 provides a brief introduction to the concepts relevant to this study. Section 3 presents a brief discussion of replications carried out for test case prioritization techniques. Section 4 describes the methodology we used to select the original study and conduct the replication, along with the research questions and a summary of the concepts used in the original study. Threats to the validity of the replication experiment are discussed in Section 4.6. Section 5 presents the findings of this study, Section 6 discusses the findings of the replication study, and Section 7 concludes the study.

2 Background

This section provides a discussion on the topics related to our investigation.

2.1 Regression testing

Regression testing is a retest activity to ensure that system changes do not negatively affect other parts of the system and that the unchanged parts are still working as they did before a change [40, 58]. It is an essential but expensive and challenging testing activity [21]. Various authors have highlighted that testing consumes 50% of the project cost and regression testing consumes 80% of the total testing cost [33, 20, 21, 25]. Research reports that regression testing may consume more than 33% of the cumulative software cost [34]. Regression testing aims to validate that modifications have not affected previously working code [14, 40].

Systems and Software Engineering–Vocabulary [28] defines regression testing as:

1. “Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.”
2. “Testing required to determine that a change to a system component has not adversely affected functionality, reliability or performance and has not introduced additional defects.”

For larger systems, it is expensive to execute regression test suites in full [40]. To cope with this, one suggested solution is test case prioritization, which helps prioritize and run the critical test cases early in the regression testing process. The goal of test case prioritization is to increase the test suite’s rate of fault detection [19].

A reasonable number of systematic literature reviews and mapping studies on various aspects of regression testing provide evidence that regression testing is a well-researched area [49, 23, 20, 59, 33, 8, 58, 45, 54, 7, 5, 34, 3, 37, 11]. Despite the large number of regression testing techniques proposed in the literature, the adoption of these techniques in industry is low [47, 46, 21, 18]. One reason is that the results of these techniques are not accessible to practitioners due to discrepancies in terminology between industry and academia [5, 21, 39]. There is also a lack of mechanisms to guide practitioners in translating, analyzing, and comparing regression testing techniques. Furthermore, various authors use controlled experiments for their empirical investigations, and in most cases it is hard to assess whether these experiments are repeatable and could fit an industrial setting [5]. Replication of empirical studies could lead us to the desired solution, as it can help to confirm the validity and adaptability of these experiments [53].

2.2 Replication

Replication is a means to validate experimental results and examine whether the results are reproducible. It can also help to see whether the results were produced by chance or are the outcome of any feigned act [30]. An effectively conducted replication study helps solidify and extend knowledge. In principle, replication provides a way forward to create, evolve, break, and replace theoretical paradigms [35, 53]. Replications can be of two types: 1) internal replication, a replication study carried out by the authors of the original study themselves, and 2) external replication, a replication study carried out by researchers other than the authors of the original study [52, 35].

In software engineering research, the number of internal replications is much higher than that of external replications [4, 10]. Da Silva et al. [10] reported in their mapping study that, of 133 included replication studies, 55% are internal replications, 30% are external replications, and 15% are a mix of internal and external. Furthermore, the results of 82% of the internal replications are confirmatory, whereas the results of only 26% of external replications conform to the original studies [10]. From the empirical software engineering perspective, Shull et al. [53] classify replications as exact and conceptual. In an exact replication, the replicators closely follow the procedures of the original experiment, whereas in a conceptual replication, the research questions of the original study are evaluated using a different experimental setup. Concerning exact replication, if the replicators keep the conditions in the replication experiment the same as in the actual experiment, it is categorized as an exact dependent replication. If replicators deliberately change the underlying conditions of the original experiment, it is referred to as an exact independent replication. Exact dependent and exact independent replications can respectively be mapped to strict and differentiated replications. A strict replication compels the researchers to replicate a prior study as precisely as possible. In contrast, in a differentiated replication, researchers may intentionally alter aspects of a previous study to test the limits of the study’s conclusions. In most cases, strict replication is used for both internal and external replications [35].

2.3 Information Retrieval

IR-based techniques are used to retrieve a user’s information needs from an unstructured document collection. The information needs are represented as queries [57, 22]. An information retrieval (IR) system is characterized by its retrieval model, because its effectiveness and utility depend on the underlying retrieval model [1]. Therefore, the retrieval model is the core component of any IR system.

Amati [1] defines the information retrieval model as:

“A model of information retrieval (IR) selects and ranks the relevant documents with respect to a user’s query. The texts of the documents and the queries are represented in the same way, so that document selection and ranking can be formalized by a matching function that returns a retrieval status value (RSV) for each document in the collection. Most of the IR systems represent document contents by a set of descriptors, called terms, belonging to a vocabulary V.”

Some of the retrieval models are the vector space model (VSM), the probabilistic relevance framework (PRF), binary independence retrieval (BIR), best match version 25 (BM25), and language modeling (LM). VSM is among the most popular models in information retrieval systems. It uses TF-IDF (term frequency and inverse document frequency) as a weighting scheme.


Since the technique [36] we are replicating in this study uses the TF-IDF weighting scheme, we briefly present TF and IDF.

Term frequency (TF) and inverse document frequency (IDF) are statistics that indicate the significance of each word in a document or query. TF represents how many times a word appears in the document or query. IDF is the inverse of document frequency (DF). The DF of a word indicates the number of documents in the collection containing the word. Therefore, a high IDF score for a word means that the word is relatively unique, appearing in fewer documents [22].
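As a brief illustration of these statistics in the context of the replicated study (which, as described later, treats test cases as documents and covered code elements as words), consider the following minimal sketch; the test names and coverage data are hypothetical:

```python
import math

# Hypothetical data: each "document" is a test case, represented by the set of
# code elements (here, lines) it covers.
coverage = {
    "test_parse": {"line_10", "line_11", "line_12"},
    "test_help":  {"line_10", "line_20"},
    "test_flags": {"line_30"},
}

num_docs = len(coverage)

# Document frequency (DF): in how many test cases does each element appear?
df = {}
for elements in coverage.values():
    for element in elements:
        df[element] = df.get(element, 0) + 1

# IDF: rarely covered elements receive higher scores.
idf = {element: math.log(num_docs / count) for element, count in df.items()}

# line_10 is exercised by two test cases, line_30 by only one,
# so line_30 has the higher IDF score.
print(idf["line_10"] < idf["line_30"])  # -> True
```

Here a plain logarithmic IDF is used for illustration; smoothed variants of the formula are also common in IR systems.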

3 Related Work

Most of the replication studies on test case prioritization were conducted by the same groups of authors, who primarily re-validated or extended the results of their previously conducted experiments (see [15, 16, 14]). Below, we discuss studies closely related to our topic (i.e., test case prioritization).

Do et al. [15] conducted a replication study to test the effectiveness of the test case prioritization techniques originally proposed for C programs on different Java programs using the JUnit testing framework. The authors’ objective was to test whether the techniques proposed for C programs could be generalized to other programming and testing paradigms. The authors who conducted the replication study were part of the original studies, so by definition, it could be referred to as an internal replication. However, concerning the implementation perspective, the replication study would be regarded as differentiated replication.

Do and Rothermel [16] conducted an internal replication study to replicate one of their studies on test case prioritization. The original study used hand-seeded faults. In the replication study, the authors conducted two experiments. In the first experiment, the authors considered mutation faults. The goal was to assess whether prioritization results obtained from hand-seeded faults differ from the results obtained by mutation faults. The authors used the same programs and versions used in the original study. They also replicated the experimental design according to the original study. To further strengthen the findings, later in the second experiment, the authors replicated the first experiment with two additional Java programs with different types of test suites.

Ouriques et al. [41] conducted an internal replication study of their own experiment concerning the test case prioritization techniques. In the original study, the authors experimented with programs closer to the industrial context. The objective of the replication study was to repeat the conditions evaluated in the original study but with more techniques and industrial systems as objects of study. Although the authors worked with the test case prioritization techniques, they clearly stated that the methods examined in their research use a straightforward operation of adding one test case at a time in the prioritized set. They do not use any data from the test case execution history, and hence, regression test prioritization is not in the scope of their study.

Hasnain et al. [26] conducted a replication study to investigate regression analysis for classification in test case prioritization. The authors’ objective in replicating the original study was to confirm whether the regression model used in the original study produced the same results in the replicated setting. Along with the program and data set used in the original study, the authors used an additional open-source Java-based program to extend the original study’s findings. It is an external replication study, as all authors of the replication study differ from those of the original study. Since the authors validated the results of the original study on an additional dataset beyond the one used in the original study, the replication is not strict.

From the above discussion of related studies, we learned that most replication studies conducted for test case prioritization are internal replications. We could only find a single external replication study [26], in which the authors replicated a classification-based test case prioritization technique using regression analysis. Our study is similar to that study in the following respects: 1) our study is an external replication, and 2) we also use two software artefacts from the original study and two additional artefacts. In other ways, our study is unique; for example, 1) we replicate a technique that focuses on less tested code, whereas Hasnain et al. replicated a technique based on classifying faulty and non-faulty modules, 2) we provide a step-by-step guide to support future replications, and 3) we provide automated scripts to execute the complete replication study.

4 Methodology

For reporting the replication steps, we followed the guideline proposal provided by [6]. It suggests reporting the following for a replication study:

  1. Information about the original study (Section 4.2)

  2. Information about the replication (Section 4.3.3)

  3. Comparison of results to the original study (Section 5.2)

  4. Drawing conclusions across studies (Section 7)

4.1 Research questions

Given the constraints regarding the experimental setup and data, we had to rely on the information presented in the original study (see Section 4.2). We decided not to tweak the original study’s approach; we followed the steps proposed by the authors and executed the technique on one of the artefacts used by the authors. The differential aspects of the replication experiment are the mutants and the automation of the major steps of the technique. According to the classification provided by Shull et al. [53], our work can be classified as an exact independent replication of the test case prioritization technique presented in [36].

To achieve the objectives of the study we asked the following two research questions:

  • RQ1: To what degree is the study replicable given the information provided? This comprises two sub-questions: RQ1.1: To what degree is the study replicable with the programs used by the original authors? RQ1.2: To what degree is the study replicable with new programs? The answer to RQ1 corresponds to Objective 1. In answering RQ1, the motive was to assess whether the technique presented in the original study can be replicated using different programs.

  • RQ2: Does the replication confirm the findings of the original study? The answer to RQ2 corresponds to Objective 2. The motive of RQ2 was to see whether the replication results conform to the findings of the original study. To ensure no conscious deviation from the basic technique, we followed the steps and used the tools mentioned in the original study. Finally, we evaluated the replication results using the average percentage of fault detection (APFD), as suggested by the original study’s authors.
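The APFD metric can be computed from a test ordering and fault-detection data using the standard formula APFD = 1 − (TF₁ + … + TFₘ)/(n·m) + 1/(2n), where TFᵢ is the position of the first test revealing fault i, n is the number of tests, and m the number of faults. A minimal sketch, with hypothetical test and fault names:

```python
def apfd(ordering, faults_detected):
    """Average Percentage of Faults Detected for a given test ordering.

    ordering: list of test case names, in execution order.
    faults_detected: dict mapping each fault to the set of tests detecting it.
    """
    n = len(ordering)
    m = len(faults_detected)
    position = {test: i + 1 for i, test in enumerate(ordering)}  # 1-based ranks
    # TF_i: rank of the first test in the ordering that reveals fault i.
    tf_sum = sum(min(position[t] for t in detecting if t in position)
                 for detecting in faults_detected.values())
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

faults = {"f1": {"t3"}, "f2": {"t1", "t3"}}
print(apfd(["t1", "t2", "t3"], faults))  # ≈ 0.50: t3 reveals f1 last
print(apfd(["t3", "t1", "t2"], faults))  # ≈ 0.83: running t3 first raises APFD
```

A higher APFD value therefore indicates that an ordering exposes faults earlier in the test run.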

4.2 Information about the original study

4.2.1 Selection of target

Selection of a target study for replication is a difficult process, and often it is prone to biases due to various reasons [44]. For example, clinical psychology research reports that authors tend to choose targets that are easy to set up and execute [44]. The selection of target must be purpose-based, either by following systematic criteria (see, e.g., [44]) or other justifiable reasons. In our case, the selection of the target is based on the needs identified from our interaction with industry partners [39, 40, 5] and reported facts in the related literature [58, 54].

For the selection of the target study, our first constraint was test case prioritization, whereas the underlying criteria were to search for a technique that can help control fault slippage and increase the fault detection rate. During our investigations [40], we identified that test case prioritization is among the primary challenges for practitioners, and they are interested in finding techniques that can overcome their challenges and help them follow their goals (see also [39]). Increasing the test suite’s rate of fault detection is a common goal of regression test prioritization techniques [42, 37], whereas controlling fault slip through is among the goals of the practitioners [39, 40].

Our next constraint was selecting a study in which the authors used SIR systems to evaluate their technique(s). Singh et al. [54] reported that, out of 65 papers selected for their systematic review on regression test prioritization, 50% use SIR systems. Yoo et al. [58] also reported that most authors evaluate their techniques using SIR artefacts. They highlight that the use of SIR systems facilitates replication studies.

The final constraint was to select a target study that uses IR methods for the prioritization technique. Recent studies report that test case prioritization techniques based on IR concepts could perform better than the traditional coverage-based regression test prioritization techniques [43, 50].

We searched Google Scholar with the keywords “regression testing”, “test case prioritization”, “information retrieval (IR)”, and “software infrastructure repository (SIR)”. Our searches returned 340 papers. After scanning the abstracts, we learned that not a single technique explicitly states controlling fault slippage as its goal. However, the technique presented in [36] focuses on less tested code, and its goal is to increase the fault detection rate of coverage-based techniques using IR methods. Ignored or less tested code could be among the causes of fault slippage. Therefore, we considered the technique by Kwon et al. [36] for further evaluation. We evaluated this technique using the rigor criteria suggested by Ivarsson and Gorschek [29], who recommend evaluating the rigor of empirical studies based on context, design, and validity threats.

After considering all the factors mentioned above and applying the rigor rubrics, the study presented in [36] was used as a target for replication.

4.2.2 Describing the original study

Kwon et al. [36] intended to improve the effectiveness of test case prioritization by focusing on infrequently tested code. They argued that test case prioritization techniques based on code coverage might lack fault detection capability. They suggested that an IR-based technique could help overcome this limitation of coverage-based test case prioritization techniques. Considering the frequency at which code elements have been tested, the technique uses a linear regression model to determine the fault detection capability of the test cases. The similarity score and code coverage information of test cases are used as input for the linear regression model. Kwon et al. [36] stated that their proposed technique is the first of its type to consider less tested code and use TF-IDF from IR for test case prioritization. The authors claimed that their approach is also the first to use linear regression to weigh the significance of each feature regarding fault-finding. They divided the process into three phases, i.e., validation, training, and testing, and suggested using the previous fault detection history or mutation faults as validation and training data.

Kwon et al. [36] suggested the following steps to implement the proposed technique:

  1. Measure the coverage of each test case.
  2. Set the IDF threshold with validation data (previous or mutation faults).
  3. Calculate the TF/IDF scores of each test case.
  4. Use the coverage and the sum of TF/IDF scores of a test case as predictor values in the training data.
  5. Use previous (mutation) faults as response values in the training data.
  6. Estimate the regression coefficients (the weight of each feature) with the training data.
  7. Assign predictor values (coverage and TF/IDF scores) to the model to decide the test schedule.
  8. Run the scheduled test cases.
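The training and scheduling steps above (steps 4 to 7) can be sketched as a small pipeline. This is an illustrative sketch, not the original implementation: the feature values and fault counts are hypothetical, and ordinary least squares via NumPy stands in for whatever regression tooling the original authors used:

```python
import numpy as np

# Hypothetical training data: one row per test case.
# Predictors: [code coverage fraction, sum of TF/IDF scores]; response: number
# of (mutation) faults the test case detected in the training runs.
X = np.array([[0.80, 1.2],
              [0.40, 3.1],
              [0.65, 0.4],
              [0.30, 2.2]])
y = np.array([2.0, 4.0, 1.0, 3.0])

# Step 6: estimate regression coefficients (with an intercept column).
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 7: score unseen test cases with the fitted model and sort descending
# to obtain the test schedule.
tests = {"t1": [0.70, 0.5], "t2": [0.35, 2.9], "t3": [0.60, 1.8]}
scores = {name: coef[0] + np.dot(coef[1:], feats) for name, feats in tests.items()}
schedule = sorted(scores, key=scores.get, reverse=True)
print(schedule)  # -> ['t2', 't3', 't1']
```

In this toy data the response correlates mostly with the TF/IDF sum, so the fitted model schedules the test with the highest TF/IDF sum first.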

To evaluate the proposed test case prioritization technique (IRCOV), Kwon et al. [36] used four open-source Java programs: XML-Security (XSE), Commons-CLI (CCL), Commons-Collections (CCN), and Joda-Time (JOD). They highlighted that the fault information of the programs was not sufficiently available and that they were unable to evaluate their approach using the available information. Therefore, the authors simulated faults using mutation. To generate the mutants, they used the mutation framework MAJOR [31, 32]. To reduce researcher bias and achieve reliable results, they applied ten-fold validation, dividing the mutation faults into ten subsets and assigning each subset to training, validation, and test data.

4.2.3 Concepts used in the original study

The original study [36] makes use of IR concepts. It views a “document” as a test case, “words” as elements covered (e.g., branches, lines, and methods), and “query” as coverage elements in the updated files. TF and IDF scores of the covered elements determine their significance to a test case. The number of times a test case exercises a code element is counted as a TF value. The document frequency (DF) represents the number of test cases exercising an element. IDF is used to find the unique code elements as it is the inverse of DF.

Since the focus of the proposed technique is on less-tested code, the IDF score has more significance, and the impact of TF must be minimized. To minimize the impact of the TF score on test case prioritization, the authors used Boolean values for TF (i.e., 1 if a test case covers the code element, 0 otherwise). To assign an IDF score to a code element, the IDF threshold is used. Kwon et al. [36] define the IDF threshold as:

“The maximum number of test cases considered when assigning an IDF score to a code element.”

The IDF threshold is decided by the validation data that consists of faults and related test cases from the previous test execution history or mutation faults.

Finally, the authors used the similarity score between a test case (document) and the changed elements (query) to identify the test cases related to modifications. The similarity score is measured as the sum of the TF-IDF scores of the elements the test case has in common with the query.
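As an illustration of this similarity computation (with hypothetical code elements and pre-computed TF-IDF scores; with Boolean TF, a covered element’s score reduces to its IDF):

```python
# Hypothetical TF-IDF score per code element.
tfidf = {"line_10": 0.18, "line_20": 1.10, "line_30": 1.61}

# Code elements covered by each test case.
coverage = {
    "test_parse": {"line_10", "line_20"},
    "test_flags": {"line_10", "line_30"},
}

# The "query" is the set of code elements in the changed files.
query = {"line_20", "line_30"}

# Similarity: sum of TF-IDF scores of elements shared with the query.
def similarity(test):
    return sum(tfidf[e] for e in coverage[test] & query)

ranked = sorted(coverage, key=similarity, reverse=True)
print(ranked)  # -> ['test_flags', 'test_parse']
```

test_flags shares the rarer element line_30 (score 1.61) with the query, so it ranks above test_parse, which shares only line_20 (score 1.10).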

4.2.4 Key findings of the original study

Using four open-source Java programs, the authors compared their technique with random ordering and standard code-coverage-based methods (i.e., line, branch, and method coverage). They measured effectiveness using the Average Percentage of Faults Detected (APFD).

The authors concluded that their technique is more effective as it increased the fault detection rate by 4.7% compared to random ordering and traditional code coverage-based approaches.

4.3 Information about the replication

We first present contextual information, i.e. data availability (Section 4.3.1) and division of the roles during the replication (Section 4.3.2). Thereafter, we describe how the replication steps were implemented (Section 4.3.3).

4.3.1 Authors’ consent and Data availability

We contacted the original authors to get their consent and ask for any help in replicating their work. We asked them if they could share their experimental package and data with us. We received a reply from one of the corresponding authors, informing us that they did not have any backups related to the study, since they had conducted it a few years earlier. However, they had no objection to the replication of their work.

4.3.2 Roles involved

All four authors of this study were given specified roles in the replication. The first and second authors jointly selected the candidate study. The first author conceptualized the whole replication process, including the logical assessment of the study to be replicated. The second author, who is an industry practitioner, set up the environment according to the requirements of the programs. The first and second authors jointly performed the replication steps: the first author provided input for every step, while the second author carried out the actual implementation. The third and fourth authors reviewed the study design and implementation steps.

4.3.3 Replication steps

We aimed to make an exact replication of the original study, and therefore we strictly followed the procedure presented in the original study [36]. The original IRCOV models were built using total and additional line, branch, and method coverage. However, Kwon et al. [36] stated that the results of IRCOV with total and additional coverage were similar; therefore, we only used total coverage to build the IRCOV models. The sequence of events followed in the replication experiment is shown in Figure 1.

Figure 1: Steps followed to replicate the original study

Replication objects: We built IRCOV models based on three coverage approaches (i.e., line, branch, and method coverage). We aimed to build the IRCOV model using four programs: two from the original study and two new programs. Table 1 presents the details of the programs used in the replication of IRCOV. The programs are Commons CLI, XML Security, Commons Email, and Log4j. We were able to implement IRCOV with Commons CLI, but due to various constraints discussed in Section 5, we failed to replicate IRCOV with XML Security, Commons Email, and Log4j.

Program Version LOC Test Classes Used in [36] Repository
Commons CLI 1.1, 1.2 13210 23 Yes SIR & GitHub
XML Security 2.2.3 21315 172 Yes SIR & GitHub
Commons Email master 83154 20 No GitHub
Log4j master 169646 63 No SIR & GitHub
Table 1: Programs used in replication

We selected Commons CLI and XML Security as these were used in the original study. Commons CLI (https://commons.apache.org/proper/commons-cli/) is a library providing an API for parsing command-line arguments. XML Security for Java (http://santuario.apache.org/javaindex.html) is a component library implementing the XML signature and encryption standards. To see whether the technique (IRCOV) is replicable with other programs, we selected Commons Email and Log4j. Commons Email (https://commons.apache.org/proper/commons-email/) is built on top of the JavaMail API and aims to provide an API for sending email.

Log4j (https://logging.apache.org/log4j/2.x/) is a Java-based logging utility. Log4j 2 was released in 2014 to overcome the limitations of its predecessor, Log4j 1. We obtained the programs from GitHub and used the test suites provided with them.

Mutant generation: Fault information for the programs was not available; therefore, we used mutation faults instead, as did the authors of the original study. For mutant generation, we used the MAJOR tool [31, 32].

Partitioning mutants into training, validation, and test sets: As per the description in the original study, we partitioned the mutants into training, validation, and test sets (10%, 10%, and 80%, respectively). To partition the data, we used an online random line picker (https://approsto.com/random-line-picker/). We applied ten-fold validation to ensure the reliability of the results and avoid bias. To create the ten folds of each data set (i.e., training, validation, and test sets), we wrote automation scripts [27].
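As a minimal sketch (not the actual fold_generator script; the per-fold seeding and the exact shuffling procedure are assumptions), the partitioning can be expressed as:

```python
import random

def partition_mutants(mutants, seed):
    """Shuffle the mutants and split them 10%/10%/80% into
    training, validation, and test sets (one fold)."""
    rng = random.Random(seed)          # independent RNG per fold
    shuffled = mutants[:]
    rng.shuffle(shuffled)
    n_train = len(shuffled) // 10      # 10% training
    n_val = len(shuffled) // 10        # 10% validation
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])  # remaining 80% test

# Ten folds, each with an independent random split.
mutants = [f"mutant_{i}" for i in range(200)]
folds = [partition_mutants(mutants, seed=fold) for fold in range(10)]
```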

IDF threshold: The purpose of setting an IDF threshold is to ensure that the prioritized test cases detect faults in less tested code elements. The IDF threshold is determined using validation data containing information about faults and the test cases detecting them. To calculate the IDF threshold, the authors of the original study [36] suggested using a ratio from 0.1 to 1.0 in Equation 1.


We trained the regression model with each threshold using the validation data and selected, as the IDF threshold, the ratio that led to the minimum training error. Table 2 presents the chosen IDF threshold values for all ten folds of Commons CLI. We assigned IDF values only to those code elements whose DF was not above the IDF threshold.
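The threshold selection can be sketched as a simple grid search over the candidate ratios (the training-error values below are hypothetical, for illustration only):

```python
def choose_idf_threshold(ratios, training_error):
    """Return the ratio whose model yields the smallest training error.
    `training_error` maps a candidate ratio to the error observed when
    the regression model is trained with that threshold."""
    return min(ratios, key=training_error)

ratios = [r / 10 for r in range(1, 11)]  # candidate ratios 0.1 ... 1.0
# Hypothetical training-error curve, for illustration only:
errors = {0.1: 0.52, 0.2: 0.48, 0.3: 0.31, 0.4: 0.36, 0.5: 0.44,
          0.6: 0.50, 0.7: 0.55, 0.8: 0.61, 0.9: 0.66, 1.0: 0.70}
best_ratio = choose_idf_threshold(ratios, errors.get)
```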

Calculating the TF and IDF scores: As suggested in the original study [36], we use Boolean values for TF (i.e., TF is 1 if the test case covers the element and 0 otherwise). The purpose of fixing the TF values to 0 or 1 was to ensure that test cases targeting less tested code are prioritized; the IDF score is more significant in this regard. As suggested in the original study [36], we used Equation 2 to calculate the IDF score.

IDF(e) = log(n / DF(e))   (2)

where n is the total number of test cases and DF(e) is the number of test cases covering code element e.
Similarity score: The similarity score directly contributes to the IRCOV model. In the regression model (see Equation 4), x2 refers to the similarity score of each test case. We calculated the similarity scores using Equation 3, as suggested in [36].

Sim(t) = Σ_e TF(t, e) · IDF(e)   (3)
Since the TF values are 1 or 0 (i.e., if a test case exercises a code element, then TF is 1; otherwise, it is 0), the similarity scores are in practice the sum of the IDF scores of the elements covered by a particular test case.
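Under these definitions, the scoring can be sketched as follows (a simplified illustration; the log(n/DF) weighting follows the standard TF-IDF formulation, and the DF-based threshold cut-off follows the description above):

```python
import math

def idf_scores(coverage, threshold):
    """coverage maps each test case to the set of code elements it covers.
    DF(e) is the number of test cases covering element e; IDF is assigned
    only to elements whose DF does not exceed the threshold.  The
    log(n / DF) weighting is the standard TF-IDF form."""
    n = len(coverage)
    df = {}
    for elements in coverage.values():
        for e in elements:
            df[e] = df.get(e, 0) + 1
    return {e: math.log(n / d) for e, d in df.items() if d <= threshold}

def similarity_score(covered_elements, idf):
    """With Boolean TF, the similarity score is simply the sum of the
    IDF scores of the elements the test case covers."""
    return sum(idf.get(e, 0.0) for e in covered_elements)

coverage = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"c"}}
idf = idf_scores(coverage, threshold=1)   # only element "a" has DF <= 1
similarities = {t: similarity_score(elems, idf) for t, elems in coverage.items()}
```

In this toy example, only test t1 covers a rarely exercised element, so it receives the only non-zero similarity score.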

Coverage information: The coverage measure is also used in the regression model. In Equation 4, x1 refers to the coverage size of each test case. To measure code size (lines of code) and the coverage of each test case, we used JaCoCo (https://www.eclemma.org/jacoco/).

IRCOV model: We used Equation 4 for the linear regression model as suggested in the original study [36].

y = β0 + β1 · x1 + β2 · x2   (4)
In Equation 4, x1 is the size of the coverage data for each test case, and x2 refers to the similarity score of each test case. The value of y represents each test case's fault detection capability, which is proportional to the number of previous faults detected by the test case. In the regression model, three coefficients need to be estimated (i.e., β0, β1, and β2). Here, β0 represents the intercept, whereas to calculate β1 and β2, [36] suggested using Equation 5, which uses the y values and the respective values of x1 and x2. The required intermediate terms can be calculated using Equation 6, where x1 and x2 respectively represent the size of the coverage data and the similarity score of each test case.
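A least-squares estimation of the three coefficients can be sketched with NumPy (the x1, x2, and y values below are illustrative, not data from the study):

```python
import numpy as np

# Illustrative training data (not from the study): x1 = coverage size,
# x2 = similarity score, y = observed fault detection capability.
x1 = np.array([120.0, 80.0, 200.0, 50.0, 150.0])
x2 = np.array([0.9, 0.4, 1.3, 0.2, 1.1])
y = np.array([3.0, 1.0, 5.0, 0.0, 4.0])

# Design matrix with an intercept column: y = b0 + b1*x1 + b2*x2
X = np.column_stack([np.ones_like(x1), x1, x2])
(b0, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)

# Predicted fault detection capability for each training test case:
y_hat = b0 + b1 * x1 + b2 * x2
```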


Prioritization based on fault detection capability: After obtaining the values of the coefficients and variables of the regression model (i.e., β0, β1, β2, x1, and x2), we determined the fault detection capability of each test case using the IRCOV model (see Equation 4). Finally, we arranged the test cases in descending order of the calculated fault detection capability.

Evaluating the technique: After obtaining a prioritized set of test cases, we ran them on the 50 faulty versions of each fold, which we created using the test set of mutants. To evaluate the results, we used the average percentage of faults detected (APFD) (see Equation 7).

APFD = 1 − (TF1 + TF2 + … + TFm) / (n · m) + 1 / (2n)   (7)

where n is the number of test cases, m is the number of faults, and TFi is the position of the first test case in the prioritized suite that reveals fault i.
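The APFD computation can be sketched as follows (a minimal illustration; the test names and fault indices are invented for the example):

```python
def apfd(prioritized_tests, detected_faults, n_faults):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n), where TFi is
    the 1-based position of the first test case revealing fault i."""
    n = len(prioritized_tests)
    first_positions = []
    for fault in range(n_faults):
        for position, test in enumerate(prioritized_tests, start=1):
            if fault in detected_faults[test]:
                first_positions.append(position)
                break
    return 1 - sum(first_positions) / (n * n_faults) + 1 / (2 * n)

# Toy example: t3 reveals both faults and is scheduled first.
order = ["t3", "t1", "t2"]
detected = {"t1": {0}, "t2": {1}, "t3": {0, 1}}
score = apfd(order, detected, n_faults=2)  # = 1 - 2/6 + 1/6
```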
4.4 Analysis of the replication results

We implemented IRCOV for line, branch, and method coverage. As described above, for all coverage types, we calculated the APFD values for each fold and also captured the intermediate results (see Table 2).

To compare the replication results with the original study's results, we extracted the APFD values for Commons CLI from the original study. Then we plotted the APFD values of the original and replication study as box plots, a statistical tool to visually summarize and compare the results of two or more groups [55, 14]. Box plots of the APFD values enabled us to visually compare the replication and original study results.

To triangulate our conclusions, we applied hypothesis testing. We used the Wilcoxon signed-rank test to compare the results of the IRCOV original and the IRCOV replication. The original study [36] also used the Wilcoxon signed-rank test, to compare the IRCOV results with the baseline methods. The Wilcoxon signed-rank test is suitable for paired samples where the data are the outcome of before and after treatment. It measures the difference between the median values of paired samples [24]. In our case, we were interested in the difference between the median APFD values of the IRCOV original and the IRCOV replication. Therefore, the Wilcoxon signed-rank test was the appropriate choice to test our results.
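Such a paired comparison can be sketched with SciPy. The replication values below are the per-fold branch coverage APFD values from Table 3; the "original" values are illustrative stand-ins, since the original study's per-fold numbers are only available in its published plots:

```python
from scipy.stats import wilcoxon

# Replication APFD values per fold (branch coverage, from Table 3).
apfd_replication = [0.874, 0.816, 0.646, 0.757, 0.725,
                    0.796, 0.841, 0.610, 0.548, 0.736]
# Illustrative stand-ins for the original study's per-fold values.
apfd_original = [0.880, 0.800, 0.660, 0.770, 0.730,
                 0.790, 0.850, 0.600, 0.560, 0.750]

statistic, p_value = wilcoxon(apfd_original, apfd_replication)
# Reject H0 (equal medians) only if p_value < 0.05.
```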

We tested the following hypotheses:

  • There is no significant difference in the median APFD values of the original and replication study using line coverage.

  • There is no significant difference in the median APFD values of the original and replication study using branch coverage.

  • There is no significant difference in the median APFD values of the original and replication study using method coverage.

4.5 Automation of Replication

Figure 2: Steps to automate the replication of IRCOV

The replication was implemented using Python scripts, which are available online [27]. Figure 2 presents the automation steps for the replication of IRCOV. The original study's authors proposed a ten-fold-based execution (when historical data is not available) to evaluate their technique. Therefore, our implementation (fold_generator) [27] generates ten folds of the object program in the first stage. Thereafter, it generates fifty faulty versions of each fold, where each version contains 5-15 mutants (faults). After generating the faulty versions, the script makes the corresponding changes in the code. Finally, the tests are executed and their results extracted. Later, using the test results, we calculate the APFD values of each fold. The calculation of the APFD values is the only step not handled in our scripts; we used Excel sheets to calculate the APFD values.

4.6 Threats to validity

4.6.1 Internal validity

Internal validity refers to the analysis of causal relations between independent and dependent variables. In our case, we have to see whether different conditions affect the performance of IRCOV. IRCOV depends upon two inputs: the coverage of each test case and a similarity score calculated based on TF-IDF. We used the test cases available within the programs; therefore, we did not have any control over the coverage of these test cases. However, the choice of mutants can impact the similarity score. To avoid any bias, we generated the mutants using a tool and used a random generator to select the mutants for the different faulty versions of the programs. Furthermore, we trained IRCOV sufficiently before applying it to the test data by following the tenfold validation rule. Since we measured the performance of IRCOV using the APFD measure, and the results of the successful case were not significantly different from the original study's results, we can argue that our treatment did not affect the outcome of IRCOV. Hence, threats to internal validity were minimized.

4.6.2 Construct validity

Construct validity is concerned with the underlying operational measures of the study. Since this is a replication study, we followed the philosophy of exact replication [53]. Therefore, if the original study suffers from any construct validity threats, the replication may do so as well. For instance, the use of mutation faults could be a potential threat to construct validity for the following two reasons:

  • Mutation faults may not be representative of real faults.

  • Possible researchers’ bias concerning the nature of mutation faults.

Concerning the first reason, using mutation faults in place of real faults is an established practice, and researchers claim that mutation faults produce reliable results and hence can replace real faults [16, 2]. To avoid any bias, we used an automated mutation tool to generate the mutants. Also, to select the mutants for the validation, training, and test sets, we used an automated random selector. Hence, no human intervention occurred during the whole process. Furthermore, we discussed the strengths and weaknesses of different tools.

4.6.3 External validity

External validity is the ability to "generalize the results of an experiment to industrial practice" [56]. The programs used in the replication study are small and medium-sized Java programs. Therefore, we cannot claim generalizability of the results to large-scale industrial projects. The results produced in the replication align well with the results of the original study. However, we could not demonstrate the use of the technique on the new programs.

5 Results

This section presents the findings from the replication. The results are organized according to research questions listed in Section 4.

5.1 RQ1. Degree to which the replication is feasible to implement.

The first goal was to see whether it is possible to replicate the IRCOV technique described in [36].

Out of the four replication attempts, we successfully replicated the IRCOV technique with the Commons CLI project. However, for the other three projects, (i) XML Security, (ii) Commons Email, and (iii) Log4j, the replication was either partially successful or unsuccessful for the reasons elaborated in the following.

Successful replication implementation: We successfully replicated IRCOV with Commons CLI. After going through the steps presented in Section 4.3.3, for every fold, we were able to calculate the respective coverage information and similarity score of each test case. Table 2 presents the intermediate results for the replication of IRCOV with Commons CLI. These include the training error, the chosen value of the IDF threshold, the regression intercept β0, the coverage weight β1, and the weight for the similarity score β2.

Fold Name Coverage Type Training Error IDF Threshold β0 β1 β2
Fold1 MC 1.0694 7 -0.3478 0.0187 0.1426
LC 0.9770 2 0 0 0
BC 0.8876 2 0 0 0
Fold2 MC 0.3195 5 -0.7976 0.0323 -0.1472
LC 0.3533 6 -0.6084 0.0088 -0.1343
BC 0.3567 5 -0.3386 0.0178 -0.2095
Fold3 MC 0.6411 6 -0.0286 0.0008 0.0796
LC 0.6404 6 -0.0498 0.0004 0.0736
BC 0.6405 6 -0.0380 0.0008 0.0736
Fold4 MC 0.4783 6 -0.0687 0.0097 0.1677
LC 0.4551 6 -0.1086 0.0032 0.1365
BC 0.4947 6 0.1240 0.0045 0.1683
Fold5 MC 0.1838 5 0.0309 0.0068 0.0558
LC 0.1856 4 0.0859 0.0018 0.0612
BC 0.1876 4 0.1406 0.0038 0.0516
Fold6 MC 0.2247 2 -0.5284 0.0194 0.3548
LC 0.1795 2 -0.4869 0.0052 0.3470
BC 0.1549 2 -0.3978 0.0119 0.3149
Fold7 MC 0.1382 10 -0.1479 0.0115 -0.0141
LC 0.1364 10 -0.0833 0.0030 -0.0234
BC 0.1390 10 0.0028 0.0065 -0.0235
Fold8 MC 0.2020 6 0.4389 -0.0024 0.0839
LC 0.2046 6 0.3401 -0.0001 0.0715
BC 0.2046 6 0.3286 -0.00001 0.0694
Fold9 MC 0.1490 6 0.1652 -0.0032 0.1473
LC 0.1532 6 0.0540 -0.0002 0.1344
BC 0.1517 6 0.0862 -0.0012 0.1434
Fold10 MC 0.0339 10 -0.1253 0.0017 0.0267
LC 0.0339 10 -0.1127 0.0004 0.0261
BC 0.0343 10 -0.0920 0.0007 0.0278
Table 2: Simulation parameters for Commons CLI. (MC = Method coverage, LC = Line coverage, & BC = Branch coverage)

To evaluate the performance of IRCOV, we calculated the APFD values for all ten folds of each coverage type (branch, line, and method) (see Table 3). For branch coverage, the APFD values range from 0.547 to 0.873, and the average (median) APFD value is 0.747. The APFD values for line coverage range from 0.609 to 0.873, with an average APFD value of 0.809. Finally, the APFD values for method coverage range from 0.549 to 0.864, with an average APFD of 0.772. These results show that the IRCOV model performed best for line coverage, as the median APFD for line coverage is the highest among the coverage types.
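The median APFD values reported above can be reproduced directly from the per-fold values in Table 3:

```python
from statistics import median

# Per-fold APFD values from Table 3.
apfd = {
    "branch": [0.874, 0.816, 0.646, 0.757, 0.725, 0.796, 0.841, 0.610, 0.548, 0.736],
    "line":   [0.874, 0.866, 0.643, 0.816, 0.715, 0.829, 0.839, 0.610, 0.622, 0.803],
    "method": [0.865, 0.790, 0.613, 0.755, 0.721, 0.829, 0.839, 0.585, 0.594, 0.803],
}
medians = {coverage: median(values) for coverage, values in apfd.items()}
# Line coverage yields the highest median APFD of the three.
```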

Partial or unsuccessful replication: Our first unsuccessful replication concerned XML Security. We could not find all the program versions used in the original study [36]. Therefore, we decided to use versions with closely matching major/minor release numbers. We downloaded the available XML Security versions 1, 1.5, and 2.2.3. The first two downloaded versions (version 1 and version 1.5) did not compile due to the unavailability of various dependencies. The logs from the compilation failures are placed in the folder “LogsXmlSecurit” available at [38].

We were able to compile the third XML Security version, 2.2.3, but we could not continue with it because this version contained several failing test cases (see [38]). With already failing test cases, it was difficult to train the model correctly and get an appropriate list of prioritized test cases.

The second unsuccessful replication attempt concerned Commons Email. This time the replication failed because of faulty mutants generated by the mutation tool. For instance, it suggested replacing variable names with 'null' (see Listings 1 & 2): after mutant injection, the assignment in setName turned into this.null = null;, which is not valid Java.

ByteArrayDataSource@setName(java.lang.String):214:name |==> null
Listing 1: Faulty mutant generated by the tool
 public void setName(final String name) {
         // this.name = name;   // original code
         this.null = null;      // substituted by the mutant generator
 }
Listing 2: Code generated after the insertion of faulty mutant

Another type of faulty mutant occurred when MAJOR modified a line in the code in a way that resulted in Java compilation errors (such as "unreachable statement"). Several such faulty mutants made the program fail to compile, and hence no further processing was possible. Details of all faulty mutants are available in the folder “CommonsEmail” at [38].

We also made unsuccessful attempts to change the mutant generator to rectify this problem. However, each mutant generator presented a new set of problems. The lessons learned from the usage of different mutant generators are described in the next section.

The third replication attempt was executed on the program Log4j. We followed all the steps (using automated scripts) proposed by the authors of the original study. We successfully generated the mutants for this program. However, the replication stopped at the point where the steps to train the model failed. The proposed approach in the original study is based on the coverage information of each code class and test class. This time the issue was caused by the low coverage of the test cases. During the training of the model, we realized that, because of the low coverage of the test cases, we were unable to calculate the values of the regression coefficients, and as a result, we could not generate the prioritized set of test cases. We developed a Jupyter notebook describing each step of this partially successful replication (see [27]). Compared to the other programs selected in this study, with 169646 LOC, Log4j is a large program. Thus, a lot of time was needed to train the model for Log4j. For all ten folds, with fifty faulty versions of each fold and with five to fifteen faults in each faulty version, it required approximately 60 hours to train the model.

Key findings: Concerning RQ1, the replication was only feasible in one of four cases; the key reasons are listed below.

  • The system under test could not be used due to compatibility issues (unavailability of system versions and dependencies).

  • Already failing test cases made the replication fail.

  • Mutant generators created issues in running the replication, and workarounds were difficult to implement.

  • Test cases require a certain level of coverage to train the model.

  • More effort is required to train the model for large-sized programs.

Folds Branch Coverage Line Coverage Method Coverage
Fold 1 0.874 0.874 0.865
Fold 2 0.816 0.866 0.790
Fold 3 0.646 0.643 0.613
Fold 4 0.757 0.816 0.755
Fold 5 0.725 0.715 0.721
Fold 6 0.796 0.829 0.829
Fold 7 0.841 0.839 0.839
Fold 8 0.610 0.610 0.585
Fold 9 0.548 0.622 0.594
Fold 10 0.736 0.803 0.803
Table 3: APFD values for all ten folds of each coverage type

5.2 RQ2. Comparison of the results to the original study.

Figure 3 presents the APFD boxplots of the original and replication study for Commons CLI. Boxplots with blue patterns represent the original study results, and boxplots with gray patterns represent the replication study results. We can see that in all cases, the APFD values of the original study are slightly better than those of the replication. We applied statistical tests to detect whether the results of the replication and the original study differ.

Figure 3: APFD Boxplots for IRCOV Original vs IRCOV Replication

IRCBO= IRCOV Branch coverage original, IRCBR= IRCOV Branch Coverage Replication
IRCLO= IRCOV Line coverage original, IRCLR= IRCOV Line coverage replication
IRCMO= IRCOV Method coverage original, IRCMR=IRCOV Method coverage replication

To compare the replication results for branch, line, and method coverage of Commons CLI with the original study's results, we applied the Wilcoxon signed-rank test. The results are significant if the p-value is less than the significance level [17]. In our case, the difference between the two implementations would be significant if the p-value were less than 0.05.

Table 4 presents the results of the statistical tests. The p-value for branch coverage is 0.475, which is greater than 0.05 (the significance level). Therefore, we cannot reject the null hypothesis. That means we cannot show a significant difference in the APFD values for branch coverage of Commons CLI between the replication and the original study.

Similarly, the p-value for line coverage is 0.415, greater than the set significance level. Based on the statistical results, we cannot reject the null hypothesis. This implies that we cannot show a significant difference in the APFD values for line coverage of Commons CLI between the replication and the original study.

Finally, the p-value for method coverage is 0.103; based on this result, we cannot reject the null hypothesis. Therefore, no significant difference in the APFD values for method coverage of Commons CLI was detected between the replication and the original study.

Coverage α p-value 95% Conf. Int.
Branch 0.05 0.475 0.646 - 0.816
Line 0.05 0.415 0.668 - 0.845
Method 0.05 0.103 0.652 - 0.827
Table 4: Statistical results of replication compared to the original study for Commons CLI.

From the Wilcoxon signed-rank test results, we can conclude that for all three coverage types (branch, line, and method), we did not find any significant difference between the replication and the original study. Therefore, we can state that the replication experiment did not deviate from the original result to a degree that would lead to the test detecting a significant difference.

Key findings: Concerning RQ2, we compared the replication results of the successful case (i.e., Commons CLI) with the original study's results. Below are the key findings for RQ2.

  • The statistical test did not detect a significant difference in the APFD values of the replication and the original study for the three coverage measures investigated.

  • We conclude that the results of the original study are verifiable for Commons CLI.

6 Discussion

6.1 Lessons learned of replicating artefact-based studies in software testing

We replicated the study presented in [36] with the intent of promoting artefact-based replication studies in software engineering, validating the correctness of the original study, and exploring the possibilities of adopting regression testing research in industry.

Overall, it is essential to capture and document assumptions and constraints concerning the techniques that are replicated, as well as the conditions for being able to run a replication. We highlight several factors of relevance that were observed.

Conditions concerning System under Test (SuT) complexity: From the replication results, we learned that, despite the various constraints, the technique (IRCOV) presented in [36] is replicable for small and medium programs, provided the required context information is available. With its current guidelines, the technique is difficult to implement for large programs because it requires a significant amount of effort to train the model. For example, the restriction of ten folds, fifty faulty versions for every fold, and 5 to 15 faults in every faulty version would require a substantial effort (approximately 60 hours) to train the model for a large program. This limitation can be managed by reducing the number of faulty versions for each fold, but this may degrade the accuracy and increase the training error.

Conditions concerning the characteristics of the test suite: The test cases available with Log4j 2 have low coverage, limiting the chance of correctly training the model and generating a reasonable prioritization order of the test cases. Coverage is one of the primary inputs required for test case prioritization using the IRCOV model. Another problem we encountered was the presence of already failing test cases in one of the versions of XML Security. Test cases are used to calculate the coverage and similarity scores of the project. If a handful of test cases fail (as in XML Security version 2.2.3), wrong coverage information and similarity scores are calculated. This results in the wrong prioritization of test cases as well as faulty training of the model (which is used to identify prioritized test cases). Another drawback of failing test cases concerns the use of mutation: if tests are already failing when mutants are introduced, the measured effectiveness is unreliable, as the tests fail because of other issues. Further conditions may be of relevance in studies focusing on different aspects of software testing. Here, we would highlight how important it is to look for these conditions and document them. This is also of relevance for practice, as it demonstrates under which conditions a technique may or may not be successful in practice.

Availability of experimental data for artefact-based test replications: One of the constraints regarding the replicability of the IRCOV technique is the availability of experimental data. For example, the authors of the original study [36] stated that they used in-house built tools to conduct the experiment, but they did not provide the source of these tools, nor details of the automation tools. Therefore, it took significant effort to set up the environment to replicate IRCOV with the first program. There are various limitations concerning the data sets and tools required to work with the recommended steps. Regarding the data sets, we have recorded the findings in Section 5. These include the compatibility of SIR artefacts. For example, because of various dependencies, we faced difficulties while working with XML Security version 1. While working with version 2.2.3 of XML Security, we encountered failing test cases in the version. Therefore, we could not collect the coverage information. Ultimately, we were unable to replicate the technique with any of the versions of XML Security.

Reflections on mutant generators: In the absence of failure data, the authors of the original study suggested using mutation faults, and they used the MAJOR mutation tool to generate the mutants. In one of our cases (Commons Email), the mutation tool (MAJOR) generated inappropriate mutants that led to the build failure. Therefore, no further progress was possible with this case.

To overcome the difficulty with the replication of project 3 (Commons Email), we tried the different open-source mutation generators available. Each presented various benefits and challenges, which are documented in Table 5. After trying out the different mutation tools, we learned that, among the available options, MAJOR is an appropriate tool for Java programs, as it generates the mutants dynamically.

No Mutation Tool Benefits Challenges
1 MAJOR (https://mutation-testing.org/) (i) Easy to use. (ii) The most commonly used mutant generator. (i) Generated faulty mutants. (ii) Needs an upgrade for the latest Java versions. (iii) Documentation needs improvement.
2 muJava (https://cs.gmu.edu/~offutt/mujava/) (i) IDE plugin available. (ii) The user decides which types of mutants are generated. (i) Exporting mutants separately is not supported. (ii) Does not support the latest Java versions. (iii) The GUI crashes often while generating mutants.
3 Jester (http://jester.sourceforge.net/) Two versions available, a complete version and a simple version. The latest update is more than 10 years old. We were unable to generate mutants or start the program despite following all steps.
4 Jumble (http://jumble.sourceforge.net/) (i) Supports recent Java versions. (ii) IDE integration supported. Unable to generate mutants despite following the examples. The latest update was 6 years ago.
5 PIT (https://pitest.org/) The most recent and complete mutant generator. Mutants are generated, tests are executed, and a report is generated for the user. (i) Unable to export the mutants. (ii) Lack of diversity in the mutants. (iii) Each execution produced the exact same mutants.
Table 5: Comparison of mutant generators

Reflections on the IRCOV technique: Despite the various limitations highlighted earlier, the IRCOV technique is replicable, and the replication results of the successful case (Commons CLI) show that the original authors' claim regarding the performance of the IRCOV technique was verifiable. The technique presented in the original study can be valuable from an industry perspective because of its focus on prioritizing test cases that detect faults in less tested code, while taking the coverage of test cases into account during the prioritization process. It can help practitioners work towards one of their goals (i.e., controlled fault slippage). Looking at regression testing in practice, practitioners recognize and measure the coverage metric [40]. The only information that needs to be maintained in the companies is the failure history. In the presence of actual failure data, we do not need to use mutants to train the IRCOV model extensively, and we can reduce the number of faulty versions for each fold and the number of folds.

Overall, pursuing the first RQ provided us with deeper insight into the various aspects and challenges related to external replication. The lessons learned in this pursuit are interesting and allow us to provide recommendations in the context of replication studies in software engineering. The existing literature reveals that the trend of replication studies in software engineering is not encouraging [9, 10]. The studies report that the number of internal replications is much higher than that of external replications [4, 10]. While searching the related work, we observed that in the software testing domain, external replications are few in number compared to internal replications. There could be several reasons for the overall low number of replication studies in software engineering, but we can reflect on our experiences concerning external replications, as we have undergone an external replication experiment.

One issue we would like to highlight is the substantial effort needed to implement the replication. Replication effort can be substantially reduced with more detailed documentation of the original studies, the availability of appropriate system versions and their dependencies, and the knowledge about prerequisites and assumptions. Better documentation and awareness of conditions may facilitate a higher number of replications in the future.

6.2 General lessons learned for artefact-based replications

Table 6 provides an overview of the challenges we encountered during the replication. It lists the possible impact of each challenge on the replication results and presents recommendations for researchers. The following provides a brief discussion of the lessons learned in this study.

Challenge Impact Recommendation
Documentation of original experimental setup Replicators have to invest additional effort to understand the context of the study. Original authors should maintain/publish comprehensive documentation of the experimental setup.
Collaboration with the authors of original studies The absence of experimental data and support from the original authors can make the replication process more complicated. Upon request from replicators, the authors of the original study should provide assistance in the form of essential information regarding the original experiment.
Issues with the open source data sets Replication experiments may fail due to these issues. Open source repositories need to be maintained and kept up to date.
System under Test (SuT) and tools compatibility issues Any compatibility issue of the tools required to replicate the original experiment can create a bottleneck for the replication. Such tools (e.g., mutation tools in our case) need to be maintained to keep them compatible with new development frameworks. The same applies to the system under test.
Table 6: Recommendations drawn from the challenges/lessons learned

Documenting the original experiment: The authors of original studies need to maintain and provide comprehensive documentation of the actual experiment. The availability of such documents helps independent replicators understand the original study's context; in its absence, replicators need to invest more effort to understand that context. In this regard, we suggest using open source repositories to store and publish the documentation. The documentation may contain details of the experimental setup, including the tools used to aid the original experiment, automation scripts (if any were used or developed), and the intermediate and final results of the study. Furthermore, the authors can include details about any special requirements or considerations that need to be fulfilled for successful execution of the experiment.

Collaboration with the original authors: Because of the page limits imposed by journals and conferences, not every aspect of a study can be reported in the research paper. Sometimes, replicators need assistance from the original authors regarding missing aspects of the study. Therefore, it is essential that, in case of any such query, the original study's authors willingly assist the replicators. Such cooperation can promote replication studies in software engineering. In our opinion, lack of collaboration is one reason for the low number of replication studies in software engineering. However, it is important to still conduct replications as independently as possible due to possible biases (i.e., to avoid turning an external replication into an internal one).

Maintaining open source repositories: Open-source repositories (one example being SIR) provide an excellent opportunity for researchers to use existing data sets when conducting software engineering experiments, and a large number of researchers have benefited from them. We learned, however, that some of the data sets available in such repositories are outdated and need maintenance. Such data sets are not helpful, and studies conducted with them are hard to adopt or replicate. It is therefore essential that authors explicitly state the versions they used in their own studies. In addition, we recommend that the authors of original studies, as well as of replications, archive the required dependencies and external libraries so that the system under test remains usable in replications.

Tools compatibility: In many cases, authors rely on open source tools to support the execution of their experiments. Such tools need to be well maintained and updated; compatibility issues in these tools can hinder the replication process. For example, the study we replicated uses a mutation tool (MAJOR). Although it is one of the best choices among the available options, the tool generated inappropriate mutants for one of our cases due to compatibility issues, and after significant effort, we ultimately had to abandon the replication for that case. Here, we would also like to highlight that one should document the versions of the tools and libraries used (including scripts written by the researchers, e.g., in Python).

Documenting successes and failures in replications: Besides documenting every aspect of the original experiment, recording every event of a replication (successes and failures) is critical for promoting future replications and industry adoption of research. We recommend storing the replication setups and data in open source repositories and providing the relevant links in the published versions of the articles.

Automation of replication: A key lesson learned during the replication of the original study is that the setup and execution of a replication can be automated with the help of modern tools and programming languages. This automation helps researchers who review or reproduce the original results and analyses. We have provided programming scripts that describe and document all the steps (and their consequences).
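As a small illustration of what such automation can look like, the sketch below computes the APFD (Average Percentage of Faults Detected) metric, which the study uses to evaluate prioritized test orders, from a fault-detection matrix. The test ids and fault matrix are hypothetical, not data from the study.

```python
# Minimal sketch: computing APFD for a prioritized test order.
# APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n),
# where n = number of tests and m = number of faults.

def apfd(test_order, faults_detected_by):
    """test_order: list of test ids in prioritized execution order.
    faults_detected_by: dict mapping each test id to the set of
    fault ids that the test detects."""
    all_faults = set().union(*faults_detected_by.values())
    n, m = len(test_order), len(all_faults)
    # Record the 1-based position of the first test revealing each fault.
    first_pos = {}
    for pos, test in enumerate(test_order, start=1):
        for fault in faults_detected_by.get(test, set()):
            first_pos.setdefault(fault, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

# Hypothetical suite: 4 tests, 3 faults (e.g., mutants).
detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1"}}
print(apfd(["t2", "t1", "t3", "t4"], detects))  # faults found early -> high APFD
```

Scripting the metric this way lets a replication recompute and compare APFD values directly from archived fault matrices instead of relying on numbers transcribed from the paper.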

7 Conclusions

This article reports the results of a replication of the information retrieval (IR) based test case prioritization technique originally proposed by [36]. We replicated the original study using four Java programs: Commons CLI, XML Security, Commons Email, and Log4j; two programs were taken from the original study, and the other two were new. We aimed to answer two research questions: RQ1 asks whether the technique is replicable, and RQ2 asks whether the replication results conform to those presented in the original study.

We faced various challenges while pursuing RQ1. These challenges relate to the availability of the original experimental setup, collaboration with the original authors, the system under test, the test suites, and the compatibility of supporting tools. We learned that the technique is replicable for small programs, subject to the availability of context information. However, it is hard to apply the technique to larger programs because training it for a larger program requires substantial effort.

To verify the original study’s results (RQ2), we compared the replication results for Commons CLI with those presented in the original study. These results validated the original study’s findings, as the statistical test confirms no significant difference between the APFD values of the replication and the original experiment. However, our results only partially conform with the original study, because we could not replicate the technique with all selected artefacts due to missing dependencies, broken test suites, and the other reasons highlighted earlier.

The technique can be helpful in an industry context as it prioritizes test cases that target less-tested code, helping practitioners control fault slippage. However, its training and validation aspects need improvement before the technique can scale to industrial settings. To support future replications and adoption of IRCOV, we have automated the IRCOV steps using Python (Jupyter notebooks).
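To convey the core IR idea behind such techniques, the following simplified sketch treats each test case as a "document" of identifiers it exercises and a target code region as a "query", then orders tests by descending TF-IDF cosine similarity. This is an illustrative reduction, not the study's actual IRCOV implementation, and all identifier names are made up.

```python
# Sketch of IR-based test ranking: TF-IDF weighting + cosine similarity.
import math
from collections import Counter

def tfidf_rank(query_terms, test_docs):
    """Rank test ids by TF-IDF cosine similarity to the query terms."""
    n_docs = len(test_docs)
    # Document frequency of each term across the test "documents".
    df = Counter()
    for terms in test_docs.values():
        df.update(set(terms))
    idf = {t: math.log(n_docs / df[t]) + 1 for t in df}

    def vec(terms):
        tf = Counter(terms)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    q = vec(query_terms)

    def cosine(d):
        dot = sum(q.get(t, 0.0) * w for t, w in d.items())
        norm = (math.sqrt(sum(w * w for w in q.values()))
                * math.sqrt(sum(w * w for w in d.values())))
        return dot / norm if norm else 0.0

    scores = {tid: cosine(vec(terms)) for tid, terms in test_docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical suite: each test lists the identifiers it touches.
tests = {
    "testParse":  ["parser", "option", "tokenize"],
    "testHelp":   ["help", "format", "option"],
    "testDigest": ["digest", "hash", "canonicalize"],
}
order = tfidf_rank(["digest", "canonicalize"], tests)
print(order[0])  # the digest-related test ranks first
```

A prioritizer built on this idea runs the highest-similarity tests first, so the tests most relevant to the targeted (e.g., less-tested or recently changed) code execute early.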

In the future, we plan to work with more artefacts containing real faults to test the effectiveness of the technique (IRCOV), and to explore the possibilities of scaling it up to larger projects. In addition, we want to evaluate our proposed guidelines (under lessons learned) using different studies from industrial contexts.

Author contributions All authors have contributed to every phase of this study, i.e., the conception of the idea, implementation, and manuscript writing. All authors read and approved the final manuscript, and they stand accountable for all aspects of this work’s originality and integrity.


Acknowledgements This work has in part been supported by ELLIIT, the Swedish Strategic Research Area in IT and Mobile Communications.


  • [1] G. Amati (2009) Information retrieval models. In Liu L., Özsu M.T. (eds) Encyclopedia of Database Systems, pp. 1523–1528. External Links: Link Cited by: §2.3, §2.3.
  • [2] J. H. Andrews, L. C. Briand, and Y. Labiche (2005) Is mutation an appropriate tool for testing experiments?. In Proceedings of the 27th international conference on Software engineering, pp. 402–411. Cited by: §4.6.2.
  • [3] A. Bajaj and O. P. Sangwan (2019) A systematic literature review of test case prioritization using genetic algorithms. IEEE Access 7, pp. 126355–126375. Cited by: §2.1.
  • [4] R. M. Bezerra, F. Q. da Silva, A. M. Santana, C. V. Magalhaes, and R. E. Santos (2015) Replication of empirical studies in software engineering: an update of a systematic mapping study. In 2015 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), pp. 1–4. Cited by: §2.2, §6.1.
  • [5] N. bin Ali, E. Engström, M. Taromirad, M. R. Mousavi, N. M. Minhas, D. Helgesson, S. Kunze, and M. Varshosaz (2019) On the search for industry-relevant regression testing research. Empirical Software Engineering, pp. 1–36. Cited by: §1, §2.1, §4.2.1.
  • [6] J. C. Carver (2010) Towards reporting guidelines for experimental replications: a proposal. In 1st international workshop on replication in empirical software engineering, Vol. 1, pp. 1–4. Cited by: §4.
  • [7] C. Catal and D. Mishra (2013) Test case prioritization: a systematic mapping study. Software Quality Journal 21 (3), pp. 445–478. Cited by: §2.1.
  • [8] C. Catal (2012) On the application of genetic algorithms for test case prioritization: a systematic literature review. In Proceedings of the 2nd international workshop on Evidential assessment of software technologies, pp. 9–14. Cited by: §2.1.
  • [9] M. Cruz, B. Bernárdez, A. Durán, J. A. Galindo, and A. Ruiz-Cortés (2019) Replication of studies in empirical software engineering: a systematic mapping study, from 2013 to 2018. IEEE Access 8, pp. 26773–26791. Cited by: §1, §6.1.
  • [10] F. Q. Da Silva, M. Suassuna, A. C. C. França, A. M. Grubb, T. B. Gouveia, C. V. Monteiro, and I. E. dos Santos (2014) Replication of empirical studies in software engineering research: a systematic mapping study. Empirical Software Engineering 19 (3), pp. 501–557. Cited by: 3rd item, 4th item, §1, §1, §1, §2.2, §6.1.
  • [11] O. Dahiya and K. Solanki (2018) A systematic literature study of regression test case prioritization approaches. International Journal of Engineering & Technology 7 (4), pp. 2184–2191. Cited by: §2.1.
  • [12] C. V. de Magalhães, F. Q. da Silva, R. E. Santos, and M. Suassuna (2015) Investigations about replication of empirical studies in software engineering: a systematic mapping study. Information and Software Technology 64, pp. 76–101. Cited by: 2nd item, 3rd item.
  • [13] H. Do, S. Elbaum, and G. Rothermel (2005) Supporting controlled experimentation with testing techniques: an infrastructure and its potential impact. Empirical Software Engineering 10 (4), pp. 405–435. Cited by: §1, §1.
  • [14] H. Do, S. Mirarab, L. Tahvildari, and G. Rothermel (2010) The effects of time constraints on test case prioritization: a series of controlled experiments. IEEE Transactions on Software Engineering 36 (5), pp. 593–617. Cited by: §1, §2.1, §3, §4.4.
  • [15] H. Do, G. Rothermel, and A. Kinneer (2004) Empirical studies of test case prioritization in a junit testing environment. In 15th international symposium on software reliability engineering, pp. 113–124. Cited by: §1, §3, §3.
  • [16] H. Do and G. Rothermel (2006) On the use of mutation faults in empirical assessments of test case prioritization techniques. IEEE Transactions on Software Engineering 32 (9), pp. 733–752. Cited by: §1, §3, §3, §4.6.2.
  • [17] J. Du Prel, G. Hommel, B. Röhrig, and M. Blettner (2009) Confidence interval or p-value?: part 4 of a series on evaluation of scientific publications. Deutsches Ärzteblatt International 106 (19), pp. 335. Cited by: §5.2.
  • [18] E. D. Ekelund and E. Engström (2015) Efficient regression testing based on test history: an industrial evaluation. In Proceedings of IEEE International Conference on Software Maintenance and Evolution, ICSME, pp. 449–457. Cited by: §2.1.
  • [19] S. Elbaum, A. G. Malishevsky, and G. Rothermel (2002) Test case prioritization: a family of empirical studies. IEEE transactions on software engineering 28 (2), pp. 159–182. Cited by: §2.1.
  • [20] E. Engström, P. Runeson, and M. Skoglund (2010) A systematic review on regression test selection techniques. Information & Software Technology 52 (1), pp. 14–30. Cited by: §2.1, §2.1.
  • [21] E. Engström and P. Runeson (2010) A qualitative survey of regression testing practices. In Proceedings of the 11th International Conference on Product-Focused Software Process Improvement PROFES, pp. 3–16. Cited by: §2.1, §2.1.
  • [22] H. Fang, T. Tao, and C. Zhai (2004) A formal study of information retrieval heuristics. In Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 49–56. Cited by: §2.3, §2.3.
  • [23] M. Felderer and E. Fourneret (2015) A systematic classification of security regression testing approaches. International Journal on Software Tools for Technology Transfer 17 (3), pp. 305–319. Cited by: §2.1.
  • [24] J. D. Gibbons (1993) Location tests for single and paired samples (sign test and wilcoxon signed rank test). SAGE Publications: Thousand Oaks, CA, USA. Cited by: §4.4.
  • [25] M. J. Harrold and A. Orso (2008) Retesting software during development and maintenance. In Proceedings of the Frontiers of Software Maintenance Conference, pp. 99–108. Cited by: §2.1.
  • [26] M. Hasnain, I. Ghani, M. F. Pasha, I. H. Malik, and S. Malik (2019) Investigating the regression analysis results for classification in test case prioritization: a replicated study. International Journal of Internet, Broadcasting and Communication 11 (2), pp. 1–10. Cited by: §3, §3.
  • [27] M. Irshad (2021) Automation scripts to replicate ircov. GitHub. Note: https://github.com/MohsinIr84/replicationStudy/ Cited by: §4.3.3, §4.5, §5.1.
  • [28] ISO/IEC/IEEE (2017-08) International standard - systems and software engineering–vocabulary. ISO/IEC/IEEE 24765:2017(E) (), pp. 1–541. External Links: Document Cited by: §2.1.
  • [29] M. Ivarsson and T. Gorschek (2011) A method for evaluating rigor and industrial relevance of technology evaluations. Empirical Software Engineering 16 (3), pp. 365–395. Cited by: §4.2.1.
  • [30] N. Juristo and O. S. Gómez (2012) Replication of software engineering experiments. In Empirical Software Engineering and Verification: International Summer Schools, LASER 2008-2010, Elba Island, Italy, Revised Tutorial Lectures, pp. 60–88. External Links: ISBN 978-3-642-25231-0, Document, Link Cited by: §2.2.
  • [31] R. Just, F. Schweiggert, and G. M. Kapfhammer (2011) MAJOR: an efficient and extensible tool for mutation analysis in a java compiler. In 2011 26th IEEE/ACM International Conference on Automated Software Engineering (ASE 2011), pp. 612–615. Cited by: §4.2.2, §4.3.3.
  • [32] R. Just (2014) The major mutation framework: efficient and scalable mutation analysis for java. In Proceedings of the 2014 international symposium on software testing and analysis, pp. 433–436. Cited by: §4.2.2, §4.3.3.
  • [33] R. Kazmi, D. N. A. Jawawi, R. Mohamad, and I. Ghani (2017) Effective regression test case selection: A systematic literature review. ACM Comput. Surv. 50 (2), pp. 29:1–29:32. Cited by: §2.1, §2.1.
  • [34] M. Khatibsyarbini, M. A. Isa, D. N. Jawawi, and R. Tumeng (2018) Test case prioritization approaches in regression testing: a systematic literature review. Information and Software Technology 93, pp. 74–93. Cited by: §2.1, §2.1.
  • [35] J. L. Krein and C. D. Knutson (2010) A case for replication: synthesizing research methodologies in software engineering. In RESER2010: proceedings of the 1st international workshop on replication in empirical software engineering research, pp. 1–10. Cited by: 1st item, §1, §2.2, §2.2.
  • [36] J. Kwon, I. Ko, G. Rothermel, and M. Staats (2014) Test case prioritization based on information retrieval concepts. In 2014 21st Asia-Pacific Software Engineering Conference, Vol. 1, pp. 19–26. Cited by: item 2, §1, §2.3, §4.1, §4.2.1, §4.2.1, §4.2.2, §4.2.2, §4.2.2, §4.2.3, §4.2.3, §4.3.3, §4.3.3, §4.3.3, §4.3.3, §4.3.3, §4.3.3, §4.4, Table 1, §5.1, §5.1, §6.1, §6.1, §6.1, §7.
  • [37] J. A. P. Lima and S. R. Vergilio (2020) Test case prioritization in continuous integration environments: a systematic mapping study. Information and Software Technology 121, pp. 106268. Cited by: §2.1, §4.2.1.
  • [38] N. M. Minhas and M. Irshad (2021) Data set used in the replication of an ir based test case prioritization techniques (ircov). Vol. V1, Mendeley Data. Note: https://data.mendeley.com/drafts/ccnzpxng54 External Links: Document Cited by: §5.1, §5.1, §5.1.
  • [39] N. M. Minhas, K. Petersen, N. Ali, and K. Wnuk (2017) Regression testing goals-view of practitioners and researchers. In 24th Asia-Pacific Software Engineering Conference Workshops (APSECW), pp. 25–32. Cited by: §2.1, §4.2.1, §4.2.1.
  • [40] N. M. Minhas, K. Petersen, J. Börstler, and K. Wnuk (2020) Regression testing for large-scale embedded software development – exploring the state of practice. Information and Software Technology 120, pp. 106254. External Links: ISSN 0950-5849, Document Cited by: §2.1, §2.1, §4.2.1, §4.2.1, §6.1.
  • [41] J. F. S. Ouriques, E. G. Cartaxo, and P. D. Machado (2018) Test case prioritization techniques for model-based testing: a replicated study. Software Quality Journal 26 (4), pp. 1451–1482. Cited by: §3.
  • [42] R. Pan, M. Bagherzadeh, T. A. Ghaleb, and L. Briand (2022) Test case selection and prioritization using machine learning: a systematic literature review. Empirical Software Engineering 27 (2), pp. 1–43. Cited by: §4.2.1.
  • [43] Q. Peng, A. Shi, and L. Zhang (2020) Empirically revisiting and enhancing ir-based test-case prioritization. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 324–336. Cited by: §4.2.1.
  • [44] M. Pittelkow, R. Hoekstra, J. Karsten, and D. van Ravenzwaaij (2021) Replication target selection in clinical psychology: a bayesian and qualitative reevaluation.. Clinical Psychology: Science and Practice 28 (2), pp. 210. Cited by: §4.2.1.
  • [45] D. Qiu, B. Li, S. Ji, and H. K. N. Leung (2014) Regression testing of web service: A systematic mapping study. ACM Comput. Surv. 47 (2), pp. 21:1–21:46. Cited by: §2.1.
  • [46] A. Rainer and S. Beecham (2008) A follow-up empirical evaluation of evidence based software engineering by undergraduate students. In Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering, pp. 78–87. Cited by: §2.1.
  • [47] A. Rainer, D. Jagielska, and T. Hall (2005) Software engineering practice versus evidence-based software engineering research. In Proceedings of the ACM Workshop on Realising evidence-based software engineering (REBSE ’05), pp. 1–5. External Links: ISBN 1-59593-121-X, Link, Document Cited by: §2.1.
  • [48] T. Roelleke (2013) Information retrieval models: foundations and relationships. Synthesis Lectures on Information Concepts, Retrieval, and Services 5 (3), pp. 1–163. Cited by: §1, §2.3.
  • [49] R. H. Rosero, O. S. Gómez, and G. D. R. Rafael (2016) 15 years of software regression testing techniques - A survey. Int. J. Software Eng. Knowl. Eng. 26 (5), pp. 675–690. Cited by: §2.1.
  • [50] R. K. Saha, L. Zhang, S. Khurshid, and D. E. Perry (2015) An information retrieval approach for regression test prioritization based on program changes. In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Vol. 1, pp. 268–279. Cited by: §4.2.1.
  • [51] A. Santos, S. Vegas, M. Oivo, and N. Juristo (2021) Comparing the results of replications in software engineering. Empirical Software Engineering 26 (2), pp. 1–41. Cited by: §1.
  • [52] M. Shepperd, N. Ajienka, and S. Counsell (2018) The role and value of replication in empirical software engineering results. Information and Software Technology 99, pp. 120–132. Cited by: §2.2.
  • [53] F. J. Shull, J. C. Carver, S. Vegas, and N. Juristo (2008) The role of replications in empirical software engineering. Empirical software engineering 13 (2), pp. 211–218. Cited by: §1, §2.1, §2.2, §2.2, §4.1, §4.6.2.
  • [54] Y. Singh, A. Kaur, B. Suri, and S. Singhal (2012) Systematic literature review on regression test prioritization techniques. Informatica (Slovenia) 36 (4), pp. 379–408. Cited by: §2.1, §4.2.1, §4.2.1.
  • [55] D. F. Williamson, R. A. Parker, and J. S. Kendrick (1989) The box plot: a simple visual method to interpret data. Annals of internal medicine 110 (11), pp. 916–921. Cited by: §4.4.
  • [56] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén (2012) Experimentation in software engineering. Springer Science & Business Media. Cited by: §4.6.3.
  • [57] S. Yadla, J. H. Hayes, and A. Dekhtyar (2005) Tracing requirements to defect reports: an application of information retrieval techniques. Innovations in Systems and Software Engineering 1 (2), pp. 116–124. Cited by: §2.3.
  • [58] S. Yoo and M. Harman (2012) Regression testing minimization, selection and prioritization: a survey. Softw. Test., Verif. Reliab. 22 (2), pp. 67–120. Cited by: §1, §2.1, §2.1, §4.2.1, §4.2.1.
  • [59] A. Zarrad (2015) A systematic review on regression testing for web-based applications. JSW 10 (8), pp. 971–990. Cited by: §2.1.