Testing Scientific Software: A Systematic Literature Review

04/05/2018
by Upulee Kanewala, et al.
Colorado State University

Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code.

Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software.

Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provide relevant information about testing scientific software.

Results: We found that the challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods that can potentially overcome these challenges, along with their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them.

Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider the special challenges posed by scientific software, such as oracle problems, when developing testing techniques.


1 Introduction

Scientific software is widely used in science and engineering fields. Such software plays an important role in critical decision making in fields such as the nuclear industry, medicine and the military Sanders and Kelly (2008a, b). For example, in nuclear weapons simulations, code is used to determine the impact of modifications, since these weapons cannot be field tested Post and Kendall (2004). Climate models make climate predictions and assess climate change Drake et al. (2005). In addition, results from scientific software are used as evidence in research publications Sanders and Kelly (2008b). Due to the complexity of scientific software and the required specialized domain knowledge, scientists often develop these programs themselves or are closely involved with the development Pipitone and Easterbrook (2012); L.S. Chin and Greenough (2007); Segal (2008a); Carver et al. (2007). But scientist developers may not be familiar with accepted software engineering practices Segal (2008a); Sanders and Kelly (2008a). This lack of familiarity can impact the quality of scientific software Easterbrook (2010).

Software testing is one activity that is impacted. Due to the lack of systematic testing of scientific software, subtle faults can remain undetected. These subtle faults can cause program output to change without causing the program to crash. Software faults such as off-by-one errors have caused the loss of precision in seismic data processing programs Hatton (1997). Software faults have compromised coordinate measuring machine (CMM) performance Abackerli et al. (2010). In addition, scientists have been forced to retract published work due to software faults Miller (2006). Hatton and Roberts found that several software systems written for geoscientists produced reasonable yet essentially different results Hatton and Roberts (1994). There are reports of scientists who believed that they needed to modify the physics model or develop new algorithms, but later discovered that the real problems were small faults in the code Dubois (2012).
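The kind of silent, non-crashing fault described above can be illustrated with a small sketch (the integration routine and the fault are hypothetical, not taken from any of the cited studies): an off-by-one loop bound in a trapezoidal-rule integrator that still runs cleanly but quietly degrades accuracy.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule over n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):          # correct: interior points 1..n-1
        total += f(a + i * h)
    return h * total

def trapezoid_buggy(f, a, b, n):
    """Same rule with an off-by-one bound: drops the last interior point."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n - 1):      # off-by-one: misses i = n - 1
        total += f(a + i * h)
    return h * total

exact = 2.0  # integral of sin(x) over [0, pi]
good = trapezoid(math.sin, 0.0, math.pi, 100)
bad = trapezoid_buggy(math.sin, 0.0, math.pi, 100)
# Both calls run without error, but the buggy version silently loses accuracy.
print(abs(good - exact), abs(bad - exact))
```

Neither version crashes or warns; only a test that checks the numerical result against a known value (or a finer discretization) would expose the fault, which is exactly why unsystematic testing tends to miss it.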

We define scientific software broadly as software used for scientific purposes. Scientific software is mainly developed to better understand or make predictions about real world processes. The size of this software ranges from 1,000 to 100,000 lines of code Sanders and Kelly (2008b). Developers of scientific software range from scientists who do not possess any software engineering knowledge to experienced professional software developers with considerable software engineering knowledge.

To develop scientific software, scientists first develop discretized models. These discretized models are then translated into algorithms that are then coded using a programming language. Faults can be introduced during all of these phases Dahlgren and Devanbu (2005). Developers of scientific software usually perform validation to ensure that the scientific model is correctly modeling the physical phenomena of interest Kelly et al. (2009); Murphy et al. (2011). They perform verification to ensure that the computational model is working correctly Kelly et al. (2009), using primarily mathematical analyses Post and Kendall (2004). But scientific software developers rarely perform systematic testing to identify faults in the code Kelly and Sanders (2008); Murphy et al. (2011); Hook and Kelly (2009); Sanders and Kelly (2008a). Farrell et al. show the importance of performing code verification to identify differences between the code and the discretized model Farrell et al. (2011). Kane et al. found that automated testing is fairly uncommon in biomedical software development Kane et al. (2006). In addition, Reupke et al. discovered that many of the problems found in operational medical systems are due to inadequate testing Reupke et al. (1988). Sometimes this lack of systematic testing is caused by special testing challenges posed by this software Easterbrook (2010).
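The code-verification step described above is often operationalized as a convergence test: check that a discretization's error shrinks at its theoretical rate on a problem with a known analytic solution. A minimal sketch, assuming a first-order forward-difference scheme as the code under verification (the routine, test point, and step size are illustrative):

```python
import math

def forward_diff(f, x, h):
    """First-order forward-difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

def observed_order(f, dfdx, x, h):
    """Estimate the convergence order by halving the step size."""
    e1 = abs(forward_diff(f, x, h) - dfdx(x))
    e2 = abs(forward_diff(f, x, h / 2) - dfdx(x))
    return math.log2(e1 / e2)

# Verification: a correct forward-difference implementation should
# converge at first order, so the observed order should be near 1.0.
p = observed_order(math.sin, math.cos, x=1.0, h=1e-3)
print(p)
```

A coding fault in the scheme (a wrong sign, a dropped term) typically shows up as an observed order that deviates from the theoretical one, making this a cheap automated check of the code against the discretized model.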

This work reports on a Systematic Literature Review (SLR) that identifies the special challenges posed by scientific software and proposes solutions to overcome these challenges. In addition, we identify unsolved problems related to testing scientific software.

An SLR is a “means of evaluating and interpreting all available research relevant to a particular research question or topic area or phenomenon of interest” Kitchenham (2004). The goal of performing an SLR is to methodically review and gather research results for a specific research question and aid in developing evidence-based guidelines for practitioners Kitchenham et al. (2009). Due to the systematic approach followed when performing an SLR, the researcher can be confident of having located as much of the relevant information as possible.

Software engineering researchers have conducted SLRs in a variety of software engineering areas. Walia et al. Walia and Carver (2009) conducted an SLR to identify and classify software requirement errors. Engström et al. Engström et al. (2010) conducted an SLR on empirical evaluations of regression test selection techniques with the goal of “finding a basis for further research in a joint industry-academia research project”. Afzal et al. Afzal et al. (2009) carried out an SLR on applying search-based testing for performing non-functional testing. Their goal was to “examine existing work into non-functional search-based software testing”. While these SLRs are not restricted to software in a specific domain, we focus on scientific software, an area that has received less attention than application software. Further, when compared to Engström et al. or Afzal et al., we do not restrict our SLR to a specific testing technique.

The overall goal Kitchenham et al. (2009) of our SLR is to identify specific challenges faced when testing scientific software, how the challenges have been met, and any unsolved challenges. We developed a set of research questions based on this overall goal to guide the SLR process. Then we performed an extensive search to identify publications that can help to answer these research questions. Finally, we synthesized the gathered information from the selected studies to provide answers to our research questions.

This SLR identifies two categories of challenges in scientific software testing. The first category comprises challenges that arise from characteristics of the software itself, such as the lack of an oracle. The second category comprises challenges that occur because scientific software is developed by scientists and/or scientists play leading roles in scientific software development projects, unlike application software development, where software engineers play leading roles. We identify techniques used to test scientific software, including techniques that can help to overcome oracle problems and test case creation/selection challenges. In addition, we describe the limitations of these techniques and open problems.
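Among the techniques identified for coping with the oracle problem, metamorphic testing recurs: rather than comparing an output to a known correct value, a test checks that related inputs yield outputs satisfying a known relation. A minimal sketch, using the identity sin(π − x) = sin(x) as the metamorphic relation (the function under test, trial count, and tolerance are illustrative, not drawn from the primary studies):

```python
import math
import random

def program_under_test(x):
    # Stand-in for a numerical routine whose exact output has no
    # practical oracle; here it simply computes sin(x).
    return math.sin(x)

def metamorphic_test(trials=100, tol=1e-12):
    """Check the relation f(pi - x) == f(x) on random inputs."""
    rng = random.Random(42)
    for _ in range(trials):
        x = rng.uniform(0.0, math.pi)
        source = program_under_test(x)
        followup = program_under_test(math.pi - x)
        if abs(source - followup) > tol:
            return False  # relation violated: a fault is likely
    return True

print(metamorphic_test())  # expect True for a faithful implementation
```

No expected output value is ever needed: the test only compares the program against itself on transformed inputs, which is what makes the approach attractive when no oracle exists.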

This paper is organized as follows: Section 2 describes the SLR process and how we applied it to find answers to our research questions. We report the findings of the SLR in Section 3. Section 4 discusses the findings. Finally, we provide conclusions and describe future work in Section 5.

2 Research Method

We conducted our SLR following the published guidelines by Kitchenham Kitchenham (2004). The activities performed during an SLR can be divided into three main phases: (1) planning the SLR, (2) conducting the review and (3) reporting the review. We describe the tasks performed in each phase below.

2.1 Planning the SLR

2.1.1 Research Questions

The main goal of this SLR is to identify specific challenges faced when testing scientific software, how the challenges have been met, and any unsolved challenges. We developed the following research questions to achieve our high level goal:

  1. How is scientific software defined in the literature?

  2. Are there special characteristics or faults in scientific software or its development that make testing difficult?

  3. Can we use existing testing methods (or adapt them) to test scientific software effectively?

  4. Are there challenges that could not be met by existing techniques?

2.1.2 Formulation and validation of the review protocol

The review protocol specifies the methods used to carry out the SLR. Defining the review protocol prior to conducting the SLR can reduce researcher bias Kitchenham and Charters (2007). Our review protocol specifies the source selection procedure, the search process, quality assessment criteria, and data extraction strategies.

Source selection and search process: We used the Google Scholar, IEEE Xplore, and ACM Digital Library databases since they include journals and conferences focusing on software testing as well as computational science and engineering. Further, these databases provide mechanisms to perform keyword searches. We did not specify a fixed time frame when conducting the search. We conducted the search in January 2013; therefore this SLR includes studies that were published before January 2013. We did not restrict the search to specific journals/conferences since an initial search found relevant studies published in journals such as Geoscientific Model Development (http://www.geoscientific-model-development.net/) that we were not previously familiar with. In addition, we examined relevant studies that were referenced by the selected primary studies.

We searched the three databases identified above using a search string that included the important key words in our four research questions. Further, we augmented the key words with their synonyms, producing the following search string:

(((challenges OR problems OR issues OR characteristics) OR (technique OR methods OR approaches)) AND (test OR examine)) OR (error OR fault OR defect OR mistake OR problem OR imperfection OR flaw OR failure) AND (“(scientific OR numeric OR mathematical OR floating point) AND (Software OR application OR program OR project OR product)”)

Study selection procedure: We systematically selected the primary studies by applying the following three steps.

  1. We examined the paper titles to remove studies that were clearly unrelated to our search focus.

  2. We reviewed the abstracts and key words in the remaining studies to select relevant studies. In some situations an abstract and keywords did not provide enough information to determine whether a study is relevant. In such situations, we reviewed the conclusions.

  3. We filtered the remaining studies by applying the inclusion/exclusion criteria given in Table 1. Studies selected from this final step are the initial primary studies for the SLR.

We examined the reference lists of the initial primary studies to identify additional studies that are relevant to our search focus.

Inclusion criteria:

  1. Papers that describe characteristics of scientific software that impact testing.

  2. Case studies or surveys of scientific software testing experiences.

  3. Papers that analyze characteristics of scientific software testing including case studies and experience reports.

  4. Papers describing commonly occurring faults in scientific software.

  5. Papers that describe testing methods used for scientific software and provide a sufficient evaluation of the method used.

  6. Experience reports or case studies describing testing methods used for scientific software.

Exclusion criteria:

  1. Papers that present opinions without sufficient evidence supporting the opinion.

  2. Studies not related to the research questions.

  3. Studies in languages other than English.

  4. Papers presenting results without providing supporting evidence.

  5. Preliminary conference papers of included journal papers.

Table 1: Inclusion and exclusion criteria

Quality assessment checklist: We evaluated the quality of the selected primary studies using selected items from the quality checklists provided by Kitchenham and Charters Kitchenham and Charters (2007). Table 2 and Table 3 show the quality checklists that we used for quantitative and qualitative studies respectively. When creating the quality checklist for quantitative studies, we selected quality questions that would evaluate the four main stages of a quantitative study: design, conduct, analysis and conclusions  Kitchenham and Charters (2007).

General questions (all study types):
G1: Are the study aims clearly stated?
G2: Are the data collection methods adequately described?
G3: Was there any statistical assessment of results?
G4: Are threats to validity and/or limitations reported?
G5: Can the study be replicated?

Survey-specific questions:
S1: Was the method for collecting the sample data specified (e.g. postal, interview, web-based)?
S2: Is there a control group?
S3: Do the observations support the claims?

Case study-specific questions:
C1: Is there enough evidence provided to support the claims?

Experiment-specific questions:
E1: Is there a control group?
E2: Were the treatments randomly allocated?
E3: Is there enough evidence provided to support the claims?
Table 2: Quality assessment for quantitative studies
Quality assessment questions:

A: Are the study aims clearly stated?

B: Does the evaluation address its stated aims and purpose?

C: Is sample design/target selection of cases/documents defined?

D: Is enough evidence provided to support the claims?

E: Can the study be replicated?

Table 3: Quality assessment for qualitative studies

Data extraction strategy: Relevant information for answering the research questions needed to be extracted from the selected primary studies. We used data extraction forms to ensure that this task was carried out in an accurate and consistent manner. Table 4 shows the data extraction form that we used.

Search focus | Data item | Description
General | Identifier | Reference number given to the article
General | Bibliography | Author, year, title, source
General | Type of article | Journal/conference/tech. report
General | Study aims | Aims or goals of the study
General | Study design | Controlled experiment/survey/etc.
RQ1 | Definition | Definition for scientific software
RQ1 | Examples | Examples of scientific software
RQ2 | Challenge/problem | Challenges/problems faced when testing scientific software
RQ2 | Fault description | Description of the fault found
RQ2 | Causes | What caused the fault?
RQ3/RQ4 | Testing method | Description of the method used
RQ3/RQ4 | Existing/new/extension | Whether the testing method is new, existing, or a modification of an existing method
RQ3/RQ4 | Challenge/problem | The problem/challenge that the method addresses
RQ3/RQ4 | Faults/failures found | Description of the faults/failures found by the method
RQ3/RQ4 | Evidence | Evidence for the effectiveness of the method in finding faults
RQ3/RQ4 | Limitations | Limitations of the method
Table 4: Data extraction form

2.2 Conducting the review

2.2.1 Identifying relevant studies and primary studies

The keyword-based search produced more than 6000 hits. We first examined paper titles to remove studies that were clearly unrelated to the research focus. Then we used the abstract, key words, and the conclusion to eliminate additional unrelated studies. After applying these two steps, 94 studies remained. We examined these 94 studies and applied the inclusion/exclusion criteria in Table 1 to select 49 papers as primary studies for this SLR.

Further, we applied the same selection steps to the reference lists of the selected 49 primary studies to find additional primary studies related to the research focus. We found 13 studies related to our research focus that were not already included in the initial set of primary studies. Thus, we used a total of 62 papers as primary studies for the SLR. The selected primary studies are listed in Tables 5 and 6. Table 7 lists the publication venues of the selected primary papers. The International Workshop on Software Engineering for Computational Science and Engineering and the journal Computing in Science & Engineering published the greatest number of primary studies.

Study No. | Ref. | Study focus | RQ1 RQ2 RQ3 RQ4
PS1 Abackerli et al. (2010) A case study on testing software packages used in metrology
PS2 Ackroyd et al. (2008) Software engineering tasks carried out during scientific software development
PS3 Bagnara et al. (2013) Test case generation for floating point programs using symbolic execution
PS4 Carver et al. (2007) Case studies of scientific software development projects
PS5 Carver and Hochstein (2011) Survey on computational scientists and engineers
PS6 Chen et al. (2002) Applying metamorphic testing to programs on partial differential equations
PS7 Chen et al. (2009) Case studies on applying metamorphic testing for bioinformatics programs
PS8 Clune and Rood (2011) Case studies on applying test driven development for climate models
PS9 Cox and Harris (1999) Using reference data sets for testing scientific software
PS10 Dahlgren (2007) Effectiveness of different interface contract enforcement policies for scientific components
PS11 Dahlgren and Devanbu (2005) Partial enforcement of assertions for scientific software components
PS12 Davis and Weyuker (1981) Using pseudo-oracles for testing programs without oracles
PS13 Drake et al. (2005) A case study on developing a climate system model
PS14 Dubois (2012) A tool for automating the testing of scientific simulations
PS15 Easterbrook (2010) Discussion on software challenges faced in climate modeling program development
PS16 Easterbrook and Johns (2009) Ethnographic study of climate scientists who develop software
PS17 Eddins (2009) A unit testing framework for MATLAB programs
PS18 Farrell et al. (2011) A framework for automated continuous verification of numerical simulations
PS19 Hannay et al. (2009) Results of a survey conducted to identify how scientists develop and use software in their research
PS20 Hatton (1997) Experiments to analyze the accuracy of scientific software through static analysis and comparisons with independent implementations of the same algorithm
PS21 Hatton and Roberts (1994) N-version programming experiment conducted on scientific software
PS22 Heroux et al. (2007) Applying software quality assurance practices in a scientific software project
PS23 Heroux and Willenbring (2009) Software engineering practices suitable for scientific software development teams identified through a case study
PS24 Hochstein and Basili (March) Software development process of five large scale computational science software
PS25 Hook and Kelly (2009) Evaluating the effectiveness of using a small number of carefully selected test cases for testing scientific software
PS26 Kane et al. (2006) Qualitative study of agile development approaches for creating and maintaining bio-medical software
PS27 Kelly et al. (2011a) Comparing the effectiveness of random test cases and designed test cases for detecting faults in scientific software
PS28 Kelly et al. (2009) Useful software engineering techniques for computational scientists obtained through experience of scientists who had success
PS29 Kelly and Sanders (2008) Quality assessment practices of scientists that develop computational software
PS30 Kelly et al. (2011b) How software engineering research can provide solutions to challenges found by scientists developing software
Table 5: Selected Primary Studies (Part 1)
Study No. | Ref. | Study focus | RQ1 RQ2 RQ3 RQ4
PS31 Kelly et al. (2011c) A case study of applying testing activities to scientific software
PS32 L.S. Chin and Greenough (2007) A survey on testing tools for scientific programs written in FORTRAN
PS33 Lane and Gobet (2012) A case study on using a three level testing architecture for testing scientific programs
PS34 Mayer and Guderlei (2006) Applying metamorphic testing for image processing programs
PS35 Mayer et al. (2005) Using statistical oracles to test image processing applications
PS36 Meinke and Niu (2010) A learning-based method for automatic generation of test cases for numerical programs
PS37 Morris (2008) Lessons learned through code reviews of scientific programs
PS38 Morris and Segal (2009) Challenges faced by software engineers developing software for scientists in the field of molecular biology
PS39 Murphy et al. (2007b) A framework for randomly generating large data sets for testing machine learning applications
PS40 Murphy et al. (2007a) Methods for testing machine learning algorithms
PS41 Murphy et al. (2008) Applying metamorphic testing for testing machine learning applications
PS42 Murphy et al. (2011) Testing health care simulation software using metamorphic testing
PS43 Nguyen-Hoan et al. (2010) Survey of scientific software developers
PS44 Pipitone and Easterbrook (2012) Analysis of quality of climate models in terms of defect density
PS45 Pitt-Francis et al. (2008) Applying agile development process for developing computational biology software
PS46 Post and Kendall (2004) Lessons learned from scientific software development projects
PS47 Remmel et al. (2012) Applying variability modeling for selecting test cases when testing scientific frameworks with large variability
PS48 Reupke et al. (1988) Medical software development and testing
PS49 Sanders and Kelly (2008b) A survey to identify the characteristics of scientific software development
PS50 Sanders and Kelly (2008a) Challenges faced when testing scientific software identified through interviews carried out with scientists who develop/use scientific software
PS51 Segal (2008b) Study of problems arising when scientists and software engineers work together to develop scientific software
PS52 Segal (2007) Problems in scientific software development identified through case studies in different fields
PS53 Segal (2009b) Challenges faced by software engineers who develop software for scientists
PS54 Segal (2009a) Case studies on professional end-user development culture
PS55 Segal (2008a) A model of scientific software development identified through multiple field studies of scientific software development
PS56 Segal (2005) A case study on applying traditional document-led development methodology for developing scientific software
PS57 Sletholt et al. (2012) Literature review and case studies on how scientific software development matches agile practices and the effects of using agile practices in scientific software development
PS58 Smith (2007) A test harness for numerical programs
PS59 Smith et al. (2004) A testing framework for conducting regression testing of scientific software
PS60 Vilkomir et al. (2008) A method for test case generation of scientific software when there are dependencies between input parameters
PS61 Weyuker (1982) Testing non-testable programs
PS62 Wood and Kleb (2003) Culture clash when applying extreme programming to develop scientific software
Table 6: Selected Primary Studies (Part 2)
Publication venue Type Count %
International Workshop on Software Engineering for Computational Science and Engineering Workshop 7 11.3
Computing in Science & Engineering Journal 7 11.3
IEEE Software Journal 5 8.1
BMC Bioinformatics Journal 2 3.2
Geoscientific Model Development Journal 2 3.2

International Conference on Software Engineering and Knowledge Engineering Conference 2 3.2
International Journal of High Performance Computing Applications Journal 2 3.2
Lecture Notes in Computer Science Book chapter 2 3.2
Journal of the Brazilian Society of Mechanical Sciences and Engineering Journal 1 1.6
International Conference on Software Testing, Verification and Validation Conference 1 1.6
International Conference on Software Engineering Conference 1 1.6
Sandia National Laboratories-Technical report Tech. report 1 1.6
Computer Software and Applications Conference Conference 1 1.6
Analytica Chimica Acta Journal 1 1.6
International Workshop on Software Engineering for High Performance Computing System Applications Workshop 1 1.6
ACM ’81 Conference Conference 1 1.6
FSE/SDP Workshop on Future of Software Engineering Research Workshop 1 1.6
IEEE Computational Science & Engineering Journal 1 1.6
IEEE Transactions on Software Engineering Journal 1 1.6
EUROMICRO International Conference on Parallel, Distributed and Network-Based Processing Conference 1 1.6
IEEE Computer Journal 1 1.6
Journal of Computational Science Journal 1 1.6
Rutherford Appleton Laboratory-Technical report Tech. report 1 1.6

Journal of Experimental & Theoretical Artificial Intelligence Journal 1 1.6
International Conference on Quality Software Conference 1 1.6
Lecture Notes in Informatics Book chapter 1 1.6
International Conference on e-Science Conference 1 1.6
International Workshop on Random Testing Conference 1 1.6
Workshop on Software Engineering in Health Care Workshop 1 1.6
International Symposium on Empirical Software Engineering and Measurement Conference 1 1.6
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences Journal 1 1.6
Symposium on the Engineering of Computer-Based Medical Systems Conference 1 1.6
Conference for the Association for Software Testing Conference 1 1.6
Annual Meeting of the Psychology of Programming Interest Group Conference 1 1.6
Symposium on Visual Languages and Human-Centric Computing Conference 1 1.6
Computer Supported Cooperative Work Journal 1 1.6
Empirical Software Engineering Journal 1 1.6
Grid-Based Problem Solving Environments Journal 1 1.6
Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Conference 1 1.6
International Conference on Computational Science Conference 1 1.6
The Computer Journal Journal 1 1.6
Table 7: Publication venues of primary studies

2.2.2 Data extraction and quality assessment

We used the data extraction form in Table 4 to extract data from the primary studies. Many primary studies did not answer all of the questions in the data extraction form. We extracted the important information provided by the primary studies using the data extraction form. Then, depending on the type of the study, we applied the quality assessment questions in Table 2 or Table 3 to each primary study.

We provided ‘yes’ and ‘no’ answers to our quality assessment questions. We used a binary scale since we were not interested in providing a quality score for the studies Dyba et al. (2007). Table 8 shows the results of the quality assessment for quantitative primary studies. All of the quantitative primary studies answered ‘yes’ to quality assessment question G1 (Are the study aims clearly stated?). Most of the quantitative primary studies answered ‘yes’ to quality assessment questions G2 (Are the data collection methods adequately described?) and G5 (Can the study be replicated?). Table 9 shows the results of the quality assessment for qualitative primary studies. All of the qualitative primary studies answered ‘yes’ to quality assessment questions A (Are the study aims clearly stated?) and B (Does the evaluation address its stated aims and purpose?). Most of the qualitative primary studies answered ‘yes’ to quality assessment question D (Is enough evidence provided to support the claims?).

Ref. No. G1 S1 S2 E1 E2 G2 G3 S3 C1 E3 G4 G5
Drake et al. (2005) yes N/A N/A N/A N/A no no N/A yes N/A no no
Murphy et al. (2011) yes N/A N/A no no yes no N/A N/A yes yes yes
Murphy et al. (2007b) yes N/A N/A no no yes no N/A N/A yes yes yes
Kelly et al. (2011a) yes N/A N/A yes no yes yes N/A N/A yes no yes
Pipitone and Easterbrook (2012) yes N/A N/A N/A N/A yes no N/A yes N/A yes yes
Abackerli et al. (2010) yes N/A N/A N/A N/A yes no N/A yes N/A no yes
Cox and Harris (1999) yes N/A N/A no no yes yes N/A N/A yes no yes
Dahlgren and Devanbu (2005) yes N/A N/A yes no yes no N/A N/A yes no yes
Carver and Hochstein (2011) yes yes no N/A N/A yes yes yes N/A N/A no yes
Nguyen-Hoan et al. (2010) yes yes no N/A N/A yes no yes N/A N/A yes yes
Pitt-Francis et al. (2008) yes N/A N/A N/A N/A yes no N/A yes N/A no yes
Remmel et al. (2012) yes N/A N/A N/A N/A yes no N/A yes N/A no yes
Hook and Kelly (2009) yes N/A N/A yes no yes no N/A N/A yes no yes
Dahlgren (2007) yes N/A N/A yes no yes no N/A N/A yes no yes
Mayer and Guderlei (2006) yes N/A N/A no no yes no N/A N/A yes no yes
Chen et al. (2009) yes N/A N/A N/A N/A yes no N/A yes N/A yes yes
Chen et al. (2002) yes N/A N/A N/A N/A yes no N/A yes N/A no yes
Wood and Kleb (2003) yes N/A N/A N/A N/A yes no N/A yes N/A yes no
Bagnara et al. (2013) yes N/A N/A yes no yes no N/A N/A yes no yes
Hannay et al. (2009) yes yes no N/A N/A yes no yes N/A N/A yes yes
Hatton (1997) yes N/A N/A no no yes yes N/A N/A yes no no
Hatton and Roberts (1994) yes N/A N/A no no yes yes N/A N/A yes no no
Meinke and Niu (2010) yes N/A N/A yes no yes no N/A N/A yes no yes
Wood and Kleb (2003) yes N/A N/A N/A N/A yes no N/A yes N/A no yes
G1: Are the study aims clearly stated?
S1: Was the method for collecting the sample data specified?
S2, E1: Is there a control group?
E2: Were the treatments randomly allocated?
G2: Are the data collection methods adequately described?
G3: Was there any statistical assessment of results?
S3: Do the observations support the claims?
C1, E3: Is there enough evidence provided to support the claims?
G4: Are threats to validity and/or limitations reported?
G5: Can the study be replicated?
Table 8: Quality assessment results of quantitative studies
Ref. No. A B C D E
Post and Kendall (2004) yes yes yes yes yes
Reupke et al. (1988) yes yes no yes no
Segal (2005) yes yes no yes no
Morris and Segal (2009) yes yes yes yes no
Ackroyd et al. (2008) yes yes yes yes no
Kelly et al. (2011c) yes yes no yes no
Kelly et al. (2011b) yes yes yes yes no
Smith et al. (2004) yes yes no yes no
Heroux et al. (2007) yes yes yes yes no
L.S. Chin and Greenough (2007) yes yes no yes no
Easterbrook (2010) yes yes no yes no
Kelly and Sanders (2008) yes yes yes yes no
Sanders and Kelly (2008b) yes yes yes yes no
Clune and Rood (2011) yes yes no yes no
Dubois (2012) yes yes yes yes yes
Smith (2007) yes yes yes yes no
Lane and Gobet (2012) yes yes yes yes no
Murphy et al. (2008) yes yes yes yes yes
Hochstein and Basili (March) yes yes yes yes yes
Mayer et al. (2005) yes yes yes yes yes
Davis and Weyuker (1981) yes yes no yes no
Segal (2008b) yes yes yes yes yes
Segal (2009a) yes yes yes yes yes
Carver et al. (2007) yes yes yes yes no
Segal (2007) yes yes yes yes no
Murphy et al. (2007a) yes yes yes yes yes
Heroux and Willenbring (2009) yes yes yes yes no
Kane et al. (2006) yes yes yes yes no
Farrell et al. (2011) yes yes yes yes no
Kelly et al. (2009) yes yes no yes no
Morris (2008) yes yes yes yes yes
Easterbrook and Johns (2009) yes yes yes yes yes
Eddins (2009) yes yes no no no
Sanders and Kelly (2008a) yes yes yes yes yes
Segal (2009b) yes yes yes yes no
Segal (2008a) yes yes yes yes no
Sletholt et al. (2012) yes yes yes yes yes
Weyuker (1982) yes yes no no no
A: Are the study aims clearly stated?
B: Does the evaluation address its stated aims and purpose?
C: Is sample design/target selection of cases/documents defined?
D: Is enough evidence provided to support the claims?
E: Can the study be replicated?
Table 9: Quality assessment results of qualitative studies

2.3 Reporting the review

Data extracted from the 62 primary papers were used to formulate answers to the four research questions given in Section 2.1.1. We closely followed guidelines provided by Kitchenham Kitchenham (2004) when preparing the SLR report.

3 Results

We use the selected primary papers to provide answers to the research questions.

3.1 RQ1: How is scientific software defined in the literature?

Scientific software is defined in various ways. Sanders et al. Sanders and Kelly (2008a) use the definition provided by Kreyman et al. Kreyman et al. (1999): “Scientific software is software with a large computational component and provides data for decision support.” Kelly et al. identified two types of scientific software Kelly et al. (2011b):

(1) End user application software that is written to achieve scientific objectives (e.g., climate models).

(2) Tools that support writing code that expresses a scientific model and the execution of scientific code (e.g., an automated software testing tool for MATLAB Eddins (2009)).

An orthogonal classification is given by Carver et al. Carver and Hochstein (2011):

(1) Research software written with the goal of publishing papers.

(2) Production software written for real users (e.g., climate models).

Scientific software is developed by scientists themselves or by multi-disciplinary teams, where a team consists of scientists and professional software developers. A scientist will generally be the person in charge of a scientific software development project Morris and Segal (2009).

We encountered software that helps to solve a variety of scientific problems. We present the details of software functionality, size, and programming languages in Table 10. None of the primary studies reported the complexity of the software in terms of measurable units such as coupling, cohesion, or cyclomatic complexity.

Ref. No. Description Programming language Size
Reupke et al. (1988) Medical software (e.g. software for blood chemistry analyzer and medical image processing system) N/S N/S
Post and Kendall (2004) Nuclear weapons simulation software FORTRAN 500 KLOC
Drake et al. (2005); Easterbrook (2010) Climate modeling software N/S N/S
Segal (2005) Embedded software for spacecrafts N/S N/S
Morris and Segal (2009) Software developed for space scientists and biologists N/S N/S
Ackroyd et al. (2008) Control and data acquisition software for Synchrotron Radiation Source (SRS) experiment stations Java N/S
Murphy et al. (2011) Health care simulation software (e.g., discrete event simulation engine and insulin titration algorithm simulation) Java, MATLAB N/S
Murphy et al. (2007b) Machine learning ranking algorithm implementations Perl, C N/S
Pipitone and Easterbrook (2012) Climate modeling software FORTRAN, C 400 KLOC
Kelly et al. (2011c) Astronomy software package MATLAB, C++ 10 KLOC
Abackerli et al. (2010) Software packages providing uncertainty estimates for tri-dimensional measurements N/S N/S
Smith et al. (2004) Implementation of a time dependent simulation of a complex physical system N/S N/S
Dahlgren and Devanbu (2005) Implementation of scientific mesh traversal algorithms N/S 38-50 LOC
Heroux et al. (2007) Implementations of parallel solver algorithms and libraries for large-scale, complex, multiphysics engineering and scientific applications N/S N/S
Pitt-Francis et al. (2008) Software for cardiac modeling in computational biology C++, Python 50 KLOC
Farrell et al. (2011) Numerical simulations in geophysical fluid dynamics N/S N/S
Remmel et al. (2012) Program for solving partial differential equations C++ 250 KLOC
Clune and Rood (2011) Program that calculates the trajectories of billions of air particles in the atmosphere C++ N/S
Clune and Rood (2011) Implementation of a numerical model that simulates the growth of virtual snowflakes C++ N/S
Dahlgren (2007) Implementations of mesh traversal algorithms N/S N/S
Mayer and Guderlei (2006) Image processing application N/S N/S
Chen et al. (2009) Bioinformatics program for analyzing and simulating gene regulatory networks and mapping short sequence reads to a reference genome N/S N/S
Murphy et al. (2007b, a) Implementations of machine learning algorithms N/S N/S
Hochstein and Basili (March) Simulations in solid mechanics, fluid mechanics and combustion C, C++, FORTRAN 100-500 KLOC
Wood and Kleb (2003) Program to evaluate the performance of a numerical scheme to solve a model advection-diffusion problem Ruby 2.5 KLOC
Mayer et al. (2005) Implementation of dilation of binary images N/S N/S
Segal (2009a) Infrastructure software for the structural protein community N/S N/S
Carver et al. (2007) Performance prediction software for a product that otherwise requires large, expensive and potentially dangerous empirical tests for performance evaluation FORTRAN, C 405 KLOC
Carver et al. (2007) Provide computational predictions to analyze the manufacturing process of composite material products C++, C 134 KLOC
Carver et al. (2007) Simulation of material behavior when placed under extreme stress FORTRAN 200 KLOC
Carver et al. (2007) Provide real-time processing of sensor data C++, MATLAB 100 KLOC
Carver et al. (2007) Calculate the properties of molecules using computational quantum mechanical models FORTRAN 750 KLOC
Bagnara et al. (2013) Program for avoiding collisions in unmanned aircraft C N/S
Heroux and Willenbring (2009) Numerical libraries to be used by computational science and engineering software projects N/S N/S
Table 10: Details of scientific software listed in primary studies

3.2 RQ2: Are there special characteristics or faults in scientific software or its development that make testing difficult?

We found characteristics that fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community. Below we describe these challenges:

  1. Testing challenges that occur due to characteristics of scientific software: These challenges can be further categorized according to the specific testing activities where they pose problems.

    1. Challenges concerning test case development:

      1. Identifying critical input domain boundaries a priori is difficult due to the complexity of the software, round-off error effects, and complex computational behavior. This makes it difficult to apply techniques such as equivalence partitioning to reduce the number of test cases Sanders and Kelly (2008b); Kelly et al. (2011a); Carver et al. (2007).

      2. Manually selecting a sufficient set of test cases is challenging due to the large number of input parameters and values accepted by some scientific software Vilkomir et al. (2008).

      3. When testing scientific frameworks at the system level, it is difficult to choose a suitable set of test cases from the large number of available possibilities  Remmel et al. (2012).

      4. Some scientific software lacks real world data that can be used for testing Murphy et al. (2007b).

      5. Execution of some paths in scientific software are dependent on results of floating point calculations. Finding test data to execute such program paths is challenging Bagnara et al. (2013).

      6. Some program units (functions, subroutines, methods) in scientific software contain so many decisions that testing is impractical Morris (2008).

      7. Difficulties in replicating the physical context where the scientific code is supposed to work can make comprehensive testing impossible Segal (2005).

    2. Challenges towards producing expected test case output values (Oracle problems): Software testing requires an oracle, a mechanism for checking whether the program under test produces the expected output when executed using a set of test cases. Obtaining reliable oracles for scientific programs is challenging Sanders and Kelly (2008a). Due to the lack of suitable oracles it is difficult to detect subtle faults in scientific code Kelly et al. (2009). The following characteristics of scientific software make it challenging to create a test oracle:

      1. Some scientific software is written to find answers that are previously unknown. Therefore only approximate solutions might be available Easterbrook (2010); Murphy et al. (2011); Weyuker (1982); Carver et al. (2007); Kelly et al. (2011b).

      2. It is difficult to determine the correct output for software written to test scientific theory that involves complex calculations or simulations. Further, some programs produce complex outputs making it difficult to determine the expected output Sletholt et al. (2012); Sanders and Kelly (2008a); Murphy et al. (2007a); Chen et al. (2009); Kelly and Sanders (2008); Pitt-Francis et al. (2008); Hannay et al. (2009); Weyuker (1982); Segal (2008b).

      3. Due to the inherent uncertainties in models, some scientific programs do not give a single correct answer for a given set of inputs. This makes determining the expected behavior of the software a difficult task, which may depend on a domain expert’s opinion Abackerli et al. (2010).

      4. Requirements are unclear or uncertain up-front due to the exploratory nature of the software. Therefore developing oracles based on requirements is not commonly done Sletholt et al. (2012); Nguyen-Hoan et al. (2010); Hannay et al. (2009); Heroux et al. (2007).

      5. Choosing suitable tolerances for an oracle when testing numerical programs is difficult due to the involvement of complex floating point computations Pitt-Francis et al. (2008); Kelly et al. (2011a, c); Clune and Rood (2011).

    3. Challenges towards test execution:

      1. Due to long execution times of some scientific software, running a large number of test cases to satisfy specific coverage criteria is not feasible Kelly et al. (2011a).

    4. Challenges towards test result interpretation:

      1. Faults can be masked by round-off errors, truncation errors and model simplifications Kelly et al. (2011a); Hatton and Roberts (1994); Hannay et al. (2009); Chen et al. (2002); Clune and Rood (2011).

      2. A limited portion of the software is regularly used. Therefore, less frequently used portions of the code may contain unacknowledged errors Pipitone and Easterbrook (2012); L.S. Chin and Greenough (2007).

      3. Scientific programs contain a high percentage of duplicated code Morris (2008).

  2. Testing challenges that occur due to cultural differences between scientists and the software engineering community: Scientists generally play leading roles in developing scientific software.

    1. Challenges due to limited understanding of testing concepts:

      1. Scientists view the code and the model that it implements as inseparable entities. Therefore they test the code to assess the model and not necessarily to check for faults in the code Kelly and Sanders (2008); L.S. Chin and Greenough (2007); Sanders and Kelly (2008b, a).

      2. Scientist developers focus on the scientific results rather than the quality of the software Easterbrook and Johns (2009); Carver et al. (2007).

      3. The value of the software is underestimated Segal (2008b).

      4. Definitions of verification and validation are not consistent across the computational science and engineering communities Hook and Kelly (2009).

      5. Developers (scientists) have little or no training in software engineering Easterbrook (2010); Easterbrook and Johns (2009); Hannay et al. (2009); Carver et al. (2007); Carver and Hochstein (2011).

      6. Requirements and software evaluation activities are not clearly defined for scientific software Segal (2008a, 2009a).

      7. Testing is done only with respect to the initial specific scientific problem addressed by the code. Therefore the reliability of results when applied to a different problem cannot be guaranteed Morris and Segal (2009).

      8. Developers are unfamiliar with testing methods Eddins (2009); Hannay et al. (2009).

    2. Challenges due to limited understanding of testing process

      1. Management and budgetary support for testing may not be provided Nguyen-Hoan et al. (2010); Heroux et al. (2007); Segal (2009a).

      2. Since the requirements are not known up front, scientists may adopt an agile philosophy for development. However, they do not use standard agile process models Easterbrook and Johns (2009). As a result, unit testing and acceptance testing are not carried out properly.

      3. Software development is treated as a secondary activity resulting in a lack of recognition for the skills and knowledge required for software development Segal (2007).

      4. Scientific software does not usually have a written or agreed-upon set of quality goals Morris (2008).

      5. Often only ad-hoc or unsystematic testing methods are used Sanders and Kelly (2008a); Segal (2007).

      6. Developers view testing as a task that should be done late during software development Heroux and Willenbring (2009).

    3. Challenges due to not applying known testing methods

      1. The wide use of FORTRAN in the scientific community makes it difficult to utilize many testing tools from the software engineering community L.S. Chin and Greenough (2007); Sanders and Kelly (2008b); Easterbrook and Johns (2009).

      2. Unit testing is not commonly conducted when developing scientific software Wood and Kleb (2003); Dubois (2012). For example, Clune et al. find that unit testing is almost non-existent in the climate modeling community Clune and Rood (2011). Reasons for the lack of unit testing include the following:

        • There are misconceptions about the difficulty and benefits of implementing unit tests among scientific software developers Clune and Rood (2011).

        • The legacy nature of scientific code makes implementing unit tests challenging Clune and Rood (2011).

        • The internal code structure is hidden Smith et al. (2004).

        • The importance of unit testing is not appreciated by scientist developers Segal (2009b).

      3. Scientific software developers are unaware of the need for and the method of applying verification testing Sanders and Kelly (2008a).

      4. There is a lack of automated regression and acceptance testing in some scientific programs Carver and Hochstein (2011).

The following specific faults are reported in the selected primary studies:

  • Incorrect use of a variable name Chen et al. (2002).

  • Incorrectly reporting hardware failures as faults due to ignored exceptions Morris (2008).

  • One-off errors Hatton (1997).
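Several of the challenges above stem from floating point behavior: round-off can both mask faults and break naive exact-equality checks. The following generic Python illustration (not drawn from any reviewed study) shows why numerical test oracles typically need tolerances rather than `==` comparisons:

```python
import math

# 0.1 has no exact binary representation, so a mathematically exact
# identity (0.1 summed ten times equals 1.0) fails under '=='.
total = sum([0.1] * 10)

print(total == 1.0)                             # False: round-off accumulates
print(math.isclose(total, 1.0, rel_tol=1e-9))   # True: tolerance-based check
```

An oracle built on exact comparison would flag this correct computation as a failure, while an overly loose tolerance would mask genuine faults, which is precisely the tolerance-selection challenge noted in Section 3.2.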

3.3 RQ3: Can we use existing testing methods (or adapt them) to test scientific software effectively?

Use of testing at different abstraction levels and for different testing purposes. Several primary studies reported conducting testing at different abstraction levels: unit testing, integration testing, and system testing. In addition, some studies reported the use of acceptance testing and regression testing. Out of the 62 primary studies, 12 applied at least one of these testing methods. Figure 1 shows the percentage of studies that applied each testing method out of the 12 studies. Unit testing was the most common testing method reported among the 12 studies.

Figure 1: Percentage of studies that applied different testing methods

Figure 2 displays the distribution of the number of testing methods applied by the 12 studies. None of the studies applied four or more testing methods. Out of the 12 studies, 8 (67%) mention applying only one testing method. Below we describe how these testing methods were applied when testing scientific software:

Figure 2: Number of testing methods applied by the studies
  1. Unit testing: Several studies report that unit testing was used to test scientific programs Kane et al. (2006); Farrell et al. (2011); Drake et al. (2005); Ackroyd et al. (2008); Kelly et al. (2011c); Lane and Gobet (2012). Clune et al. describe the use of refactoring to extract testable units when conducting unit testing on legacy code Clune and Rood (2011). They identified two faults using unit testing that could not be discovered by system testing. Only two studies used a unit testing framework to apply automated unit testing Kane et al. (2006); Ackroyd et al. (2008), and both of these studies used JUnit (http://junit.org/). In addition, Eddins Eddins (2009) developed a unit testing framework for MATLAB. We did not find evidence of the use of any other unit testing frameworks.

  2. Integration testing: We found only one study that applied integration testing to ensure that all components work together as expected Drake et al. (2005).

  3. System testing: Several studies report the use of system testing Kane et al. (2006); Farrell et al. (2011); Reupke et al. (1988). In particular, the climate modeling community makes heavy use of system testing Clune and Rood (2011).

  4. Acceptance testing: We found only one study that reports on acceptance testing conducted by the users to ensure that programmers have correctly implemented the required functionality Kane et al. (2006). One reason acceptance testing is rarely used is that the scientists who are developing the software are often also the users.

  5. Regression testing: Several studies describe the use of regression testing to compare the current output to previous outputs to identify faults introduced when the code is modified Farrell et al. (2011); Drake et al. (2005); Hochstein and Basili (March); Smith et al. (2004). Further, Smith developed a tool for assisting regression testing Smith (2007). This tool allows testers to specify the variable values to be compared and tolerances for comparisons.
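The regression testing described above, including the variable selection and tolerances supported by Smith's tool, can be sketched in a few lines of Python. All names here (`run_model`, `regression_check`) are invented for illustration and do not reflect the interface of any tool from the primary studies:

```python
import math

def run_model(params):
    # Stand-in for a scientific computation under test; purely illustrative.
    return {"energy": params["n"] * 1.5, "pressure": params["n"] ** 0.5}

def regression_check(current, reference, rel_tol=1e-9, abs_tol=1e-12):
    """Compare selected output variables against a stored reference run,
    using tolerances instead of exact equality to absorb round-off."""
    failures = []
    for name, ref_value in reference.items():
        if not math.isclose(current[name], ref_value,
                            rel_tol=rel_tol, abs_tol=abs_tol):
            failures.append((name, current[name], ref_value))
    return failures

# Reference values captured from an earlier, trusted run of the model.
reference = {"energy": 150.0, "pressure": 10.0}
print(regression_check(run_model({"n": 100}), reference))  # [] -> no regression
```

Any code change that shifts an output beyond its tolerance shows up in the returned list, which is the essence of comparing current output to previous outputs after modification.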

Techniques used to overcome oracle problems. Previously we described several techniques used to test programs that do not have oracles Kanewala and Bieman (2013b). In addition, several studies propose techniques to alleviate the oracle problem:

  1. A pseudo oracle is an independently developed program that fulfills the same specification as the program under test Abackerli et al. (2010); Nguyen-Hoan et al. (2010); Farrell et al. (2011); Easterbrook and Johns (2009); Post and Kendall (2004); Sanders and Kelly (2008a); Weyuker (1982); Davis and Weyuker (1981); Hatton (1997). For example, Murphy et al. used pseudo oracles for testing a machine learning algorithm Murphy et al. (2007a).
    Limitations: A pseudo oracle may not include some special features/treatments available in the program under test and it is difficult to decide whether the oracle or the program is faulty when the answers do not agree Chen et al. (2002). Pseudo oracles make the assumption that independently developed reference models will not result in the same failures. But Brilliant et al. found that even independently developed programs might produce the same failures Brilliant et al. (1990).

  2. Solutions obtained analytically can serve as oracles. Using analytical solutions is sometimes preferred over pseudo oracles since they can identify common algorithmic errors among the implementations. For example, a theoretically calculated rate of convergence can be compared with the rate produced by the code to check for faults in the program Abackerli et al. (2010); Kelly and Sanders (2008); Farrell et al. (2011).
    Limitations: Analytical solutions may not be available for every application Chen et al. (2002) and may not be accurate due to human errors Sanders and Kelly (2008a).

  3. Experimentally obtained results can be used as oracles Abackerli et al. (2010); Kelly and Sanders (2008); Nguyen-Hoan et al. (2010); Post and Kendall (2004); Sanders and Kelly (2008a); Lane and Gobet (2012).
    Limitations: It is difficult to determine whether an error is due to a fault in the code or due to an error made during the model creation Chen et al. (2002). In some situations experiments cannot be conducted due to high cost, legal or safety issues Carver et al. (2007).

  4. Measurement values obtained from natural events can be used as oracles.
    Limitations: Measurements may not be accurate and are usually limited due to the high cost or danger involved in obtaining them Kelly and Sanders (2008); Sanders and Kelly (2008b).

  5. Using the professional judgment of scientists Sanders and Kelly (2008b); Kelly et al. (2011c); Hook and Kelly (2009); Sanders and Kelly (2008a).
    Limitations: Scientists can miss faults due to misinterpretations and lack of data. In addition, some faults can produce small changes in the output that might be difficult to identify Hook and Kelly (2009). Further, the scientist may not provide objective judgments Sanders and Kelly (2008a).

  6. Using simplified data so that the correctness can be determined easily Weyuker (1982).
    Limitations: It is not sufficient to test using only simple data; simple test cases may not uncover faults such as round-off problems, truncation errors, overflow conditions, etc. Hochstein and Basili (March). Further, such tests do not represent how the code is actually used Sanders and Kelly (2008a).

  7. Statistical oracle: verifies statistical characteristics of test results Mayer et al. (2005).
    Limitations: Decisions by a statistical oracle may not always be correct. Further a statistical oracle cannot decide whether a single test case has passed or failed Mayer et al. (2005).

  8. Reference data sets: Cox et al. created reference data sets based on the functional specification of the program that can be used for black-box testing of scientific programs Cox and Harris (1999).
    Limitations: When using reference data sets, it is difficult to determine whether the error is due to using unsuitable equations or due to a fault in the code.

  9. Metamorphic testing (MT) was introduced by Chen et al. Chen et al. (1998) as a way to test programs that do not have oracles. MT operates by checking whether a program under test behaves according to an expected set of properties known as metamorphic relations. A metamorphic relation specifies how a particular change to the input of the program should change the output. MT was used for testing scientific applications in different areas such as machine learning applications Xie et al. (2011); Murphy et al. (2008), bioinformatics programs Chen et al. (2009), programs solving partial differential equations Chen et al. (2002) and image processing applications Mayer and Guderlei (2006). When testing programs solving partial differential equations, MT uncovered faults that cannot be uncovered by special value testing Chen et al. (2002). MT can be applied to perform both unit testing and system testing. Murphy et al. developed a supporting framework for conducting metamorphic testing at the function level Murphy et al. (2009). They used the Java Modeling Language (JML) for specifying the metamorphic relations and automatically generating test code using the provided specifications. Statistical metamorphic testing (SMT) is a technique for testing non-deterministic programs that lack oracles Guderlei and Mayer (2007). Guderlei et al. applied SMT for testing an implementation of the inverse cumulative distribution function of the normal distribution Guderlei and Mayer (2007). Further, SMT was applied for testing non-deterministic health care simulation software Murphy et al. (2011) and a stochastic optimization program Yoo (2010).
    Limitations: Enumerating a set of metamorphic relations that should be satisfied by a program is a critical initial task in applying metamorphic testing. A tester or developer has to manually identify metamorphic relations using her knowledge of the program under test; this manual process can miss some important metamorphic relations that could reveal faults. Recently we proposed a novel technique based on machine learning for automatically detecting metamorphic relations Kanewala and Bieman (2013a).
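To make the idea of a metamorphic relation concrete, the following minimal sketch (our own illustration, not code from any primary study) checks two relations of an arithmetic mean function, for which no oracle is needed: permuting the input must not change the output, and scaling every input by k must scale the output by k:

```python
import random

def mean(xs):
    # Function under test; for arbitrary data no external oracle is assumed.
    return sum(xs) / len(xs)

def check_metamorphic_relations(f, xs, trials=100):
    """Check two metamorphic relations of the arithmetic mean:
    MR1: permutation invariance, MR2: linear scaling."""
    for _ in range(trials):
        shuffled = random.sample(xs, len(xs))
        assert abs(f(shuffled) - f(xs)) < 1e-9        # MR1: permute -> same output
        k = random.uniform(0.5, 2.0)
        assert abs(f([k * x for x in xs]) - k * f(xs)) < 1e-9  # MR2: scale by k

check_metamorphic_relations(mean, [random.uniform(-100, 100) for _ in range(50)])
print("all metamorphic relations held")
```

A faulty implementation (e.g., one that drops the last element) would violate MR1 on some permutation, revealing the fault without ever knowing the "correct" mean.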

As noted in Section 3.2, selecting suitable tolerances for oracles is another challenge. Kelly et al. experimentally found that reducing the tolerance in an oracle increases the ability to detect faults in the code Kelly et al. (2011a). Clune et al. found that breaking the algorithm into small steps and testing the steps independently reduced the compounding effects of truncation and round-off errors Clune and Rood (2011).

Test case creation and selection. Several methods can help to overcome the challenges in test case creation and selection:

  1. Hook et al. found that many faults can be identified by a small number of test cases that push the boundaries of the computation represented by the code Hook and Kelly (2009). Following this, Kelly et al. found that random tests combined with specially designed test cases to cover the parts of code uncovered by the random tests are effective in identifying faults Kelly et al. (2011a). Both of these studies used MATLAB functions in their experiments.

  2. Randomly generated test cases were used with metamorphic testing to automate the testing of image processing applications Mayer and Guderlei (2006).

  3. Vilkomir et al. developed a method for automatically generating test cases when a scientific program has many input parameters with dependencies Vilkomir et al. (2008). They represent the input space as a directed graph: input parameters are represented by nodes, while specific parameter values and the probability of a parameter taking each value are represented by arcs. Dependencies among input parameter values are handled by splitting/merging nodes. This method creates a model that satisfies the probability law of Markov chains, so valid test cases can be generated automatically by taking a path through the graph. The model can also generate random and weighted test cases according to the likelihood of the parameter values.

  4. Bagnara et al. used symbolic execution to generate test data for floating point programs Bagnara et al. (2013). This method generates test data to traverse program paths that involve floating point computations.

  5. Meinke et al. developed a technique for automatic test case generation for numerical software based on learning based testing (LBT) Meinke and Niu (2010). The authors first created a polynomial model as an abstraction of the program under test. Test cases are then generated by applying a satisfiability algorithm to the learned model.

  6. Parameterized random data generation is a technique described by Murphy et al. Murphy et al. (2007b) for creating test data for machine learning applications. This method randomly generates data sets using properties of equivalence classes.

  7. Remmel et al. developed a regression testing framework for a complex scientific framework Remmel et al. (2012). They took a software product line engineering (SPLE) approach to handle the large variability of the scientific framework. They developed a variability model to represent this variability and used the model to derive test cases while making sure necessary variant combinations are covered. This approach requires that scientists help to identify infeasible combinations.
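Parameterized random data generation, as described by Murphy et al., can be sketched as follows. This is a hedged illustration of the general idea only: the equivalence-class definitions, names, and property fields below are invented, not taken from their work:

```python
import random

# Each equivalence class fixes the *properties* of a generated data set
# (size and value range here); concrete values are drawn randomly within
# those constraints. Class names and fields are illustrative assumptions.
EQUIVALENCE_CLASSES = {
    "small_positive": {"size": 10,   "low": 1e-6, "high": 1.0},
    "large_mixed":    {"size": 1000, "low": -1e6, "high": 1e6},
    "all_zero":       {"size": 100,  "low": 0.0,  "high": 0.0},
}

def generate(class_name, seed=None):
    spec = EQUIVALENCE_CLASSES[class_name]
    rng = random.Random(seed)  # seeding keeps "random" tests reproducible
    return [rng.uniform(spec["low"], spec["high"]) for _ in range(spec["size"])]

data = generate("large_mixed", seed=42)
print(len(data))  # 1000
```

Seeding the generator is what makes such randomly generated test cases usable for regression testing as well, since the same data set can be reproduced on every run.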

Test coverage information. Only two primary studies mention the use of some type of test coverage information Kane et al. (2006); Ackroyd et al. (2008). Kane et al. found that while some developers were interested in measuring statement coverage, most of the developers were interested in covering the significant functionality of the program Kane et al. (2006). Ackroyd et al. Ackroyd et al. (2008) used the Emma tool to measure test coverage.

Assertion checking. Assertion checking can be used to ensure the correctness of plug-and-play scientific components. But assertion checking introduces a performance overhead. Dahlgren et al. developed an assertion selection system to reduce performance overhead for scientific software Dahlgren and Devanbu (2005); Dahlgren (2007).
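The trade-off that motivates assertion selection can be sketched as follows. This is a minimal illustration of selectively enabling contract checks by cost, not the actual mechanism of Dahlgren's system; the environment variable and function names are invented:

```python
import os

# Level 0: no checks; level 1: cheap preconditions; level 2: expensive checks.
# The variable name SCI_ASSERT_LEVEL is an invented example.
CHECK_LEVEL = int(os.environ.get("SCI_ASSERT_LEVEL", "1"))

def solve(matrix, rhs):
    if CHECK_LEVEL >= 1:  # cheap precondition: dimensions agree
        assert len(matrix) == len(rhs), "matrix/rhs size mismatch"
    if CHECK_LEVEL >= 2:  # costlier structural check, skipped in production runs
        assert all(len(row) == len(matrix) for row in matrix), "matrix not square"
    # A real solver would go here; return a placeholder result of the right size.
    return [0.0] * len(rhs)

print(len(solve([[2.0, 0.0], [0.0, 2.0]], [1.0, 1.0])))  # 2
```

Gating the expensive checks behind a level keeps component interfaces verified during testing while avoiding the run-time overhead in performance-critical production runs.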

Software development process. Several studies reported that using agile practices for developing scientific software improved testing activities Sletholt et al. (2012); Pitt-Francis et al. (2008); Wood and Kleb (2003). Some projects have used test-driven development (TDD), where tests are written to check the functionality before the code is written. But adopting this approach could be a cultural challenge since primary studies report that TDD can delay the initial development of functional code Heroux and Willenbring (2009); Ackroyd et al. (2008).

3.4 RQ4: Are there any challenges that could not be answered by existing techniques?

Only one primary paper directly provided answers to RQ4. Kelly et al. Kelly et al. (2011b) describe the oracle problem as a key problem to solve and note the need for research on performing effective testing without oracles. We did not find other answers to this research question.

4 Discussion

4.1 Principal findings

The goal of this systematic literature review is to identify specific challenges faced when testing scientific software, how the challenges have been met, and any unsolved challenges. The principal findings of this review are the following:

  1. The main challenges in testing scientific software can be grouped into two main categories.

    • Testing challenges that occur due to characteristics of scientific software.

      • Challenges concerning test case development such as a lack of real world data and difficulties in replicating the physical context where the scientific code is supposed to work.

      • Oracle problems mainly arise because scientific programs are either written to find answers that are previously unknown or they perform calculations so complex that it is difficult to determine the correct output. 30% of the primary studies reported oracle problems as a challenge for conducting testing.

Challenges towards test execution, such as difficulties in running enough test cases to satisfy a coverage criterion due to long execution times.

      • Challenges towards test result interpretation such as round-off errors, truncation errors and model simplifications masking faults in the code.

    • Testing challenges that occur due to cultural differences between scientists and the software engineering community.

      • Challenges due to limited understanding of testing concepts such as viewing the code and the model that it implements as inseparable entities.

      • Challenges due to limited understanding of testing processes resulting in the use of ad-hoc or unsystematic testing methods.

      • Challenges due to not applying known testing methods such as unit testing.

  2. We discovered how certain techniques can be used to overcome some of the testing challenges posed by scientific software development.

Pseudo oracles, analytical solutions, experimental results, measurement values, simplified data and professional judgment are widely used as solutions to oracle problems in scientific software. However, we found no empirical studies evaluating the effectiveness of these techniques in detecting subtle faults. Newer techniques such as metamorphic testing have been applied and evaluated for testing scientific software in research studies, but we found no evidence that such techniques are actually used in practice.

Traditional techniques such as random test case generation were applied to test scientific software after modifications to account for equivalence classes. In addition, studies report the use of specific techniques to perform automatic test case generation for floating point programs. These techniques were applied only to a narrow set of programs, and their applicability in practice needs to be investigated.

    • When considering unit, system, integration, acceptance and regression testing, very few studies applied more than one type of testing to their programs. We found no studies that applied more than three of these testing techniques.

    • Only two primary studies evaluated some type of test coverage information during the testing process.

  3. Research from the software engineering community can help to improve the testing process, by investigating how to perform effective testing for programs with oracle problems.
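As a concrete illustration of the pseudo-oracle idea mentioned in finding 2, the Python sketch below compares two independently written numerical integrators; both routines and the tolerance are hypothetical choices, not taken from any primary study.

```python
import math

def trapezoid_integrate(f, a, b, n=10_000):
    """Implementation under test: composite trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

def midpoint_integrate(f, a, b, n=10_000):
    """Independently written pseudo oracle: composite midpoint rule."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def agrees_with_pseudo_oracle(f, a, b, tol=1e-4):
    """Without a known exact answer, agreement between two independent
    implementations gives (fallible) evidence of correctness."""
    return abs(trapezoid_integrate(f, a, b) - midpoint_integrate(f, a, b)) <= tol
```

A pseudo oracle is fallible in exactly the way the review notes: if both implementations share the same subtle fault, the comparison will not reveal it.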

4.2 Techniques potentially useful in scientific software testing

Oracle problems are key problems to solve; research on performing effective testing without oracles is needed Kelly et al. (2011b). Techniques such as property based testing and data redundancy can be used when an oracle is not available Ammann and Offutt (2008). Assertions can be used to perform property based testing within the source code Kanewala and Bieman (2013b). Another potential approach is to use a golden run Lemos and Martins (2012). With a golden run, an execution trace is generated during a failure-free execution of an input. This trace is then compared with traces obtained by executing the program on the same input when a failure is observed; comparing the golden run with the faulty execution traces helps determine the robustness of the program. One may also apply model based testing, but it requires well-defined and stable requirements to develop the model, and with most scientific software requirements change constantly, which can make model based testing difficult to apply. We did not find applications of property based testing, data redundancy, golden runs, or model based testing to scientific software in the primary studies. In addition, research on test case selection and test data adequacy has not considered the effect of the oracle used. Perfect oracles are often unavailable for scientific programs; therefore, developing test selection/creation techniques that consider the characteristics of the oracle used for testing would be useful.
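A minimal sketch of property based testing in Python, assuming a hypothetical `mean` routine: rather than checking exact outputs (which would require an oracle), the check asserts a property that must hold for any input, over many randomly generated inputs.

```python
import random

def mean(xs):
    """Routine under test (hypothetical)."""
    return sum(xs) / len(xs)

def mean_property_holds(trials=1000, seed=0):
    """Check, over many random inputs, the property
    min(xs) <= mean(xs) <= max(xs), which needs no exact oracle.
    A small slack absorbs floating-point rounding."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(1, 50))]
        m = mean(xs)
        if not (min(xs) - 1e-3 <= m <= max(xs) + 1e-3):
            return False
    return True
```

Such properties are weaker than an oracle (a faulty `mean` could still satisfy them), but they can be checked automatically on unlimited inputs.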

Metamorphic testing is a promising technique for addressing the oracle problem. It can be used to perform both unit and system testing. However, identifying the metamorphic relations that a program should satisfy is challenging. Therefore, techniques that can automatically identify metamorphic relations for a program are needed Kanewala and Bieman (2013a).
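A small Python sketch of metamorphic testing, with a hypothetical standard-deviation routine standing in for a program without an oracle: permuting the input and shifting every element by a constant are metamorphic relations that must leave the output unchanged.

```python
import math
import random

def sample_std(xs):
    """Program under test (hypothetical): population standard deviation,
    standing in for a routine whose exact output is hard to verify."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def metamorphic_check(xs, tol=1e-9):
    """Two metamorphic relations: permuting the input and shifting all
    elements by a constant must both leave the result unchanged."""
    base = sample_std(xs)
    shuffled = list(xs)
    random.Random(1).shuffle(shuffled)
    shifted = [x + 100.0 for x in xs]
    return (abs(sample_std(shuffled) - base) <= tol and
            abs(sample_std(shifted) - base) <= tol)
```

Instead of asking "is the output correct?" (unanswerable without an oracle), the check asks "do related inputs produce consistently related outputs?", which is answerable automatically.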

Only a few studies applied new techniques developed by the software engineering community to overcome common testing challenges. For example, none of the primary studies employed test selection techniques to select test cases, even though running a large number of test cases is difficult due to the long execution times of scientific software. However, many test selection techniques assume a perfect oracle and thus will not work well for most scientific programs.

Several studies report that scientific software developers used regression testing during the development process, but we could not determine whether regression testing was automated or whether any test case prioritization techniques were used. In addition, we found only two studies that used unit testing frameworks to conduct unit testing; both report the use of the JUnit framework for Java programs. None of the primary studies report how unit testing was conducted for programs written in other languages.
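A minimal sketch of an automated regression test in Python, with a hypothetical `simulate` kernel: the first run records a reference output, and subsequent runs compare against it within a tolerance, so changes that silently alter results are flagged.

```python
import json
import os
import tempfile

def simulate(steps):
    """Hypothetical deterministic kernel standing in for a scientific code."""
    x = 1.0
    for _ in range(steps):
        x += 0.5 * x  # invented update rule
    return x

def regression_test(reference_file, steps=10, tol=1e-12):
    """On the first run, record the output as the reference; on later
    runs, flag any change in the output beyond the tolerance."""
    result = simulate(steps)
    if not os.path.exists(reference_file):
        with open(reference_file, "w") as f:
            json.dump({"steps": steps, "result": result}, f)
        return True  # reference recorded
    with open(reference_file) as f:
        reference = json.load(f)
    return abs(result - reference["result"]) <= tol
```

The tolerance matters for scientific codes: bitwise comparison would flag benign changes such as reordered floating point summations, while a physically motivated tolerance flags only meaningful drift.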

One of the challenges of testing scientific programs is duplicated code. When code is duplicated, a fault fixed in one location may still exist in the other copies, where it can go undetected. Automatic clone detection techniques would be useful for finding duplicated code, especially when dealing with legacy code.
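A crude textual clone detector can be sketched in a few lines of Python; real clone detection tools are far more sophisticated (token- and AST-based), and the line-based normalization here is a simplifying assumption.

```python
import re
from collections import defaultdict

def normalize(line):
    """Drop comments and collapse whitespace so formatting differences
    do not hide clones."""
    line = re.sub(r"#.*", "", line)
    return re.sub(r"\s+", " ", line).strip()

def find_clones(source, window=3):
    """Return lists of starting line numbers whose `window`-line
    normalized chunks are textually identical."""
    lines = [normalize(l) for l in source.splitlines()]
    chunks = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = tuple(lines[i:i + window])
        if any(chunk):  # ignore windows that are entirely blank
            chunks[chunk].append(i + 1)  # 1-based line numbers
    return [starts for starts in chunks.values() if len(starts) > 1]
```

Running such a detector over a legacy code base yields a worklist of duplicated regions, so a fault fixed in one copy can be propagated to the others.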

4.3 Strengths and weaknesses of the SLR

Primary studies that provided the relevant information for this literature review were identified through a keyword-based search of three databases. The search found relevant studies published in journals, conference proceedings, and technical reports. We used a systematic approach, including the detailed inclusion/exclusion criteria given in Table 1, to select the relevant primary studies. Initially, both authors applied the study selection process to a subset of the results returned by the keyword-based search. After verifying that both authors selected the same set of studies, the first author applied the study selection process to the rest of the results.

In addition, we examined the reference lists of the selected primary studies and found 13 additional studies related to our search focus. These studies had been returned by the keyword-based search but did not pass the title-based filtering. This indicates that selecting studies based on the title alone may not be reliable; to improve reliability, we might have to review the abstract, keywords, and conclusions before excluding a study, though this would be time consuming given the large number of results returned by the keyword-based search. After selecting the primary studies, we used data extraction forms to extract the relevant information consistently while reducing bias. Extracted information was validated by both authors.

We used the quality assessment questions given in Table 2 and Table 3 for assessing the quality of the selected primary studies. All selected primary studies are of high quality. The primary studies are a mixture of observational and experimental studies.

One weakness is the reliance on the keyword-based search facilities provided by the three databases for selecting the initial set of papers. We cannot ensure that these search facilities returned all relevant studies. However, the search process independently returned all the studies that we previously knew to be relevant to our research questions.

Many primary studies were published in venues unrelated to software engineering. Therefore, the software engineering community may already provide solutions for some of the challenges presented in Section 3.2, such as oracle problems, but we did not find evidence of wide use of such solutions by scientific software developers.

4.4 Contribution to research and practice community

To our knowledge, this is the first systematic literature review conducted to identify software testing challenges, proposed solutions, and unsolved problems in scientific software testing. We identified challenges in testing scientific software using a large number of studies. We outlined the solutions used by practitioners to overcome those challenges, as well as unique solutions proposed to overcome specific problems. In addition, we identified several unsolved problems.

Our work may help focus research efforts aimed at improving the testing of scientific software. This SLR will help scientists who develop software to identify specific testing challenges and potential solutions for overcoming them. In addition, scientist developers can become aware of cultural differences with the software engineering community that can impact software testing. The information provided here will help scientific software developers carry out systematic testing and thereby improve the quality of scientific software. Further, there are many opportunities for software engineering research to find solutions to the challenges identified by this systematic literature review.

5 Conclusion and future work

Conducting testing to identify faults in the code is an important task in scientific software development that has received little attention. In this paper we present the results of a systematic literature review that identifies specific challenges faced when testing scientific software, how the challenges have been met, and any unsolved challenges. Below we summarize the answers to our four research questions:

RQ1: How is scientific software defined in literature? Scientific software is defined as software with a large computational component. Further, scientific software is usually developed by multidisciplinary teams made up of scientists and software developers.

RQ2: Are there special characteristics or faults in scientific software or its development that make testing difficult? We identified two categories of challenges in scientific software testing: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and model as inseparable entities.

RQ3: Can we use existing testing methods (or adapt them) to test scientific software effectively? A number of studies report on testing at different levels of abstraction such as unit testing, system testing and integration testing in scientific software development. Few studies report the use of unit testing frameworks. Many studies report the use of a pseudo oracle or experimental results to alleviate the lack of an oracle. In addition, several case studies report using metamorphic testing to test programs that do not have oracles. Several studies developed techniques to overcome challenges in test case creation. These techniques include the combination of randomly generated test cases with specially designed test cases, generating test cases by considering dependencies among input parameters, and using symbolic execution to generate test data for floating point programs. Only two studies use test coverage information.

RQ4: Are there challenges that could not be met by existing techniques? Oracle problems are prevalent and need further attention.

Scientific software poses special challenges for testing. Some of these challenges can be overcome by applying testing techniques commonly used by software engineers. Scientist developers need to incorporate these testing techniques into their software development process. Some of the challenges are unique due to characteristics of scientific software, such as oracle problems. Software engineers need to consider these special challenges when developing testing techniques for scientific software.

6 Acknowledgments

This project is supported by Award Number 1R01GM096192 from the National Institute of General Medical Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of General Medical Sciences or the National Institutes of Health. We thank the reviewers for their insightful comments on earlier versions of this paper.

References

  • Abackerli et al. (2010) Abackerli, A. J., Pereira, P. H., Calônego Jr., N., 03 2010. A case study on testing CMM uncertainty simulation software (VCMM). Journal of the Brazilian Society of Mechanical Sciences and Engineering 32, 8 – 14.
  • Ackroyd et al. (2008) Ackroyd, K., Kinder, S., Mant, G., Miller, M., Ramsdale, C., Stephenson, P., July-Aug. 2008. Scientific software development at a research facility. Software, IEEE 25 (4), 44 –51.
  • Afzal et al. (2009) Afzal, W., Torkar, R., Feldt, R., 2009. A systematic review of search-based testing for non-functional system properties. Information and Software Technology 51 (6), 957 – 976.
  • Ammann and Offutt (2008) Ammann, P., Offutt, J., 2008. Introduction to Software Testing, 1st Edition. Cambridge University Press, New York, NY, USA.
  • Bagnara et al. (2013) Bagnara, R., Carlier, M., Gori, R., Gotlieb, A., 2013. Symbolic path-oriented test data generation for floating-point programs. In: Software Testing, Verification and Validation (ICST), 2013 IEEE Sixth International Conference on. pp. 1–10.
  • Brilliant et al. (1990) Brilliant, S., Knight, J., Leveson, N., 1990. Analysis of faults in an n-version software experiment. Software Engineering, IEEE Transactions on 16 (2), 238–247.
  • Carver et al. (2007) Carver, J., Kendall, R. P., Squires, S. E., Post, D. E., 2007. Software development environments for scientific and engineering software: A series of case studies. In: Proceedings of the 29th International Conference on Software Engineering. ICSE ’07. IEEE Computer Society, Washington, DC, USA, pp. 550–559.
  • Carver and Hochstein (2011) Carver, Jeffrey, R. B. D. H., Hochstein, L., 2011. What scientists and engineers think they know about software engineering: A survey. Tech. Rep. SAND2011-2196, Sandia National Laboratories.
  • Chen et al. (2002) Chen, T., Feng, J., Tse, T. H., 2002. Metamorphic testing of programs on partial differential equations: a case study. In: Computer Software and Applications Conference, 2002. COMPSAC 2002. Proceedings. 26th Annual International. pp. 327–333.
  • Chen et al. (1998) Chen, T. Y., Cheung, S. C., Yiu, S. M., 1998. Metamorphic testing: a new approach for generating next test cases. Tech. Rep. HKUST-CS98-01, Department of Computer Science, Hong Kong University of Science and Technology, Hong Kong.
  • Chen et al. (2009) Chen, T. Y., Ho, J. W. K., Liu, H., Xie, X., 2009. An innovative approach for testing bioinformatics programs using metamorphic testing. BMC Bioinformatics 10.
  • Clune and Rood (2011) Clune, T., Rood, R., nov.-dec. 2011. Software testing and verification in climate model development. Software, IEEE 28 (6), 49 –55.
  • Cox and Harris (1999) Cox, M., Harris, P., 1999. Design and use of reference data sets for testing scientific software. Analytica Chimica Acta 380 (2–3), 339 – 351.
  • Dahlgren (2007) Dahlgren, T., 2007. Performance-driven interface contract enforcement for scientific components. In: Schmidt, H., Crnkovic, I., Heineman, G., Stafford, J. (Eds.), Component-Based Software Engineering. Vol. 4608 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, pp. 157–172.
  • Dahlgren and Devanbu (2005) Dahlgren, T. L., Devanbu, P. T., 2005. Improving scientific software component quality through assertions. In: Proceedings of the Second International Workshop on Software Engineering for High Performance Computing System Applications. SE-HPCS ’05. ACM, New York, NY, USA, pp. 73–77.
  • Davis and Weyuker (1981) Davis, M. D., Weyuker, E. J., 1981. Pseudo-oracles for non-testable programs. In: Proceedings of the ACM ’81 conference. ACM ’81. ACM, New York, NY, USA, pp. 254–257.
  • Drake et al. (2005) Drake, J. B., Jones, P. W., Carr, Jr., G. R., Aug. 2005. Overview of the software design of the community climate system model. International Journal of High Performance Computing Applications 19 (3), 177–186.
  • Dubois (2012) Dubois, P., July-Aug. 2012. Testing scientific programs. Computing in Science & Engineering 14 (4), 69 –73.
  • Dyba et al. (2007) Dyba, T., Dingsoyr, T., Hanssen, G., Sept 2007. Applying systematic reviews to diverse study types: An experience report. In: Empirical Software Engineering and Measurement, 2007. ESEM 2007. First International Symposium on. pp. 225–234.
  • Easterbrook (2010) Easterbrook, S. M., 2010. Climate change: a grand software challenge. In: Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research. FoSER ’10. ACM, New York, NY, USA, pp. 99–104.
  • Easterbrook and Johns (2009) Easterbrook, S. M., Johns, T. C., Nov.-Dec. 2009. Engineering the software for understanding climate change. Computing in Science & Engineering 11 (6), 65 –74.
  • Eddins (2009) Eddins, S. L., 2009. Automated software testing for MATLAB. Computing in Science & Engineering 11 (6), 48–55.
  • Engström et al. (2010) Engström, E., Runeson, P., Skoglund, M., 2010. A systematic review on regression test selection techniques. Information and Software Technology 52 (1), 14 – 30.
  • Farrell et al. (2011) Farrell, P. E., Piggott, M. D., Gorman, G. J., Ham, D. A., Wilson, C. R., Bond, T. M., 2011. Automated continuous verification for numerical simulation. Geoscientific Model Development 4 (2), 435–449.
  • Guderlei and Mayer (2007) Guderlei, R., Mayer, J., October 2007. Statistical metamorphic testing: testing programs with random output by means of statistical hypothesis tests and metamorphic testing. In: Quality Software, 2007. QSIC ’07. Seventh International Conference on. pp. 404–409.
  • Hannay et al. (2009) Hannay, J. E., MacLeod, C., Singer, J., Langtangen, H. P., Pfahl, D., Wilson, G., 2009. How do scientists develop and use scientific software? In: Proceedings of the 2009 ICSE Workshop on Software Engineering for Computational Science and Engineering. SECSE ’09. IEEE Computer Society, Washington, DC, USA, pp. 1–8.
  • Hatton (1997) Hatton, L., Apr-Jun 1997. The T experiments: errors in scientific software. Computational Science & Engineering, IEEE 4 (2), 27 –38.
  • Hatton and Roberts (1994) Hatton, L., Roberts, A., oct 1994. How accurate is scientific software? Software Engineering, IEEE Transactions on 20 (10), 785 –797.
  • Heroux and Willenbring (2009) Heroux, M., Willenbring, J., 2009. Barely sufficient software engineering: 10 practices to improve your CSE software. In: Software Engineering for Computational Science and Engineering, 2009. SECSE ’09. ICSE Workshop on. pp. 15–21.
  • Heroux et al. (2007) Heroux, M. A., Willenbring, J. M., Phenow, M. N., feb. 2007. Improving the development process for CSE software. In: Parallel, Distributed and Network-Based Processing, 2007. PDP ’07. 15th EUROMICRO International Conference on. pp. 11 –17.
  • Hochstein and Basili (2008) Hochstein, L., Basili, V., March 2008. The ASC-Alliance projects: A case study of large-scale parallel scientific code development. Computer 41 (3), 50–58.
  • Hook and Kelly (2009) Hook, D., Kelly, D., may 2009. Testing for trustworthiness in scientific software. In: Software Engineering for Computational Science and Engineering, 2009. SECSE ’09. ICSE Workshop on. pp. 59 –64.
  • Kane et al. (2006) Kane, D. W., Hohman, M. M., Cerami, E. G., McCormick, M. W., Kuhlmman, K. F., Byrd, J. A., 2006. Agile methods in biomedical software development: a multi-site experience report. BMC Bioinformatics 7, 273.
  • Kanewala and Bieman (2013a) Kanewala, U., Bieman, J., Nov 2013a. Using machine learning techniques to detect metamorphic relations for programs without test oracles. In: Software Reliability Engineering (ISSRE), 2013 IEEE 24th International Symposium on. pp. 1–10.
  • Kanewala and Bieman (2013b) Kanewala, U., Bieman, J. M., 2013b. Techniques for testing scientific programs without an oracle. In: Proc. 5th International Workshop on Software Engineering for Computational Science and Engineering. IEEE, pp. 48–57.
  • Kelly et al. (2011a) Kelly, D., Gray, R., Shao, Y., 2011a. Examining random and designed tests to detect code mistakes in scientific software. Journal of Computational Science 2 (1), 47 – 56.
  • Kelly et al. (2009) Kelly, D., Hook, D., Sanders, R., Sept.-Oct. 2009. Five recommended practices for computational scientists who write software. Computing in Science & Engineering 11 (5), 48 –53.
  • Kelly and Sanders (2008) Kelly, D., Sanders, R., 2008. Assessing the quality of scientific software. In: First International Workshop on Software Engineering for Computational Science and Engineering.
  • Kelly et al. (2011b) Kelly, D., Smith, S., Meng, N., Sept.-Oct. 2011b. Software engineering for scientists. Computing in Science & Engineering 13 (5), 7 –11.
  • Kelly et al. (2011c) Kelly, D., Thorsteinson, S., Hook, D., May-June 2011c. Scientific software testing: Analysis with four dimensions. Software, IEEE 28 (3), 84 –90.
  • Kitchenham (2004) Kitchenham, B., 2004. Procedures for performing systematic reviews. Technical report, Keele University and NICTA.
  • Kitchenham et al. (2009) Kitchenham, B., Brereton, O. P., Budgen, D., Turner, M., Bailey, J., Linkman, S., 2009. Systematic literature reviews in software engineering – a systematic literature review. Information and Software Technology 51 (1), 7 – 15.
  • Kitchenham and Charters (2007) Kitchenham, B., Charters, S., 2007. Guidelines for performing systematic literature reviews in software engineering (version 2.3). Technical report, Keele University and University of Durham.
  • Kreyman et al. (1999) Kreyman, K., Parnas, D. L., Qiao, S., 1999. Inspection procedures for critical programs that model physical phenomena. CRL Report no. 368, McMaster University.
  • Lane and Gobet (2012) Lane, P. C., Gobet, F., 2012. A theory-driven testing methodology for developing scientific software. Journal of Experimental & Theoretical Artificial Intelligence 24 (4), 421–456.
  • Lemos and Martins (2012) Lemos, G., Martins, E., June 2012. Specification-guided golden run for analysis of robustness testing results. In: Software Security and Reliability (SERE), 2012 IEEE Sixth International Conference on. pp. 157–166.
  • L.S. Chin and Greenough (2007) Chin, L. S., Worth, D. W., Greenough, C., 2007. A survey of software testing tools for computational science. Tech. Rep. RAL-TR-2007-010, Rutherford Appleton Laboratory.
  • Mayer and Guderlei (2006) Mayer, J., Guderlei, R., oct. 2006. On random testing of image processing applications. In: Quality Software, 2006. QSIC 2006. Sixth International Conference on. pp. 85 –92.
  • Mayer et al. (2005) Mayer, J., Informationsverarbeitung, A. A., Ulm, U., 2005. On testing image processing applications with statistical methods. In: In Software Engineering (SE 2005), Lecture Notes in Informatics. pp. 69–78.
  • Meinke and Niu (2010) Meinke, K., Niu, F., 2010. A learning-based approach to unit testing of numerical software. In: Petrenko, A., Simão, A., Maldonado, J. (Eds.), Testing Software and Systems. Vol. 6435 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, pp. 221–235.
  • Miller (2006) Miller, G., 2006. A scientist’s nightmare: Software problem leads to five retractions. Science 314 (5807), 1856–1857.
  • Morris (2008) Morris, C., 2008. Some lessons learned reviewing scientific code. In: Proc. First International Workshop on Software Engineering for Computational Science and Engineering.
  • Morris and Segal (2009) Morris, C., Segal, J., dec. 2009. Some challenges facing scientific software developers: The case of molecular biology. In: e-Science, 2009. e-Science ’09. Fifth IEEE International Conference on. pp. 216 –222.
  • Murphy et al. (2007a) Murphy, C., Kaiser, G., Arias, M., 2007a. An approach to software testing of machine learning applications. In: 19th International Conference on Software Engineering and Knowledge Engineering (SEKE). pp. 167–172.
  • Murphy et al. (2007b) Murphy, C., Kaiser, G., Arias, M., 2007b. Parameterizing random test data according to equivalence classes. In: Proceedings of the 2nd international workshop on Random testing: co-located with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE 2007). RT ’07. ACM, New York, NY, USA, pp. 38–41.
  • Murphy et al. (2008) Murphy, C., Kaiser, G., Hu, L., Wu, L., July 2008. Properties of machine learning applications for use in metamorphic testing. In: Proc. of the 20th International Conference on Software Engineering and Knowledge Engineering (SEKE). pp. 867–872.
  • Murphy et al. (2011) Murphy, C., Raunak, M. S., King, A., Chen, S., Imbriano, C., Kaiser, G., Lee, I., Sokolsky, O., Clarke, L., Osterweil, L., 2011. On effective testing of health care simulation software. In: Proceedings of the 3rd Workshop on Software Engineering in Health Care. SEHC ’11. ACM, New York, NY, USA, pp. 40–47.
  • Murphy et al. (2009) Murphy, C., Shen, K., Kaiser, G., 2009. Using JML runtime assertion checking to automate metamorphic testing in applications without test oracles. In: Proceedings of the 2009 International Conference on Software Testing Verification and Validation. ICST ’09. IEEE Computer Society, Washington, DC, USA, pp. 436–445.
  • Nguyen-Hoan et al. (2010) Nguyen-Hoan, L., Flint, S., Sankaranarayana, R., 2010. A survey of scientific software development. In: Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ESEM ’10. ACM, New York, NY, USA, pp. 12:1–12:10.
  • Pipitone and Easterbrook (2012) Pipitone, J., Easterbrook, S., 2012. Assessing climate model software quality: a defect density analysis of three models. Geoscientific Model Development 5 (4), 1009–1022.
  • Pitt-Francis et al. (2008) Pitt-Francis, J., Bernabeu, M. O., Cooper, J., Garny, A., Momtahan, L., Osborne, J., Pathmanathan, P., Rodriguez, B., Whiteley, J. P., Gavaghan, D. J., 2008. Chaste: using agile programming techniques to develop computational biology software. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366 (1878), 3111–3136.
  • Post and Kendall (2004) Post, D. E., Kendall, R. P., Winter 2004. Software project management and quality engineering practices for complex, coupled multiphysics, massively parallel computational simulations: Lessons learned from ASCI. International Journal of High Performance Computing Applications 18 (4), 399–416.
  • Remmel et al. (2012) Remmel, H., Paech, B., Bastian, P., Engwer, C., March-April 2012. System testing a scientific framework using a regression-test environment. Computing in Science & Engineering 14 (2), 38 –45.
  • Reupke et al. (1988) Reupke, W., Srinivasan, E., Rigterink, P., Card, D., Jun 1988. The need for a rigorous development and testing methodology for medical software. In: Engineering of Computer-Based Medical Systems, 1988., Proceedings of the Symposium on the. pp. 15 –20.
  • Sanders and Kelly (2008a) Sanders, R., Kelly, D., July 2008a. The challenge of testing scientific software. In: Proceedings Conference for the Association for Software Testing (CAST). Toronto, pp. 30–36.
  • Sanders and Kelly (2008b) Sanders, R., Kelly, D., July-Aug. 2008b. Dealing with risk in scientific software development. Software, IEEE 25 (4), 21 –28.
  • Segal (2005) Segal, J., 2005. When software engineers met research scientists: A case study. Empirical Software Engineering 10, 517–536.
  • Segal (2007) Segal, J., 2007. Some problems of professional end user developers. In: Visual Languages and Human-Centric Computing, 2007. VL/HCC 2007. IEEE Symposium on. pp. 111–118.
  • Segal (2008a) Segal, J., 2008a. Models of scientific software development. In: 2008 Workshop Software Eng. in Computational Science and Eng. (SECSE 08).
  • Segal (2008b) Segal, J., 2008b. Scientists and software engineers: A tale of two cultures. In: Buckley, J., Rooksby, J., Bednarik, R. (Eds.), PPIG 2008: Proceedings of the 20th Annual Meeting of the Psychology of Programming Interest Group. Lancaster University, Lancaster, UK, September 10–12, 2008.
  • Segal (2009a) Segal, J., December/Winter 2009a. Software development cultures and cooperation problems: a field study of the early stages of development of software for a scientific community. Computer Supported Cooperative Work 18 (5-6), 581–606.
  • Segal (2009b) Segal, J., may 2009b. Some challenges facing software engineers developing software for scientists. In: Software Engineering for Computational Science and Engineering, 2009. SECSE ’09. ICSE Workshop on. pp. 9 –14.
  • Sletholt et al. (2012) Sletholt, M., Hannay, J., Pfahl, D., Langtangen, H., March 2012. What do we know about scientific software development’s agile practices? Computing in Science & Engineering 14 (2), 24–37.
  • Smith (2007) Smith, B., 2007. A test harness TH for numerical applications and libraries. In: Gaffney, P., Pool, J. (Eds.), Grid-Based Problem Solving Environments. Vol. 239 of IFIP The International Federation for Information Processing. Springer US, pp. 227–241.
  • Smith et al. (2004) Smith, M. C., Kelsey, R. L., Riese, J. M., Young, G. A., Aug. 2004. Creating a flexible environment for testing scientific software. In: Trevisani, D. A., Sisti, A. F. (Eds.), Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Vol. 5423 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. pp. 288–296.
  • Vilkomir et al. (2008) Vilkomir, S. A., Swain, W. T., Poore, J. H., Clarno, K. T., 2008. Modeling input space for testing scientific computational software: A case study. In: Proceedings of the 8th international conference on Computational Science, Part III. ICCS ’08. Springer-Verlag, Berlin, Heidelberg, pp. 291–300.
  • Walia and Carver (2009) Walia, G. S., Carver, J. C., 2009. A systematic literature review to identify and classify software requirement errors. Information and Software Technology 51 (7), 1087 – 1109.
  • Weyuker (1982) Weyuker, E. J., 1982. On testing non-testable programs. The Computer Journal 25 (4), 465–470.
  • Wood and Kleb (2003) Wood, W., Kleb, W., 2003. Exploring XP for scientific research. Software, IEEE 20 (3), 30–36.
  • Xie et al. (2011) Xie, X., Ho, J. W., Murphy, C., Kaiser, G., Xu, B., Chen, T. Y., 2011. Testing and validating machine learning classifiers by metamorphic testing. Journal of Systems and Software 84 (4), 544 – 558.
  • Yoo (2010) Yoo, S., 2010. Metamorphic testing of stochastic optimisation. In: Software Testing, Verification, and Validation Workshops (ICSTW), 2010 Third International Conference on. pp. 192–201.