Morphy: A Datamorphic Software Test Automation Tool

12/20/2019 ∙ by Hong Zhu, et al. ∙ Nanjing University

This paper presents an automated tool called Morphy for datamorphic testing. It classifies software test artefacts into test entities and test morphisms, which are mappings on testing entities. In addition to datamorphisms, metamorphisms and seed test case makers, Morphy also employs a set of other test morphisms including test case metrics and filters, test set metrics and filters, test result analysers and test executers to realise test automation. In particular, basic testing activities can be automated by invoking test morphisms. Test strategies can be realised as complex combinations of test morphisms. Test processes can be automated by recording, editing and playing test scripts that invoke test morphisms and strategies. Three types of test strategies have been implemented in Morphy: datamorphism combination strategies, cluster border exploration strategies and strategies for test set optimisation via genetic algorithms. This paper focuses on the datamorphism combination strategies by giving their definitions and implementation algorithms. The paper also illustrates their uses for testing both traditional software and AI applications with three case studies.




1 Case Studies

In this section, we report three case studies on the uses of Morphy in automated software testing. The source code of the case studies can be found on GitHub.

1.1 Triangle Classification

Triangle classification is a classic software testing problem that Myers used to illustrate the importance of combining various types of test cases [20]. The program under test “reads three integer values from an input dialog. The three values represent the lengths of the sides of a triangle. The program displays a message that states whether the triangle is scalene, isosceles, or equilateral.” [20] Myers listed 14 questions for testers to assess how well they test this seemingly simple program, and reported that “highly qualified professional programmers score, on the average, only 7.8 out of a possible 14”. Here, the problem is used to demonstrate how to design datamorphisms according to Myers's assessment criteria to achieve test adequacy, and how the datamorphic testing method automates the testing process.

1.1.1 The input and output datatypes

Slightly different from the original problem that Myers specified, the program under test is implemented as a Java class with three integer attributes x, y and z and a method classify that classifies the triangle as scalene, isosceles, equilateral, or not a valid triangle. Therefore, the input datatype is a class called Triangle with two methods, toString() and valueOf(), to convert between the object and string representations of a test case. A Java enum class TriangleType is declared for the types of triangles. It is the output datatype of test cases.
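For reference, a classifier along these lines can be sketched as follows. This is an illustrative stand-in, not the implementation under test; the enum constant names (in particular `invalid`) are assumptions.

```java
// A minimal triangle classifier sketch (illustrative; not the authors' code).
public class TriangleSketch {
    enum TriangleType { equilateral, isosceles, scalene, invalid }

    static TriangleType classify(int x, int y, int z) {
        // Sides must be positive and satisfy the triangle inequality.
        if (x <= 0 || y <= 0 || z <= 0
                || x + y <= z || y + z <= x || x + z <= y) {
            return TriangleType.invalid;
        }
        if (x == y && y == z) return TriangleType.equilateral;
        if (x == y || y == z || x == z) return TriangleType.isosceles;
        return TriangleType.scalene;
    }

    public static void main(String[] args) {
        // The four seed test cases used in the case study.
        System.out.println(classify(5, 5, 5)); // equilateral
        System.out.println(classify(5, 5, 7)); // isosceles
        System.out.println(classify(5, 7, 9)); // scalene
        System.out.println(classify(3, 5, 9)); // invalid (3 + 5 < 9)
    }
}
```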

1.1.2 Generation of seed test cases

There are a number of ways to generate seed test cases in the datamorphic approach. This case study shows how each of them can be easily implemented in a Morphy test specification.

A. Literal Constants. The first is to generate seed test cases as hard-coded literal constants and save them into the test pool. The following method in the TriangleTestSpec class is a seed maker morphism that generates 4 seed test cases, one for each typical type of triangle.

  public void makeSeeds(){
    testSuite.addInput(new Triangle(5,5,5));
    testSuite.addInput(new Triangle(5,5,7));
    testSuite.addInput(new Triangle(5,7,9));
    testSuite.addInput(new Triangle(3,5,9));
  }

B. Constants with Expected Output. Expected outputs on seed test cases can also be stored and results of test executions can be checked against the expected outputs. To do so, an additional test pool can be declared to store the expected answers to the seed test cases. The following is a segment of the code in Java.

  public TestPool<Triangle,TriangleType> expected
     = new TestPool<Triangle, TriangleType>();

  public void makeSeedsWithExpectedOutput(){
    Triangle trg;
    TestCase<Triangle, TriangleType> tc;

    trg = new Triangle(5,5,5);
    tc = new TestCase<Triangle, TriangleType>();
    tc.input = trg;
    tc.output = TriangleType.equilateral;
    // ... tc and its expected output are then added to the test suite
    // and the expected pool; the other three seeds are built similarly
  }

A metamorphism can be defined to check the correctness of test executions on seed test cases. The following is such an example.

  @Metamorphism(
      message="Match the expected output.")
  public boolean matchExpected(
       TestCase<Triangle, TriangleType> x) {
    String tcId = x.id;  // assumed: test cases carry a string id
    TestCase expectedTc = expected.get(tcId);
    return (x.output == expectedTc.output);
  }

C. Manual Input. Seed test cases can be entered into the system through interactive manual input. The following is such an example code.

  public void manualInputTestCases(){
    Triangle trg;
    TestCase<Triangle, TriangleType> tc;
    String numStr;
    while (true) {
      tc = new TestCase<Triangle,TriangleType>();
      trg = new Triangle();
      numStr = JOptionPane.showInputDialog(
        "Please input x:");
      if (numStr==null) { break;}
      trg.x  = Integer.valueOf(numStr);
      numStr = JOptionPane.showInputDialog(
        "Please input y:");
      if (numStr==null) { break;}
      trg.y  = Integer.valueOf(numStr);
      numStr = JOptionPane.showInputDialog(
        "Please input z:");
      if (numStr==null) { break;}
      trg.z  = Integer.valueOf(numStr);
      tc.input = trg;
      // ... tc is then added to the test suite
    }
  }

D. Read Test Cases from a File. Test cases can also be read from a file and then added to the test suite. For the sake of space, the code is omitted here.
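A file-based seed maker can be sketched as follows, assuming a hypothetical text format of one comma-separated triple per line. The format and all names here are our assumptions; in a Morphy specification the parsed triples would be wrapped in Triangle objects and added to the test suite.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a file-based seed maker (hypothetical format: one "x,y,z" per line).
public class FileSeedMakerSketch {
    static int[] parseLine(String line) {
        String[] parts = line.trim().split(",");
        return new int[] { Integer.parseInt(parts[0].trim()),
                           Integer.parseInt(parts[1].trim()),
                           Integer.parseInt(parts[2].trim()) };
    }

    // In Morphy the triples would become Triangle inputs in the test suite;
    // here we simply collect them.
    static List<int[]> makeSeedsFrom(List<String> lines) {
        List<int[]> seeds = new ArrayList<>();
        for (String line : lines) {
            if (line.isBlank()) continue;   // skip empty lines
            seeds.add(parseLine(line));
        }
        return seeds;
    }

    public static void main(String[] args) {
        // In practice: List<String> lines = Files.readAllLines(Path.of("seeds.txt"));
        List<String> lines = List.of("5,5,5", "5,5,7", "", "3,5,9");
        System.out.println(makeSeedsFrom(lines).size()); // 3
    }
}
```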

1.1.3 Datamorphisms

The main body of the test specification consists of a set of datamorphisms and metamorphisms. The datamorphisms implement Myers's test requirements. For example, Myers requires that a test set contains all permutations of the three input values, so a set of datamorphisms has been designed to generate mutants that are permutations of the seed test cases. The following is one such datamorphism.

  public TestCase<Triangle, TriangleType> swapXY(
      TestCase<Triangle, TriangleType> seed){
    TestCase<Triangle, TriangleType> mutant
      = new TestCase<Triangle, TriangleType>();
    Triangle m = new Triangle(seed.input.y,
        seed.input.x, seed.input.z);
    mutant.input = m;
    return mutant;
  }

Table 2 lists the datamorphisms contained in the test specification. As shown in the above example, these datamorphisms are very simple Java code; each is no more than 10 lines.

Name Function
increaseX Increase the value of x by 1
increaseY Increase the value of y by 1
increaseZ Increase the value of z by 1
decreaseX Decrease the value of x by 1
decreaseY Decrease the value of y by 1
decreaseZ Decrease the value of z by 1
swapXY Swap the values of x and y
swapXZ Swap the values of x and z
swapYZ Swap the values of y and z
rotateL Rotate the values of x, y and z left
rotateR Rotate the values of x, y and z right
copyXToY Copy the value of x to y
copyXToZ Copy the value of x to z
copyYToZ Copy the value of y to z
negateX Negate the value of x
negateY Negate the value of y
negateZ Negate the value of z
zeroX Set the value of x to 0
zeroY Set the value of y to 0
zeroZ Set the value of z to 0
Table 2: List of Datamorphisms

Applying these datamorphisms with the first order mutant complete strategy generated 80 mutant test cases, which together with the seed test cases achieved full coverage of Myers's test requirements.
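The first order mutant complete strategy can be sketched generically: it applies every datamorphism exactly once to every seed, so 4 seeds and 20 datamorphisms yield 80 mutants. The following sketch (class and method names are ours, not Morphy's) demonstrates this with two of the datamorphisms of Table 2 on integer triples.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of the first order mutant complete strategy:
// one mutant per (seed, datamorphism) pair.
public class FirstOrderComplete {
    static <T> List<T> firstOrderMutants(List<T> seeds,
                                         List<UnaryOperator<T>> datamorphisms) {
        List<T> mutants = new ArrayList<>();
        for (T seed : seeds)
            for (UnaryOperator<T> d : datamorphisms)
                mutants.add(d.apply(seed));
        return mutants;
    }

    public static void main(String[] args) {
        // Two of the 20 datamorphisms from Table 2, on int[]{x, y, z} triples.
        UnaryOperator<int[]> increaseX = t -> new int[] { t[0] + 1, t[1], t[2] };
        UnaryOperator<int[]> swapXY    = t -> new int[] { t[1], t[0], t[2] };
        List<int[]> seeds = List.of(new int[]{5,5,5}, new int[]{5,5,7},
                                    new int[]{5,7,9}, new int[]{3,5,9});
        // 4 seeds x 2 datamorphisms = 8 mutants; with all 20 it would be 80.
        System.out.println(firstOrderMutants(seeds,
            List.of(increaseX, swapXY)).size()); // 8
    }
}
```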

1.1.4 Metamorphisms

For each datamorphism, there is a corresponding metamorphism that makes an assertion about the expected output of the program on mutant test cases. For example, for the datamorphism swapXY, which swaps the values of x and y of a triangle, the following metamorphism asserts that such a swap does not change the classification outcome.

  @Metamorphism(
      applicableDatamorphism = "swapXY",
      message="Failed the Swap X Y rule.")
  public boolean swapXYRule(
      TestCase<Triangle, TriangleType> x) {
    String originalId = x.getOrigins().get(0);
    TestCase origTc=testSuite.get(originalId);
    return (origTc.output == x.output);
  }

It is worth noting that annotations restrict the mutant types to which a metamorphism can be applied. For example, the above metamorphism only applies to mutant test cases that are generated by applying the swapXY datamorphism.

The metamorphisms in this case study are also very simple; each has no more than 13 lines.

1.1.5 Test Execution and Result Analysis

The test executions of the program under test can easily be defined by a test executer morphism.

public TriangleType Classifier(Triangle tc) {
  Triangle1 x = new Triangle1(tc.x, tc.y, tc.z);
  return x.Classify();
}

The analysis of test results can be performed by invoking an analyser morphism. In the case study, an analyser method is written for statistical analysis of test cases and it reports the data in a pop-up. The details are omitted for the sake of space.

Note that test entities and morphisms can be declared in a number of classes to make them more reusable. For example, in this case study, we have put the test executer method in a separate class that inherits the test specification class in which the datamorphisms and metamorphisms are declared. Consequently, the test specification class can be reused to test a number of different implementations and their versions, even if their interfaces are different. In the case study, we tested two different algorithms for triangle classification, and for each of them, we made two versions, one with errors and one without. The relationships between the various classes used in the case study are depicted in Figure 6.

Figure 6: Classes of Test Specification

Using such an organisation for test specifications not only reuses the classes, but also makes test scripts reusable without change for testing various different implementations.

1.2 Trigonometric Functions

In this case study, we test the three trigonometric functions sin, cos and tan provided by the Java Math library. The correctness of the library's implementation of these functions is checked against a set of trigonometric identities, on a set of special values between 0 and 2π as well as on random test cases.

1.2.1 Test Cases and Test Suite

The inputs to these trigonometric functions are real numbers, and so are the results. However, instead of using double as the output datatype of test cases, we declare a class called Trigonometrics that contains three attributes sin, cos and tan to store the values of these functions, and use it as the output datatype. This enables us to check the functions on identities that involve multiple trigonometric functions.

Two seed maker morphisms are included in the test specification. One generates 20 random inputs in the range from 0 to 2π, and the other generates a set of special values and the expected outputs of the functions; see Table 3.

Table 3: The Special Values of Trigonometric Functions Used as Special Test Cases
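The random seed maker can be sketched as follows. This is an illustrative stand-alone version (names and the fixed random seed are our choices); in the Morphy specification the generated values would be added to the test suite as inputs.

```java
import java.util.Random;

// Sketch of the random seed maker: n inputs uniformly drawn from [0, 2*pi).
public class RandomTrigSeeds {
    static double[] randomSeeds(int n, long seed) {
        Random rand = new Random(seed);   // fixed seed for repeatability
        double[] inputs = new double[n];
        for (int i = 0; i < n; i++)
            inputs[i] = rand.nextDouble() * 2 * Math.PI;
        return inputs;
    }

    public static void main(String[] args) {
        double[] inputs = randomSeeds(20, 42L);
        System.out.println(inputs.length); // 20
    }
}
```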

1.2.2 Datamorphisms and Metamorphisms

The datamorphisms for testing trigonometric functions are very simple functions on real numbers; see Table 4. A set of identities of trigonometric functions are implemented as metamorphisms; see Table 5.

Name        Function          Name         Function
halfPiPlus  x -> pi/2 + x     halfPiMinus  x -> pi/2 - x
piPlus      x -> pi + x       piMinus      x -> pi - x
twoPiPlus   x -> 2pi + x      twoPiMinus   x -> 2pi - x
sum         (x, y) -> x + y   diff         (x, y) -> x - y
Table 4: List of Datamorphisms
Table 5: List of Metamorphisms

The implementations of metamorphisms are straightforward. The following is an example.

  double error = 0.000000000001;

  @Metamorphism(
      message="The rule: sin(pi/2-x) = cos(x)")
  public boolean HalfPiMinusSinAssertion(
       TestCase<Double, Trigonometrics> tc) {
    TestCase<Double, Trigonometrics> originalTc
       = testSuite.get(tc.getOrigins().get(0));
    return (Math.abs(tc.output.sin
      - originalTc.output.cos) <= error);
  }
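This identity check can be exercised stand-alone against the Java Math library, using the same tolerance as the case study. The sketch below is illustrative (class and method names are ours); it simply evaluates the sin(pi/2-x) = cos(x) rule directly on random inputs.

```java
// Stand-alone check of the sin(pi/2 - x) = cos(x) metamorphism.
public class HalfPiMinusCheck {
    static final double ERROR = 0.000000000001; // tolerance from the case study

    static boolean holds(double x) {
        return Math.abs(Math.sin(Math.PI / 2 - x) - Math.cos(x)) <= ERROR;
    }

    public static void main(String[] args) {
        java.util.Random rand = new java.util.Random(1);
        boolean allPass = true;
        for (int i = 0; i < 20; i++)
            allPass &= holds(rand.nextDouble() * 2 * Math.PI);
        System.out.println(allPass); // true
    }
}
```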

1.2.3 Test Executions and Analysis of Results

The execution of the program on test cases is defined by the following test executer method, which invokes the program under test and stores the output of the program to the test case.

  public Trigonometrics testMath(Double x) {
    Trigonometrics result = new Trigonometrics();
    result.sin = Math.sin(x);
    result.cos = Math.cos(x);
    result.tan = Math.tan(x);
    return result;
  }

Two analyser methods were used: one for statistical analysis of the test results, reusing the analyser for testing triangle classification program, and the other for visualising the test outputs.

The testing process consists of two stages. The first stage starts with the generation of the special value test cases, executes the functions on them, and then uses a metamorphism to check whether the outputs match the expected values. The analysis of the test results shows that this test detected no error.

The second stage starts by generating 20 random test cases in the range from 0 to 2π, and then applies the datamorphisms to populate the test set. The program under test is then executed on the test cases, checked against the metamorphisms, and the test results are analysed by invoking the analysers. As shown in Figure 7, the test on random inputs detected a number of errors.

Figure 7: Trigonometric Test Results

Morphy produces an error message when a test case fails a metamorphism. The following is an example of an error message from testing the trigonometric functions.

-- Rule: tan(pi/2+x) = -1/tan(x) on test case:

The error messages produced by Morphy show that the errors all happened when the value of the tan(x) function is used to check an identity. Two types of errors occurred: (a) the value of tan(x) is infinite, e.g. when x = pi/2; (b) the accuracy of an expression is lower than the allowed error, which is 10^-12.

1.3 Face Recognition

Face recognition has been employed in the case studies on datamorphic testing reported in [29, 30] to demonstrate that datamorphisms can produce meaningful test data for machine learning applications. In this paper, we demonstrate how to achieve test automation and the reuse of test code by using Morphy.

1.3.1 Reuse of Test Data

The test data for testing a face recognition application are images of more than 100 KB each. In [29, 30], 200 images of different persons were used, and from each image 13 mutants were generated using GAN-Attr [13] to alter facial features. Each image is used to test 4 different face recognition applications. A similar experiment was also conducted to test 5 different face recognition applications [19]. The generation of mutant images from the original images is time consuming, so reusing test data is beneficial.

To enable the reuse of mutant images, the original images and the mutant images are stored in different folders but have the same file names. Thus, the image input to a face recognition application can be represented in the test case by a path to the image file. The datamorphisms can then be written simply as string manipulations that take the path of the original image file and produce the path of the corresponding mutant image file. The output of a face recognition application is a real number expressing the similarity between the two images.
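A path-manipulating datamorphism of this kind can be sketched as follows. The folder names, file name and method names here are illustrative assumptions, not those of the case study; the point is only that a mutant path is derived from an original path by swapping the folder.

```java
// Sketch of a path-based datamorphism: original and mutant images share
// file names but live in different folders, so mapping a test input to a
// mutant is pure string manipulation.
public class ImagePathMorphs {
    static String toMutant(String originalPath, String mutantFolder) {
        int slash = originalPath.lastIndexOf('/');
        String fileName = originalPath.substring(slash + 1);
        return mutantFolder + "/" + fileName;
    }

    public static void main(String[] args) {
        // "original" and "blond_hair" are hypothetical folder names.
        System.out.println(toMutant("original/p001.jpg", "blond_hair"));
        // blond_hair/p001.jpg
    }
}
```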

The following shows the seed maker and one of the datamorphisms used in the experiments reported in [30, 19]. The seed maker randomly selects a set of original images; each test case uses one original image as both the target and the subject image. A datamorphism then replaces the subject image with a mutant image. Executing a face recognition application on such a mutant test case examines whether the application recognises the person in the mutant image as the same person as in the target image.

  public void randomSelectOriginalImages(){
    String numStr = JOptionPane.showInputDialog(
      "Entering the number of test cases"
      + " or \"all\":");
    String[] fileList;
    if (numStr==null) { return;}
    if (numStr.equals("all")) {
      fileList = getFileListAll();
    }else {
      int numTestCases = Integer.valueOf(numStr);
      fileList = getFileList(numTestCases);
    }
    for (String name: fileList) {
      TwoImage input = new TwoImage(
        image+"\\"+name, image+"\\"+name);
      testSuite.addInput(input);
    }
  }

Two face recognition experiments are reported in [29, 19]. We also designed a third experiment, which checks whether images of different persons can be distinguished by using both the original images and the mutant images. The following seed maker generates a number of test cases, each of which consists of two original images selected at random.

  public void randomSelectImagePairs(){
    String numStr = JOptionPane.showInputDialog(
      "Entering the number of test cases:");
    if (numStr==null) { return;}
    int numTestCases = Integer.valueOf(numStr);
    String[] fileList = getFileListAll();
    Random rand = new Random();
    for (int i=0; i<numTestCases; i++) {
      int x = rand.nextInt(fileList.length);
      int y = rand.nextInt(fileList.length);
      String name1 = fileList[x];
      String name2 = fileList[y];
      TwoImage input = new TwoImage(
        image+"\\"+name1, image+"\\"+name2);
      testSuite.addInput(input);
    }
  }

The same set of datamorphisms is applied to these seed test cases using the first order mutant complete strategy. However, the resulting set of test cases tests a face recognition application on a different property: whether it can still distinguish different persons when mutant images are used, given that it can do so with the original images. Both experiments can be repeated many times by using the seed makers and the datamorphisms without regenerating the mutant images. In other words, the datamorphisms and the image data are reused for two different types of experiments.

1.3.2 Test Executions and Analysis of Results

Although the applications under test have different formats of invocation and return messages, the seed makers, datamorphisms and other test morphisms can be reused without any change. The only differences are in the test executers. For each face recognition application, a test executer is written to convert a test case object into an invocation request message and then call the API. It also receives the return message and converts it into a real number representing the confidence of the face recognition. For example, the following invokes the Baidu face recognition online service.

  public Double baiduTest(TwoImage images) {
    ApiTesting apiTesting = new BaiduApi();
    String cmd = images.getCmd();
    try {
      String result = apiTesting.similarity(
        cmd.split(" ")[0], cmd.split(" ")[1]);
      return Double.valueOf(result) / 100;
    } catch (Exception e) {
      return 0.0;
    }
  }

The SeetaFace face recognition software is an open source project written in C++. The project was cloned from GitHub and installed on the local machine where the testing is performed. The software is invoked by executing a shell command. The following test executer implements the invocation of SeetaFace.
  public Double seetaFaceTest(TwoImage images) {
    String dir = ImageConfig.seetaFacePath;
    String cmd = dir + File.separator
      + "Identification.exe " + images.getCmd();
    try {
      Process proc
        = Runtime.getRuntime().exec(cmd);
      BufferedReader stdout = new BufferedReader(
        new InputStreamReader(
          proc.getInputStream()));
      String result = stdout.readLine();
      return Double.valueOf(result);
    } catch (IOException e) {
      return 0.0;
    }
  }

Two analysers were written to analyse the test results. The following calculates the average of the similarity scores on mutant images for each type of mutant and displays the results on screen. The other saves the data to a file.

public void viewStatistics() {
  int numTC = testSuite.testSet.size();
  int numOriginalTC = 0;
  int numMutantTC = 0;
  HashMap<String, List<Double>> resMutant
    = new HashMap<>();
  for (TestCase<TwoImage, Double> x :
      testSuite.testSet) {
    if (x.feature == TestDataFeature.original) {
      numOriginalTC++;
    }else {
      numMutantTC++;
      if (!resMutant.containsKey(x.getType())) {
        resMutant.put(x.getType(),
           new ArrayList<Double>());
      }
      resMutant.get(x.getType()).add(x.output);
    }
  }
  String message = "Statistics:\n";
  message += "Total number of test cases = "
      + numTC + "\n";
  message += "Number of original test cases = "
      + numOriginalTC + "\n";
  message += "Number of mutant test cases = "
      + numMutantTC + "\n";
  for (String type: resMutant.keySet()) {
    Double avg = resMutant.get(type).stream()
      .mapToDouble(Double::doubleValue)
      .average().orElse(0.0);
    message += " -- "+ type+" avg = "+avg+"\n";
  }
  JOptionPane.showMessageDialog(null, message);
}

Figure 8 gives the screen snapshots of the above analyser on two experiments with one face recognition application.

(a) Images of different persons        (b) Images of the same person

Figure 8: Face Recognition Test Results

1.3.3 Test Scripts

As a typical AI application, testing face recognition requires repeated experiments in order to obtain results that are statistically significant. Test scripts were written to further improve test automation for repeated experiments. The following is one of the test scripts written in the case study.

//Experiment with images of same person
//Testing Baidu face recognition
  Baidu, same person)

//Testing FacePlus face recognition
  FacePlus, same person)

//Testing SeetaFace face recognition
  SeetaFace, same person)

//Testing Tencent face recognition
  Tencent, same person)
//Test script end


The following observations were made from the case studies.

First, the test morphisms in the case studies are simple and easy to write; see Table 6, where TC stands for Triangle Classification, Trg for Trigonometric Functions, and FR for Face Recognition. LOC is the number of lines of code.

                              TC     Trg    FR
Num of Classes                11     4      8
Total LOC                     899    830    450
Num of Seed Makers            4      3      3
Average LOC of Seed Makers    26.25  61.67  21.33
Num of Datamorphisms          20     10     13
Average LOC of Datamorphisms  9      6      8
Num of Metamorphisms          25     30     --
Average LOC of Metamorphisms  8.72   7.00   --
Num of Analysers              2      2      2
Average LOC of Analysers      62     33     41
Table 6: Summary of Case Studies

Second, test specifications can be made reusable if they are decomposed into a number of classes, where common test morphisms are placed together and test-specific morphisms are placed in separate classes.

Third, achieving test automation using facilities at three different levels (activity, strategy and process) is flexible and practical. Different testing tools and techniques can easily be integrated into the Morphy framework and used together.

Conclusion

Main Contributions

This paper redefines the datamorphic testing method by classifying test artefacts into test entities and test morphisms. The notion of a test morphism is defined as a mapping between software test entities. Datamorphisms and metamorphisms are examples of such test morphisms. The former are mappings from test cases to test cases, while the latter are mappings from test cases to Boolean values as assertions on the correctness of the test. Seed makers are also test morphisms; they generate test sets. We identified a set of other test morphisms, which include test case metrics and filters, test set metrics and filters, test executers and analysers.

The paper also presents a test automation tool called Morphy. We demonstrated that testing activities can be automated by writing test code for various test morphisms and invoking such code through a test tool like Morphy. Advanced combinations of test morphisms can be realised by implementing test strategies to achieve a higher level of test automation. Three types of test strategies have been implemented in Morphy: (a) datamorphism combination strategies, which generate test sets with various coverage of datamorphism combinations; (b) exploration strategies, which explore the test space in order to find the borders between subdomains for testing classification and clustering types of AI applications; (c) test set optimisation strategies, which employ genetic algorithms to optimise test sets. This paper focuses on the first of these, defining both the strategies and their implementation algorithms; the other types of test strategies will be reported separately.

Morphy also provides a record-replay type of test automation facility to further improve test automation, especially for regression testing. Test scripts can be recorded from interactive invocations of test morphisms for basic test activities, invocations of test strategies, and test management activities such as loading test specifications and loading and saving test sets. Although the case studies reported in this paper used test scripts to improve test automation, a more detailed study of the test script facility will be reported in a separate paper.

Related Work

Morphy is a test automation framework applicable to all kinds of software systems, including AI applications, as demonstrated in the case studies. In comparison with existing test automation frameworks like XUnit [11, 18] and GUI test script based test tools like Selenium [23] and WebDriver [26], Morphy provides more advanced test automation facilities such as test strategies and test scripting.

In the XUnit framework, testing is defined by a set of methods in a class, or a set of test scripts, for executing the program under test, together with methods for setting up the environment before test execution and tearing it down afterwards. Such a test specification is imperative. Our test specifications combine declarative and imperative styles: various types of test morphisms are declared in a test specification, while each test morphism is coded in an imperative programming language. Our case studies show that such test specifications are highly reusable and composable, even for testing different applications. This is something existing test automation frameworks have not achieved.

In a GUI-based test automation framework, test automation is realised by test scripts or test code that interact with GUI elements. The most representative and well-known example of such testing tools is Selenium [23]. Selenium has two test environments: (a) the Selenium IDE, in which manual GUI-based testing can be recorded into a test script and replayed; (b) Selenium WebDriver, which provides an API with methods for accessing web-based GUI elements. Test code equivalent to test scripts can be written in various programming languages. Morphy also employs test scripts to achieve a high level of test automation, but it is equipped with more advanced test automation facilities such as test strategies.

An advantage of Morphy is that its architecture enables various testing techniques and tools to be integrated by wrapping existing testing tools as methods in a test specification class that invoke the tools. For example, test case generation techniques and tools [1] such as fuzz testing [25], data mutation testing [22], random testing [2], adaptive random testing [8, 17], combinatorial testing [21] and model-based test case generators are all test morphisms, which can be wrapped as seed makers or datamorphisms. Metamorphic relations in metamorphic testing [7] and formal specification-based test oracles [4] are metamorphisms. Algebraic specifications have been used both for generating test cases and for checking test correctness [5, 6, 28]. The techniques that implement automated testing based on algebraic specifications [4, 15, 16] can easily be split into two parts: test morphisms that generate test cases and test morphisms that check test correctness. Test coverage measurement tools [24] are test set metrics. Regression testing techniques and methods [27] that select or prioritise test cases in an existing test set can be implemented as test set filters. Search-based testing [12, 9] can be regarded as test strategies. Therefore, they can all easily be integrated into Morphy.

Future Work

It is worth noting that datamorphic testing focuses on test morphisms related to test data and test sets, as its name implies. There are other types of test morphisms. For example, mutation operators in mutation testing [14] and fault injection tools for other fault-based testing methods are test morphisms that map programs to programs or sets of programs. Specification mutation operators are test morphisms that map formal specifications to specifications or sets of specifications. How to integrate such test morphisms into datamorphic testing tools like Morphy is an interesting question for further research, although, theoretically speaking, there should be no significant difficulty in doing so.

It is also possible to integrate XUnit test automation frameworks like JUnit, and GUI-based test automation tools like WebDriver, with Morphy. This is also an interesting topic for future work.

We have already conducted some experiments with the exploratory strategies and genetic algorithms for test set optimisation strategies. The results will be reported in separate papers.


  • [1] S. Anand, E. K. Burke, T. Y. Chen, J. Clark, M. B. Cohen, W. Grieskamp, M. Harman, M. J. Harrold, P. McMinn, A. Bertolino, J. J. Li, and H. Zhu. An orchestrated survey of methodologies for automated software test case generation. Journal of Systems and Software, 86(8):1978 – 2001, 2013.
  • [2] A. Arcuri, M. Z. Iqbal, and L. Briand. Random testing: Theoretical results and practical implications. IEEE Transactions on Software Engineering, 38(2):258–277, March 2012.
  • [3] Y. Bagge. Experiments with testing a bounded model checker for C. MSc Dissertation, School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford OX33 1HX, UK, Sept 2019.
  • [4] G. Bernot, M. C. Gaudel, and B. Marre. Software testing based on formal specifications: a theory and a tool. Software Engineering Journal, pages 387–405, Nov. 1991.
  • [5] H. Chen, T. Tse, and T. Chen. In black and white: an integrated approach to class-level testing of object-oriented programs. ACM TOSEM, 7(3):250–295, 1998.
  • [6] H. Chen, T. Tse, and T. Chen. Taccle: a methodology for object-oriented software testing at the class and cluster levels. ACM Transactions on Software Engineering and Methodology, 10(1):56–109, 2001.
  • [7] T. Y. Chen, F.-C. Kuo, H. Liu, P.-L. Poon, D. Towey, T. H. Tse, and Z. Q. Zhou. Metamorphic testing: A review of challenges and opportunities. ACM Comput. Surv., 51(1):4:1–4:27, Jan. 2018.
  • [8] T. Y. Chen, H. Leung, and I. K. Mak. Adaptive random testing. In Proc. of the 9th Asian Computing Science Conf., LNCS 3321, pages 320–329. Springer, 2004.
  • [9] M. Dave and R. Agrawal. Search based techniques and mutation analysis in automatic test case generation: A survey. In 2015 IEEE International Advance Computing Conference (IACC), pages 795–799, June 2015.
  • [10] A. Gotlieb, M. Roper, and P. Zhang, editors. Proceedings of The First IEEE International Conference On Artificial Intelligence Testing (AITest 2019). IEEE Computer Society, Los Alamitos, CA, USA, Apr 2019.
  • [11] P. Hamill. Unit Test Frameworks. O’Reilly, 2005.
  • [12] M. Harman, A. Mansouri, and Y. Zhang. Search based software engineering: Trends, techniques and applications. ACM Computing Surveys, 45(1):Article 11, 61 pages, Nov. 2012.
  • [13] Z. He, W. Zuo, M. Kan, S. Shan, and X. Chen. Attgan: Facial attribute editing by only changing what you want. IEEE Transactions on Image Processing, 28(11):5464–5478, Nov 2019.
  • [14] Y. Jia and M. Harman. An analysis and survey of the development of mutation testing. IEEE Transactions on Software Engineering, 37(5):649–678, Sep. 2011.
  • [15] L. Kong, H. Zhu, and B. Zhou. Automated testing EJB components based on algebraic specifications. In COMPSAC (2), pages 717–722, 2007.
  • [16] D. Liu, X. Wu, X. Zhang, H. Zhu, and I. Bayley. Monic testing of web services based on algebraic specifications. In Proc. of the 10th IEEE International Conference on Service Oriented System Engineering (SOSE 2016), Oxford, England, UK, March 2016. IEEE Computer Society.
  • [17] Y. Liu and H. Zhu. An experimental evaluation of the reliability of adaptive random testing methods. In Proc. of The Second IEEE International Conference on Secure System Integration and Reliability Improvement (SSIRI 2008), pages 24–31, Yokohama, Japan, July 2008.
  • [18] G. Meszarros. xUnit Test Patterns: Refactoring Test Code. Addison Wesley, 2007.
  • [19] S. Mugutdinov. Applying datamorphic technique to test face recognition applications. BSc dissertation, School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, UK, March 2019.
  • [20] G. J. Myers. The Art of Software Testing. John Wiley and Sons, Inc., 2nd edition, 2004.
  • [21] C. Nie and H. Leung. A survey of combinatorial testing. ACM Comput. Surv., 43(2):11:1–11:29, Feb. 2011.
  • [22] L. Shan and H. Zhu. Generating structurally complex test cases by data mutation: A case study of testing an automated modelling tool. The Computer Journal, 52(5):571–588, Aug 2009.
  • [23] Software Freedom Conservancy. Selenium website. Online, Nov. 2019.
  • [24] Stackify. The ultimate list of code coverage tools. Online, May 2017.
  • [25] M. Sutton, A. Greene, and P. Amini. Fuzzing: Brute Force Vulnerability Discovery. Addison-Wesley, 2007.
  • [26] WWW Consortium. WebDriver: W3C Recommendation. Online, June 2018.
  • [27] S. Yoo and M. Harman. Regression testing minimization, selection and prioritization: A survey. Softw. Test. Verif. Reliab., 22(2):67–120, Mar. 2012.
  • [28] H. Zhu. A note on test oracles and semantics of algebraic specifications. In Proc. of QSIC’03, pages 91–99, Dallas, USA, Nov. 2003.
  • [29] H. Zhu, D. Liu, I. Bayley, R. Harrison, and F. Cuzzolin. Datamorphic testing: A method for testing intelligent applications. In 2019 IEEE International Conference On Artificial Intelligence Testing (AITest 2019), pages 149–156, Los Alamitos, CA, USA, Apr 2019. IEEE Computer Society.
  • [30] H. Zhu, D. Liu, I. Bayley, R. Harrison, and F. Cuzzolin. Datamorphic testing: A methodology for testing AI applications. Technical Report OBU-ECM-AFM-2018-02, School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford OX33 1HX, UK, Dec 2018.