Automatically Checking Conformance on Asynchronous Reactive Systems

05/22/2019 ∙ by Camila Sonada Gomes, et al.

Software testing is an important part of the software development process, ensuring higher quality in the final product. Formal methods have shown promise in testing reactive systems, especially critical systems, where accuracy is mandatory since any fault can cause severe damage. Systems of this nature are characterized by receiving messages from the environment and producing outputs in response. One of the main challenges in model-based testing is conformance checking for asynchronous reactive systems, where the aim is to verify whether an implementation complies with its respective specification. In this work, we develop a practical tool to check conformance between reactive models using a more general theory based on regular languages. This approach, in fact, subsumes the classical conformance relation, which is also available in our tool. In addition, we present case studies with different scenarios that are run on practical tools using both notions of conformance.


1 Introduction

Automatic testing tools have been proposed to support the development process of reactive systems that are characterized by the continuous interaction with the environment. In this setting, systems receive external stimuli and produce outputs, asynchronously, in response. In addition, systems of this nature are usually critical and require more accuracy in their development process, especially in the testing activity, where appropriate formalisms must be used as the basis [7, 8, 16]. Input Output Labeled Transition Systems (IOLTSs) [8, 16, 4, 15] are traditional formalisms usually applied to modeling and testing reactive systems.

In model-based testing, an IOLTS specification can model desirable and undesirable behaviors of an implementation under test (IUT). The aim is to find faults in an IUT according to a certain fault model [14, 7, 5], showing whether or not the requirements are satisfied with respect to the system specification. A well-established conformance relation, called ioco and proposed by Tretmans [14], requires that the outputs produced by an IUT must also be produced by its respective specification following the same behavior. A more recent and general conformance relation, proposed by Bonifacio and Moura [7], specifies desirable and undesirable behaviors using regular languages to define the testing fault model.

In this work, we address the development of an automatic tool for conformance verification of asynchronous reactive systems modeled by IOLTSs. JTorx [2], another tool from the literature, also implements a conformance verification process, but one based only on the classical ioco relation. Our tool comprises both the classical ioco relation and the more general conformance based on regular languages. Some practical scenarios are then run to evaluate aspects related to the effectiveness and usability of both tools and both conformance theories.

We organize this paper as follows. Section 2 describes the conformance verification methods using regular languages and the ioco  relation. The practical tool which implements the more general method of conformance checking is presented in Section 3. Some applications and a comparative study are given in Section 4. Section 5 offers some concluding remarks.

2 Conformance Verification

The classical conformance relation ioco establishes compliance between IUTs and specifications based on a fault model given by behaviors over a specific domain. A verdict of conformance under ioco is declared if every output produced by an IUT after applying a sequence of input stimuli is also specified by the specification. In this context, we call a sequence of input stimuli a test case, and the sequence of states induced by a test case over the formal model a path. Thus, when we apply a test suite to both a specification and an IUT, if for every test case the IUT produces only outputs that are also defined in the specification, we say that the IUT conforms to the specification. Otherwise, we say that they do not conform [16].
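The check described above can be sketched in a few lines. The snippet below is a minimal illustration, not the tool's actual code: it assumes deterministic models encoded as plain dictionaries mapping each state to its enabled labels and successors (a hypothetical encoding chosen for brevity), and it flags a fault whenever the IUT enables an output, or quiescence, that the specification does not.

```python
# Minimal ioco-style check for deterministic IOLTSs (illustrative sketch).
# Models are dicts: state -> {label: successor}. Labels in `inputs` are
# stimuli; every other label is treated as an output.

def outs(model, state, inputs):
    """Output labels enabled at `state`; quiescence is reported as 'delta'."""
    enabled = {lbl for lbl in model[state] if lbl not in inputs}
    return enabled or {"delta"}  # no output enabled -> quiescent state

def ioco_check(spec, impl, inputs, s0="s0", i0="i0"):
    """Explore the synchronized behavior of spec and IUT; return a trace
    witnessing a non-conformance, or None if no fault is found."""
    frontier, seen = [(s0, i0, [])], set()
    while frontier:
        s, i, trace = frontier.pop()
        if (s, i) in seen:
            continue
        seen.add((s, i))
        extra = outs(impl, i, inputs) - outs(spec, s, inputs)
        if extra:
            return trace + [extra.pop()]  # IUT output not allowed by spec
        for lbl, s2 in spec[s].items():   # follow the common behavior
            if lbl in impl[i]:
                frontier.append((s2, impl[i][lbl], trace + [lbl]))
    return None
```

For instance, against a specification that answers input a with output x, an implementation answering with y is caught with the witness trace [a, y].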

A more general conformance relation has been proposed by Bonifacio and Moura [7], where the fault model is defined by regular languages. Basically, desirable and undesirable behaviors are specified by regular expressions D and F, respectively. Given an implementation I, a specification S, and regular expressions D and F, we need to check whether obs(I) ∩ T = ∅, where obs(I) and obs(S) denote the behaviors of I and S, respectively, and T = (D ∩ obs(S)^c) ∪ F is the complete test suite, with obs(S)^c the complement of obs(S). We declare that an implementation I is in conformance to a specification S if there is no test case of the test suite T that is also a behavior of I [7]. As stated by the authors [7], the ioco relation is, in fact, a particular case of the language-based conformance verification when the pair of languages is settled by D = obs(S)·L_U and F = ∅, where L_U is the set of outputs. That is, I ioco S if and only if I conforms to S under (D, F), and the more general approach gives the ioco relation using the language-based algorithm [7].

3 A Testing Tool for Reactive Systems

In this work we have developed a testing tool to automatically check conformance of reactive systems modeled by IOLTSs. Our tool supports the more general notion of conformance based on regular languages as well as the classical ioco relation. Everest (conformancE Verification on tEsting ReactivE SysTems, available at https://everest-tool.github.io/everest-site) has been developed in Java [11] using the Swing library [12] to provide a friendly usability experience through a graphical interface.

Some features available in the Everest tool are: (i) conformance checking based on regular languages and on the ioco relation; (ii) description of desirable and undesirable behaviors using regular expressions; (iii) specification of formal models in the Aldebaran format [1]; (iv) generation of test suites when non-conformance verdicts are obtained; (v) exhibition of the paths induced by test cases; and (vi) graphical representation of the formal models.
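For concreteness, an Aldebaran (.aut) file starts with a `des` header giving the initial state, the number of transitions, and the number of states, followed by one `(source, label, target)` line per transition. The fragment below is a hypothetical model written from that layout, not one taken from the paper's case studies:

```text
des (0, 4, 3)
(0, "a", 1)
(1, "x", 0)
(0, "b", 2)
(2, "x", 0)
```

Here state 0 is initial, and inputs a and b each lead to a state that answers with output x.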

The Everest tool is arranged into four main modules, as shown in Figure 1. Modules are given by rectangles, the data flow between them is denoted by arrows, and the input data and the computed results are represented by ellipses.

Figure 1: Tool’s Architecture

The View module implements an intuitive graphical interface with three different views: configuration; ioco conformance; and language-based conformance. In the first view we set the specification and implementation models, the model type (LTS or IOLTS) under test and, for IOLTS models, the partition of labels into inputs and outputs. In the ioco and language-based conformance views we run both verification processes and also graphically inspect the models to ascertain the configuration information. After finishing the testing process, if the IUT does not conform to the specification then a negative verdict is displayed together with the associated test cases; otherwise, the tool displays a positive verdict of conformance. Note that in the language-based conformance view we can provide regular expressions to represent the desirable behaviors and the fault properties.

The IUT and specification models are validated by the Parser module, where data structures are constructed to represent them internally. The Automaton Construction module transforms the LTS/IOLTS models into their respective finite automata which, in turn, are used to construct the fault model together with the automata obtained from the regular expressions. We remark that ordinary LTS models can also be checked using the language-based conformance notion, relying only on the notions of desirable and undesirable behaviors. In this case we do not need to partition the alphabet into input and output labels, as required for IOLTS models and crucial for the ioco relation.

The Conformance Verification module provides all necessary operations over regular languages, such as union, intersection, and complement [13]. It also constructs the finite automaton that represents the complete test suite and then performs the conformance verification process.
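These operations compose directly into the language-based check. The sketch below is an illustration under simplifying assumptions, not the tool's Java implementation: it works on complete DFAs encoded as plain tuples, builds the test suite via product constructions and complementation, and tests emptiness by reachability.

```python
# Illustrative sketch of the language-based check: build the test suite
# T = (D ∩ complement(S)) ∪ F and decide whether obs(I) ∩ T is empty.
# DFAs are complete, encoded as (states, alphabet, delta, start, accepting),
# with delta a dict keyed by (state, symbol).

def product(a, b, accept):
    """Product DFA; acceptance combines the components via `accept`."""
    (qa, al, da, sa, fa), (qb, _, db, sb, fb) = a, b
    states = {(x, y) for x in qa for y in qb}
    delta = {((x, y), c): (da[x, c], db[y, c]) for x, y in states for c in al}
    final = {(x, y) for x, y in states if accept(x in fa, y in fb)}
    return states, al, delta, (sa, sb), final

def complement(a):
    q, al, d, s, f = a
    return q, al, d, s, q - f  # complete DFA: just flip acceptance

def is_empty(a):
    """Search for a reachable accepting state."""
    q, al, d, s, f = a
    frontier, seen = [s], set()
    while frontier:
        x = frontier.pop()
        if x in f:
            return False
        if x in seen:
            continue
        seen.add(x)
        frontier.extend(d[x, c] for c in al)
    return True

def conforms(I, S, D, F):
    inter = lambda p, q: p and q
    union = lambda p, q: p or q
    T = product(product(D, complement(S), inter), F, union)  # (D ∩ ¬S) ∪ F
    return is_empty(product(I, T, inter))                    # obs(I) ∩ T = ∅ ?
```

With D set to all behaviors and F empty, `conforms` simply checks language inclusion of the IUT's behaviors in the specification's.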

4 Practical Application

In this section we describe some practical testing scenarios applied to the Everest tool and briefly compare it to the JTorx tool. Let S be the IOLTS specification of Figure 2(a) and let I1 and I2 be implementation candidates as depicted in Figures 2(b) and 2(c), respectively. Also let {a, b} and {x} be the input and output alphabets, respectively. All models here are deterministic [9, 10], but we remark that our tool also deals with nondeterministic models.

[Figure omitted: three IOLTS models, (a), (b), and (c), with transitions labeled over inputs a, b and output x.]
Figure 2: IOLTS Models

In the first scenario we check whether IUT I1 conforms to specification S. Everest has returned a non-conformance verdict using the ioco relation and generated the corresponding test suite. One subset of the test cases induces paths in S and in I1 where the IUT produces an output that S does not; the state reached in S is quiescent, whence no output is defined on it. Another subset induces paths where the IUT produces an output whereas S produces δ, the quiescent label. That is, a fault is detected according to the ioco relation. We note that both tools modify the formal models in the presence of quiescence by adding self-loops labeled with δ [14].
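The quiescence completion mentioned above is a small transformation. The sketch below illustrates it on the same hypothetical dict-based encoding used for illustration, not on the tools' internal representations: any state with no enabled output receives a delta self-loop.

```python
# Illustrative sketch: complete an IOLTS with quiescence self-loops.
# Models are dicts: state -> {label: successor}; `inputs` are the stimuli.

def add_quiescence(model, inputs, delta="delta"):
    for state, edges in model.items():
        # a state is quiescent when it enables no output label
        if not any(lbl not in inputs for lbl in edges):
            edges[delta] = state  # observing silence leaves the state unchanged
    return model
```

After this step, a tester observing no output can match the δ transition, so quiescence is handled like any other observable action.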

The same scenario has also been applied to the JTorx tool, resulting in the same verdict, as expected, but with a smaller generated test suite. Notice that the test suite generated by JTorx is a subset of the one generated by Everest. That is, Everest shows all test cases, and associated paths, related to a single fault, according to a transition cover criterion over the specification, whereas JTorx returns only one test case per fault. All this information may be useful and aid the tester in the fault mitigation process.

In the second scenario, checking the IUT I2 against S, the language-based conformance verification was able to detect a fault which was not detected by the ioco conformance relation. We have obtained the fault model from the regular expressions D and F, where D expresses the desirable behaviors that finish with an external stimulus followed by an output produced in response. Since the complete test suite is given by T = (D ∩ obs(S)^c) ∪ F, we check the condition obs(I2) ∩ T = ∅, i.e., a fault is detected when a behavior of I2 described by D is not present in S. Everest then results in a verdict of non-conformance and produces a test suite reaching a fault that is not detected by JTorx using the ioco relation.

5 Conclusion

Testing reactive systems is an important and complex activity in the development process for systems of this nature. The complexity of such systems, and consequently of the testing task, imposes high costs and resource demands on software development. Therefore, automation of the testing activity has become essential. Several studies have addressed the testing of reactive systems [7, 8, 16, 3, 6], especially the conformance checking between IUTs and specifications based on appropriate formalisms to guarantee more reliability.

In this work we have developed an automatic tool for checking conformance on asynchronous reactive systems. We have implemented the more general relation based on regular languages as well as the classical ioco theory. One could observe that the Everest conformance verification process by means of regular languages is more effective than the JTorx conformance checking: in the practical applications, JTorx yielded a verdict of conformance whereas Everest could find a fault for the same scenario. We also remark that the main contributions of this work are the tool development together with the designed algorithms, and an intuitive graphical interface suited both to experts in the research area and to beginners with no specific knowledge of the conformance theories.

A new module of the Everest tool is already under development to provide test suite generation in a black-box setting. We also intend to perform more experiments and comparative studies with similar tools from the literature in order to give a more precise analysis of the conformance checking issue, especially with respect to the usability and performance of these tools.

References

  • [1] AUT manual page. https://cadp.inria.fr/man/aut.html. Accessed: 2019-02-28.
  • [2] JTorX: a tool for model-based testing. https://fmt.ewi.utwente.nl/redmine/projects/jtorx/wiki/. Accessed: 2018-03-23.
  • [3] Bernhard K. Aichernig, Elisabeth Jöbstl, and Stefan Tiran. Model-based mutation testing via symbolic refinement checking. Science of Computer Programming, 97:383 – 404, 2015. Special Issue: Selected Papers from the 12th International Conference on Quality Software (QSIC 2012).
  • [4] Bernhard K. Aichernig and Martin Tappler. Symbolic input-output conformance checking for model-based mutation testing. Electronic Notes in Theoretical Computer Science, 320:3 – 19, 2016. The 1st workshop on Uses of Symbolic Execution (USE).
  • [5] Bernhard K. Aichernig, Martin Weiglhofer, and Franz Wotawa. Improving fault-based conformance testing. Electronic Notes in Theoretical Computer Science, 220(1):63 – 77, 2008. Proceedings of the Fourth Workshop on Model Based Testing (MBT 2008).
  • [6] Saswat Anand, Edmund K. Burke, Tsong Yueh Chen, John Clark, Myra B. Cohen, Wolfgang Grieskamp, Mark Harman, Mary Jean Harrold, and Phil McMinn. An orchestrated survey of methodologies for automated software test case generation. Journal of Systems and Software, 86(8):1978 – 2001, 2013.
  • [7] Adilson Luiz Bonifacio and Arnaldo Vieira Moura. Complete test suites for input/output systems, 2019.
  • [8] Adenilso da Silva Simão and Alexandre Petrenko. Generating complete and finite test suite for ioco: Is it possible? In Proceedings Ninth Workshop on Model-Based Testing, MBT 2014, Grenoble, France, 6 April 2014, pages 56–70, 2014.
  • [9] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation (3rd Edition). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2006.
  • [10] Mark Utting and Bruno Legeard. Practical Model-Based Testing: A Tools Approach. Elsevier, 1st edition, 2007.
  • [11] Oracle. Java se development kit 8. http://www.oracle.com/technetwork/pt/java/javase/. Accessed: 2019-01-30.
  • [12] Oracle. Package javax swing. https://docs.oracle.com/javase/7/docs/api/javax/swing/package-summary.html. Accessed: 2019-01-31.
  • [13] Michael Sipser. Introduction to the Theory of Computation. Course Technology, second edition, 2006.
  • [14] G.J. Tretmans. Test Generation with Inputs, Outputs and Repetitive Quiescence. Number TR-CTIT-96-26 in CTIT Technical Report Series. Centre for Telematics and Information Technology (CTIT), Netherlands, 1996.
  • [15] Jan Tretmans. Testing concurrent systems: A formal approach. In Jos C. M. Baeten and Sjouke Mauw, editors, CONCUR’99 Concurrency Theory, pages 46–65, Berlin, Heidelberg, 1999. Springer Berlin Heidelberg.
  • [16] Jan Tretmans. Model Based Testing with Labelled Transition Systems, pages 1–38. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.