Reproducible Execution of POSIX Programs with DiOS

07/07/2019
by Petr Ročkai et al.
Masarykova univerzita

In this paper, we describe DiOS, a lightweight model operating system that can be used to execute programs which make use of POSIX APIs. Such executions are fully reproducible: running the same program with the same inputs twice will result in two exactly identical instruction traces, even if the program uses threads for parallelism. DiOS is implemented almost entirely in portable C and C++: although its primary platform is DiVM, a verification-oriented virtual machine, it can also be configured to run in KLEE, a symbolic executor, and it can be compiled into machine code to serve as a user-mode kernel. Additionally, DiOS is modular and extensible: its components can be combined both to match the capabilities of the underlying platform and to provide the services required by a particular program, and new components can be added to cover additional system calls or APIs. The experimental evaluation has two parts: DiOS is first evaluated as a component of a program verification platform based on DiVM; in the second part, we assess its portability and modularity by combining it with the symbolic executor KLEE.



1 Introduction

2 Platform Interface

3 Supported Platforms

4 Design and Architecture

5 Evaluation

6 Conclusions & Future Work

References
