LF-checker: Machine Learning Acceleration of Bounded Model Checking for Concurrency Verification (Competition Contribution)

by Tong Wu, et al.

We describe and evaluate LF-checker, a meta-verifier tool based on machine learning. It extracts multiple features of the program under test and uses a decision tree to predict the optimal configuration (flags) of a bounded model checker. Our current work specialises in concurrency verification and employs ESBMC as the back-end verification engine. In this paper, we demonstrate that LF-checker achieves better results than the default configuration of the underlying verification engine.
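The abstract describes a simple pipeline: measure features of the input program, feed them to a decision tree trained on past benchmark runs, and launch ESBMC with the predicted flag set. The following is a minimal Python sketch of that pipeline, not the authors' code; the feature set, training data, and candidate flag sets are illustrative assumptions rather than details taken from the paper, and only the overall structure (features, decision tree, ESBMC flags) follows the abstract.

import re
import subprocess
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature extractor: counts of constructs that plausibly
# correlate with which verification strategy works best.
def extract_features(source: str) -> list[int]:
    return [
        len(re.findall(r"pthread_create", source)),     # thread spawns
        len(re.findall(r"pthread_mutex", source)),      # lock usage
        len(re.findall(r"\bfor\b|\bwhile\b", source)),  # loop count
        len(source.splitlines()),                       # program size
    ]

# Candidate configurations; these concrete flag sets are assumptions,
# loosely modelled on strategies ESBMC exposes.
CONFIGS = [
    ["--incremental-bmc"],
    ["--k-induction"],
    ["--falsification", "--unwind", "5"],
    ["--unwind", "2", "--context-bound", "2"],
]

# Placeholder training data for illustration only. In practice the
# labels would come from benchmark runs: the features of each program
# paired with the index of the configuration that solved it fastest.
X_train = [[2, 1, 3, 120], [0, 0, 10, 400], [4, 2, 1, 80], [1, 0, 0, 30]]
y_train = [3, 1, 0, 2]

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def verify(path: str) -> subprocess.CompletedProcess:
    """Predict a flag set for the program at `path` and run ESBMC with it."""
    with open(path) as f:
        features = extract_features(f.read())
    flags = CONFIGS[clf.predict([features])[0]]
    return subprocess.run(["esbmc", path] + flags,
                          capture_output=True, text=True)

A decision tree is a natural fit here: it handles small, heterogeneous feature sets without scaling, trains quickly, and its branching structure makes the flag-selection policy easy to inspect.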

