LF-checker: Machine Learning Acceleration of Bounded Model Checking for Concurrency Verification (Competition Contribution)

01/22/2023
by Tong Wu et al.

We describe and evaluate LF-checker, a metaverifier tool based on machine learning. It extracts multiple features of the program under test and uses a decision tree to predict the optimal configuration (flags) of a bounded model checker. Our current work specialises in concurrency verification and employs ESBMC as the back-end verification engine. In this paper, we demonstrate that LF-checker achieves better results than the default configuration of the underlying verification engine.
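The core idea, extracting program features and routing them through a decision tree to select back-end flags, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature set, the tree's thresholds, and the flag choices are hypothetical (though the flags named are real ESBMC options).

```python
# A hand-rolled sketch of the LF-checker idea (illustrative only, not the
# authors' code): extract a few syntactic features from the program under
# test, then use a small decision tree to pick a configuration for the
# back-end bounded model checker.

def extract_features(source: str) -> dict:
    """Crude feature extraction from C source text (hypothetical features)."""
    return {
        "threads": source.count("pthread_create"),
        "loops": source.count("while") + source.count("for"),
        "atomics": source.count("__VERIFIER_atomic"),
    }

def predict_flags(features: dict) -> list:
    """A tiny decision tree mapping features to ESBMC flags.

    The branching logic here is invented for illustration; the real tool
    learns the tree from training data rather than hard-coding it.
    """
    if features["threads"] == 0:
        return ["--k-induction"]        # sequential program: try k-induction
    if features["loops"] > 5:
        return ["--incremental-bmc"]    # loop-heavy: unwind incrementally
    return ["--context-bound", "2"]     # shallow concurrent program

program = "pthread_create(...); while (x) { x--; }"
print(predict_flags(extract_features(program)))
```

In the tool itself, the tree would be trained offline on labelled benchmarks (program features paired with the configuration that performed best), and the predicted flags are then passed to ESBMC at verification time.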
