LF-checker: Machine Learning Acceleration of Bounded Model Checking for Concurrency Verification (Competition Contribution)

01/22/2023
by Tong Wu, et al.

We describe and evaluate LF-checker, a metaverifier tool based on machine learning. It extracts multiple features of the program under test and predicts the optimal configuration (flags) of a bounded model checker with a decision tree. Our current work specialises in concurrency verification and employs ESBMC as a back-end verification engine. In this paper, we demonstrate that LF-checker achieves better results than the default configuration of the underlying verification engine.
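The pipeline described above can be sketched as two steps: extract features from the program under test, then use a decision tree to map those features to a flag bundle for the back-end verifier. The sketch below is a minimal, hand-written illustration; the feature names, thresholds, and the particular ESBMC flag combinations are hypothetical (the real LF-checker learns its tree from training data rather than hard-coding it).

```python
def extract_features(program_text):
    """Hypothetical feature extraction: count concurrency-related
    constructs in the source text of the program under test."""
    return {
        "threads": program_text.count("pthread_create"),
        "loops": program_text.count("while") + program_text.count("for"),
        "atomics": program_text.count("__VERIFIER_atomic"),
    }


def predict_flags(features):
    """Hypothetical decision tree mapping features to ESBMC flag bundles.
    LF-checker learns such a tree from data; the thresholds and flag
    choices here are illustrative only."""
    if features["threads"] == 0:
        # Sequential program: a shallow unwind may suffice.
        return "--unwind 1"
    if features["loops"] > 2:
        # Loop-heavy concurrent program: try k-induction.
        return "--k-induction"
    # Default for concurrent programs: bound context switches.
    return "--context-bound 2 --unwind 2"


program = "pthread_create(...); while (x < n) { ... }"
print(predict_flags(extract_features(program)))
```

In practice the learned tree would be trained (e.g. with scikit-learn's `DecisionTreeClassifier`) on feature vectors labelled with the configuration that solved each benchmark fastest, rather than written by hand.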


