On Benchmarking the Capability of Symbolic Execution Tools with Logic Bombs

12/05/2017
by Hui Xu, et al.

Symbolic execution is an important software testing approach that has been widely employed in program analysis tasks such as bug detection and malware analysis. However, the approach is no silver bullet: it suffers from the well-known scalability issue as well as other challenges, such as handling floating-point numbers and symbolic memory. Several symbolic execution tools are currently available off-the-shelf, but they generally do not state their limitations clearly to users, and there is no effective approach to benchmark their capabilities. Without such information, users cannot know which tool to choose, or how reliable their program analysis results are when they depend on a particular symbolic execution tool. To address these concerns, this paper proposes a novel approach to benchmarking symbolic execution tools. Our approach is based on logic bombs guarded by particular challenging problems: if a symbolic execution tool can find test cases that trigger such logic bombs during evaluation, it demonstrates that the tool can handle the corresponding problems. Following this idea, we have designed an automated benchmark framework and a dataset of logic bombs covering 12 different challenges. We then conducted real-world experiments benchmarking three popular symbolic execution tools: KLEE, Angr, and Triton. Experimental results show that our approach can reveal their limitations in handling particular issues accurately and efficiently; the benchmark process generally takes only a few minutes to evaluate a tool. To better serve the community, we release our toolset on GitHub as open source, and we hope it will serve as an essential tool for benchmarking symbolic execution tools in the future.
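To illustrate the idea, below is a minimal sketch of a logic bomb guarded by a floating-point challenge. This is a hypothetical example written for illustration, not taken from the paper's dataset (the actual logic bombs are available in the authors' GitHub release). A symbolic execution tool that can model the integer-to-float conversion and the floating-point comparison can generate an input (e.g., 8) that reaches the guarded branch; a tool that cannot handle floating-point constraints will miss it.

```c
#include <stdio.h>
#include <stdlib.h>

/* A logic bomb guarded by a floating-point challenge:
 * triggering the bomb requires the solver to reason about
 * the int-to-float conversion and the float comparison. */
int logic_bomb(int symvar) {
    float f = symvar / 70.0f;     /* integer-to-float conversion */
    if (f > 0.1f) {               /* floating-point guard */
        printf("BOMB triggered!\n");  /* reaching here shows the tool handles the challenge */
        return 1;
    }
    return 0;
}

int main(int argc, char *argv[]) {
    if (argc < 2)
        return 0;
    /* argv[1] is treated as the symbolic input during evaluation */
    return logic_bomb(atoi(argv[1]));
}
```

In a framework of this kind, each challenge type gets its own small guarded program, and whether the benchmarked tool produces a bomb-triggering test case serves as a pass/fail signal for that challenge.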
