Differentially Testing Soundness and Precision of Program Analyzers

12/12/2018
by Christian Klinger, et al.

In recent decades, numerous program analyzers have been developed in both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing the soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
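
The core idea can be made concrete with a small harness: generate programs whose single assertion has a ground truth known by construction, run every analyzer on each program, and compare verdicts. The Python sketch below illustrates this under that assumption; the analyzer names and the run_analyzer and generate_program helpers are hypothetical stubs standing in for the real tool invocations and program generator, not the paper's implementation.

```python
# Minimal sketch of differential testing of program analyzers.
# run_analyzer and generate_program are illustrative stubs; a real harness
# would shell out to each tool and parse its output into a verdict.

ANALYZERS = ["analyzer_a", "analyzer_b"]  # placeholders for real tools


def run_analyzer(name: str, program: str) -> str:
    """Return "safe" or "warning" for the single assertion in `program`."""
    raise NotImplementedError  # invoke the named tool's CLI here


def generate_program(seed: int) -> tuple[str, bool]:
    """Return (program_text, assertion_holds); the ground truth of the
    assertion is known by construction of the generated program."""
    raise NotImplementedError  # plug in a program generator here


def differential_test(seed: int) -> list[tuple[str, str]]:
    program, holds = generate_program(seed)
    verdicts = {a: run_analyzer(a, program) for a in ANALYZERS}
    issues = []
    for analyzer, verdict in verdicts.items():
        # Soundness issue: the tool claims safety, but the assertion can fail.
        if verdict == "safe" and not holds:
            issues.append((analyzer, "soundness"))
        # Precision issue: the assertion holds and some other tool proves it,
        # yet this tool still emits a warning.
        if verdict == "warning" and holds and any(
            verdicts[other] == "safe" for other in ANALYZERS if other != analyzer
        ):
            issues.append((analyzer, "precision"))
    return issues
```

Running differential_test over many seeds gives a comparison in the spirit of the one described above: because the ground truth of each generated assertion is known, every disagreement between analyzers can be attributed to a soundness or a precision issue of a specific tool.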

Related research

01/04/2021  Learning Differentially Private Mechanisms
Differential privacy is a formal, mathematical definition of data privac...

07/29/2019  A Case Study on Automated Fuzz Target Generation for Large Codebases
Fuzz Testing is a largely automated testing technique that provides rand...

05/14/2020  Symbolic Partial-Order Execution for Testing Multi-Threaded Programs
We describe a technique for systematic testing of multi-threaded program...

06/29/2020  A Generative Neural Network Framework for Automated Software Testing
Search Based Software Testing (SBST) is a popular automated testing tech...

05/08/2018  Robustness Testing of Intermediate Verifiers
Program verifiers are not exempt from the bugs that affect nearly every ...

07/09/2021  Sirius: Static Program Repair with Dependence Graph-Based Systematic Edit Patterns
Software development often involves systematic edits, similar but nonide...

01/20/2023  Blind Spots: Automatically detecting ignored program inputs
A blind spot is any input to a program that can be arbitrarily mutated w...
