Automated, Targeted Testing of Property-Based Testing Predicates

11/19/2021
by Tim Nelson, et al.

Context: This work concerns property-based testing (PBT), an increasingly important form of software testing that also serves as a concrete gateway into the abstract area of formal methods. We focus specifically on students learning PBT.

Inquiry: How well do students do at PBT? Our goal is to assess the quality of the predicates they write as part of PBT. Prior work introduced the idea of decomposing the predicate's property into a conjunction of independent subproperties; testing the predicate against each subproperty gives a "semantic" picture of student performance.

Approach: The notion of independence of subproperties seems intuitive and was an important condition in prior work. First, we show that this condition is overly restrictive and can hide valuable information: it both undercounts errors and makes it hard to capture misconceptions. Second, we introduce two forms of automation, one based on PBT tools and the other on SAT-solving, to enable testing of student predicates. Third, we compare the output of these automated tools against manually constructed tests. Fourth, we measure the performance of those tools. Finally, we re-assess the student performance reported in prior work.

Knowledge: We show the difficulty caused by the independent-subproperty requirement. We provide insight into how to use automation effectively to assess PBT predicates; in particular, we discuss the steps we had to take to beat human performance and how to make the automation work efficiently. Finally, we present a much richer account than prior work of how students did.

Grounding: Our methods are grounded in mathematical logic and in well-understood principles of test generation from formal specifications; this combination ensures the soundness of our work. We use standard methods to measure performance.

Importance: As both educators and programmers, we believe PBT is a valuable tool for students to learn, and its importance will only grow as more developers appreciate its value. Effective teaching requires a clear understanding of student knowledge and progress. Our methods enable a rich, automated analysis of student performance on PBT that yields insight into their understanding and can capture misconceptions. We therefore expect these results to be valuable to educators.
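To make the approach concrete, here is a minimal sketch (not the paper's implementation) of the first form of automation: classifying a student-written predicate against each subproperty of the intended property by running both on randomly generated inputs, in the spirit of PBT tools. The example property ("a sorted list of distinct integers"), the student predicate, and all names below are illustrative assumptions.

```python
import random

# The intended property, decomposed into named subproperties whose
# conjunction is the full property ("a sorted list of distinct integers").
SUBPROPERTIES = {
    "sorted":   lambda xs: all(a <= b for a, b in zip(xs, xs[1:])),
    "distinct": lambda xs: len(set(xs)) == len(xs),
}

def student_predicate(xs):
    # Hypothetical student submission: checks ordering but forgets distinctness.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def random_list(max_len=6, lo=-3, hi=3):
    # Small element ranges make duplicates likely, which helps expose
    # a missing distinctness check.
    return [random.randint(lo, hi) for _ in range(random.randint(0, max_len))]

def classify(predicate, subproperties, trials=10_000):
    """Per subproperty, record whether the predicate accepts an input that
    violates it (too permissive); also record whether it rejects an input
    satisfying every subproperty (too restrictive)."""
    too_permissive = {name: False for name in subproperties}
    too_restrictive = False
    for _ in range(trials):
        xs = random_list()
        accepted = predicate(xs)
        holds = {name: sub(xs) for name, sub in subproperties.items()}
        if accepted:
            for name, ok in holds.items():
                if not ok:
                    too_permissive[name] = True
        elif all(holds.values()):
            too_restrictive = True
    return too_permissive, too_restrictive

print(classify(student_predicate, SUBPROPERTIES))
# Expected (with high probability): "sorted" is never violated by an accepted
# input, while "distinct" is flagged as too permissive.
```

The solver-based form of automation instead searches symbolically for disagreements between the student predicate and each subproperty. A rough sketch, using Z3's Python bindings as a convenient stand-in for the SAT-based pipeline described in the abstract; the fixed-size encoding and the predicates are again assumptions:

```python
from z3 import Ints, And, Not, Solver, sat

xs = Ints("x0 x1 x2 x3")                      # fixed-size symbolic "list"

# Subproperties over the symbolic list; their conjunction is the property.
sorted_sub   = And([xs[i] <= xs[i + 1] for i in range(3)])
distinct_sub = And([xs[i] != xs[j] for i in range(4) for j in range(i + 1, 4)])

# Hypothetical student predicate: ordering only, distinctness forgotten.
student = And([xs[i] <= xs[i + 1] for i in range(3)])

for name, sub in [("sorted", sorted_sub), ("distinct", distinct_sub)]:
    s = Solver()
    s.add(student, Not(sub))                  # accepted, yet violates `sub`?
    if s.check() == sat:
        print(f"too permissive w.r.t. {name}; witness: {s.model()}")
    else:
        print(f"no accepted input violates {name}")
```

In both sketches, a witness that the student predicate accepts but that violates a subproperty points to a specific missing check, which is the kind of per-subproperty, semantic feedback the paper aims to automate.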
