JUGE: An Infrastructure for Benchmarking Java Unit Test Generators

06/14/2021
by Xavier Devroey, et al.

Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and for various platforms (e.g., desktop, web, or mobile applications). These generators exhibit varying effectiveness and efficiency depending on the testing goals they aim to satisfy (e.g., unit testing of libraries vs. system testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the one best suited to their requirements, while researchers seek to identify future research directions. This can be achieved through the systematic execution of large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to collect benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this paper, we present our JUnit Generation benchmarking infrastructure (JUGE), which supports generators (e.g., search-based, random-based, or based on symbolic execution) that automate the production of unit tests for various purposes (e.g., validation, regression testing, or fault localization). Its primary goal is to reduce the overall effort, ease the comparison of several generators, and enhance knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, eight editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place, each using and refining JUGE. As a result, a growing number of tools (over ten) from both academia and industry have been evaluated with JUGE, have matured over the years, and have enabled the identification of future research directions.
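
To illustrate the kind of artifact JUGE benchmarks, the sketch below shows what an automatically generated JUnit test typically looks like. It exercises java.util.Stack; the test class name, scenario, and assertions are illustrative assumptions and are not taken from the paper or from any particular generator.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.util.Stack;
    import org.junit.Test;

    // Illustrative sketch of an automatically generated JUnit 4 test.
    // Generators such as search-based or random-based tools synthesize
    // similar call sequences and then add regression assertions that
    // capture the observed behavior of the class under test.
    public class Stack_Generated_Test {

        @Test
        public void pushThenPopReturnsLastElementAndEmptiesStack() {
            Stack<Integer> stack = new Stack<>();
            stack.push(42);
            assertEquals(Integer.valueOf(42), stack.pop());
            assertTrue(stack.isEmpty());
        }
    }

A benchmarking infrastructure like JUGE can then compile and run such generated test classes against the benchmark subjects to measure, for example, coverage and fault detection.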


Related research

Pynguin: Automated Unit Test Generation for Python (02/10/2022)
Automated unit test generation is a well-known methodology aiming to red...

SmartUnit: Empirical Evaluations for Automated Unit Testing of Embedded Software in Industry (02/23/2018)
In this paper, we aim at the automated unit coverage-based testing for e...

A Systematic Literature Review of Automated Techniques for Functional GUI Testing of Mobile Applications (12/30/2018)
Context. Multiple automated techniques have been proposed and developed ...

Towards Efficient Data-flow Test Data Generation Using KLEE (03/17/2018)
Dataflow coverage, one of the white-box testing criteria, focuses on the...

Bridging the Gap between Unit Test Generation and System Test Generation (06/04/2019)
Common test generators fall into two categories. Generating test inputs ...

Reflections on Surrogate-Assisted Search-Based Testing: A Taxonomy and Two Replication Studies based on Industrial ADAS and Simulink Models (04/28/2023)
Surrogate-assisted search-based testing (SA-SBT) aims to reduce the comp...

A Generator Framework For Evolving Variant-Rich Software (12/02/2021)
Evolving software is challenging, even more when it exists in many diffe...
