Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction

09/23/2022
by Sungmin Kang, et al.

Many automated test generation techniques have been developed to aid developers with writing tests. To facilitate full automation, most existing techniques aim to either increase coverage or generate exploratory inputs. However, existing test generation techniques largely fall short of achieving more semantic objectives, such as generating tests to reproduce a given bug report. Reproducing bugs is nonetheless important, as our empirical study shows that the number of tests added in open source repositories due to issues was about 28% of the corresponding project test suite size. Meanwhile, due to the difficulties of transforming the expected program semantics in bug reports into test oracles, existing failure reproduction techniques tend to deal exclusively with program crashes, a small subset of all bug reports. To automate test generation from general bug reports, we propose LIBRO, a framework that uses Large Language Models (LLMs), which have been shown to be capable of performing code-related tasks. Since LLMs themselves cannot execute the target buggy code, we focus on post-processing steps that help us discern when LLMs are effective, and rank the produced tests according to their validity. Our evaluation of LIBRO shows that, on the widely studied Defects4J benchmark, LIBRO can generate failure reproducing test cases for 33% of the studied cases, while suggesting a bug reproducing test in first place for 149 bugs. To mitigate data contamination, we also evaluate LIBRO against 31 bug reports submitted after the collection of the LLM training data terminated: LIBRO produces bug reproducing tests for 32% of these bug reports. Overall, our results show LIBRO has the potential to significantly enhance developer efficiency by automatically generating tests from bug reports.
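The abstract describes a pipeline of three stages: prompt an LLM with a bug report, post-process the candidate tests it produces, and rank the survivors by how likely they are to reproduce the reported bug. The sketch below illustrates that shape in Python; `sample_llm` and `run_test` are hypothetical callables standing in for the model API and the project's test harness, and the agreement-based ranking is one plausible heuristic, not necessarily the paper's exact method.

```python
"""Minimal sketch of an LLM-based bug reproduction pipeline in the spirit of
LIBRO. The LLM client, test harness, and ranking heuristic are illustrative
assumptions, not the paper's implementation."""
from collections import Counter
from dataclasses import dataclass


@dataclass
class TestResult:
    code: str
    compiled: bool
    passed: bool
    failure_output: str = ""


def build_prompt(examples, report):
    """Few-shot prompt: (bug report, reproducing test) pairs, then the target report."""
    shots = "\n".join(f"## Bug report\n{r}\n## Reproducing test\n{t}\n" for r, t in examples)
    return f"{shots}\n## Bug report\n{report}\n## Reproducing test\n"


def reproduce(report, examples, sample_llm, run_test, n_samples=10):
    """Sample candidate tests, keep those that compile but fail on the buggy
    version (i.e. plausibly reproduce the report), and rank them so that
    candidates whose failure output more samples agree on come first."""
    prompt = build_prompt(examples, report)
    candidates = [sample_llm(prompt) for _ in range(n_samples)]
    failing = []
    for code in candidates:
        result = run_test(code)  # inject into the project's test suite and execute
        if result.compiled and not result.passed:
            failing.append(result)
    # Agreement heuristic (assumption): identical failure outputs across
    # independent samples suggest the test captures the reported behavior.
    agreement = Counter(r.failure_output for r in failing)
    return sorted(failing, key=lambda r: agreement[r.failure_output], reverse=True)
```

The design point the abstract stresses is that validation happens outside the model: since the LLM cannot execute the buggy code, only candidates that compile and actually fail on the buggy version are surfaced to the developer.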
