Lightweight Lexical Test Prioritization for Immediate Feedback

02/14/2020
by Toni Mattis et al.

The practice of unit testing enables programmers to obtain automated feedback on whether a currently edited program is consistent with the expectations specified in test cases. Feedback is most valuable when it arrives immediately, as defects can be corrected instantly, before they become harder to fix. With growing and longer-running test suites, however, feedback is obtained less frequently and lags behind program changes. The objective of test prioritization is to rank tests so that defects, if present, are found as early as possible or at the least cost. While there are numerous static approaches that output a ranking of tests based solely on the current version of a program, we focus on change-based test prioritization, which recommends tests that are likely to fail in response to the most recent program change. The canonical approach relies on coverage data and prioritizes tests that cover the changed region, but obtaining and updating coverage data is costly. More recently, information retrieval (IR) techniques that exploit the overlapping vocabulary between a change and the tests have proven to be powerful yet lightweight.

In this work, we demonstrate the capabilities of information retrieval for prioritizing tests in dynamic programming languages, using Python as an example. We discuss and measure previously understudied variation points, including how contextual information around a program change can be used, and design alternatives to the widespread TF-IDF retrieval model that are tailored to retrieving failing tests. To obtain program changes with associated test failures, we designed a tool that generates a large set of faulty changes from version history along with their test results. Using this data set, we compared existing and new lexical prioritization strategies on four open-source Python projects, showing large improvements over untreated and random test orders and results consistent with related work on statically typed languages.

We conclude that lightweight IR-based prioritization strategies are effective tools for predicting failing tests in the absence of coverage data or when static analysis is intractable, as in dynamic languages. This knowledge can benefit both individual programmers who rely on fast feedback and operators of continuous integration infrastructure, where resources can be freed sooner by detecting defects earlier in the build cycle.
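To illustrate the general idea behind lexical, change-based prioritization, the following sketch ranks tests by TF-IDF cosine similarity between a program change and each test's source code. It is a minimal illustration of the technique, not the paper's implementation; the function names, the identifier-splitting rules, and the plain TF-IDF weighting are simplifying assumptions of ours.

import math
import re
from collections import Counter

def tokenize(source):
    """Extract identifiers and split snake_case / camelCase into
    lowercase subwords, e.g. "parseJSONConfig" -> ["parse", "json", "config"]."""
    subword = re.compile(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+")
    tokens = []
    for identifier in re.findall(r"[A-Za-z_]\w*", source):
        tokens += [part.lower() for part in subword.findall(identifier)]
    return tokens

def rank_tests(change, tests):
    """Order tests by descending TF-IDF cosine similarity to the changed
    code; `tests` maps test names to their source text."""
    term_freqs = {name: Counter(tokenize(src)) for name, src in tests.items()}
    doc_freq = Counter(tok for tf in term_freqs.values() for tok in tf)
    n = len(tests)
    idf = {tok: math.log(n / df) for tok, df in doc_freq.items()}

    def weigh(tf):
        # TF-IDF weight per token; tokens unseen in the suite get weight 0
        return {tok: count * idf.get(tok, 0.0) for tok, count in tf.items()}

    def cosine(a, b):
        dot = sum(w * b.get(tok, 0.0) for tok, w in a.items())
        norm = math.sqrt(sum(w * w for w in a.values())) \
             * math.sqrt(sum(w * w for w in b.values()))
        return dot / norm if norm else 0.0

    query = weigh(Counter(tokenize(change)))
    ranked = [(name, cosine(query, weigh(tf))) for name, tf in term_freqs.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical usage: a change touching "parse_config" should rank a
# configuration test near the top of the recommended execution order.
# for name, score in rank_tests(changed_source, test_sources):
#     print(f"{score:.3f}  {name}")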

Related research

AmPyfier: Test Amplification in Python (12/21/2021)
Test Amplification is a method to extend handwritten tests into a more r...

Small-Amp: Test Amplification in a Dynamically Typed Language (08/12/2021)
Test amplification is a novel technique which extends a manually created...

Pynguin: Automated Unit Test Generation for Python (02/10/2022)
Automated unit test generation is a well-known methodology aiming to red...

A Data Set of Generalizable Python Code Change Patterns (04/11/2023)
Mining repetitive code changes from version control history is a common ...

Automated Unit Test Generation for Python (07/28/2020)
Automated unit test generation is an established research field, and mat...

Discerning Legitimate Failures From False Alerts: A Study of Chromium's Continuous Integration (11/05/2021)
Flakiness is a major concern in Software testing. Flaky tests pass and f...

Better Than Whitespace: Information Retrieval for Languages without Custom Tokenizers (10/11/2022)
Tokenization is a crucial step in information retrieval, especially for ...
