RTj: a Java framework for detecting and refactoring rotten green test cases

Rotten green tests are passing tests that contain at least one assertion that is never executed. They give developers false confidence. In this paper, we present RTj, a framework that analyzes test cases from Java projects with the goal of detecting and refactoring rotten test cases. RTj automatically discovered 427 rotten tests in 26 open-source Java projects hosted on GitHub. Using RTj, developers receive automated recommendations about the tests that need to be modified to improve the quality of the applications under test.


1. Introduction

Software developers write unit test cases with the goal of improving code quality and preventing code regressions. Passing (green) tests are usually taken as a robust sign that the code under test is valid [2]. However, a passing test can, at the same time, have a poor design that detracts from maintainability. Such tests are known as Smelly Tests [3, 8].

A Rotten Green Test is an even stronger test problem: a rotten green test is a test that passes (is green) but contains assertions that are never executed [2]. Rotten tests give developers false confidence because, beyond passing, an assertion that should validate some property is, in fact, never executed. Previous work has shown the presence of rotten test cases in software written in Pharo [2].

In this paper, we present RTj, a framework for detecting and refactoring smelly tests written in Java. The current implementation analyzes JUnit tests and classifies them according to the categories of rotten green tests presented in [2]. To perform this analysis, RTj takes as input the source code of the program under analysis, including its test cases. It performs a static analysis to detect the code elements of a test case (assertions, helpers, etc.) and a dynamic analysis to determine whether those elements are executed. RTj produces as output a detailed report of all rotten and smelly tests found and, when possible, a refactored version of those tests.

We executed RTj on 67 open-source Java projects hosted on GitHub, each with more than 1000 stars and 100 forks. We found 427 rotten test cases in 26 projects. Our results show the importance of having a tool that analyzes the quality of test cases: in one third of the analyzed projects, developers trust passing tests in which assertions that should validate some properties are never executed.

RTj can be used by both researchers and software practitioners. Researchers can use it to carry out empirical studies of test cases and to write, using the extension mechanism provided by RTj, analyzers able to detect new kinds of rotten and smelly tests. Software practitioners can use RTj to analyze and refactor their own software with the goal of improving the quality of their applications.

This paper continues as follows: Section 2 defines rotten green tests and their categories. Section 3 presents the architecture of RTj. Section 4 presents rotten tests found by RTj. Section 5 presents the related work. Section 6 discusses future work around RTj. Section 7 concludes the paper.

RTj is publicly available at: https://github.com/UPHF/RTj

2. Test Case Categorization

A rotten green test contains a call site for an assertion primitive or a test helper, but this assertion or helper is not invoked during test execution. Not all rotten green tests are caused by the same problem. This section gives a brief description of the four categories presented in [2].

  • A Context-dependent test contains conditionals with different assertions in the different branches.

  • A Missed Fail test contains an assertion that is forced to fail.

  • A Skip test contains guards that stop its execution early under certain conditions.

  • Fully Rotten tests do not execute one or more assertions and do not fall into any of the previous categories.
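
To make these categories concrete, the following minimal JUnit 4 test class (a hypothetical example of ours, not taken from any analyzed project) contains a Skip test and a Missed Fail:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class RottenExamplesTest {

        // Skip: on Windows, the guard returns early and the assertion below
        // is never executed, yet the test is reported green.
        @Test
        public void testSeparatorOnPosix() {
            if (System.getProperty("os.name").startsWith("Windows")) {
                return; // silently skips the real check
            }
            assertEquals('/', java.io.File.separatorChar);
        }

        // Missed Fail: assertTrue(false) is used where org.junit.Assert.fail()
        // is intended; it is only reached when the expected exception is absent.
        @Test
        public void testParseIntRejectsGarbage() {
            try {
                Integer.parseInt("not a number"); // expected to throw
                assertTrue(false);                // smell: should be fail(...)
            } catch (NumberFormatException expected) {
                // expected path: the forced assertion above is never executed
            }
        }
    }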

Input: P: program under analysis
      Output: labels and refactored tests

1: model ← createModel(P)
2: tests ← detectTests(model)
3: D ← runInstrumented(P, tests)
4: labels ← ∅
5: refactored ← ∅
6: for t in tests do
7:     for analyzer in analyzers do
8:         elements ← analyzer.staticAnalysis(t, model)
9:
10:        executed ← analyzer.dynamicAnalysis(t, elements, D)
11:
12:        l ← analyzer.classify(t, elements, executed)
13:        labels ← labels ∪ l
14:        t′ ← analyzer.refactor(t, model)
15:        if t′ ≠ null then
16:            refactored ← refactored ∪ {t′}
return labels, refactored
Algorithm 1. RTj: analysis and refactoring of test cases.

3. Architecture

RTj executes the steps presented in Algorithm 1. This section describes each of them. The input is the source code of the program under analysis, including its test cases (P). The output is twofold: (1) a list of the analyzed test cases with labels, and (2) refactored tests.

3.1. Basic analysis steps

Step 1: Creation of the program model

First, RTj creates a model that represents P and its test cases (line 1). RTj takes as input the source code of P and generates a model based on the Spoon meta-model [7]. The generated model is an enriched abstract syntax tree (AST).

Step 2: Test cases detection

RTj searches for all test cases written in P (line 2). For that, RTj filters from the model all code elements (e.g., methods) that correspond to test cases. By default, RTj analyzes projects that use the JUnit 4.X testing framework. Consequently, one of the heuristics RTj applies is to filter methods carrying the annotation org.junit.Test.
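
For illustration, such a filter can be expressed directly on the Spoon model. The sketch below is our own approximation (class name and source path are illustrative), not RTj's actual code:

    import java.util.List;
    import java.util.stream.Collectors;

    import spoon.Launcher;
    import spoon.reflect.CtModel;
    import spoon.reflect.declaration.CtMethod;
    import spoon.reflect.visitor.filter.TypeFilter;

    public class TestCaseDetector {

        // Builds a Spoon model from the given source folder and returns all
        // methods carrying the org.junit.Test annotation.
        public static List<CtMethod<?>> findJUnit4Tests(String sourcePath) {
            Launcher launcher = new Launcher();
            launcher.addInputResource(sourcePath);
            launcher.buildModel();
            CtModel model = launcher.getModel();

            return model.getElements(new TypeFilter<CtMethod<?>>(CtMethod.class))
                    .stream()
                    .filter(m -> m.getAnnotations().stream()
                            .anyMatch(a -> "org.junit.Test"
                                    .equals(a.getAnnotationType().getQualifiedName())))
                    .collect(Collectors.toList());
        }
    }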

Step 3: Instrumented test case execution

RTj executes the test cases on an instrumented version of P (line 3). The goal of the instrumentation is to trace the lines (from both the application and the tests) executed by each test case. The output of this step (D on line 3) is twofold: (1) the result of each test case (e.g., passing, failing), and (2) for each line l of P, the test cases that executed l and the number of times l was executed. Note that even if a line belongs to a test, it can be executed zero times (e.g., when it is inside an If statement).
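
RTj's instrumentation is internal to the framework; as a rough sketch of the idea, the instrumented program could route every statement through a probe such as the following (all names here are ours, not RTj's):

    import java.util.Collections;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal per-test line-coverage recorder: an instrumenter would insert a
    // call to hit("File.java:42") before each statement, and the test runner
    // would call startTest(...) before each test method.
    public final class CoverageProbe {

        // test id -> (file:line -> number of executions)
        private static final Map<String, Map<String, Integer>> HITS =
                new ConcurrentHashMap<>();
        private static volatile String currentTest = "<no-test>";

        private CoverageProbe() {}

        public static void startTest(String testId) {
            currentTest = testId;
        }

        public static void hit(String fileAndLine) {
            HITS.computeIfAbsent(currentTest, k -> new ConcurrentHashMap<>())
                .merge(fileAndLine, 1, Integer::sum);
        }

        // How many times did the given test execute the given line?
        public static int executions(String testId, String fileAndLine) {
            return HITS.getOrDefault(testId, Collections.emptyMap())
                       .getOrDefault(fileAndLine, 0);
        }
    }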

3.2. Test analysis

RTj iterates over the list of test cases found (line 6) to analyze each of them (t) using Test Analyzers.

A Test Analyzer is a component that takes as input a test t, analyzes it with respect to a specific goal and, possibly, proposes t′, a refactored version of t. RTj has at least one Test Analyzer for each rotten category presented in Section 2. For example, for Context Dependent there is one analyzer for detecting rotten assertions in a test, another for detecting rotten calls to helpers, and a third one for detecting rotten assertions inside an invoked helper method. RTj iterates over the Test Analyzers (line 7) and applies each of them to the test t.

A Test Analyzer has four main responsibilities, which are described below.

1) Identification of test elements: a Test Analyzer performs a static analysis (line 8), which consists of parsing the generated model to identify all the code elements (e.g., assertions, helpers, fails, returns) that it needs to classify a test into a specific category.

2) Dynamic analysis of test elements: a Test Analyzer analyzes the execution of the elements it filtered in the previous step (line 10). RTj provides a procedure that determines, given the dynamic analysis information (D), whether an element was executed during the execution of a test case t.

3) Test classification: a Test Analyzer classifies a test case t (line 12). It receives as parameters the elements relevant to its goal and the dynamic information about them (i.e., whether they were executed by t), and produces as output a set of labels (possibly empty), where each label indicates a particular classification of t, such as Fully Rotten test, Missed Fail, etc.

4) Test Refactor: a Test Analyzer can propose candidate refactorings of the test case t to the user (line 14). It receives as input the generated model and creates the refactoring by transforming a cloned model. The Spoon model created by default has an API for applying transformations (e.g., to replace, insert, or remove a code element) and can generate source code from a modified model.
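
Section 6.1 notes that RTj exposes a Test Analyzer as a Java interface declaring four methods, one per responsibility above. A plausible shape for it, with method names and types that are our guesses rather than RTj's actual API, is:

    import java.util.List;
    import java.util.Optional;
    import java.util.Set;

    import spoon.reflect.CtModel;
    import spoon.reflect.declaration.CtElement;
    import spoon.reflect.declaration.CtMethod;

    // One analyzer per rotten-test category; a sketch of the shape implied
    // by Section 3.2, not RTj's exact interface.
    public interface TestAnalyzer {

        // Placeholder for the results of the instrumented run (D in Algorithm 1).
        interface DynamicInfo {}

        // 1) Static analysis: collect the elements (assertions, helpers,
        // returns, ...) this analyzer needs from the test's model.
        List<CtElement> identifyElements(CtMethod<?> test, CtModel model);

        // 2) Dynamic analysis: decide which collected elements were actually
        // executed, given the trace of the instrumented run.
        Set<CtElement> executedElements(CtMethod<?> test,
                                        List<CtElement> elements,
                                        DynamicInfo dynamicInfo);

        // 3) Classification: produce zero or more labels for the test.
        Set<String> classify(CtMethod<?> test,
                             List<CtElement> elements,
                             Set<CtElement> executed);

        // 4) Refactoring: optionally propose a refactored clone of the test.
        Optional<CtMethod<?>> refactor(CtMethod<?> test, CtModel model);
    }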

3.3. Test Analyzers included in RTj

RTj implements Test Analyzers that are able to detect all categories of rotten green tests defined in [2] and described in Section 2. We now briefly describe some of them.

Assertion Rotten analyzer: determines whether a test has rotten assertions by parsing the test’s model to identify static method invocations whose names start with the keyword “assert” and whose target is the class org.junit.Assert. The analyzer then labels a test as ‘Assertion Rotten’ if one of those assertions was not executed. RTj splits this analyzer in two: (1) Context-dependent assertion rotten test, if the rotten element is inside an if-else; (2) Fully rotten assertion test, if it is inside any other element.
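
This static check maps naturally onto the Spoon model; a hedged sketch (ours, not RTj's code) of collecting the candidate assertions of a test is:

    import java.util.List;
    import java.util.stream.Collectors;

    import spoon.reflect.code.CtInvocation;
    import spoon.reflect.declaration.CtMethod;
    import spoon.reflect.reference.CtTypeReference;
    import spoon.reflect.visitor.filter.TypeFilter;

    public class AssertionFinder {

        // Returns the invocations inside the test whose name starts with
        // "assert" and whose declaring type is org.junit.Assert.
        public static List<CtInvocation<?>> findAssertions(CtMethod<?> test) {
            return test.getElements(new TypeFilter<CtInvocation<?>>(CtInvocation.class))
                    .stream()
                    .filter(inv -> inv.getExecutable().getSimpleName().startsWith("assert"))
                    .filter(inv -> {
                        CtTypeReference<?> decl = inv.getExecutable().getDeclaringType();
                        return decl != null
                                && "org.junit.Assert".equals(decl.getQualifiedName());
                    })
                    .collect(Collectors.toList());
        }
    }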

Rotten Call Helper analyzer: unlike the previous analyzer, it looks for method calls to helpers. It determines that a method is a helper if it contains: (1) an assertion, or (2) an invocation of another helper method. RTj again distinguishes between (1) Context-dependent and (2) Fully rotten cases.

Rotten Assertion in Helper analyzer: checks whether an invoked helper fails to execute an assertion written in it.

Skip Test analyzer: identifies return statements in a test t. It then classifies t as “Skip” if the return was executed and no assertion written below that return was executed.

Missed Fail analyzer: searches for assertions that are forced to fail, e.g., assertTrue(false). This analyzer does not check whether such assertions were executed.

Smoke test: a test that contains neither assertions nor helper calls. Note that [2] does not categorize a smoke test as a rotten green test.

The current implementation of RTj provides two refactorings. Replacement of missed fail: the Missed Fail analyzer proposes a refactoring that replaces assertions forced to fail with invocations to fail().
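
On a Spoon model, this transformation could look as follows; the sketch assumes the refactoring runs on a cloned model, and the names are illustrative rather than RTj's implementation:

    import java.util.Collections;

    import spoon.reflect.code.CtInvocation;
    import spoon.reflect.code.CtLiteral;
    import spoon.reflect.reference.CtExecutableReference;

    public class MissedFailRefactor {

        // If the invocation is assertTrue(false), rewrite it in place to fail().
        public static void replaceByFail(CtInvocation<?> inv) {
            CtExecutableReference<?> exec = inv.getExecutable();
            boolean isAssertTrue = "assertTrue".equals(exec.getSimpleName());
            boolean singleFalseArg = inv.getArguments().size() == 1
                    && inv.getArguments().get(0) instanceof CtLiteral
                    && Boolean.FALSE.equals(
                            ((CtLiteral<?>) inv.getArguments().get(0)).getValue());
            if (isAssertTrue && singleFalseArg) {
                exec.setSimpleName("fail");                // now org.junit.Assert.fail
                inv.setArguments(Collections.emptyList()); // fail() takes no arguments
            }
        }
    }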

Add comment: all analyzers can add a TODO comment just before the rotten code element. IDEs such as Eclipse display such TODO comments in a dedicated view.
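
With Spoon, attaching such a comment is straightforward; a minimal sketch, assuming Spoon's Factory.createComment is available (the marker text is ours):

    import spoon.reflect.code.CtComment;
    import spoon.reflect.declaration.CtElement;
    import spoon.reflect.factory.Factory;

    public class TodoMarker {

        // Attaches a TODO comment to the rotten element so that IDEs such as
        // Eclipse list it in their task view.
        public static void markRotten(CtElement rottenElement, Factory factory) {
            CtComment todo = factory.createComment(
                    "TODO RTj: this assertion is never executed (rotten)",
                    CtComment.CommentType.BLOCK);
            rottenElement.addComment(todo);
        }
    }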

4. Evaluation

4.1. Methodology

We aimed to study popular open-source Java projects. To this end, we first selected GitHub projects having: (1) Java as main language, (2) JUnit 4 as testing framework, (3) Maven as dependency management system, and (4) more than 1000 stars and 100 forks. We then executed RTj on 67 of them. The execution time of RTj mostly depends on the execution time of the instrumented test cases and on the number of test cases; it ranged from 1 minute (Streamex project) to 20 minutes (XChange).

4.2. Results

Table 1 summarizes the results and shows the number of rotten test cases (by category) for the 10 projects with the largest number of rotten tests.

RTj found in total 427 rotten green tests in 26 projects. This means that 38% of the analyzed projects (26 out of 67) have at least one rotten green test. The majority of the rotten tests found by RTj belong to two categories: 253 Context-dependent tests from 18 projects and 110 Fully rotten tests from 16 projects (23 distinct projects in total). In other words, around one out of three projects (23/67) has passing tests that do not execute assertions written in them. Developers of such projects trust the results of those (passing) rotten tests; however, they are very likely unaware that some validations written in them are never executed.

Top-10 projects      Context dependent   Missed Fail   Skip Test   Fully Rotten   Total
Optaplanner                104                 0            0             7         111
Flink-core                  26                 2            9            10          47
Streamex                    39                 0            0             2          41
Bt                          33                 0            1             4          38
XChange                      1                 0           15            22          38
Handlebars                   0                 0            0            30          30
Joda-time                    5                 3           17             1          26
Jeromq                      12                 0            0             4          16
Wasabi                       0                 0            0            15          15
Mahout                       1                 0            5             1           7
Total rotten tests         253                16           48           110         427
Projects affected           18                 6            6            16          26
Table 1. The 10 projects with the largest number of rotten green tests. The bottom rows summarize the total number of rotten tests found and the number of projects affected by each rotten category.

4.3. Illustrative cases

This section presents one rotten green test found by RTj per rotten category.

4.3.1. Context Dependent Rotten Assertion Test

Test LambdaExtractionTest.testCoGroupLambda() from project Apache-Flink belongs to this category.

206  @Test public void testCoGroupLambda() {
207    CoGroupFunction<Tuple2<…>> f = (i1, i2, o) -> {};
208    TypeInformation<?> ti = TypeExtractor.getCoGroupReturnTypes(f, …);
209    if (!(ti instanceof MissingTypeInfo)) {
210      assertTrue(ti.isTupleType());
211      assertEquals(2, ti.getArity());
212    }
213  }

The test has rotten assertions located inside the Then branch of an If (line 209) whose condition always evaluates to false.

4.3.2. Fully Rotten Test

Test BucketLeapArrayTest.testListWindowsNewBucket() from project Alibaba-Sentinel is Fully rotten.

209  @Test
210  public void testListWindowsNewBucket() throws Exception {
211
212    BucketLeapArray leapArray = new BucketLeapArray(sampleCount, intervalInMs);
213    …
214    List<WindowWrap<MetricBucket>> list = leapArray.list();
215    for (WindowWrap<MetricBucket> wrap : list) {
216      assertTrue(windowWraps.contains(wrap));
217    }

The test contains a For loop that iterates over a list. Inside the loop body, the test has an assertion (line 216). Since the list is always empty, the assertion is never executed.

4.3.3. Skip Rotten Test

Test testNormalizedKeyReadWriter(), written in the test helper class ComparatorTestBase from project Apache-Flink, is a Skip test.

371  @Test public void testNormalizedKeyReadWriter() {
372
373    TypeComparator<T> comp1 = getComparator(true);
374    if (!comp1.supportsSerializationWithKeyNormalization()) {
375      return;
376    }
377
378    assertTrue(comp1.compareToReference(comp2) == 0);
379
380  }

No execution of this helper reaches the assertions written in it (e.g., line 378), because the guard at line 374 is always true; thus, the return at line 375 is always executed.

4.3.4. Missed fail

RTj also detected instances of missed fail. For example, in test testHasProtectedConstructor from the Reflectasm project, the developer used assertTrue(false) instead of fail().

4.4. False Positive Cases

As reported in [2], the detection of rotten test cases in Pharo suffered from false positives due to conditional use or multiple test contexts. We have implemented heuristics in RTj to detect such cases and present them as special cases. For instance, RTj labels a test t as “Both-branches-with-Assertion” Context-dependent when: (1) t has an If with Then and Else branches, (2) both branches have an assertion or a helper call, and (3) only one branch is executed. One such case is test testJdk9Basics() from project Streamex, a library for enhancing Java 8 Streams.

@Test public void testJdk9Basics() {
  MethodHandle[][] jdk9Methods = Java9Specific.initJdk9Methods();
  if (Stream.of(Stream.class.getMethods()).anyMatch(m -> m.getName().equals("takeWhile")))
    assertNotNull(jdk9Methods);
  else
    assertNull(jdk9Methods);
}

The test has an If with assertions in both branches: one asserting behavior on Java 9, the other asserting behavior on earlier versions. The test executes only a single branch, and which one depends on the JDK used to run it.

5. Related Work

ReAssert [1] and TestCareAssistant [5] are two tools that can automatically suggest repairs for broken JUnit tests. Our tool focuses on analyzing and refactoring passing (i.e., not broken) tests.

Deursen et al. [3] presented 11 bad code smells that are specific to test code and proposed 6 test refactorings that aim to improve test understandability, readability, and maintainability. Reichhart et al. [8] presented a tool and an extended list of identified test smells. Oliveto et al. [6] conducted an empirical study of the distribution of 9 test smells from [3] in real software applications. They found that 82% of the JUnit classes analyzed were affected by at least one test smell. This result shows the importance of providing developers with a unified framework for detecting and refactoring smelly tests.

6. Discussion

6.1. Extending RTj

RTj provides extension points that allow users to override the default behaviour of the framework and to add new functionality.

Model Creation: RTj provides an extension point to override the procedure that creates the model of the program under analysis (Section 3.1). This allows RTj to use other program meta-models. For instance, an extension could create a FAMIX Java model [4] using the tool VerveineJ (https://github.com/moosetechnology/VerveineJ). Note that the use of a new meta-model requires that the Test Analyzers be capable of analyzing and transforming the new model built from P.

Execution of test cases: by default, RTj runs JUnit 4.X tests and detects elements defined in JUnit’s API (e.g., assertions). This extension point makes it possible to support other testing frameworks such as JUnit 5 or TestNG.

Addition of new Test Analyzers: this extension point allows users to plug in new Test Analyzers to study cases not already considered. RTj represents a Test Analyzer as a Java interface that declares four methods, each corresponding to one of the steps presented in Section 3.2 (see the interface sketch there). By default, analyzers interact with a Spoon model representing the program under analysis. The Spoon meta-model was designed to be easily understandable by Java developers and provides different well-documented mechanisms for writing program analyses and transformations.

Result output: by default, RTj generates a JSON file that lists all the cases found by the analyzers. RTj provides an extension point for generating other types of output (e.g., reports).

6.2. Future work

We are currently performing a large-scale empirical study of rotten green tests in popular open-source Java projects (in terms of stars, forks, commits, committers, etc.) hosted on GitHub.

Beyond rotten green tests, we will focus on other kinds of smelly tests, and even on refactorings with other objectives. For instance, we are currently developing an analyzer in RTj that detects and refactors Impure tests [9], with the goal of improving dynamic analysis tasks such as fault localization and automated program repair.

We will also add the ability to analyze tests from other Java testing frameworks such as JUnit 5 and TestNG, and to propose refactorings for the rotten green tests presented in this paper. Moreover, as the current implementation of RTj provides a command-line interface, we plan to create plug-ins for Maven and for IDEs (Eclipse and IntelliJ).

7. Conclusion

This paper presents RTj, a framework that analyzes test cases (statically and dynamically) with the goal of detecting smelly tests, including rotten green tests. Rotten green tests are a serious problem because they give developers false confidence in the system under test. RTj helps developers automatically detect those smelly tests, which can then be refactored to improve the quality of an application, and proposes candidate test refactorings to developers. Using RTj, we found 427 rotten green tests in 26 open-source projects hosted on GitHub. The design of RTj allows users to extend the framework by adding new analyzers focusing on other kinds of smelly tests and on other testing frameworks.

RTj is publicly available at: https://github.com/UPHF/RTj

References

  • [1] Brett Daniel, Danny Dig, Tihomir Gvero, Vilas Jagannath, Johnston Jiaa, Damion Mitchell, Jurand Nogiec, Shin Hwei Tan, and Darko Marinov. ReAssert: a tool for repairing broken unit tests. In Proceedings of the 33rd International Conference on Software Engineering, ICSE ’11, pages 1010–1012, New York, NY, USA, 2011. ACM.
  • [2] Julien Delplanque, Stéphane Ducasse, Guillermo Polito, Andrew P. Black, and Anne Etien. Rotten green tests. In Proceedings of the 41st International Conference on Software Engineering, ICSE ’19, pages 500–511, Piscataway, NJ, USA, 2019. IEEE Press.
  • [3] Arie van Deursen, Leon Moonen, Alex van den Bergh, and Gerard Kok. Refactoring test code. Technical report, CWI, Amsterdam, The Netherlands, 2001.
  • [4] Stéphane Ducasse, Nicolas Anquetil, Muhammad Usman Bhatti, Andre Cavalcante Hora, Jannik Laval, and Tudor Girba. MSE and FAMIX 3.0: an interexchange format and source code model family. Research report, November 2011.
  • [5] Mehdi Mirzaaghaei, Fabrizio Pastore, and Mauro Pezzè. Automatically repairing test cases for evolving method declarations. In Proceedings of the 2010 IEEE International Conference on Software Maintenance, ICSM ’10, pages 1–5, Washington, DC, USA, 2010. IEEE Computer Society.
  • [6] Rocco Oliveto, Andrea De Lucia, Abdallah Qusef, David Binkley, and Gabriele Bavota. An empirical analysis of the distribution of unit test smells and their impact on software maintenance. In Proceedings of the 2012 IEEE International Conference on Software Maintenance (ICSM), ICSM ’12, pages 56–65, Washington, DC, USA, 2012. IEEE Computer Society.
  • [7] Renaud Pawlak, Martin Monperrus, Nicolas Petitprez, Carlos Noguera, and Lionel Seinturier. Spoon: a library for implementing analyses and transformations of Java source code. Software: Practice and Experience, 46:1155–1179, 2015.
  • [8] Stefan Reichhart, Tudor Gîrba, and Stéphane Ducasse. Rule-based assessment of test quality. Journal of Object Technology, 6(9):231–251, October 2007. Special issue: Proceedings of TOOLS Europe 2007.
  • [9] Jifeng Xuan, Benoit Cornu, Matias Martinez, Benoit Baudry, Lionel Seinturier, and Martin Monperrus. B-Refactoring: automatic test code refactoring to improve dynamic analysis. Information and Software Technology, 76(C):65–80, August 2016.