NLPLego: Assembling Test Generation for Natural Language Processing Applications

02/21/2023
by Pin Ji, et al.

The development of modern NLP applications often relies on benchmark datasets containing large numbers of manually labeled tests to evaluate performance. Constructing such datasets consumes substantial resources, yet performance on held-out data may not properly reflect an application's capability in real-world scenarios, causing serious misunderstanding and monetary loss. To alleviate this problem, in this paper we propose an automated test generation method for detecting erroneous behaviors of various NLP applications. Our method is designed based on the sentence parsing process of classic linguistics, and is thus capable of assembling basic grammatical elements and adjuncts into grammatically correct tests with proper oracle information. We implement this method in NLPLego, which is designed to fully exploit the potential of seed sentences to automate test generation. NLPLego disassembles a seed sentence into a template and adjuncts, then generates new sentences by assembling context-appropriate adjuncts with the template in a specific order. Unlike task-specific methods, the tests generated by NLPLego have derivation relations and different degrees of variation, which makes constructing appropriate metamorphic relations easier. Thus, NLPLego is general, meaning it can meet the testing requirements of various NLP applications. To validate NLPLego, we experiment with three common NLP tasks, identifying failures in four state-of-the-art models. Given seed tests from SQuAD 2.0, SST, and QQP, NLPLego successfully detects 1,732, 5,301, and 261,879 incorrect behaviors with around 95.7%, respectively.
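The disassemble-then-assemble idea can be illustrated with a minimal sketch. Note this is a toy illustration, not the paper's implementation: the function names, the token-index representation of adjuncts, and the hardcoded example spans are all assumptions; the actual tool identifies adjuncts via syntactic parsing and checks contextual appropriateness.

```python
# Toy sketch of NLPLego-style test assembly (illustrative assumptions only).
# Adjuncts are given as (start, end) token-index spans; in the real tool
# they would be found by parsing the seed sentence.

def disassemble(tokens, adjunct_spans):
    """Split a seed sentence into a core template and its adjuncts."""
    adjunct_idx = {i for s, e in adjunct_spans for i in range(s, e)}
    template = [t for i, t in enumerate(tokens) if i not in adjunct_idx]
    adjuncts = [tokens[s:e] for s, e in adjunct_spans]
    return template, adjuncts

def assemble_variants(tokens, adjunct_spans):
    """Yield sentences with 0..n adjuncts restored, in order.

    Each variant derives from the previous one by adding a single adjunct,
    giving the derivation relations used to build metamorphic relations.
    """
    results = []
    for k in range(len(adjunct_spans) + 1):
        keep = {i for s, e in adjunct_spans[:k] for i in range(s, e)}
        sent = [t for i, t in enumerate(tokens)
                if i in keep
                or all(not (s <= i < e) for s, e in adjunct_spans)]
        results.append(" ".join(sent))
    return results

seed = "The small dog chased the cat in the yard".split()
spans = [(1, 2), (6, 9)]  # "small" and "in the yard" (hand-marked here)
for variant in assemble_variants(seed, spans):
    print(variant)
```

A task-specific oracle can then be layered on top, e.g. for sentiment analysis one would expect the prediction to stay stable (or move predictably) as neutral adjuncts are added along the derivation chain.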


