Random Text Perturbations Work, but not Always

09/02/2022
by Zhengxiang Wang, et al.

We present three large-scale experiments on a binary text matching classification task, in both Chinese and English, to evaluate the effectiveness and generalizability of random text perturbations as a data augmentation approach for NLP. We find that the augmentation can have both negative and positive effects on the test set performance of three neural classification models, depending on whether the models are trained on enough original training examples. This holds regardless of whether the five random text editing operations used to augment the text are applied together or separately. Our study strongly suggests that the effectiveness of random text perturbations is task specific and not generally positive.
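The abstract does not spell out the five random text editing operations, but augmentation schemes of this kind typically perturb texts at the token level. Below is a minimal sketch of three common perturbations (random swap, random deletion, random insertion); the function names, parameters, and the choice of operations are illustrative assumptions, not the paper's exact method.

```python
import random

def random_swap(tokens, n=1):
    """Swap two randomly chosen token positions, n times."""
    tokens = tokens.copy()
    for _ in range(n):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Delete each token with probability p; always keep at least one token."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

def random_insertion(tokens, n=1):
    """Insert a copy of a random token at a random position, n times."""
    tokens = tokens.copy()
    for _ in range(n):
        tokens.insert(random.randrange(len(tokens) + 1), random.choice(tokens))
    return tokens

def augment(text, ops=(random_swap, random_deletion, random_insertion)):
    """Apply one randomly chosen perturbation to a whitespace-tokenized text."""
    tokens = text.split()
    return " ".join(random.choice(ops)(tokens))
```

Because such perturbations are label-agnostic, they can distort meaning (e.g., deleting a negation word), which is consistent with the paper's finding that their effect on performance is not uniformly positive.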


