DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks

by Bosheng Ding, et al.

Data augmentation techniques are widely used to improve machine learning performance because they enhance the generalization capability of models. In this work, to generate high-quality synthetic data for low-resource tagging tasks, we propose a novel augmentation method based on language models trained on linearized labeled sentences. Our method is applicable to both supervised and semi-supervised settings. In the supervised setting, we conduct extensive experiments on named entity recognition (NER), part-of-speech (POS) tagging, and end-to-end target-based sentiment analysis (E2E-TBSA). In the semi-supervised setting, we evaluate our method on NER under two conditions: given unlabeled data only, and given unlabeled data plus a knowledge base. The results show that our method consistently outperforms the baselines, particularly when the amount of gold training data is small.
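The linearization step is easiest to see on a concrete sentence. Below is a minimal sketch, assuming the common convention of emitting each non-"O" tag immediately before the word it labels and dropping "O" tags, so that a plain language model can be trained on the resulting flat sequences and its samples can be converted back into labeled sentences. The function names and the exact tag convention are illustrative assumptions, not the authors' implementation.

```python
def linearize(tokens, tags):
    """Turn a labeled sentence into a flat token sequence.

    Non-"O" tags are emitted immediately before the word they label, so
    "John/B-PER Smith/I-PER lives/O in/O London/B-LOC" becomes
    ["B-PER", "John", "I-PER", "Smith", "lives", "in", "B-LOC", "London"].
    """
    seq = []
    for token, tag in zip(tokens, tags):
        if tag != "O":
            seq.append(tag)
        seq.append(token)
    return seq


def delinearize(seq, tag_set):
    """Recover (tokens, tags) from a generated sequence.

    Any element found in tag_set labels the word that follows it;
    every other word receives the "O" tag.
    """
    tokens, tags = [], []
    pending = "O"
    for item in seq:
        if item in tag_set:
            pending = item
        else:
            tokens.append(item)
            tags.append(pending)
            pending = "O"  # reset after attaching the tag to one word
    return tokens, tags


if __name__ == "__main__":
    tokens = ["John", "Smith", "lives", "in", "London"]
    tags = ["B-PER", "I-PER", "O", "O", "B-LOC"]
    lin = linearize(tokens, tags)
    print(lin)
    # A language model trained on such sequences can sample new linearized
    # sentences, which delinearize back into synthetic labeled data.
    print(delinearize(lin, {"B-PER", "I-PER", "B-LOC", "I-LOC"}))
```

Round-tripping a sentence through `linearize` and `delinearize` recovers the original labels, which is what makes sequences sampled from the trained language model usable as synthetic training data.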



