DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks

11/03/2020, by Bosheng Ding, et al.

Data augmentation techniques have been widely used to improve machine learning performance, as they enhance the generalization capability of models. In this work, to generate high-quality synthetic data for low-resource tagging tasks, we propose a novel augmentation method that trains language models on linearized labeled sentences. Our method is applicable to both supervised and semi-supervised settings. For the supervised settings, we conduct extensive experiments on named entity recognition (NER), part-of-speech (POS) tagging, and end-to-end target-based sentiment analysis (E2E-TBSA). For the semi-supervised settings, we evaluate our method on NER under two conditions: given unlabeled data only, and given unlabeled data plus a knowledge base. The results show that our method consistently outperforms the baselines, particularly when the gold training data are scarce.
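The core idea, linearizing labeled sentences so that a plain language model can learn the joint distribution of words and tags, can be illustrated with a short sketch. The code below is our own illustration, assuming a BIO tag scheme in which each non-O tag is inserted before its word and O tags are dropped; the function names are hypothetical and are not from the authors' released code.

```python
def linearize(tokens, tags):
    """Insert each non-O tag before its word, e.g.
    (["John", "lives", "in", "London"], ["B-PER", "O", "O", "B-LOC"])
    -> ["B-PER", "John", "lives", "in", "B-LOC", "London"]"""
    out = []
    for token, tag in zip(tokens, tags):
        if tag != "O":          # O tags are omitted to keep sequences short
            out.append(tag)
        out.append(token)
    return out


def delinearize(sequence, tag_set):
    """Recover (token, tag) pairs from a generated linearized sequence."""
    pairs, pending = [], "O"
    for item in sequence:
        if item in tag_set:
            pending = item      # a tag token labels the word that follows it
        else:
            pairs.append((item, pending))
            pending = "O"
    return pairs


if __name__ == "__main__":
    toks = ["John", "lives", "in", "London"]
    tags = ["B-PER", "O", "O", "B-LOC"]
    lin = linearize(toks, tags)
    print(lin)                                   # ['B-PER', 'John', 'lives', 'in', 'B-LOC', 'London']
    print(delinearize(lin, {"B-PER", "B-LOC"}))  # [('John', 'B-PER'), ('lives', 'O'), ...]
```

A language model trained on such linearized sequences can then be sampled to produce new sequences, which are converted back into labeled sentences with the inverse mapping; ill-formed generations (e.g., a tag token with no following word) can simply be discarded.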

