Learning to Annotate: Modularizing Data Augmentation for Text Classifiers with Natural Language Explanations

11/04/2019
by   Ziqi Wang, et al.

Deep neural networks usually require massive labeled data, which restricts their application in scenarios where data annotation is expensive. Natural language (NL) explanations have been shown to be a very useful form of additional supervision: they provide enough domain knowledge to generate more labeled data over new instances, while only roughly doubling annotation time. However, directly applying them to augment model learning faces two challenges: (1) NL explanations are unstructured and inherently compositional, and (2) NL explanations often have many linguistic variants, resulting in low recall and limited generalization ability. In this paper, we propose a novel Neural EXecution Tree (NEXT) framework to augment training data for text classification using NL explanations. After transforming NL explanations into executable logical forms by semantic parsing, NEXT generalizes the different types of actions specified by the logical forms for labeling data instances, which substantially increases the coverage of each NL explanation. Experiments on two NLP tasks (relation extraction and sentiment analysis) demonstrate its superiority over baseline methods. An extension to multi-hop question answering achieves performance gains with light annotation effort.
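To make the idea concrete, here is a minimal, hypothetical sketch of the weak-supervision pattern the abstract describes, not the authors' NEXT implementation: one NL explanation is hand-parsed into an executable rule, and soft string matching relaxes the rule so that paraphrased instances are still covered. All names (soft_match, spouse_rule, the example explanation and sentences) are illustrative assumptions, and the rule is applied to whole sentences rather than, as in the paper, to structured spans such as the text between an entity pair.

```python
# Illustrative sketch only -- a toy version of the explanation-to-rule idea,
# not the authors' NEXT framework. All names here are hypothetical.
from difflib import SequenceMatcher


def soft_match(phrase: str, sentence: str, threshold: float = 0.8) -> float:
    """Best similarity of `phrase` against any window of `sentence` with the
    same number of words; 0.0 if below `threshold`. A stand-in for the soft
    matching that broadens an explanation's coverage beyond exact strings."""
    words = sentence.lower().split()
    n = len(phrase.split())
    best = 0.0
    for i in range(len(words) - n + 1):
        window = " ".join(words[i:i + n])
        best = max(best, SequenceMatcher(None, phrase.lower(), window).ratio())
    return best if best >= threshold else 0.0


# Hand-parsed "logical form" of one explanation:
#   "Label SPOUSE if the phrase 'his wife' appears in the sentence."
def spouse_rule(sentence: str) -> float:
    return soft_match("his wife", sentence)


# Weakly label unlabeled sentences whose rule score clears the threshold.
unlabeled = [
    "Barack Obama and his wife Michelle attended the ceremony.",
    "The senator thanked his spouse during the speech.",
    "The committee approved the budget proposal.",
]
augmented = [(s, "SPOUSE", spouse_rule(s)) for s in unlabeled if spouse_rule(s) > 0]
print(augmented)
```

In this toy setup the exact phrase matches with score 1.0, the paraphrase "his spouse" falls below the similarity threshold, and the unrelated sentence is ignored; the paper's contribution is, roughly, to make this kind of matching and rule composition differentiable and far more general.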

