DSReg: Using Distant Supervision as a Regularizer

05/28/2019
by   Yuxian Meng, et al.

In this paper, we aim to tackle a general issue in NLP tasks where some negative examples are highly similar to the positive examples, i.e., hard-negative examples. We propose the distant supervision as a regularizer (DSReg) approach to tackle this issue. The original task is converted to a multi-task learning problem, in which distant supervision is used to retrieve hard-negative examples. The obtained hard-negative examples are then used as a regularizer: the original objective of distinguishing positive examples from negative examples is jointly optimized with the auxiliary objective of distinguishing softened-positive examples (i.e., hard-negative examples plus positive examples) from easy-negative examples. In the neural context, this can be done by feeding the same representation from the last neural layer into different softmax classifiers. Using this strategy, we improve the performance of baseline models on a range of NLP tasks, including text classification, sequence labeling and reading comprehension.
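The joint objective described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: a shared representation `h` feeds two softmax classifiers, one for the original positive-vs.-negative task and one for the auxiliary softened-positive-vs.-easy-negative task, with a hypothetical weight `lam` balancing the auxiliary (regularizer) loss.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dsreg_loss(h, W_main, W_aux, y_main, y_aux, lam=0.5):
    """Joint DSReg-style objective (illustrative sketch).

    h      : (n, d) shared representations from the last neural layer
    W_main : (d, 2) head for the original task (positive vs. negative)
    W_aux  : (d, 2) head for the auxiliary task
             (softened-positive vs. easy-negative)
    lam    : assumed weight on the auxiliary regularizer loss
    """
    n = h.shape[0]
    p_main = softmax(h @ W_main)
    p_aux = softmax(h @ W_aux)
    ce_main = -np.log(p_main[np.arange(n), y_main]).mean()
    ce_aux = -np.log(p_aux[np.arange(n), y_aux]).mean()
    return ce_main + lam * ce_aux

# Toy usage: hard negatives share the main label 0 (negative) but the
# auxiliary label 1 (softened positive), which is what regularizes the model.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
W_main = rng.normal(size=(8, 2))
W_aux = rng.normal(size=(8, 2))
y_main = np.array([1, 0, 0, 0])  # one positive, three negatives
y_aux = np.array([1, 1, 0, 0])   # hard negative relabeled softened-positive
loss = dsreg_loss(h, W_main, W_aux, y_main, y_aux, lam=0.5)
```

With `lam=0` the objective reduces to the original single-task cross-entropy, so the auxiliary term acts purely as an additive regularizer on the shared representation.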


