Incidental Supervision from Question-Answering Signals

09/01/2019
by   Hangfeng He, et al.

Human annotations are costly for many natural language processing (NLP) tasks, especially those requiring NLP expertise. One promising solution is to use natural language to annotate natural language. However, how to obtain supervision signals or learn representations from natural language annotations remains an open problem. This paper studies the case where the annotations are in question-answering (QA) format and proposes an effective way to learn representations that are useful for other tasks. We also find that representations retrieved from question-answer meaning representation (QAMR) data improve performance almost universally across a wide range of tasks, suggesting that this kind of natural language annotation indeed provides unique information on top of modern language models.
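The idea of reusing representations learned from QA annotations can be sketched as follows. This is a minimal illustration, not the paper's implementation: `qa_encoder` is a hypothetical stand-in for a model pre-trained on QAMR-style question-answer pairs (here just a fixed random projection), and the concatenation step shows how its features would augment a downstream task model's inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an encoder pre-trained on QAMR-style
# question-answer annotations; a fixed random projection is used
# purely for illustration.
W_qa = rng.standard_normal((32, 16))

def qa_encoder(token_vectors):
    """Map base token vectors (n, 32) to QA-derived features (n, 16)."""
    return np.tanh(token_vectors @ W_qa)

base = rng.standard_normal((5, 32))       # e.g. contextual token embeddings
qa_feats = qa_encoder(base)               # incidental-supervision features
augmented = np.concatenate([base, qa_feats], axis=1)
print(augmented.shape)                    # (5, 48) -- fed to the task model
```

The downstream task model then consumes the augmented vectors instead of the base embeddings alone, which is how QA-derived signals can transfer to tasks with no QA annotations of their own.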


