QASem Parsing: Text-to-text Modeling of QA-based Semantics

05/23/2022
by Ayal Klein, et al.

Several recent works have proposed representing semantic relations with questions and answers, decomposing textual information into separate interrogative natural language statements. In this paper, we consider three QA-based semantic tasks, namely QA-SRL, QANom, and QADiscourse, each targeting a certain type of predication, and propose to regard them as jointly providing a comprehensive representation of textual information. To promote this goal, we investigate how to best utilize the power of sequence-to-sequence (seq2seq) pre-trained language models within the unique setup of semi-structured outputs, consisting of an unordered set of question-answer pairs. We examine different input and output linearization strategies, and assess the effects of multitask learning and of simple data augmentation techniques in the setting of imbalanced training data. Consequently, we release the first unified QASem parsing tool, practical for downstream applications that can benefit from an explicit, QA-based account of the information units in a text.
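The core modeling challenge the abstract alludes to is that a seq2seq decoder emits a single token sequence, while the gold annotation is an unordered set of question-answer pairs, so the pairs must be serialized into one target string. The sketch below illustrates one such linearization in Python; the "|" and ";" separator tokens and the canonical sort order are our own illustrative assumptions, not the paper's actual output format.

```python
# A minimal sketch of output linearization for QA-based parsing.
# The "|" and ";" separators and the canonical sort order are
# illustrative assumptions, not the scheme used in the paper.

def linearize_qa_pairs(qa_pairs):
    """Serialize an unordered set of (question, answer) pairs into a
    single target string; sorting gives the decoder a canonical order."""
    return " ; ".join(f"{q} | {a}" for q, a in sorted(qa_pairs))

def delinearize(target):
    """Recover the set of QA pairs from a linearized target string."""
    pairs = set()
    for chunk in target.split(" ; "):
        question, _, answer = chunk.partition(" | ")
        pairs.add((question.strip(), answer.strip()))
    return pairs

# QA-SRL-style pairs for "The chef cooked the meal quickly."
qas = {
    ("Who cooked something?", "The chef"),
    ("What did someone cook?", "the meal"),
    ("How did someone cook something?", "quickly"),
}

target = linearize_qa_pairs(qas)
assert delinearize(target) == qas  # round-trip is lossless
```

Fixing a canonical order in this way sidesteps the set-versus-sequence mismatch during training: without it, the same gold annotation would correspond to many equally valid target strings, diluting the supervision signal.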

Related research

12/30/2021: Does QA-based intermediate training help fine-tuning language models for text classification?
Fine-tuning pre-trained language models for downstream tasks has become ...

12/31/2020: Using Natural Language Relations between Answer Choices for Machine Comprehension
When evaluating an answer choice for Reading Comprehension task, other a...

09/01/2020: Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models
A common approach to solve complex tasks is by breaking them down into s...

05/17/2020: TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
Recent years have witnessed the burgeoning of pretrained language models...

04/07/2022: Parameter-Efficient Abstractive Question Answering over Tables or Text
A long-term ambition of information seeking QA systems is to reason over...

05/06/2021: Learning to Perturb Word Embeddings for Out-of-distribution QA
QA models based on pretrained language models have achieved remarkable ...

06/29/2023: Unified Language Representation for Question Answering over Text, Tables, and Images
When trying to answer complex questions, people often rely on multiple s...
