
QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions

09/26/2021
by Daniela Brook Weiss, et al., Bar-Ilan University

Multi-text applications, such as multi-document summarization, typically need to model redundancies across related texts, yet current methods struggle to consolidate and fuse such overlapping information. To represent content overlap explicitly, we propose aligning predicate-argument relations across texts, providing a potential scaffold for information consolidation. Rather than merely clustering coreferring mentions of shared referents, we model overlap at the propositional level. Our setting builds on QA-SRL, which captures predicate-argument relations as question-answer pairs, making the annotation of cross-text alignments accessible to laypeople. We employ crowd workers to construct a dataset of QA-based alignments, and present a baseline QA alignment model trained on this dataset. Analyses show that the new task is semantically challenging, capturing content overlap beyond lexical similarity and complementing cross-document coreference with proposition-level links, with potential use for downstream tasks.
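To make the representation concrete, below is a minimal sketch, not the authors' implementation, of how QA-SRL-style propositions and a cross-text alignment might look as data structures. The class and function names (QAPair, naive_align) and the example sentences are invented for illustration; the naive lexical-match baseline shown here is exactly the kind of method the paper's task is meant to go beyond.

from dataclasses import dataclass

@dataclass(frozen=True)
class QAPair:
    """A QA-SRL-style proposition: a predicate and one of its
    arguments, captured as a natural-language question and answer."""
    predicate: str   # the verb anchoring the proposition
    question: str    # e.g. "Who approved something?"
    answer: str      # e.g. "the city council"

def naive_align(doc_a: list[QAPair], doc_b: list[QAPair]) -> list[tuple[QAPair, QAPair]]:
    """Toy baseline: align two QA pairs when their predicates match
    and their answers share a token. Paraphrased, non-lexical overlap
    (the semantically hard cases) slips through this heuristic."""
    alignments = []
    for qa_a in doc_a:
        for qa_b in doc_b:
            same_predicate = qa_a.predicate == qa_b.predicate
            shared_answer = set(qa_a.answer.lower().split()) & set(qa_b.answer.lower().split())
            if same_predicate and shared_answer:
                alignments.append((qa_a, qa_b))
    return alignments

# Two documents reporting the same event in different words:
doc_a = [QAPair("approved", "Who approved something?", "the city council"),
         QAPair("approved", "What did someone approve?", "the new budget")]
doc_b = [QAPair("approved", "Who approved something?", "the council"),
         QAPair("approved", "What did someone approve?", "a revised budget")]

for a, b in naive_align(doc_a, doc_b):
    print(f"{a.question} -> {a.answer!r} aligns with {b.answer!r}")

Here the two budget answers align only because they happen to share the token "budget"; a pair like "the spending plan" would be missed, which is why the paper frames alignment as a learned, proposition-level task rather than a lexical-matching one.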
