Paragraph-based Transformer Pre-training for Multi-Sentence Inference

05/02/2022
by Luca Di Liello, et al.

Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling dependencies across multiple candidate sentences jointly. In this paper, we first show that popular pre-trained transformers perform poorly when fine-tuned on multi-candidate inference tasks. We then propose a new pre-training objective that models paragraph-level semantics across multiple input sentences. Our evaluation on three AS2 datasets and one fact verification dataset demonstrates the superiority of our pre-training technique over traditional objectives, both for transformers used as joint models over multiple candidates and for transformers used as cross-encoders on sentence-pair formulations of these tasks.
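To make the joint-modeling setup concrete, below is a minimal sketch of how a transformer can score several candidate answers for one question in a single forward pass, rather than classifying each question-candidate pair independently. This is not the authors' released code: the class name JointCandidateScorer, the roberta-base checkpoint, and the choice of summarizing each candidate by its first token are illustrative assumptions.

```python
# Hypothetical sketch of joint multi-candidate scoring (not the paper's code).
# The question and all k candidates are packed into one sequence, so attention
# can model dependencies across candidates; each candidate gets its own logit.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JointCandidateScorer(nn.Module):
    def __init__(self, model_name="roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, candidate_starts):
        # candidate_starts: (batch, k) index of each candidate's first token.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        idx = candidate_starts.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        cand = hidden.gather(1, idx)          # (batch, k, hidden)
        return self.scorer(cand).squeeze(-1)  # (batch, k) relevance logits

# Usage: pack "question [SEP] cand_1 [SEP] ... [SEP] cand_k" into one input.
tok = AutoTokenizer.from_pretrained("roberta-base")
question = "Who wrote Hamlet?"
candidates = ["Hamlet was written by William Shakespeare.",
              "Hamlet is a small town in North Carolina.",
              "The play premiered around the year 1600."]
text = question + tok.sep_token + tok.sep_token.join(candidates)
enc = tok(text, return_tensors="pt", truncation=True)

# Each candidate starts right after a separator token.
sep_positions = (enc["input_ids"][0] == tok.sep_token_id).nonzero().squeeze(-1)
starts = (sep_positions[:-1] + 1).unsqueeze(0)  # ignore the final separator

model = JointCandidateScorer()
logits = model(enc["input_ids"], enc["attention_mask"], starts)  # shape (1, 3)
```

Because all candidates share one attention context, each candidate's score can depend on the others, which is the property the proposed paragraph-level pre-training is meant to prime. The abstract does not spell out the objective itself; one plausible instantiation, purely for illustration, would pack an anchor sentence with several others and train the model to predict which of them come from the same paragraph.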

Related research

05/20/2022  Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection
An important task for designing QA systems is answer sentence selection ...

01/25/2022  Do Transformers Encode a Foundational Ontology? Probing Abstract Classes in Natural Language
With the methodological support of probing (or diagnostic classification...

05/17/2022  Efficient Unsupervised Sentence Compression by Fine-tuning Transformers with Reinforcement Learning
Sentence compression reduces the length of text by removing non-essentia...

05/25/2020  Pointwise Paraphrase Appraisal is Potentially Problematic
The prevailing approach for training and evaluating paraphrase identific...

03/09/2022  BinMLM: Binary Authorship Verification with Flow-aware Mixture-of-Shared Language Model
Binary authorship analysis is a significant problem in many software eng...

03/17/2022  DP-KB: Data Programming with Knowledge Bases Improves Transformer Fine Tuning for Answer Sentence Selection
While transformers demonstrate impressive performance on many knowledge ...

01/25/2021  Randomized Deep Structured Prediction for Discourse-Level Processing
Expressive text encoders such as RNNs and Transformer Networks have been...