Multilevel Text Alignment with Cross-Document Attention

10/03/2020
by Xuhui Zhou, et al.

Text alignment finds application in tasks such as citation recommendation and plagiarism detection. Existing alignment methods operate at a single, predefined level and cannot learn to align texts at, for example, both the sentence and document levels. We propose a new learning approach that equips previously established hierarchical attention encoders for representing documents with a cross-document attention component, enabling structural comparisons across different levels (document-to-document and sentence-to-document). Our component is weakly supervised from document pairs and can align at multiple levels. Our evaluation on predicting document-to-document and sentence-to-document relationships in citation recommendation and plagiarism detection shows that our approach outperforms previously established hierarchical attention encoders, based on recurrent and transformer contextualization, that are unaware of the structural correspondence between documents.
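To make the idea concrete, below is a minimal PyTorch sketch of what such a cross-document attention component could look like: each sentence vector of one document attends over the sentence vectors of the other, and the attended context is fused back into the original sentence representation. The module name, the scaled dot-product scoring, and the tanh-gated fusion are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossDocumentAttention(nn.Module):
    """Illustrative sketch of a cross-document attention component.

    Each sentence vector of document A attends over the sentence
    vectors of document B; the attended context is fused with the
    original vector. Details here are assumptions for exposition,
    not the authors' exact architecture.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, sents_a: torch.Tensor, sents_b: torch.Tensor) -> torch.Tensor:
        # sents_a: (num_sents_a, dim); sents_b: (num_sents_b, dim)
        q = self.query(sents_a)                     # (num_sents_a, dim)
        k = self.key(sents_b)                       # (num_sents_b, dim)
        scores = q @ k.t() / k.size(-1) ** 0.5      # (num_sents_a, num_sents_b)
        weights = F.softmax(scores, dim=-1)         # attention over B's sentences
        context = weights @ sents_b                 # (num_sents_a, dim)
        # Fuse each A-sentence with its B-aware context vector.
        return torch.tanh(self.fuse(torch.cat([sents_a, context], dim=-1)))

# Example usage with hypothetical sentence embeddings:
cda = CrossDocumentAttention(dim=128)
sents_a = torch.randn(5, 128)   # 5 sentence vectors from document A
sents_b = torch.randn(8, 128)   # 8 sentence vectors from document B
fused = cda(sents_a, sents_b)   # (5, 128) B-aware sentence representations
```

In a setup like this, pooling the fused sentence vectors supports document-to-document prediction, while the attention weights themselves can be read off for sentence-to-document alignment, which is how weak supervision from document pairs can yield finer-grained alignments.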

01/02/2021

Cross-Document Language Modeling

We introduce a new pretraining approach for language models that are gea...
06/02/2021

Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling

Transformer is important for text modeling. However, it has difficulty i...
04/15/2020

SPECTER: Document-level Representation Learning using Citation-informed Transformers

Representation learning is a critical ingredient for natural language pr...
04/30/2020

Exploiting Sentence Order in Document Alignment

In this work, we exploit the simple idea that a document and its transla...
12/02/2021

Local Citation Recommendation with Hierarchical-Attention Text Encoder and SciBERT-based Reranking

The goal of local citation recommendation is to recommend a missing refe...
02/26/2019

Structure Tree-LSTM: Structure-aware Attentional Document Encoders

We propose a method to create document representations that reflect thei...