Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Document Matching

04/26/2020
by Liu Yang, et al.

Many information retrieval and natural language processing problems can be formalized as a semantic matching task. However, existing work in this area has largely focused on matching between short texts, such as finding answer spans, sentences, or passages for a given query or natural language question. Semantic matching between long-form texts such as documents, with applications including document clustering, news recommendation, and related-article recommendation, is comparatively underexplored and deserves more research attention. In recent years, self-attention based models such as Transformers and BERT have achieved state-of-the-art performance on several natural language understanding tasks. These models, however, remain restricted to short text sequences such as sentences, because the computational time and space complexity of self-attention is quadratic in the input sequence length. In this paper, we address these issues by proposing the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for document representation learning and matching, which contains several novel design choices that adapt self-attention models to long text inputs. For model pre-training, we propose a masked sentence block language modeling task, in addition to the masked word language modeling task used in BERT, to capture relations between sentence blocks within a document. Experimental results on several benchmark data sets for long-form document matching show that the proposed SMITH model outperforms previous state-of-the-art Siamese matching models, including hierarchical attention, the multi-depth attention-based hierarchical recurrent neural network, and BERT, and increases the maximum input text length from 512 to 2048 tokens compared with BERT-based baselines.
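The core architectural idea described in the abstract can be illustrated with a small sketch: split a long document into fixed-size sentence blocks, encode each block with a token-level Transformer, contextualize the resulting block vectors with a document-level Transformer, and compare two documents produced by the same (Siamese) encoder. The PyTorch code below is a minimal illustration of that hierarchical, two-level design, not the authors' SMITH implementation; the layer sizes, mean pooling, and cosine-similarity matching head are assumptions made for the example, and the masked sentence block language modeling pre-training objective mentioned above is omitted.

```python
# Minimal sketch (assumed details, not the authors' SMITH code) of a Siamese
# hierarchical Transformer document encoder in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalDocEncoder(nn.Module):
    """Two-level encoder: a sentence-block Transformer over tokens within each
    block, then a document-level Transformer over one pooled vector per block."""

    def __init__(self, vocab_size=30522, d_model=256, n_heads=4,
                 block_len=32, n_blocks=64, n_sent_layers=2, n_doc_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.tok_pos = nn.Embedding(block_len, d_model)   # position within a block
        self.blk_pos = nn.Embedding(n_blocks, d_model)    # position of a block in the document
        self.sent_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_sent_layers)
        self.doc_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_doc_layers)

    def forward(self, token_ids):
        # token_ids: (batch, n_blocks, block_len); e.g. 64 blocks * 32 tokens = 2048 tokens.
        b, nb, bl = token_ids.shape
        x = self.tok_emb(token_ids) + self.tok_pos(torch.arange(bl, device=token_ids.device))
        # Self-attention runs within each block, so its quadratic cost is in the
        # block length (32), not the full document length (2048).
        x = self.sent_encoder(x.reshape(b * nb, bl, -1))
        block_vecs = x.mean(dim=1).reshape(b, nb, -1)       # pool tokens -> block vectors
        block_vecs = block_vecs + self.blk_pos(torch.arange(nb, device=token_ids.device))
        doc = self.doc_encoder(block_vecs)                   # attend across blocks
        return doc.mean(dim=1)                               # pool blocks -> document vector


def siamese_match_score(encoder, doc_a, doc_b):
    """Siamese matching: one shared encoder embeds both documents; the match
    score is the cosine similarity of the two document vectors."""
    return F.cosine_similarity(encoder(doc_a), encoder(doc_b), dim=-1)


if __name__ == "__main__":
    enc = HierarchicalDocEncoder()
    doc_a = torch.randint(0, 30522, (2, 64, 32))   # two documents of 2048 tokens each
    doc_b = torch.randint(0, 30522, (2, 64, 32))
    print(siamese_match_score(enc, doc_a, doc_b))  # two similarity scores in [-1, 1]
```

A real system would initialize the block-level encoder from a pre-trained checkpoint and train the Siamese pair with a matching loss; the point of the sketch is only the two-level hierarchy that lets self-attention scale to 2048-token inputs.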

Related research

11/18/2021 - The Power of Selecting Key Blocks with Local Pre-ranking for Long Document Information Retrieval
    On a wide range of natural language processing and information retrieval...

01/16/2021 - Match-Ignition: Plugging PageRank into Transformer for Long-form Text Matching
    Semantic text matching models have been widely used in community questio...

02/07/2023 - Transformer-based Models for Long-Form Document Matching: Challenges and Empirical Analysis
    Recent advances in the area of long document matching have primarily foc...

07/10/2020 - BISON: BM25-weighted Self-Attention Framework for Multi-Fields Document Search
    Recent breakthrough in natural language processing has advanced the info...

06/18/2020 - I-BERT: Inductive Generalization of Transformer to Arbitrary Context Lengths
    Self-attention has emerged as a vital component of state-of-the-art sequ...

01/21/2022 - Recurrent Neural Networks with Mixed Hierarchical Structures and EM Algorithm for Natural Language Processing
    How to obtain hierarchical representations with an increasing level of a...

01/18/2022 - Hierarchical Neural Network Approaches for Long Document Classification
    Text classification algorithms investigate the intricate relationships b...
