Socialformer: Social Network Inspired Long Document Modeling for Document Ranking

02/22/2022
by Yujia Zhou, et al.

Utilizing pre-trained language models has achieved great success for neural document ranking, but their computational and memory requirements make long document modeling a critical issue. Recent works propose to modify the full attention matrix in the Transformer by designing sparse attention patterns. However, most of them focus only on local connections between terms within a fixed-size window; how to build suitable remote connections between terms to better model document representations remains underexplored. In this paper, we propose Socialformer, which introduces the characteristics of social networks into the design of sparse attention patterns for long document modeling in document ranking. Specifically, we consider several attention patterns to construct a graph that resembles a social network. Endowed with this characteristic of social networks, most pairs of nodes in such a graph can reach each other through a short path while the graph remains sparse. To facilitate efficient computation, we segment the graph into multiple subgraphs that simulate friend circles in social scenarios. Experimental results confirm the effectiveness of our model on long document modeling.
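The paper's construction is more involved, but as a rough illustration of the idea, the sketch below (hypothetical function and parameter names, not the authors' implementation) builds a sparse attention mask that combines a local sliding window with remote edges sampled under a power-law degree distribution, so a few hub tokens link distant positions much like highly connected users in a social network.

```python
import numpy as np

def social_sparse_attention_mask(seq_len: int,
                                 window: int = 4,
                                 n_remote_edges: int = 64,
                                 power_law_alpha: float = 1.5,
                                 seed: int = 0) -> np.ndarray:
    """Boolean mask: local window connections plus heavy-tailed remote edges.

    Illustrative sketch only; the actual Socialformer attention patterns differ.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)

    # Local connections: each token attends to neighbors within the window.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True

    # Remote connections: sample edge endpoints with power-law weights so
    # a handful of "hub" tokens collect many long-range links.
    weights = np.arange(1, seq_len + 1, dtype=float) ** (-power_law_alpha)
    rng.shuffle(weights)              # decouple hub identity from token position
    probs = weights / weights.sum()
    src = rng.choice(seq_len, size=n_remote_edges, p=probs)
    dst = rng.choice(seq_len, size=n_remote_edges, p=probs)
    mask[src, dst] = True
    mask[dst, src] = True             # keep the connection graph symmetric

    return mask

mask = social_sparse_attention_mask(seq_len=512)
print(f"attended pairs: {mask.mean():.3%}")  # stays far below full attention
```

In the paper, the resulting graph is additionally segmented into multiple subgraphs (the "friend circles") so that attention can be computed efficiently within each part; the sketch above omits that partitioning step.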

Related research:

- Contrastive Document Representation Learning with Graph Attention Networks (10/20/2021)
- The Power of Selecting Key Blocks with Local Pre-ranking for Long Document Information Retrieval (11/18/2021)
- Long Document Ranking with Query-Directed Sparse Transformer (10/23/2020)
- Long Document Re-ranking with Modular Re-ranker (05/09/2022)
- Skim-Attention: Learning to Focus via Document Layout (09/02/2021)
- Attention Approximates Sparse Distributed Memory (11/10/2021)
- Capturing Global Structural Information in Long Document Question Answering with Compressive Graph Selector Network (10/11/2022)
