Contextualized Sparse Representation with Rectified N-Gram Attention for Open-Domain Question Answering

by Jinhyuk Lee, et al.
Korea University
University of Washington

A sparse representation is known to be an effective means of encoding precise lexical cues in information retrieval tasks by associating each dimension with a unique n-gram-based feature. However, it has often relied on term frequency (such as tf-idf and BM25) or on hand-engineered features that are coarse-grained (document-level) and often task-specific, hence not easily generalizable and not appropriate for fine-grained (word- or phrase-level) retrieval. In this work, we propose an effective method for learning a highly contextualized, word-level sparse representation by utilizing rectified self-attention weights on the neighboring n-grams. We kernelize the inner product space during training for memory efficiency, without the explicit mapping of the large sparse vectors. We particularly focus on the application of our model to the phrase retrieval problem, which has recently been shown to be a promising direction for open-domain question answering (QA) and requires lexically sensitive phrase encoding. We demonstrate the effectiveness of the learned sparse representations by not only drastically improving the phrase retrieval accuracy (by more than 4%), but also outperforming other open-domain QA methods, with up to x97 faster inference, on SQuAD-open and CuratedTREC.
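The kernel trick mentioned in the abstract can be illustrated with a minimal sketch (an assumption for illustration, not the authors' implementation): if each word's sparse representation places rectified attention weights on its neighboring n-gram features, the inner product of two such vectors in the implicit |V|^n-dimensional space reduces to a sum over the n-gram keys the two sides share, so the large sparse vectors never need to be materialized.

```python
def ngram_weights(tokens, scores, n=2):
    """Map each n-gram (a tuple key) to a rectified attention weight.

    `scores` holds one raw attention score per n-gram position
    (hypothetical inputs here); applying ReLU-style rectification
    zeroes out negative scores, which is what makes the vector sparse.
    """
    weights = {}
    for i in range(len(tokens) - n + 1):
        key = tuple(tokens[i:i + n])
        w = max(scores[i], 0.0)  # rectification -> sparsity
        if w > 0.0:
            weights[key] = weights.get(key, 0.0) + w
    return weights


def kernel_inner_product(wa, wb):
    """<s_a, s_b> over the implicit n-gram vocabulary space:
    only keys present on both sides contribute, so the cost is
    proportional to the smaller dict, not the vocabulary size."""
    if len(wb) < len(wa):
        wa, wb = wb, wa  # iterate over the smaller side
    return sum(w * wb[k] for k, w in wa.items() if k in wb)


# Toy usage with made-up tokens and raw attention scores.
phrase = ["harry", "potter", "was", "born"]
question = ["when", "was", "harry", "potter", "born"]
wp = ngram_weights(phrase, [1.2, -0.3, 0.8], n=2)
wq = ngram_weights(question, [0.1, 0.5, 1.0, -0.2], n=2)
print(kernel_inner_product(wp, wq))  # only ("harry", "potter") matches
```

Because only overlapping n-grams contribute, scoring stays cheap even though the implicit feature space (all n-grams over the vocabulary) is enormous.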



