Contextualized Sparse Representation with Rectified N-Gram Attention for Open-Domain Question Answering

11/07/2019
by Jinhyuk Lee, et al.

A sparse representation is known to be an effective means of encoding precise lexical cues in information retrieval tasks by associating each dimension with a unique n-gram-based feature. However, such representations have often relied on term frequency (as in tf-idf and BM25) or on hand-engineered features that are coarse-grained (document-level) and often task-specific, and are therefore not easily generalizable and not appropriate for fine-grained (word- or phrase-level) retrieval. In this work, we propose an effective method for learning a highly contextualized, word-level sparse representation by utilizing rectified self-attention weights on the neighboring n-grams. For memory efficiency, we kernelize the inner product space during training, avoiding the explicit mapping of the large sparse vectors. We particularly focus on applying our model to the phrase retrieval problem, which has recently been shown to be a promising direction for open-domain question answering (QA) and requires lexically sensitive phrase encoding. We demonstrate the effectiveness of the learned sparse representations by not only drastically improving phrase retrieval accuracy (by more than 4%), but also outperforming previous open-domain QA methods with up to x97 faster inference on SQuAD-Open and CuratedTREC.
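
The abstract packs two technical ideas: rectified self-attention weights over neighboring n-grams, which act as an implicit sparse vector in n-gram vocabulary space, and a kernelized inner product that scores two such vectors during training without materializing them. The PyTorch sketch below is only a minimal illustration of these two ideas under simplifying assumptions, not the authors' implementation; the function names, projection matrices, and n-gram id tensors are hypothetical placeholders.

import torch
import torch.nn.functional as F

def rectified_ngram_attention(hidden, query_proj, key_proj):
    # hidden: [seq_len, dim] contextual token encodings (e.g. from BERT).
    # query_proj, key_proj: [dim, dim] learned projections (placeholder names).
    q = hidden @ query_proj                        # [seq_len, dim]
    k = hidden @ key_proj                          # [seq_len, dim]
    scores = (q @ k.t()) / hidden.size(-1) ** 0.5  # scaled attention logits
    # ReLU rectification keeps only positively attended positions, so row i
    # behaves as a sparse vector indexed by the n-gram id at each position.
    return F.relu(scores)                          # [seq_len, seq_len]

def kernelized_sparse_inner_product(w_a, ids_a, w_b, ids_b):
    # w_a: [len_a, len_a] and w_b: [len_b, len_b] rectified attention weights.
    # ids_a: [len_a], ids_b: [len_b] integer n-gram ids of each position.
    # match[p, q] = 1 iff position p of sequence A and position q of sequence B
    # carry the same n-gram, i.e. they hit the same dimension of the sparse space.
    match = (ids_a.unsqueeze(1) == ids_b.unsqueeze(0)).float()
    # <s_a[i], s_b[j]> = sum over id-matched (p, q) of w_a[i, p] * w_b[j, q],
    # computed without building the |n-gram vocabulary|-dimensional vectors.
    return w_a @ match @ w_b.t()                   # [len_a, len_b]

# Toy usage with random encodings and random n-gram ids.
dim = 8
h_q, h_d = torch.randn(5, dim), torch.randn(12, dim)
proj_q, proj_k = torch.randn(dim, dim), torch.randn(dim, dim)
w_q = rectified_ngram_attention(h_q, proj_q, proj_k)
w_d = rectified_ngram_attention(h_d, proj_q, proj_k)
ids_q = torch.randint(0, 1000, (5,))
ids_d = torch.randint(0, 1000, (12,))
scores = kernelized_sparse_inner_product(w_q, ids_q, w_d, ids_d)  # [5, 12]

The kernel form mirrors the memory argument in the abstract: the score depends only on the attention maps and an id-match matrix, so the size of the n-gram vocabulary never enters the training memory footprint.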



Related research

09/16/2021  Phrase Retrieval Learns Passage Retrieval, Too
Dense retrieval methods have shown great promise over sparse retrieval m...

12/23/2020  Learning Dense Representations of Phrases at Scale
Open-domain question answering can be reformulated as a phrase retrieval...

06/13/2019  Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index
Existing open-domain question answering (QA) models are not suitable for...

04/20/2018  Phrase-Indexed Question Answering: A New Challenge for Scalable Document Comprehension
The current trend of extractive question answering (QA) heavily relies o...

05/23/2017  Question-Answering with Grammatically-Interpretable Representations
We introduce an architecture, the Tensor Product Recurrent Network (TPRN...

10/25/2022  Bridging the Training-Inference Gap for Dense Phrase Retrieval
Building dense retrievers requires a series of standard procedures, incl...

12/17/2021  Sparsifying Sparse Representations for Passage Retrieval by Top-k Masking
Sparse lexical representation learning has demonstrated much progress in...
