Deep Reinforced Query Reformulation for Information Retrieval
Query reformulation has long been a key mechanism to alleviate the vocabulary-mismatch problem in information retrieval, for example by expanding the query with related terms or by generating paraphrases of the query. In this work, we propose a deep reinforced query reformulation (DRQR) model to automatically generate new reformulations of the query. To encourage the model to generate queries that achieve high performance on the retrieval task, we incorporate query performance prediction into our reward function. In addition, to evaluate the quality of the reformulated queries in the context of information retrieval, we first train our DRQR model and then apply a retrieval ranking model to the obtained reformulated queries. Experiments are conducted on the TREC 2020 Deep Learning track MSMARCO document ranking dataset. Our results show that our proposed model outperforms several query reformulation baselines on the retrieval task. In addition, improvements are also observed when combining our model with various retrieval approaches, such as query expansion and BERT.
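To illustrate the reward design described in the abstract, the following is a minimal sketch, not the authors' implementation, of a REINFORCE-style policy-gradient update in which a query performance prediction score serves as the reward for a sampled reformulation. The function and parameter names (`qpp_score`, `policy`, `index`) are hypothetical placeholders for the retrieval pipeline and the trained sequence-to-sequence policy.

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, reward: float, baseline: float = 0.0) -> torch.Tensor:
    """REINFORCE-style loss: scale the negative log-likelihood of the
    sampled reformulation by the baseline-adjusted reward.

    log_probs: 1-D tensor of per-token log-probabilities of the sampled query
    reward:    scalar reward, e.g. a query performance prediction (QPP) score
    baseline:  optional baseline to reduce gradient variance
    """
    return -(reward - baseline) * log_probs.sum()

def qpp_score(reformulated_query: str, index) -> float:
    """Hypothetical query performance predictor (e.g. a pre-retrieval
    predictor computed from collection statistics). Returns a scalar score."""
    ...  # assumption: supplied by the retrieval pipeline, not defined here

# Sketch of one training step (placeholders for a trained seq2seq policy):
# sampled_query, token_log_probs = policy.sample(original_query)
# reward = qpp_score(sampled_query, index)
# loss = reinforce_loss(token_log_probs, reward)
# loss.backward(); optimizer.step()
```

The design choice this sketch highlights is that the reward need not come from relevance judgements at training time: a QPP score provides a cheap, query-level signal that steers generation towards reformulations expected to retrieve well.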