Brown University at TREC Deep Learning 2019

09/08/2020
by   George Zerveas, et al.
This paper describes Brown University's submission to the TREC 2019 Deep Learning track. We followed a two-phase method for producing a ranking of passages for a given input query. In the first phase, the user's query is expanded by appending three queries generated by a transformer model trained to rephrase an input query into semantically similar queries. The expanded query can exhibit greater surface-form similarity and vocabulary overlap with the passages of interest and can therefore serve as enriched input to any downstream information retrieval method. In the second phase, we use a BERT-based model, pre-trained for language modeling and fine-tuned for query-document relevance prediction, to compute relevance scores for a set of 1000 candidate passages per query; the final ranking is obtained by sorting the passages by their predicted relevance scores. According to the results published in the official Overview of the TREC Deep Learning Track 2019, our team ranked 3rd in the passage retrieval task (including full ranking and re-ranking), and 2nd when considering only re-ranking submissions.
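The two-phase pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: `paraphrase` stands in for the transformer rephrasing model, and `relevance` stands in for the fine-tuned BERT scorer; the toy stand-ins at the bottom are hypothetical placeholders for demonstration only.

```python
def expand_query(query, paraphrase, n=3):
    """Phase 1: append n model-generated paraphrases to the original query.

    `paraphrase` is a stand-in for the paper's transformer model that
    rephrases a query into a semantically similar one.
    """
    return " ".join([query] + [paraphrase(query) for _ in range(n)])


def rank_passages(expanded_query, passages, relevance):
    """Phase 2: score each candidate passage against the expanded query
    and sort by descending relevance.

    `relevance` is a stand-in for the BERT-based query-document
    relevance model; in the paper it scores ~1000 candidates per query.
    """
    return sorted(passages, key=lambda p: relevance(expanded_query, p),
                  reverse=True)


# Toy stand-ins (hypothetical): identity paraphrase, word-overlap relevance.
def toy_paraphrase(query):
    return query


def toy_relevance(query, passage):
    return len(set(query.lower().split()) & set(passage.lower().split()))


query = "deep learning for passage ranking"
passages = [
    "a recipe for chocolate cake",
    "passage ranking with deep learning models",
]
expanded = expand_query(query, toy_paraphrase)
print(rank_passages(expanded, passages, toy_relevance)[0])
# → passage ranking with deep learning models
```

In practice, sorting the full candidate set by the scores of a pairwise relevance model in this way is the standard re-ranking setup used in the track's re-ranking subtask.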

