Simplified TinyBERT: Knowledge Distillation for Document Retrieval

09/16/2020
by Xuanang Chen, et al.

Despite the effectiveness of utilizing BERT for document ranking, the computational cost of such approaches is non-negligible compared to other retrieval methods. To this end, this paper first empirically investigates the application of knowledge distillation models to the document ranking task. In addition, two simplifications on top of the recent TinyBERT are proposed. Evaluation on the MS MARCO document re-ranking task confirms the effectiveness of the proposed simplifications.
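For context, the sketch below shows how knowledge distillation is commonly applied to a BERT-style cross-encoder re-ranker: a large fine-tuned teacher scores (query, document) pairs, and a smaller student is trained to match the teacher's softened relevance logits alongside the hard relevance labels. The model names, temperature, loss weighting, and the plain KL-divergence objective are illustrative assumptions; this is not the Simplified TinyBERT recipe or its two proposed simplifications.

```python
# Minimal sketch of soft-label knowledge distillation for a BERT-style
# cross-encoder document re-ranker. Model choices and hyperparameters are
# illustrative assumptions, not the method proposed in the paper.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

teacher_name = "bert-base-uncased"    # assumed stand-in for a fine-tuned BERT ranker
student_name = "prajjwal1/bert-tiny"  # assumed small student, stands in for TinyBERT

tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForSequenceClassification.from_pretrained(teacher_name, num_labels=2).eval()
student = AutoModelForSequenceClassification.from_pretrained(student_name, num_labels=2)

def distillation_loss(query, document, label, temperature=2.0, alpha=0.5):
    """KD loss for one (query, document) pair scored by a cross-encoder."""
    inputs = tokenizer(query, document, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        teacher_logits = teacher(**inputs).logits   # frozen teacher relevance logits
    student_logits = student(**inputs).logits       # student relevance logits

    # Soft-label term: match the student's softened distribution to the teacher's.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-label term: standard cross-entropy on the relevance judgment (1 = relevant).
    hard_loss = F.cross_entropy(student_logits, torch.tensor([label]))

    return alpha * soft_loss + (1 - alpha) * hard_loss

loss = distillation_loss("what is knowledge distillation",
                         "Knowledge distillation transfers knowledge from a large model ...",
                         label=1)
loss.backward()  # only the student receives gradients; the teacher stays frozen
```

In practice the student would be trained over many such pairs; TinyBERT-style approaches additionally distill intermediate representations, which is omitted here for brevity.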

Related research

Longformer for MS MARCO Document Re-ranking Task (09/20/2020)
Two-step document ranking, where the initial retrieval is done by a clas...

Intra-Document Cascading: Learning to Select Passages for Neural Document Ranking (05/20/2021)
An emerging recipe for achieving state-of-the-art effectiveness in neura...

PARADE: Passage Representation Aggregation for Document Reranking (08/20/2020)
We present PARADE, an end-to-end Transformer-based model that considers ...

Knowledge Distillation in Document Retrieval (11/11/2019)
Complex deep learning models now achieve state-of-the-art performance fo...

Distilling Dense Representations for Ranking using Tightly-Coupled Teachers (10/22/2020)
We present an approach to ranking with dense representations that applie...

Fine-Grained Distillation for Long Document Retrieval (12/20/2022)
Long document retrieval aims to fetch query-relevant documents from a la...

StructVPR: Distill Structural Knowledge with Weighting Samples for Visual Place Recognition (12/02/2022)
Visual place recognition (VPR) is usually considered as a specific image...
