A Study on Token Pruning for ColBERT

12/13/2021
by Carlos Lassance, et al.

The ColBERT model has recently been proposed as an effective BERT-based ranker. By adopting a late interaction mechanism, a major advantage of ColBERT is that document representations can be precomputed in advance. However, a major downside of the model is its index size, which scales linearly with the number of tokens in the collection. In this paper, we study various designs for ColBERT models in order to address this problem. While compression techniques have been explored to reduce the index size, we instead study token pruning techniques for ColBERT. We compare simple heuristics, as well as a single attention layer, for selecting the tokens to keep at indexing time. Our experiments show that ColBERT indexes can be pruned by up to 30% on the MS MARCO passage collection without a significant drop in performance. Finally, we experiment on MS MARCO documents, which reveals several challenges for such mechanisms.
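The two ideas the abstract combines can be sketched briefly: ColBERT's late interaction scores a query against precomputed document token embeddings via a MaxSim sum, and token pruning shrinks the index by keeping only the highest-scoring document tokens. The sketch below is a minimal illustration, not the paper's implementation; the function names and the importance scores are hypothetical, standing in for the heuristics or single attention layer the paper studies.

```python
import numpy as np

def maxsim_score(query_emb, doc_emb):
    # ColBERT late interaction: each query token is matched to its most
    # similar document token, and the per-token maxima are summed.
    # query_emb: (n_query_tokens, dim), doc_emb: (n_doc_tokens, dim)
    sim = query_emb @ doc_emb.T          # pairwise token similarities
    return float(sim.max(axis=1).sum())  # MaxSim over doc tokens, sum over query

def prune_doc_tokens(doc_emb, importance, keep_ratio=0.7):
    # Token pruning at indexing time: keep only the top-scoring fraction
    # of document tokens, reducing index size proportionally.
    # `importance` is a per-token score, e.g. from a simple heuristic or
    # a single attention layer (hypothetical here, not the paper's code).
    k = max(1, int(len(importance) * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])  # indices of tokens to retain
    return doc_emb[keep]
```

Pruning to a 0.7 keep ratio corresponds to the roughly 30% index reduction reported on MS MARCO passages; only the pruned `doc_emb` would be stored in the index, and `maxsim_score` is then computed against it unchanged at query time.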


Related research

06/08/2023 · Muti-Scale And Token Mergence: Make Your ViT More Efficient
Since its inception, Vision Transformer (ViT) has emerged as a prevalent...

06/26/2023 · Constraint-aware and Ranking-distilled Token Pruning for Efficient Transformer Inference
Deploying pre-trained transformer models like BERT on downstream tasks i...

07/02/2021 · Learned Token Pruning for Transformers
A major challenge in deploying transformer models is their prohibitive i...

06/12/2023 · Revisiting Token Pruning for Object Detection and Instance Segmentation
Vision Transformers (ViTs) have shown impressive performance in computer...

05/24/2023 · Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Token dropping is a recently-proposed strategy to speed up the pretraini...

04/18/2023 · Token Imbalance Adaptation for Radiology Report Generation
Imbalanced token distributions naturally exist in text documents, leadin...

06/23/2023 · Max-Margin Token Selection in Attention Mechanism
Attention mechanism is a central component of the transformer architectu...
