Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers

07/14/2022
by Weng Lam Tam et al.

Prompt tuning updates only a small number of task-specific parameters in a pre-trained model, yet it has achieved performance comparable to fine-tuning the full parameter set on both language understanding and generation tasks. In this work, we study prompt tuning for neural text retrievers. We introduce parameter-efficient prompt tuning for text retrieval across in-domain, cross-domain, and cross-topic settings. Through an extensive analysis, we show that the strategy can mitigate two issues faced by fine-tuning-based retrieval methods: parameter inefficiency and weak generalizability. Notably, it can significantly improve the out-of-domain zero-shot generalization of retrieval models. By updating only 0.1% of the model parameters, the prompt tuning strategy helps retrieval models achieve better generalization performance than traditional methods in which all parameters are updated. Finally, to facilitate research on retrievers' cross-topic generalizability, we curate and release an academic retrieval dataset with 18K query-result pairs in 87 topics, making it the largest topic-specific dataset to date.
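The core idea described above can be sketched in a few lines: trainable "soft prompt" vectors are prepended to the input embeddings, while every backbone parameter is frozen. The sketch below is a minimal, hypothetical PyTorch illustration (class and variable names are our own, and a toy transformer stands in for a real pre-trained retriever encoder); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Minimal prompt-tuning sketch: only the prompt matrix is trainable;
    the backbone and embedding table are frozen."""

    def __init__(self, backbone: nn.Module, embed: nn.Embedding, prompt_len: int = 8):
        super().__init__()
        self.backbone = backbone
        self.embed = embed
        dim = embed.embedding_dim
        # The ONLY trainable parameters: a (prompt_len x dim) matrix.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Freeze everything else.
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        for p in self.embed.parameters():
            p.requires_grad_(False)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)                        # (batch, seq, dim)
        prompts = self.prompt.expand(x.size(0), -1, -1)  # (batch, prompt_len, dim)
        h = self.backbone(torch.cat([prompts, x], dim=1))
        return h.mean(dim=1)                             # pooled query/document vector

# Toy backbone standing in for a pre-trained retriever encoder.
dim = 64
embed = nn.Embedding(1000, dim)
layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
model = PromptTunedEncoder(backbone, embed, prompt_len=8)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4f}")
```

Even in this toy setup, the trainable fraction is well under 1% of all parameters, which mirrors the roughly 0.1% figure reported in the abstract for full-scale retrievers.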


Related research

08/21/2022
Scattered or Connected? An Optimized Parameter-efficient Tuning Approach for Information Retrieval
Pre-training and fine-tuning have achieved significant advances in the i...

04/11/2021
Fine-tuning Encoders for Improved Monolingual and Zero-shot Polylingual Neural Topic Modeling
Neural topic models can augment or replace bag-of-words inputs with the ...

09/15/2023
SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels
Pre-trained vision transformers have strong representation benefits to v...

08/29/2022
Exploring and Evaluating Personalized Models for Code Generation
Large Transformer models achieved the state-of-the-art status for Natura...

12/12/2022
In Defense of Cross-Encoders for Zero-Shot Retrieval
Bi-encoders and cross-encoders are widely used in many state-of-the-art ...

11/25/2021
Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization
Domain generalization (DG) is a difficult transfer learning problem aimi...

08/10/2022
Reducing Retraining by Recycling Parameter-Efficient Prompts
Parameter-efficient methods are able to use a single frozen pre-trained ...
