Scattered or Connected? An Optimized Parameter-efficient Tuning Approach for Information Retrieval

08/21/2022
by Xinyu Ma, et al.

Pre-training and fine-tuning have achieved significant advances in information retrieval (IR). A typical approach is to fine-tune all the parameters of large-scale pre-trained models (PTMs) on downstream tasks. As model sizes and the number of tasks grow, this approach becomes less feasible and prohibitively expensive. Recently, a variety of parameter-efficient tuning methods have been proposed in natural language processing (NLP) that fine-tune only a small number of parameters while still attaining strong performance. Yet there has been little effort to explore parameter-efficient tuning for IR. In this work, we first conduct a comprehensive study of existing parameter-efficient tuning methods at both the retrieval and re-ranking stages. Unlike the promising results in NLP, we find that these methods cannot match full fine-tuning at either stage when updating less than 1% of the original model parameters. More importantly, we find that the existing methods are merely parameter-efficient, not learning-efficient: they suffer from unstable training and slow convergence. To analyze the underlying reason, we conduct a theoretical analysis and show that the separation of the inserted trainable modules makes optimization difficult. To alleviate this issue, we propose to inject additional modules alongside the PTM so that the originally scattered modules become connected. In this way, all the trainable modules form a pathway that smooths the loss surface and thus helps stabilize training. Experiments at both the retrieval and re-ranking stages show that our method significantly outperforms existing parameter-efficient methods and achieves performance comparable to or even better than full fine-tuning.
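To make the setting concrete, below is a minimal sketch of the kind of parameter-efficient (adapter-style) tuning the abstract refers to: the PTM is frozen and only small inserted modules are trained. The abstract does not specify the paper's exact module design or how the "connected pathway" is built, so the names here (BottleneckAdapter, add_adapters, bottleneck_dim) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of adapter-style parameter-efficient tuning (not the
# paper's method): freeze the pre-trained model, train only small bottleneck
# modules, one per transformer layer.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, with a residual connection."""

    def __init__(self, hidden_size: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def add_adapters(ptm: nn.Module, hidden_size: int, num_layers: int) -> nn.ModuleList:
    """Freeze all PTM parameters and return one trainable adapter per layer."""
    for p in ptm.parameters():
        p.requires_grad = False
    return nn.ModuleList(BottleneckAdapter(hidden_size) for _ in range(num_layers))
```

In such a setup the trainable modules are "scattered": each adapter sits inside a different layer, separated by frozen blocks. That separation is what the paper's analysis identifies as the source of unstable training and slow convergence, and its remedy is to inject additional modules alongside the PTM so the trainable modules form a connected pathway.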


