Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning

04/06/2022
by Shicheng Xu, et al.

Text matching is a fundamental technique in both information retrieval and natural language processing. Text matching tasks share the same paradigm of determining the relationship between two given texts, but the relationship varies from task to task: relevance in document retrieval, semantic alignment in paraphrase identification, and answerability in question answering. The essential signals for text matching, however, remain within a finite scope: exact matching, semantic matching, and inference matching. Recent state-of-the-art neural text matching models, such as pre-trained language models (PLMs), are hard to generalize across tasks, because end-to-end supervised learning on a task-specific dataset makes the model overemphasize data sample bias and task-specific signals rather than the essential matching signals, which harms its generalization to other tasks. To overcome this problem, we adopt a specialization-generalization training strategy that we call Match-Prompt. In the specialization stage, descriptions of different matching tasks are mapped to only a few prompt tokens. In the generalization stage, the matching model explores the essential matching signals by being trained on diverse matching tasks: the high diversity prevents the model from fitting the data sample bias of any specific task, so it can focus on learning the essential matching signals, while the prompt tokens obtained in the first stage are added to the inputs of the corresponding tasks to help the model distinguish task-specific matching signals. Experimental results on eighteen public datasets show that Match-Prompt significantly improves the multi-task generalization capability of PLMs in text matching, yielding better in-domain multi-task, out-of-domain multi-task, and new-task adaptation performance than task-specific models.
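To make the two-stage idea concrete, below is a minimal sketch (not the authors' released implementation) of a multi-task matching model with per-task prompt tokens prepended to the input of a shared PLM encoder, as the abstract describes. The PLM name, the task names, the prompt length, and the two-way matching head are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MatchPromptSketch(nn.Module):
    """Shared PLM encoder plus a small bank of learnable prompt tokens per task."""

    def __init__(self, plm_name="bert-base-uncased",
                 tasks=("retrieval", "paraphrase", "qa"), prompt_len=8):
        super().__init__()
        self.plm = AutoModel.from_pretrained(plm_name)
        hidden = self.plm.config.hidden_size
        # One learnable prompt per task; these stand in for the task
        # descriptions mapped to prompt tokens in the specialization stage.
        self.prompts = nn.ParameterDict({
            t: nn.Parameter(torch.randn(prompt_len, hidden) * 0.02) for t in tasks
        })
        self.head = nn.Linear(hidden, 2)  # match / no-match

    def forward(self, input_ids, attention_mask, task):
        tok_emb = self.plm.get_input_embeddings()(input_ids)      # (B, L, H)
        prompt = self.prompts[task].unsqueeze(0).expand(tok_emb.size(0), -1, -1)
        emb = torch.cat([prompt, tok_emb], dim=1)                 # prepend prompt tokens
        pad = torch.ones(emb.size(0), prompt.size(1),
                         dtype=attention_mask.dtype, device=attention_mask.device)
        mask = torch.cat([pad, attention_mask], dim=1)
        out = self.plm(inputs_embeds=emb, attention_mask=mask).last_hidden_state
        # Classify from the [CLS] position, which sits right after the prompt.
        return self.head(out[:, prompt.size(1)])
```

In the generalization stage, batches from all tasks would be mixed so the shared encoder learns the essential matching signals, while the task-specific prompts carry the task-specific ones; a single forward pass might look like:

```python
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MatchPromptSketch()
batch = tok("what is dense retrieval?",
            "dense retrieval encodes queries and documents into vectors.",
            return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"], task="qa")  # shape (1, 2)
```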


