Adaptable Text Matching via Meta-Weight Regulator

04/27/2022
by Bo Zhang, et al.

Neural text matching models have been used in a range of applications, such as question answering and natural language inference, and have yielded good performance. However, these models have limited adaptability: their performance declines when they encounter test examples from a different dataset or even a different task. Adaptability is particularly important in the few-shot setting: in many cases, only a limited amount of labeled data is available for a target dataset or task, while a richly labeled source dataset or task may be accessible. Adapting a model trained on the abundant source data to a few-shot target dataset or task is nevertheless challenging. To tackle this challenge, we propose the Meta-Weight Regulator (MWR), a meta-learning approach that learns to assign weights to source examples based on their relevance to the target loss. Specifically, MWR first trains the model on uniformly weighted source examples and measures the model's efficacy on the target examples via a loss function. By iteratively performing (meta) gradient descent, it propagates high-order gradients back to the source examples; these gradients are then used to update the weights of the source examples in a way that reflects their contribution to target performance. As MWR is model-agnostic, it can be applied to any backbone neural model. Extensive experiments with various backbone text matching models, on four widely used datasets and two tasks, demonstrate that our approach significantly outperforms a number of existing adaptation methods and effectively improves the cross-dataset and cross-task adaptability of neural text matching models in the few-shot setting.
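The reweighting loop described above can be illustrated with a minimal sketch of the standard example-reweighting meta-gradient trick, assuming a toy functional linear classifier in place of a real text matching backbone. The names here (forward, mwr_weights) and the hyperparameters are our own illustrative choices, not the paper's; the actual MWR procedure may differ in detail.

```python
import torch
import torch.nn.functional as F

def forward(params, x):
    # Toy functional "matcher": a linear classifier standing in for a
    # real text matching backbone (which MWR treats as a black box).
    W, b = params
    return x @ W + b

def mwr_weights(params, src_x, src_y, tgt_x, tgt_y, inner_lr=0.1):
    # 1) Start from uniform (zero) perturbation weights over the source
    #    examples; requires_grad lets the meta loss reach them later.
    w = torch.zeros(src_x.size(0), requires_grad=True)

    # 2) Inner step: per-example weighted source loss, then a
    #    differentiable "virtual" SGD update of the model parameters.
    per_example = F.cross_entropy(forward(params, src_x), src_y,
                                  reduction="none")
    grads = torch.autograd.grad((w * per_example).sum(), params,
                                create_graph=True)
    virtual = [p - inner_lr * g for p, g in zip(params, grads)]

    # 3) Outer (meta) step: evaluate the virtual model on the few-shot
    #    target set and backpropagate through the inner update, which
    #    yields a higher-order gradient with respect to w.
    tgt_loss = F.cross_entropy(forward(virtual, tgt_x), tgt_y)
    w_grad, = torch.autograd.grad(tgt_loss, w)

    # 4) Source examples whose up-weighting would lower the target loss
    #    receive positive weight; clamp the rest at zero and normalize.
    new_w = torch.clamp(-w_grad, min=0.0)
    return new_w / (new_w.sum() + 1e-8)

# Toy usage with hypothetical 16-d features and a 3-class objective.
W = torch.randn(16, 3, requires_grad=True)
b = torch.zeros(3, requires_grad=True)
src_x, src_y = torch.randn(32, 16), torch.randint(0, 3, (32,))
tgt_x, tgt_y = torch.randn(8, 16), torch.randint(0, 3, (8,))
weights = mwr_weights([W, b], src_x, src_y, tgt_x, tgt_y)
```

Note that starting w at zero is not a degenerate choice: the virtual update still depends on w symbolically, so the meta-gradient of the target loss with respect to each source weight is nonzero and measures how much up-weighting that source example would reduce the target loss.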

Related research

Weighted Meta-Learning (03/20/2020)
Meta-learning leverages related source tasks to learn an initialization ...

Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning? (02/16/2023)
Prompt tuning (PT) which only tunes the embeddings of an additional sequ...

MxML: Mixture of Meta-Learners for Few-Shot Classification (04/11/2019)
A meta-model is trained on a distribution of similar tasks such that it ...

MetaICL: Learning to Learn In Context (10/29/2021)
We introduce MetaICL (Meta-training for In-Context Learning), a new meta...

HIDRA: Head Initialization across Dynamic targets for Robust Architectures (10/28/2019)
The performance of gradient-based optimization strategies depends heavil...

Cross-domain Few-shot Meta-learning Using Stacking (05/12/2022)
Cross-domain few-shot meta-learning (CDFSML) addresses learning problems...

GradMix: Multi-source Transfer across Domains and Tasks (02/09/2020)
The computer vision community is witnessing an unprecedented rate of new...
