Divergence-Based Domain Transferability for Zero-Shot Classification

02/11/2023
by Alexander Pugantsov, et al.

Transferring learned patterns from pretrained neural language models has been shown to significantly improve effectiveness across a variety of language-based tasks, and further tuning on an intermediate task has been demonstrated to provide additional performance benefits, provided the intermediate task is sufficiently related to the target task. However, how to identify related tasks is an open problem, and brute-force searching over effective task combinations is prohibitively expensive. Hence, the question arises: can we improve both the effectiveness and the efficiency of tasks with no training examples through selective fine-tuning? In this paper, we explore statistical measures that approximate the divergence between domain representations as a means to estimate whether tuning using one task pair will exhibit performance benefits over tuning on another. This estimation can then be used to reduce the number of task pairs that need to be tested by eliminating pairs that are unlikely to provide benefits. Through experimentation over 58 tasks and over 6,600 task pair combinations, we demonstrate that statistical measures can distinguish effective task pairs, and that the resulting estimates can reduce end-to-end runtime by up to 40%.
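The core idea is straightforward to prototype. As a minimal sketch (not the authors' exact method; the paper explores several statistical measures, and the task names and unigram representation below are illustrative assumptions), one could score candidate intermediate tasks against a zero-shot target by the Jensen-Shannon distance between their term distributions, keeping only the closest pairs as fine-tuning candidates:

```python
# Minimal sketch: rank candidate intermediate tasks for a target task
# by the Jensen-Shannon distance between unigram term distributions.
# Lower distance is taken as a proxy for higher transfer benefit.
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon


def term_distribution(texts, vocab):
    """Unigram probability distribution of a corpus over a shared vocab."""
    counts = Counter(tok for text in texts for tok in text.lower().split())
    freqs = np.array([counts[t] for t in vocab], dtype=float)
    freqs += 1.0  # add-one smoothing so both distributions share support
    return freqs / freqs.sum()


def rank_intermediate_tasks(target_texts, candidates):
    """Rank candidate tasks by JS distance to the target task.

    `candidates` maps task name -> list of example texts. scipy's
    jensenshannon returns the JS distance (the square root of the JS
    divergence), which preserves the same ranking.
    """
    vocab = sorted(
        {tok for texts in [target_texts, *candidates.values()]
         for text in texts for tok in text.lower().split()}
    )
    p_target = term_distribution(target_texts, vocab)
    scores = {
        name: jensenshannon(p_target, term_distribution(texts, vocab))
        for name, texts in candidates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1])


if __name__ == "__main__":
    # Hypothetical example corpora, purely for illustration.
    target = ["is this review positive or negative", "great film loved it"]
    candidates = {
        "movie_reviews": ["terrible movie", "loved this film"],
        "medical_qa": ["what is the dosage", "symptoms of flu"],
    }
    for task, dist in rank_intermediate_tasks(target, candidates):
        print(f"{task}: JS distance = {dist:.3f}")
```

Ranking candidates this way allows high-divergence pairs to be discarded before any fine-tuning run is launched, which is where the end-to-end runtime savings would come from.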


Related Research

12/30/2021
Does QA-based intermediate training help fine-tuning language models for text classification?
Fine-tuning pre-trained language models for downstream tasks has become ...

08/26/2021
Rethinking Why Intermediate-Task Fine-Tuning Works
Supplementary Training on Intermediate Labeled-data Tasks (STILTs) is a ...

02/02/2022
Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions
Transfer learning approaches have shown to significantly improve perform...

04/16/2021
What to Pre-Train on? Efficient Intermediate Task Selection
Intermediate task fine-tuning has been shown to culminate in large trans...

05/01/2020
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
While pretrained models such as BERT have shown large gains across natur...
