Can You Label Less by Using Out-of-Domain Data? Active Transfer Learning with Few-shot Instructions

11/21/2022
by Rafal Kocielnik, et al.

Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from over-fitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information from existing pre-labeled datasets (source-domain task) with minimal labeling effort on unlabeled target data (target-domain task). Our strategy can yield positive transfer, achieving a mean AUC gain of 10.5% compared to no transfer with a large 22b-parameter PLM. We further show that annotating just a few target-domain samples via active learning can be beneficial for transfer, but the impact diminishes with more annotation effort (26% drop in mean AUC gain). Finally, we find that not all transfer scenarios yield a positive gain, which seems related to the PLM's initial performance on the target-domain task.
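The abstract outlines the core mechanics: pre-labeled source-domain examples become few-shot demonstrations in an instruction prompt (no gradient updates), and active learning picks the handful of target-domain samples worth annotating, which are then appended to the prompt as additional shots. Below is a minimal sketch of that loop, not the authors' released code: the model name (google/flan-t5-base, standing in for the 22b-parameter PLM), the prompt template, and the least-confidence acquisition heuristic are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumption: a small instruction-tuned PLM stands in for the
# 22b-parameter model evaluated in the paper.
MODEL_NAME = "google/flan-t5-base"
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def build_prompt(examples, target_text):
    """Compose a few-shot instruction prompt from labeled examples.
    Source-domain shots and any actively annotated target-domain
    shots are treated identically; no fine-tuning takes place."""
    shots = "\n\n".join(
        f'Text: "{text}"\nToxic: {"yes" if label else "no"}'
        for text, label in examples
    )
    return (
        "Decide whether each text is toxic.\n\n"
        f'{shots}\n\nText: "{target_text}"\nToxic:'
    )

def prob_toxic(prompt):
    """Probability that the model's first generated token is 'yes'
    rather than 'no' (a two-way softmax over the label tokens)."""
    inputs = tok(prompt, return_tensors="pt")
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
    yes_id = tok("yes", add_special_tokens=False).input_ids[0]
    no_id = tok("no", add_special_tokens=False).input_ids[0]
    return torch.softmax(logits[[yes_id, no_id]], dim=-1)[0].item()

def select_for_annotation(unlabeled_pool, examples, k=5):
    """Least-confidence acquisition (an assumed strategy; the abstract
    only says 'active learning'): pick the k target-domain texts whose
    predicted label is closest to 0.5. After human annotation they are
    appended to `examples` as extra few-shot demonstrations."""
    scored = sorted(
        unlabeled_pool,
        key=lambda t: abs(prob_toxic(build_prompt(examples, t)) - 0.5),
    )
    return scored[:k]
```

Note that each active-learning round only grows the prompt rather than updating any weights, which is what lets ATF sidestep the over-fitting and domain-shift problems the abstract attributes to fine-tuning on small samples.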
