Evaluating Parameter Efficient Learning for Generation

10/25/2022
by Peng Xu, et al.

Parameter efficient learning methods (PERMs) have recently gained significant attention as they provide an efficient way for pre-trained language models (PLMs) to adapt to a downstream task. However, prior conclusions about PERMs are mostly drawn from in-domain evaluations over the full training set. In this paper, we compare PERMs with finetuning from three new perspectives: (1) the effect of sample and model size on in-domain evaluations, (2) generalization to unseen domains and new datasets, and (3) the faithfulness of generations. Our results show that, in the in-domain setting, (a) there is a cross point of sample size below which PERMs perform better than finetuning, and (b) larger PLMs have larger cross points. For the cross-domain and cross-dataset cases, we show that (a) Adapter (Houlsby et al., 2019) performs best among the PERMs studied here, and (b) it outperforms finetuning if the task dataset is below a certain size. We also compare the faithfulness of generations and show that PERMs can achieve a better faithfulness score than finetuning, especially for small training sets, by as much as 6%. Finally, we apply Adapter to MT-NLG 530b (Smith et al., 2022) and achieve new state-of-the-art results on Xsum (Narayan et al., 2018) for all ROUGE scores (ROUGE-1 49.17, ROUGE-2 27.20, ROUGE-L 40.98).
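
For readers unfamiliar with the Adapter method referenced above, the following is a minimal sketch of the core idea from Houlsby et al. (2019): a small bottleneck module added inside each Transformer layer, trained while the PLM weights stay frozen. The bottleneck size of 64 and the `mark_adapter_params_trainable` helper are illustrative assumptions for this sketch, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter in the style of Houlsby et al. (2019):
    down-project -> nonlinearity -> up-project, with a residual connection.
    Only these parameters are trained; the PLM weights stay frozen."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)
        # Near-identity initialization so training starts close to the frozen PLM.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the original representation and adds a
        # small learned correction.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def mark_adapter_params_trainable(model: nn.Module) -> None:
    """Hypothetical helper: freeze all PLM weights and leave only parameters
    whose names contain 'adapter' trainable."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name
```

Because only the bottleneck projections are updated, the number of trainable parameters per layer is roughly 2 * hidden_size * bottleneck_size, a small fraction of full finetuning.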


