Task-Specific Skill Localization in Fine-tuned Language Models

02/13/2023
by Abhishek Panigrahi, et al.

Pre-trained language models can be fine-tuned to solve diverse NLP tasks, including in few-shot settings. Thus fine-tuning allows the model to quickly pick up task-specific “skills,” but there has been limited study of where these newly-learnt skills reside inside the massive model. This paper introduces the term skill localization for this problem and proposes a solution. Given the downstream task and a model fine-tuned on that task, a simple optimization is used to identify a very small subset of parameters (∼0.01% of model parameters) responsible for (>95%) of the model's performance, in the sense that grafting the fine-tuned values for just this tiny subset onto the pre-trained model gives performance almost as good as the fine-tuned model. While reminiscent of recent works on parameter-efficient fine-tuning, the novel aspects here are that: (i) No further re-training is needed on the subset (unlike, say, with lottery tickets). (ii) Notable improvements are seen over vanilla fine-tuning with respect to calibration of predictions in-distribution (40-90% error reduction) as well as the quality of predictions out-of-distribution (OOD). In models trained on multiple tasks, a stronger notion of skill localization is observed, where the sparse regions corresponding to different tasks are almost disjoint, and their overlap (when it happens) is a proxy for task similarity. Experiments suggest that localization via grafting can assist certain forms of continual learning.
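For a concrete picture of what "grafting" amounts to, here is a minimal sketch assuming a PyTorch setting. The function name `graft` and the `masks` dictionary are illustrative assumptions, not the paper's code; the optimization that actually identifies the sparse parameter subset is omitted.

```python
# Minimal sketch of parameter grafting, assuming PyTorch models.
# `masks` is a hypothetical precomputed dict mapping parameter names to
# boolean tensors that mark the tiny selected subset of entries.
import copy

import torch


def graft(pretrained: torch.nn.Module,
          finetuned: torch.nn.Module,
          masks: dict) -> torch.nn.Module:
    """Copy the fine-tuned values of the masked parameters onto a copy of
    the pre-trained model; all other parameters keep pre-trained values."""
    grafted = copy.deepcopy(pretrained)
    ft_params = dict(finetuned.named_parameters())
    with torch.no_grad():
        for name, param in grafted.named_parameters():
            if name in masks:
                # Where the mask is True, take the fine-tuned value;
                # elsewhere keep the pre-trained value.
                param.copy_(torch.where(masks[name], ft_params[name], param))
    return grafted
```

Per the abstract, the grafted model is used directly for evaluation; no further re-training on the selected subset is needed.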

