Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models

05/30/2023
by   Guande He, et al.

Large pre-trained language models (PLMs) have demonstrated strong performance on natural language understanding (NLU) tasks through fine-tuning. However, fine-tuned models still suffer from overconfident predictions, especially in out-of-domain settings. In this paper, we tackle the problem of calibrating fine-tuned language models. We demonstrate that PLMs are well-calibrated on the masked language modeling task, with robust predictive confidence under domain shift, yet fine-tuned models fail to retain this property due to catastrophic forgetting, which harms calibration on the downstream classification task. In light of these observations, we evaluate the calibration of several methods that preserve pre-trained features and show that preserving pre-trained features improves the calibration of fine-tuned language models. Among these methods, our proposed approach, which encourages the fine-tuned model to learn generative representations with an auxiliary language modeling objective, achieves competitive accuracy and the lowest expected calibration error compared to several strong baselines under both in-domain and out-of-domain settings on three downstream NLU tasks.
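The abstract reports results in terms of expected calibration error (ECE), the standard gap between a model's confidence and its accuracy. As a reference, the sketch below computes ECE in its common equal-width-binning form; the function name and the 10-bin default are illustrative choices, not taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Standard ECE: bin predictions by confidence, then average the
    per-bin |accuracy - mean confidence| gap, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Samples whose top-class confidence falls in this bin.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_acc = (predictions[in_bin] == labels[in_bin]).mean()
        bin_conf = confidences[in_bin].mean()
        ece += (in_bin.sum() / n) * abs(bin_acc - bin_conf)
    return ece

# Example: an overconfident classifier yields a large ECE.
conf = np.array([0.95, 0.9, 0.92, 0.88])   # predicted top-class probabilities
pred = np.array([1, 0, 1, 1])              # predicted labels
gold = np.array([1, 1, 0, 1])              # true labels (only 50% correct)
print(expected_calibration_error(conf, pred, gold))
```

A well-calibrated model would place roughly 90% of its 0.9-confidence predictions correct, so the per-bin gaps, and hence the ECE, would be near zero.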


