Self Information Update for Large Language Models through Mitigating Exposure Bias

05/29/2023
by Pengfei Yu, et al.

Current LLMs have demonstrated remarkable capabilities in addressing users' requests for various types of information. However, these models are limited by the cutoff of their pretraining corpora, rendering them incapable of providing up-to-date information. Retraining LLMs from scratch is cost-prohibitive, and the effectiveness of continual fine-tuning on new corpora has not been thoroughly examined. Additionally, current update procedures typically demand significant human input to prepare the information into more structured formats, such as knowledge triples, conversational data, or responses with human feedback. In this study, we conduct a comprehensive examination of a novel self information update task in LLMs, which requires only the provision of informative text corpora; for instance, the latest news articles can be used to update the LLMs' existing knowledge. We define the self information update task and assess the continual fine-tuning approach for this purpose. We observe that naive continual fine-tuning can be problematic due to LLMs' exposure bias, which prioritizes existing information over the new information we aim to integrate and leads to incorrect reasoning chains that ultimately diminish the efficacy of information updates. Based on our analysis, we propose an effective method to mitigate exposure bias by incorporating the selection of relevant facts into the training losses. Furthermore, we develop a dataset to evaluate information updates, derived from news articles published after March 2023. Experimental results demonstrate that our proposed approach significantly increases the factual consistency score (on a scale of 0 to 1) by 0.16 while having minimal impact on performance for instructions not directly related to the new information.
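The abstract only states that relevant-fact selection is incorporated into the training losses, without giving the exact formulation. Below is a minimal illustrative sketch, under the assumption that selection is realized as a token-level reweighting of the continual fine-tuning loss; the function name, the fact mask, and the weighting scheme are hypothetical and not taken from the paper.

```python
# Hedged sketch (not the paper's exact method): upweight tokens that belong to
# selected relevant facts when computing the language-modeling loss during
# continual fine-tuning, so new information is not drowned out by exposure bias.
import torch
import torch.nn.functional as F


def fact_weighted_lm_loss(logits, labels, fact_mask,
                          fact_weight=2.0, ignore_index=-100):
    """Cross-entropy loss with extra weight on new-fact tokens.

    logits:    (batch, seq_len, vocab) model outputs
    labels:    (batch, seq_len) target token ids, ignore_index marks padding
    fact_mask: (batch, seq_len) 1.0 where the token lies inside a selected
               relevant fact span, 0.0 elsewhere (how spans are selected is
               left open here)
    """
    vocab = logits.size(-1)
    per_token = F.cross_entropy(
        logits.view(-1, vocab), labels.view(-1),
        ignore_index=ignore_index, reduction="none",
    ).view_as(labels).float()

    valid = (labels != ignore_index).float()
    weights = valid * (1.0 + (fact_weight - 1.0) * fact_mask.float())
    return (per_token * weights).sum() / weights.sum().clamp(min=1.0)


# Example with random tensors (hypothetical shapes):
B, T, V = 2, 8, 100
mask = torch.zeros(B, T)
mask[:, 3:5] = 1.0  # pretend tokens 3-4 carry the new fact
loss = fact_weighted_lm_loss(torch.randn(B, T, V),
                             torch.randint(0, V, (B, T)), mask)
```

The intent of such a weighting is that gradient updates concentrate on the spans carrying new information rather than on text the model can already reproduce from its existing knowledge.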


research
08/17/2023

An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

Catastrophic forgetting (CF) is a phenomenon that occurs in machine lear...
research
05/28/2020

Language Models are Few-Shot Learners

Recent work has demonstrated substantial gains on many NLP tasks and ben...
research
06/08/2023

Revisit Few-shot Intent Classification with PLMs: Direct Fine-tuning vs. Continual Pre-training

We consider the task of few-shot intent detection, which involves traini...
research
11/01/2019

On the Unintended Social Bias of Training Language Generation Models with Data from Local Media

There are concerns that neural language models may preserve some of the ...
research
04/08/2022

Fair and Argumentative Language Modeling for Computational Argumentation

Although much work in NLP has focused on measuring and mitigating stereo...
research
08/07/2021

Fine-tuning GPT-3 for Russian Text Summarization

Automatic summarization techniques aim to shorten and generalize informa...
research
10/23/2020

Overcoming Conflicting Data for Model Updates

In this paper, we explore how to use a small amount of new data to updat...
