Modifying Memories in Transformer Models

12/01/2020
by Chen Zhu, et al.

Large Transformer models have achieved impressive performance on many natural language tasks. In particular, Transformer-based language models have been shown to encode substantial factual knowledge in their vast number of parameters. While improving the memorization and generalization of Transformers has been widely studied, it is not well understood how to make Transformers forget specific old facts and memorize new ones. In this paper, we propose a new task of explicitly modifying specific factual knowledge in Transformer models while ensuring that performance does not degrade on the unmodified facts. This task is useful in many scenarios, such as updating stale knowledge, protecting privacy, and eliminating unintended biases stored in the models. We benchmark several approaches that provide natural baselines for this task. This leads to the discovery of key components of a Transformer model that are especially effective for knowledge modification. The work also provides insights into the role that different training phases (such as pretraining and fine-tuning) play in memorization and knowledge modification.
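One natural baseline of the kind the abstract alludes to is to fine-tune only on the modified facts while constraining how far the weights may move from their pretrained values, so that behavior on unmodified facts is approximately preserved. Below is a minimal, hypothetical sketch of such constrained fine-tuning in PyTorch; the names `model`, `modified_facts_loader`, `loss_fn`, and the radius `delta` are illustrative assumptions, not taken from the paper's released code.

```python
# Hypothetical sketch: fine-tune on the modified facts only, then project
# each parameter tensor back into an L2 ball of radius delta around its
# pretrained value, so unmodified facts are (approximately) preserved.
import torch

def constrained_finetune(model, modified_facts_loader, loss_fn,
                         delta=1e-3, lr=1e-5, steps=100):
    # Snapshot the pretrained weights theta_0.
    theta0 = {name: p.detach().clone() for name, p in model.named_parameters()}
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    it = iter(modified_facts_loader)
    for _ in range(steps):
        try:
            batch, labels = next(it)
        except StopIteration:
            it = iter(modified_facts_loader)
            batch, labels = next(it)

        opt.zero_grad()
        # Loss is computed on the *modified* facts only.
        loss = loss_fn(model(batch), labels)
        loss.backward()
        opt.step()

        # Project back into the constraint set ||theta - theta_0||_2 <= delta,
        # applied independently to each parameter tensor.
        with torch.no_grad():
            for name, p in model.named_parameters():
                diff = p - theta0[name]
                norm = diff.norm()
                if norm > delta:
                    p.copy_(theta0[name] + diff * (delta / norm))
    return model
```

Projecting each parameter tensor separately (rather than constraining all weights jointly) is a simplification for the sketch; the radius `delta` controls the trade-off between how fully the new facts are absorbed and how much performance on the unmodified facts may drift.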

Related research

Hierarchical Transformers Are More Efficient Language Models (10/26/2021)
Transformer models yield impressive results on many NLP and sequence mod...

DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models (09/07/2023)
Despite their impressive capabilities, large language models (LLMs) are ...

Birth of a Transformer: A Memory Viewpoint (06/01/2023)
Large language models based on transformers have achieved great empirica...

Revision Transformers: Getting RiT of No-Nos (10/19/2022)
Current transformer language models (LM) are large-scale models with bil...

Measuring and Modifying Factual Knowledge in Large Language Models (06/09/2023)
Large Language Models (LLMs) store an extensive amount of factual knowle...

What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured (04/12/2022)
Transformer-based models are now predominant in NLP. They outperform app...

Unveiling Transformers with LEGO: a synthetic reasoning task (06/09/2022)
We propose a synthetic task, LEGO (Learning Equality and Group Operation...
