History Repeats: Overcoming Catastrophic Forgetting For Event-Centric Temporal Knowledge Graph Completion

05/30/2023
by Mehrnoosh Mirtaheri, et al.

Temporal knowledge graph (TKG) completion models typically rely on having access to the entire graph during training. However, in real-world scenarios, TKG data is often received incrementally as events unfold, leading to a dynamic, non-stationary data distribution over time. While one could incorporate fine-tuning into existing methods to allow them to adapt to evolving TKG data, this can lead to forgetting previously learned patterns. Alternatively, retraining the model on the entire updated TKG can mitigate forgetting but is computationally burdensome. To address these challenges, we propose a general continual training framework that is applicable to any TKG completion method and leverages two key ideas: (i) a temporal regularization that encourages repurposing of less important model parameters for learning new knowledge, and (ii) a clustering-based experience replay that reinforces past knowledge by selectively preserving only a small portion of the past data. Our experimental results on widely used event-centric TKG datasets demonstrate the effectiveness of our proposed continual training framework in adapting to new events while reducing catastrophic forgetting. Further, we perform ablation studies to show the effectiveness of each component of our proposed framework. Finally, we investigate the relation between the memory dedicated to experience replay and the benefit gained from our clustering-based sampling strategy.
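
The abstract does not spell out the form of the temporal regularization, so the following is only a minimal sketch of one standard instantiation: an importance-weighted quadratic penalty (in the spirit of elastic weight consolidation) that anchors parameters important for past snapshots while leaving low-importance parameters free to be repurposed. The PyTorch helper below and its arguments `prev_params` and `importance` (dictionaries keyed by parameter name, holding the parameters after the previous time step and a per-parameter importance estimate) are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def temporal_regularization(model, prev_params, importance, lam=1.0):
    """Quadratic penalty on parameter drift, weighted per parameter.

    High-importance parameters are pulled toward their values from the
    previous TKG snapshot; low-importance parameters incur little penalty
    and can be repurposed for newly arriving events.
    """
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        penalty = penalty + (importance[name] * (param - prev_params[name]) ** 2).sum()
    return lam * penalty
```

In use, this term would simply be added to the completion model's task loss at each incremental training step, e.g. `loss = task_loss + temporal_regularization(model, prev_params, importance)`.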
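Likewise, the abstract only states that replay samples are chosen via clustering; the sketch below shows one plausible reading, assuming scikit-learn's k-means over embeddings of past facts and a centroid-nearest selection rule. The function name, the use of `KMeans`, and the per-cluster quota are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_replay_buffer(embeddings, facts, budget, n_clusters=10):
    """Select a small, diverse replay buffer from past TKG facts.

    Clusters the fact embeddings, then keeps the facts nearest to each
    centroid so the buffer covers the modes of the past data while
    staying within a fixed memory budget.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    per_cluster = max(1, budget // n_clusters)
    buffer = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        if members.size == 0:
            continue
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        buffer.extend(facts[i] for i in members[np.argsort(dists)[:per_cluster]])
    return buffer[:budget]
```

The replayed facts would then be mixed into each new snapshot's training batches, reinforcing past patterns at a fraction of the cost of full retraining.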


Related research

04/17/2021
TIE: A Framework for Embedding-based Incremental Temporal Knowledge Graph Completion
Reasoning in a temporal knowledge graph (TKG) is a critical task for inf...

08/07/2023
AdaER: An Adaptive Experience Replay Approach for Continual Lifelong Learning
Continual lifelong learning is a machine learning framework inspired by...

05/06/2023
On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code
Pre-trained language models (PLMs) have become a prevalent technique in ...

11/11/2021
Lifelong Learning from Event-based Data
Lifelong learning is a long-standing aim for artificial agents that act ...

03/22/2020
Continual Graph Learning
Graph Neural Networks (GNNs) have recently received significant research...

03/11/2019
Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay
Despite huge success, deep networks are unable to learn effectively in s...

08/24/2022
Lifelong Learning for Neural powered Mixed Integer Programming
Mixed Integer programs (MIPs) are typically solved by the Branch-and-Bou...
