Disentangle-based Continual Graph Representation Learning

10/06/2020
by Xiaoyu Kou, et al.

Graph embedding (GE) methods embed nodes (and/or edges) of a graph into a low-dimensional semantic space and have shown their effectiveness in modeling multi-relational data. However, existing GE models are not practical for real-world applications because they overlook the streaming nature of incoming data. To address this issue, we study the problem of continual graph representation learning, which aims to continually train a GE model on new data so that it learns continually emerging multi-relational data without catastrophically forgetting previously learned knowledge. Moreover, we propose a disentangle-based continual graph representation learning (DiCGRL) framework inspired by humans' ability to learn procedural knowledge. The experimental results show that DiCGRL effectively alleviates the catastrophic forgetting problem and outperforms state-of-the-art continual learning models. The code and datasets are released at https://github.com/KXY-PUBLIC/DiCGRL.
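To make the setting concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released implementation) of the general idea behind disentangle-based continual graph embedding: each node embedding is split into several components, each relation attends over those components to pick the most relevant ones, only the selected components drive the score for new triples, and a small replay of related old triples is mixed into each update. All names (`DisentangledGE`, `continual_step`), the TransE-style scoring, and the replay heuristic are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch: disentangled node embeddings with selective updates
# on streaming triples plus a small replay of overlapping old triples.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledGE(nn.Module):
    def __init__(self, num_nodes, num_rels, dim=64, num_components=4, top_n=2):
        super().__init__()
        assert dim % num_components == 0
        self.K, self.top_n = num_components, top_n
        self.comp_dim = dim // num_components
        # Each node embedding is disentangled into K components.
        self.node_emb = nn.Parameter(torch.randn(num_nodes, num_components, self.comp_dim) * 0.1)
        self.rel_emb = nn.Parameter(torch.randn(num_rels, self.comp_dim) * 0.1)

    def component_scores(self, heads, rels):
        # Attention of each relation over the K components of the head entity.
        h = self.node_emb[heads]                        # (B, K, d)
        r = self.rel_emb[rels].unsqueeze(1)             # (B, 1, d)
        return F.softmax((h * r).sum(-1), dim=-1)       # (B, K)

    def score(self, heads, rels, tails):
        # TransE-style score restricted to the top-n relevant components,
        # so gradients only reach the components selected for this relation.
        att = self.component_scores(heads, rels)
        idx = att.topk(self.top_n, dim=-1).indices                       # (B, top_n)
        gather_idx = idx.unsqueeze(-1).expand(-1, -1, self.comp_dim)
        h = torch.gather(self.node_emb[heads], 1, gather_idx)
        t = torch.gather(self.node_emb[tails], 1, gather_idx)
        r = self.rel_emb[rels].unsqueeze(1)
        return -((h + r - t) ** 2).sum(dim=(1, 2))                       # higher = more plausible


def continual_step(model, optimizer, new_triples, replay_buffer):
    """One continual-learning update: train on new triples plus a small
    replay of old triples whose head entities overlap with the new batch."""
    heads, rels, tails = new_triples
    seen = set(heads.tolist())
    replay = [trip for trip in replay_buffer if trip[0] in seen][:32]
    if replay:
        rh, rr, rt = map(torch.tensor, zip(*replay))
        heads = torch.cat([heads, rh]); rels = torch.cat([rels, rr]); tails = torch.cat([tails, rt])
    neg_tails = torch.randint(0, model.node_emb.size(0), tails.shape)
    loss = F.margin_ranking_loss(model.score(heads, rels, tails),
                                 model.score(heads, rels, neg_tails),
                                 target=torch.ones_like(tails, dtype=torch.float),
                                 margin=1.0)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

A training loop would call `continual_step` once per incoming batch of triples and append each batch to `replay_buffer` afterwards, so later updates can revisit triples that share entities with the new data.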


