CaT: Balanced Continual Graph Learning with Graph Condensation

09/18/2023
by Yilun Liu, et al.

Continual graph learning (CGL) aims to continuously update a graph model as graph data arrives in a streaming manner. Because the model easily forgets previously learned knowledge when trained on newly arriving data, catastrophic forgetting has been the central problem in CGL. Recent replay-based methods address it by updating the model with both (1) the entire incoming graph and (2) a sampling-based memory bank of replayed graphs that approximates the distribution of historical data. After each model update, a new replayed graph sampled from the incoming graph is added to the memory bank. Although these methods are intuitive and effective for CGL, this paper identifies two issues. First, most sampling-based methods struggle to fully capture the historical distribution when the storage budget is tight. Second, there is a significant imbalance between the scale of the large incoming graph and that of the lightweight memory bank, which leads to unbalanced training. To address these issues, this paper proposes a Condense and Train (CaT) framework. Before each model update, the incoming graph is condensed into a small yet informative synthesised replayed graph, which is then stored in a Condensed Graph Memory alongside the historical replayed graphs. In the continual learning phase, a Training in Memory scheme updates the model directly on the Condensed Graph Memory rather than on the whole incoming graph, which alleviates the data imbalance problem. Extensive experiments on four benchmark datasets demonstrate the superior effectiveness and efficiency of the proposed CaT framework. The code has been released at https://github.com/superallen13/CaT-CGL.
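To make the condense-then-train workflow concrete, below is a minimal PyTorch sketch of the loop described in the abstract. It is an illustration under simplifying assumptions, not the paper's implementation: graph structure is omitted (each task is reduced to node features and labels), the condenser is a crude per-class feature-mean matching objective standing in for CaT's graph condensation, and the names condense_task and train_in_memory, the toy linear classifier, and all hyperparameters are hypothetical.

import torch
import torch.nn.functional as F

def condense_task(x, y, nodes_per_class=2, steps=200, lr=0.01):
    """Condense one incoming task into a tiny synthetic replayed set.

    Simplified stand-in for CaT's graph condensation: synthetic features
    are optimised so their per-class means match the real per-class means
    (a crude distribution-matching objective; the paper condenses whole
    graphs, including structure).
    """
    classes = y.unique()
    syn_x = torch.randn(len(classes) * nodes_per_class, x.size(1),
                        requires_grad=True)
    syn_y = classes.repeat_interleave(nodes_per_class)
    opt = torch.optim.Adam([syn_x], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for c in classes:
            real_mean = x[y == c].mean(dim=0)
            syn_mean = syn_x[syn_y == c].mean(dim=0)
            loss = loss + F.mse_loss(syn_mean, real_mean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn_x.detach(), syn_y

def train_in_memory(model, memory, epochs=100, lr=0.01):
    """Update the model using only the condensed memory (balanced replay)."""
    x = torch.cat([m[0] for m in memory])
    y = torch.cat([m[1] for m in memory])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

# Continual loop: condense each incoming task, then train on memory only.
memory = []
model = torch.nn.Linear(16, 4)  # toy classifier; CaT uses a GNN
for task in range(4):                       # toy stream: 4 tasks
    x = torch.randn(50, 16) + task          # fake task features
    y = torch.full((50,), task)             # fake task labels
    memory.append(condense_task(x, y))
    train_in_memory(model, memory)

The design point the sketch preserves is that every model update touches only the condensed memory, never the full incoming graph, so each historical task contributes a replayed set of the same small size; this is what keeps training balanced across old and new tasks.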


