Let Invariant Rationale Discovery Inspire Graph Contrastive Learning

06/16/2022
by Sihang Li, et al.

Leading graph contrastive learning (GCL) methods perform graph augmentation in one of two ways: (1) randomly corrupting the anchor graph, which risks losing semantic information, or (2) exploiting domain knowledge to preserve salient features, which undermines generalization to other domains. Taking an invariance view of GCL, we argue that a high-performing augmentation should preserve the salient semantics of the anchor graph with respect to instance discrimination. To this end, we relate GCL to invariant rationale discovery and propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL). Specifically, without supervision signals, RGCL uses a rationale generator to reveal salient features for graph instance discrimination as the rationale, and then creates rationale-aware views for contrastive learning. This rationale-aware pre-training scheme endows the backbone model with powerful representation ability, further facilitating fine-tuning on downstream tasks. On the MNIST-Superpixel and MUTAG datasets, visual inspection of the discovered rationales shows that the rationale generator successfully captures the salient features (i.e., the nodes carrying the distinguishing semantics of a graph). On biochemical-molecule and social-network benchmark datasets, the state-of-the-art performance of RGCL demonstrates the effectiveness of rationale-aware views for contrastive learning. Our code is available at https://github.com/lsh0520/RGCL.
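To make the pipeline concrete, here is a minimal PyTorch sketch of the rationale-aware contrastive idea described above. It is an illustrative reconstruction under stated assumptions, not the authors' implementation (see the linked repository for that): the names RationaleGenerator, rationale_view, and info_nce, and the keep_ratio parameter, are inventions of this sketch. A learned scorer assigns each node a saliency probability, two views per graph are sampled in proportion to those scores, and a standard InfoNCE loss pulls the paired views together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RationaleGenerator(nn.Module):
    """Hypothetical rationale generator: scores each node's saliency in (0, 1)."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, node_emb):
        # node_emb: (num_nodes, hidden_dim) -> per-node saliency scores
        return torch.sigmoid(self.scorer(node_emb)).squeeze(-1)

def rationale_view(node_emb, scores, keep_ratio=0.8):
    """Builds one rationale-aware view: sample nodes with probability
    proportional to saliency, then mean-pool into a graph embedding.
    (Sampling is non-differentiable here; a crude stand-in for a readout.)"""
    k = max(1, int(keep_ratio * node_emb.size(0)))
    idx = torch.multinomial(scores, k, replacement=False)
    return node_emb[idx].mean(dim=0)

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE over paired graph embeddings (batch, dim);
    positives sit on the diagonal of the similarity matrix."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Toy usage: random node embeddings stand in for a GNN encoder's output.
gen = RationaleGenerator(hidden_dim=32)
graphs = [torch.randn(10, 32) for _ in range(4)]   # 4 graphs, 10 nodes each
scores = [gen(g) for g in graphs]
z1 = torch.stack([rationale_view(g, s) for g, s in zip(graphs, scores)])
z2 = torch.stack([rationale_view(g, s) for g, s in zip(graphs, scores)])
loss = info_nce(z1, z2)
```

In the actual RGCL framework the view construction is trained jointly with a GNN encoder; this sketch only fixes the shape of the idea: saliency-weighted views of the same anchor serve as positives in an instance-discrimination objective.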


Related research

05/08/2023 · SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning
In contrastive learning, the choice of “view” controls the information t...

01/04/2022 · Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations
Self-supervision is recently surging at its new frontier of graph learni...

11/26/2021 · Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning
The goal of contrastive learning based pre-training is to leverage large...

11/18/2022 · Contrastive Knowledge Graph Error Detection
Knowledge Graph (KG) errors introduce non-negligible noise, severely aff...

08/13/2020 · What Should Not Be Contrastive in Contrastive Learning
Recent self-supervised contrastive methods have been able to produce imp...

03/24/2022 · GraphCoCo: Graph Complementary Contrastive Learning
Graph Contrastive Learning (GCL) has shown promising performance in grap...

11/20/2022 · Can Single-Pass Contrastive Learning Work for Both Homophilic and Heterophilic Graph?
Existing graph contrastive learning (GCL) typically requires two forward...
