Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations

10/15/2022
by Haifeng Li, et al.

The pretext tasks of existing graph contrastive learning methods are mainly built on mutual information estimation, which requires data augmentation to construct positive samples with similar semantics, from which invariant signals are learned, and negative samples with dissimilar semantics, which make the representations discriminative. However, an appropriate data augmentation configuration depends heavily on empirical trial and error, such as choosing the composition of data augmentation techniques and their corresponding hyperparameter settings. We propose an augmentation-free graph contrastive learning method, invariant-discriminative graph contrastive learning (iGCL), which does not intrinsically require negative samples. iGCL introduces the invariant-discriminative loss (ID loss) to learn invariant and discriminative representations. On the one hand, ID loss learns invariant signals by directly minimizing the mean squared error between target samples and positive samples in the representation space. On the other hand, ID loss keeps the representations discriminative via an orthonormal constraint that forces the different dimensions of the representations to be independent of one another, preventing the representations from collapsing to a point or subspace. Our theoretical analysis explains the effectiveness of ID loss from the perspectives of the redundancy reduction criterion, canonical correlation analysis, and the information bottleneck principle. Experimental results demonstrate that iGCL outperforms all baselines on five node classification benchmark datasets. iGCL also maintains superior performance across different label ratios and is capable of resisting graph attacks, indicating excellent generalization and robustness.
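As a rough illustration of the two terms described above, the sketch below combines an MSE invariance term between target and positive representations with a penalty that pushes the representations' second-moment matrix toward the identity. This is a minimal sketch under stated assumptions, not the paper's exact formulation: the function name id_loss, the weight lambd, the L2 normalization, and the Frobenius-norm form of the orthonormal constraint are all illustrative choices, and the abstract does not specify how positive samples are obtained without augmentation.

```python
import torch
import torch.nn.functional as F

def id_loss(z_target: torch.Tensor, z_pos: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    """Hypothetical sketch of an invariant-discriminative (ID) loss.

    z_target, z_pos: (N, D) matrices of target and positive representations.
    lambd: assumed weight balancing the orthonormality penalty.
    """
    # L2-normalize so the MSE term compares directions, not magnitudes
    # (an assumption; the method may operate on raw representations).
    z_t = F.normalize(z_target, dim=-1)
    z_p = F.normalize(z_pos, dim=-1)

    # Invariance term: mean squared error between target and positive pairs.
    invariance = ((z_t - z_p) ** 2).sum(dim=-1).mean()

    # Discriminative term: push the (D, D) second-moment matrix of the
    # representations toward the identity, decorrelating dimensions and
    # preventing collapse to a point or low-dimensional subspace.
    n, d = z_t.shape
    second_moment = (z_t.T @ z_t) / n
    ortho = ((second_moment - torch.eye(d, device=z_t.device)) ** 2).sum()

    return invariance + lambd * ortho
```

In use, z_target and z_pos would come from a GNN encoder applied to the single, un-augmented graph, with positives selected by whatever augmentation-free scheme the method defines; only the invariance and decorrelation structure shown here is taken from the abstract.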

Related research

02/14/2022 | Adversarial Graph Contrastive Learning with Information Regularization
Contrastive learning is an effective unsupervised method in graph repres...

02/05/2023 | Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation
Recently, Graph Neural Networks (GNNs) achieve remarkable success in Rec...

06/23/2021 | From Canonical Correlation Analysis to Self-supervised Graph Neural Networks
We introduce a conceptually simple yet effective model for self-supervis...

05/19/2022 | Label-invariant Augmentation for Semi-Supervised Graph Classification
Recently, contrastiveness-based augmentation surges a new climax in the ...

04/20/2023 | ID-MixGCL: Identity Mixup for Graph Contrastive Learning
Recently developed graph contrastive learning (GCL) approaches compare t...

02/16/2021 | Learning Invariant Representations using Inverse Contrastive Loss
Learning invariant representations is a critical first step in a number ...

12/21/2020 | Social NCE: Contrastive Learning of Socially-aware Motion Representations
Learning socially-aware motion representations is at the core of recent ...
