Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations

01/04/2022
by Yuning You, et al.

Self-supervision has recently surged at its new frontier, graph learning. It yields graph representations beneficial to downstream tasks, but its success can hinge on handcrafted domain knowledge or often-expensive trial and error. Even its state-of-the-art representative, graph contrastive learning (GraphCL), is not completely free of those needs, as it relies on a prefabricated prior reflected in the ad-hoc manual selection of graph data augmentations. Our work aims at advancing GraphCL by answering the following questions: How can the space of graph augmented views be represented? What principle can be relied upon to learn a prior in that space? And what framework can be constructed to learn that prior in tandem with contrastive learning? Accordingly, we extend the prefabricated discrete prior over the augmentation set to a learnable continuous prior in the parameter space of graph generators, assuming that graph priors per se, similar to the concept of image manifolds, can be learned by data generation. Furthermore, to form contrastive views without collapsing to trivial solutions once the prior becomes learnable, we leverage both the information minimization (InfoMin) and information bottleneck (InfoBN) principles to regularize the learned priors. Finally, contrastive learning, InfoMin, and InfoBN are organically incorporated into one framework of bi-level optimization. Our principled and automated approach proves competitive against state-of-the-art graph self-supervision methods, including GraphCL, on benchmarks of small graphs, and shows even better generalizability on large-scale graphs, without resorting to human expertise or downstream validation. Our code is publicly released at https://github.com/Shen-Lab/GraphCL_Automated.
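To make the bi-level scheme concrete, here is a minimal PyTorch-style sketch, assuming a learnable view `generator` (the continuous prior over augmentations) and a graph `encoder`. Both module names, the simplified one-directional NT-Xent loss, and the training loop are illustrative stand-ins rather than the authors' exact implementation, and the InfoBN regularizer is only noted in a comment.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    # Simplified one-directional NT-Xent contrastive loss:
    # matched (diagonal) pairs are positives, the rest are negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def training_step(graphs, generator, encoder, opt_enc, opt_gen):
    # Lower level: the encoder minimizes the contrastive loss on two views
    # sampled from the learnable generator (the continuous prior).
    z1 = encoder(generator(graphs))
    z2 = encoder(generator(graphs))
    loss_enc = nt_xent(z1, z2)
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()

    # Upper level: the generator is updated adversarially (InfoMin), i.e. it
    # maximizes the same loss so the two views share less redundant
    # information; an InfoBN penalty on the views would be added here to
    # further keep the learned prior from collapsing to trivial views.
    loss_gen = -nt_xent(encoder(generator(graphs)), encoder(generator(graphs)))
    opt_gen.zero_grad()  # clears stale gradients from the lower-level backward
    loss_gen.backward()
    opt_gen.step()
```

In the paper's full framework the view generator is itself a graph generative model and the upper-level objective is regularized by InfoMin and/or InfoBN; this sketch only captures the alternating lower-level (encoder) and upper-level (generator) updates that the abstract describes.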
