A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks

05/30/2023
by Jintang Li, et al.

While contrastive self-supervised learning has become the de-facto learning paradigm for graph neural networks, the pursuit of high task accuracy requires a large hidden dimensionality to learn informative and discriminative full-precision representations, raising largely overlooked concerns about computation, memory footprint, and energy consumption in real-world applications. This paper explores a promising direction for graph contrastive learning (GCL) with spiking neural networks (SNNs), which leverage sparse and binary characteristics to learn more biologically plausible and compact representations. We propose SpikeGCL, a novel GCL framework that learns binarized 1-bit representations for graphs, striking a balanced trade-off between efficiency and performance. We provide theoretical guarantees that SpikeGCL has expressiveness comparable to its full-precision counterparts. Experimental results demonstrate that, with nearly 32x representation storage compression, SpikeGCL is comparable to or outperforms many state-of-the-art supervised and self-supervised methods across several graph benchmarks.
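The two ideas the abstract rests on, binary spike representations and the resulting ~32x storage compression relative to float32 embeddings, can be illustrated with a minimal sketch. The integrate-and-fire neuron and the tensor shapes below are illustrative assumptions, not SpikeGCL's actual architecture:

```python
import numpy as np

def integrate_and_fire(inputs, threshold=1.0):
    """Convert a real-valued input sequence into binary spikes.

    A minimal integrate-and-fire neuron: the membrane potential
    accumulates the input at each time step and emits a 1-bit spike
    (with a soft reset) whenever it crosses the threshold.
    """
    potential = np.zeros(inputs.shape[1])
    spikes = []
    for x_t in inputs:
        potential += x_t
        fired = (potential >= threshold).astype(np.uint8)
        potential -= fired * threshold  # soft reset after firing
        spikes.append(fired)
    return np.stack(spikes)

# Hypothetical full-precision embedding: T time steps x d dims, float32.
rng = np.random.default_rng(0)
T, d = 8, 128
activations = rng.random((T, d), dtype=np.float32)

spikes = integrate_and_fire(activations)
assert ((spikes == 0) | (spikes == 1)).all()  # strictly 1-bit outputs

# 1-bit storage: 8 spikes per byte vs. 4 bytes per float32 value.
packed = np.packbits(spikes, axis=-1)
ratio = activations.nbytes / packed.nbytes
print(f"compression: {ratio:.0f}x")  # prints "compression: 32x"
```

Packing each binary spike into a single bit is what yields the 32x factor quoted in the abstract: a float32 value occupies 32 bits, a spike occupies one.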


