From Canonical Correlation Analysis to Self-supervised Graph Neural Networks

06/23/2021 ∙ by Hengrui Zhang, et al.

We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data. Following previous methods, it generates two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis. Compared with other works, our approach requires no parameterized mutual information estimator, no additional projector, no asymmetric structures, and, most importantly, no negative samples, which can be costly. We show that the new objective essentially 1) aims at discarding augmentation-variant information by learning invariant representations, and 2) can prevent degenerate solutions by decorrelating features across different dimensions. Our theoretical analysis further provides an understanding of the new objective, which can equivalently be seen as an instantiation of the Information Bottleneck Principle under the self-supervised setting. Despite its simplicity, our method performs competitively on seven public graph datasets.
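To make the feature-level objective concrete, below is a minimal PyTorch sketch of a CCA-style loss of the kind the abstract describes, assuming two embedding matrices produced by a shared GNN encoder applied to two augmented views of the same graph. An invariance term pulls the two embeddings of each node together, and a decorrelation term pushes each view's feature correlation matrix toward the identity to prevent collapsed solutions. The function name `cca_ssg_loss`, the default weight `lam`, and the exact normalization are illustrative assumptions, not taken from the authors' released code.

```python
import torch

def cca_ssg_loss(z_a: torch.Tensor, z_b: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """CCA-style self-supervised loss over two view embeddings.

    z_a, z_b: (N, D) node embeddings from two augmented graph views.
    lam: weight trading off invariance against feature decorrelation
         (illustrative default, not from the paper).
    """
    n = z_a.size(0)
    # Standardize each feature dimension (zero mean, unit variance) and
    # scale by 1/sqrt(N), so Z^T Z approximates the correlation matrix.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) * n ** 0.5)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) * n ** 0.5)

    # Invariance term: make representations insensitive to the augmentation.
    invariance = ((z_a - z_b) ** 2).sum()

    # Decorrelation term: drive each view's feature correlation matrix
    # toward the identity, decorrelating different feature dimensions.
    eye = torch.eye(z_a.size(1), device=z_a.device)
    decorrelation = ((z_a.T @ z_a - eye) ** 2).sum() + ((z_b.T @ z_b - eye) ** 2).sum()

    return invariance + lam * decorrelation
```

Because the loss operates on feature dimensions rather than on pairs of instances, it needs no negative samples, no projector head, and no asymmetric encoder branches, which is the simplification the abstract emphasizes.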


Code Repositories

CCA-SSG

Codes for 'From Canonical Correlation Analysis to Self-supervised Graph Neural Networks'. https://arxiv.org/abs/2106.12484

