
Self-supervised Regularization for Text Classification
Text classification is a widely studied problem and has broad applicatio...

Group Contrastive Self-Supervised Learning on Graphs
We study self-supervised learning on graphs using contrastive methods. A...

Finding Friends and Flipping Frenemies: Automatic Paraphrase Dataset Augmentation Using Graph Theory
Most NLP datasets are manually labeled, so suffer from inconsistent labe...

Self-supervised Graph Neural Networks without explicit negative sampling
Real-world data is mostly unlabeled, or only a few instances are labeled. M...

Towards Domain-Agnostic Contrastive Learning
Despite recent success, most contrastive self-supervised learning method...

Graph Context Encoder: Graph Feature Inpainting for Graph Generation and Self-supervised Pretraining
We propose the Graph Context Encoder (GCE), a simple but efficient appro...

Higher-order Comparisons of Sentence Encoder Representations
Representational Similarity Analysis (RSA) is a technique developed by n...
Contrastive Self-supervised Learning for Graph Classification
Graph classification is a widely studied problem and has broad applications. In many real-world problems, the number of labeled graphs available for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting. In the first approach, we use CSSL to pretrain graph encoders on widely available unlabeled graphs without relying on human-provided labels, then fine-tune the pretrained encoders on labeled graphs. In the second approach, we develop a regularizer based on CSSL, and solve the supervised classification task and the unsupervised CSSL task simultaneously. To perform CSSL on graphs, given a collection of original graphs, we perform data augmentation to create augmented graphs out of the original graphs. An augmented graph is created by consecutively applying a sequence of graph alteration operations. A contrastive loss is defined to learn graph encoders by judging whether two augmented graphs are from the same original graph. Experiments on various graph classification datasets demonstrate the effectiveness of our proposed methods.
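The contrastive setup the abstract describes can be sketched in a few lines: augment each original graph twice, embed each view, and train the encoder so that the two views of the same graph score as similar while views of different graphs score as dissimilar. The sketch below is a minimal NumPy illustration under stated assumptions: the edge-dropping augmentation, the fixed degree-histogram "encoder" (standing in for a trainable GNN), and an NT-Xent-style loss with temperature `tau` are illustrative choices, not the paper's exact operations or objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(adj, p=0.2):
    """One illustrative 'graph alteration operation': randomly delete a
    fraction p of the edges of an undirected graph (adjacency matrix)."""
    keep = rng.random(adj.shape) > p
    upper = np.triu(adj, 1) * np.triu(keep, 1)  # edit upper triangle only
    return upper + upper.T                      # keep adjacency symmetric

def toy_encoder(adj):
    """Placeholder graph encoder: an L2-normalized degree histogram.
    A real system would use a trainable graph neural network here."""
    deg = adj.sum(axis=1)
    hist, _ = np.histogram(deg, bins=4, range=(0, adj.shape[0]))
    return hist / (np.linalg.norm(hist) + 1e-8)

def nt_xent(z1, z2, tau=0.5):
    """Contrastive (NT-Xent-style) loss: z1[i] and z2[i] are embeddings of
    two augmented views of the same original graph; all other pairs in the
    batch act as negatives."""
    z = np.concatenate([z1, z2])                 # (2N, d)
    sim = z @ z.T / tau                          # pairwise similarities
    np.fill_diagonal(sim, -np.inf)               # a view is not its own pair
    n = len(z1)
    idx = np.arange(2 * n)
    pos = np.where(idx < n, idx + n, idx - n)    # index of the positive view
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[idx, pos].mean()

# Two augmented views per original graph, embedded and contrasted.
graphs = [rng.integers(0, 2, (8, 8)) for _ in range(4)]
graphs = [np.triu(g, 1) + np.triu(g, 1).T for g in graphs]
z1 = np.stack([toy_encoder(drop_edges(g)) for g in graphs])
z2 = np.stack([toy_encoder(drop_edges(g)) for g in graphs])
loss = nt_xent(z1, z2)
```

In the paper's first approach this loss would drive unsupervised pretraining of the encoder; in the second it would be added to the supervised classification loss as a regularizing term (e.g. `total = ce_loss + lam * cssl_loss`, with the weight `lam` a hypothetical hyperparameter).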