Shuffle & Divide: Contrastive Learning for Long Text

04/19/2023
by Joonseok Lee, et al.

We propose a self-supervised learning method for long text documents based on contrastive learning. A key to our method is Shuffle and Divide (SaD), a simple text augmentation algorithm that sets up the pretext task required for contrastive updates to a BERT-based document embedding. SaD shuffles the words of an entire document and divides them into two sub-documents. The two sub-documents are treated as a positive pair, while all other documents in the corpus serve as negatives. After SaD, we alternate contrastive-update and clustering phases until convergence. Labeling text documents is naturally a time-consuming, cumbersome task, and our method helps reduce human labeling effort, one of the most expensive resources in AI. We empirically evaluate our method by performing unsupervised text classification on the 20 Newsgroups, Reuters-21578, BBC, and BBCSport datasets. In particular, our method improves on the current state-of-the-art, SS-SB-MT, on 20 Newsgroups by 20.94%, and achieves strong results on Reuters-21578 as well as exceptionally high accuracy (over 95%) for unsupervised classification on the BBC and BBCSport datasets.
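The SaD augmentation described above can be sketched in a few lines. This is an illustrative reconstruction from the abstract, not the authors' code; the function name and signature are our own, and real implementations would operate on tokenized BERT inputs rather than whitespace-split words.

```python
import random

def shuffle_and_divide(document, seed=None):
    """Shuffle the words of a document and split them into two sub-documents.

    The two halves form a positive pair for contrastive learning;
    every other document in the corpus serves as a negative.
    """
    rng = random.Random(seed)
    words = document.split()
    rng.shuffle(words)
    mid = len(words) // 2
    return " ".join(words[:mid]), " ".join(words[mid:])

# Build positive pairs for a toy corpus.
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "contrastive learning needs positive and negative pairs",
]
pairs = [shuffle_and_divide(doc, seed=0) for doc in corpus]
```

Because both halves are drawn from the same shuffled word multiset, each pair preserves the document's topical content while discarding word order, which is what makes the pair a useful positive example for a topic-level contrastive objective.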


