Disentangled Contrastive Learning for Learning Robust Textual Representations

04/11/2021
by Xiang Chen, et al.

Although the self-supervised pre-training of transformer models has revolutionized natural language processing (NLP) applications and achieved state-of-the-art results on various benchmarks, these models remain vulnerable to small, imperceptible perturbations of legitimate inputs. Intuitively, representations should stay close in the feature space under subtle input perturbations, while inputs with different meanings should be mapped far apart. This motivates us to investigate learning robust textual representations in a contrastive manner. However, obtaining opposing semantic instances for textual samples is non-trivial. In this study, we propose a disentangled contrastive learning method that separately optimizes the uniformity and alignment of representations without negative sampling. Specifically, we introduce the concept of momentum representation consistency to align features and leverage power normalization to enforce uniformity. Our experimental results on NLP benchmarks demonstrate that our approach outperforms the baselines and achieves promising improvements under invariance tests and adversarial attacks. The code is available at https://github.com/zjunlp/DCL.
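To make the two disentangled objectives concrete, the following is a minimal, self-contained sketch of a negative-free training step: alignment is enforced by matching an online encoder to an exponential-moving-average (momentum) encoder across two views, and uniformity is encouraged by a separate regularizer on the normalized features. The encoder, the hyperparameters (momentum 0.999, temperature t=2.0), and the Gaussian-potential uniformity term (Wang & Isola, 2020) are illustrative assumptions standing in for the paper's pretrained transformer and power-normalization details; consult the released code at the repository above for the actual implementation.

```python
# Hedged sketch: alignment via momentum (EMA) consistency + a uniformity
# regularizer, with no negative samples. Not the authors' exact code.
import copy
import torch
import torch.nn.functional as F
from torch import nn


class Encoder(nn.Module):
    """Toy projection head standing in for a pretrained transformer encoder."""
    def __init__(self, dim_in=768, dim_out=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim_in, dim_in), nn.ReLU(),
                                  nn.Linear(dim_in, dim_out))

    def forward(self, x):
        # L2-normalize so representations lie on the unit hypersphere.
        return F.normalize(self.proj(x), dim=-1)


def alignment_loss(z_online, z_momentum):
    # Pull the online representation toward the momentum ("consistency") target.
    return (2 - 2 * (z_online * z_momentum.detach()).sum(dim=-1)).mean()


def uniformity_loss(z, t=2.0):
    # Encourage features to spread uniformly on the hypersphere
    # (log of the mean pairwise Gaussian potential).
    sq_dist = torch.pdist(z, p=2).pow(2)
    return sq_dist.mul(-t).exp().mean().log()


@torch.no_grad()
def momentum_update(online, target, m=0.999):
    # EMA update of the momentum encoder's parameters.
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.data.mul_(m).add_(p_o.data, alpha=1 - m)


online = Encoder()
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(online.parameters(), lr=1e-4)

# Two "views" of the same batch (e.g., augmentation- or dropout-perturbed inputs).
view_a, view_b = torch.randn(32, 768), torch.randn(32, 768)

z_a, z_b = online(view_a), target(view_b)
loss = alignment_loss(z_a, z_b) + uniformity_loss(z_a)
opt.zero_grad()
loss.backward()
opt.step()
momentum_update(online, target)
```

Because the two terms are optimized separately, the alignment target never requires contrasting against negatives, which is the sense in which the objective is "disentangled" in this sketch.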


