Simple Contrastive Representation Adversarial Learning for NLP Tasks

11/26/2021
by Deshui Miao, et al.

Self-supervised learning approaches such as contrastive learning have attracted great attention in natural language processing. Contrastive learning uses pairs of augmented training samples to build a classification task that gives an encoder strong representation ability. However, constructing learning pairs is much harder in NLP tasks: previous works generate pairs through word-level changes, but because natural language is discrete and sparse, even small transformations can noticeably change the meaning of a sentence. In this paper, adversarial training is performed over the embedding space of NLP models to generate harder, more challenging adversarial examples as learning pairs. Contrastive learning improves the generalization ability of adversarial training because the contrastive loss encourages a uniform sample distribution, and at the same time adversarial training enhances the robustness of contrastive learning. Two novel frameworks, supervised contrastive adversarial learning (SCAL) and unsupervised SCAL (USCAL), are proposed, which produce learning pairs by utilizing adversarial training for contrastive learning. The label-based loss of supervised tasks is exploited to generate adversarial examples, while unsupervised tasks use the contrastive loss instead. To validate the effectiveness of the proposed framework, we apply it to Transformer-based models for natural language understanding, sentence semantic textual similarity, and adversarial learning tasks. Experimental results on GLUE benchmark tasks show that our fine-tuned supervised method outperforms BERT_base by over 1.75%. We also evaluate our unsupervised method on semantic textual similarity (STS) tasks, where it achieves 77.29% with BERT_base. Our approach also achieves state-of-the-art robustness on multiple adversarial NLI datasets.
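
The abstract describes the core mechanism: instead of word-level augmentations, adversarial perturbations computed over the continuous embedding space serve as hard positive pairs for a contrastive objective. The sketch below is an illustrative reconstruction of that idea, not the authors' released code: it assumes an FGSM-style perturbation, an InfoNCE loss, and a toy encoder standing in for a Transformer; the names info_nce, uscal_step, and ToyEncoder are hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.05):
    # InfoNCE / NT-Xent loss: row i of z1 and row i of z2 form a positive
    # pair; every other row in the batch serves as an in-batch negative.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def uscal_step(encoder, token_embeds, epsilon=1e-2, temperature=0.05):
    # One unsupervised step in the spirit of USCAL (hypothetical sketch):
    # perturb the input embeddings along the gradient of the contrastive
    # loss, then treat the perturbed view as a hard positive.
    embeds = token_embeds.detach().requires_grad_(True)

    # First pass: contrastive loss between two views of the same batch,
    # used only to obtain the adversarial direction w.r.t. the embeddings.
    loss_for_grad = info_nce(encoder(embeds), encoder(embeds), temperature)
    grad, = torch.autograd.grad(loss_for_grad, embeds)

    # FGSM-style perturbation in the continuous embedding space.
    adv_embeds = (embeds + epsilon * grad.sign()).detach()

    # Second pass: pull the clean and adversarial representations together.
    z_clean = encoder(token_embeds)
    z_adv = encoder(adv_embeds)
    return info_nce(z_clean, z_adv, temperature)

class ToyEncoder(torch.nn.Module):
    # Stand-in for a Transformer sentence encoder: mean pooling + projection.
    def __init__(self, dim=32):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, seq_len, dim)
        return self.proj(x.mean(dim=1))

encoder = ToyEncoder()
loss = uscal_step(encoder, torch.randn(8, 16, 32))  # batch of 8 "sentences"
loss.backward()
```

For the supervised variant (SCAL), the abstract indicates the adversarial direction would instead come from the gradient of the label-based task loss rather than the contrastive loss.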


Related research

07/04/2023 - SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification
Despite their promising performance across various natural language proc...

10/22/2020 - Contrastive Learning with Adversarial Examples
Contrastive learning (CL) is a popular technique for self-supervised lea...

09/19/2021 - Adversarial Training with Contrastive Learning in NLP
For years, adversarial training has been extensively studied in natural ...

09/16/2022 - Enhance the Visual Representation via Discrete Adversarial Training
Adversarial Training (AT), which is commonly accepted as one of the most...

07/13/2020 - Generating Fluent Adversarial Examples for Natural Languages
Efficiently building an adversarial attacker for natural language proces...

04/13/2022 - A Novel Approach to Train Diverse Types of Language Models for Health Mention Classification of Tweets
Health mention classification deals with the disease detection in a give...

10/19/2022 - Targeted Adversarial Self-Supervised Learning
Recently, unsupervised adversarial training (AT) has been extensively st...
