CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding

07/01/2021
by   Dong Wang, et al.

Although pre-trained language models have proven useful for learning high-quality semantic representations, they remain vulnerable to simple perturbations. Recent work on improving the robustness of pre-trained models focuses mainly on adversarial training with perturbed examples that preserve semantics, neglecting perturbations that produce different or even opposite semantics. Unlike images, text is discrete, and a few word substitutions can cause significant semantic changes. To study the semantic impact of small perturbations, we conduct a series of pilot experiments and, surprisingly, find that adversarial training is useless or even harmful for detecting these semantic changes. To address this problem, we propose Contrastive Learning with semantIc Negative Examples (CLINE), which constructs semantic negative examples in an unsupervised manner to improve robustness against semantic adversarial attacks. By contrasting examples with similar and opposite semantics, the model can effectively perceive the semantic changes caused by small perturbations. Empirical results show that our approach yields substantial improvements on a range of sentiment analysis, reasoning, and reading comprehension tasks. CLINE also ensures compactness within the same semantics and separability across different semantics at the sentence level.
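
The sentence-level contrast described above can be illustrated with a short sketch. The code below is not the authors' implementation and covers only the contrastive term, not the full training objective: it assumes a sentence `encoder` that maps token IDs to pooled embeddings, and it assumes the positive and negative inputs have already been produced by unsupervised perturbation (e.g., synonym substitution for the positive, antonym substitution for the semantic negative). An InfoNCE-style loss then pulls the anchor toward the semantics-preserving example and pushes it away from the semantics-reversing one.

```python
# Minimal sketch of a contrastive objective in the spirit of CLINE (not the
# authors' code). `encoder` is assumed to map a batch of token IDs to pooled
# sentence embeddings; positive/negative inputs are assumed to come from
# unsupervised synonym/antonym substitution of the anchor sentence.
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(encoder, anchor_ids, positive_ids, negative_ids,
                              temperature: float = 0.05) -> torch.Tensor:
    # Encode and L2-normalize so dot products are cosine similarities.
    z_a = F.normalize(encoder(anchor_ids), dim=-1)    # original sentence
    z_p = F.normalize(encoder(positive_ids), dim=-1)  # similar semantics (e.g., synonym swap)
    z_n = F.normalize(encoder(negative_ids), dim=-1)  # opposite semantics (e.g., antonym swap)

    sim_pos = (z_a * z_p).sum(dim=-1) / temperature
    sim_neg = (z_a * z_n).sum(dim=-1) / temperature

    # InfoNCE with a single negative per anchor: the positive must win at index 0.
    logits = torch.stack([sim_pos, sim_neg], dim=-1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Minimizing this loss encourages exactly the geometry the abstract claims: compactness among embeddings with the same semantics and separability across different semantics.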

Related research

04/28/2020 · Semantics-Aware Inferential Network for Natural Language Understanding
For natural language understanding tasks, either machine reading compreh...

09/19/2021 · Adversarial Training with Contrastive Learning in NLP
For years, adversarial training has been extensively studied in natural ...

04/11/2022 · Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning
Pre-trained sequence-to-sequence language models have led to widespread ...

10/25/2021 · CLLD: Contrastive Learning with Label Distance for Text Classification
Existing pre-trained models have achieved state-of-the-art performance on...

01/30/2023 · On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex
Semantic parsing is a technique aimed at constructing a structured repre...

06/07/2023 · PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
The increasing reliance on Large Language Models (LLMs) across academia ...

09/07/2020 · Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
Recent advances in natural language generation have introduced powerful ...
