Perturbation-based Self-supervised Attention for Attention Bias in Text Classification

05/25/2023
by   Huawen Feng, et al.

In text classification, traditional attention mechanisms tend to focus too heavily on frequent words and require extensive labeled data to learn well. This paper proposes a perturbation-based self-supervised attention approach that guides attention learning without any annotation overhead. Specifically, we add as much noise as possible to every word in a sentence without changing its semantics or the model's prediction. We hypothesize that words which tolerate more noise are less significant, and this information can be used to refine the attention distribution. Experimental results on three text classification tasks show that our approach significantly improves the performance of current attention-based models and is more effective than existing self-supervised methods. We also provide a visualization analysis to verify the effectiveness of our approach.
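The core idea in the abstract can be sketched in a few lines: each word is assigned the largest noise scale it tolerates before the prediction changes, and words with low tolerance (i.e., important words) receive more attention mass. The function name, temperature parameter, and the toy tolerance values below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_target_from_noise_tolerance(tolerance, temperature=1.0):
    """Map per-word noise tolerances to a soft attention distribution.

    tolerance: array of the largest noise scale each word withstands
               before the model's prediction flips (larger = less important).
    Returns a distribution placing more mass on low-tolerance words,
    which can then supervise the model's own attention weights.
    """
    scores = -np.asarray(tolerance, dtype=float) / temperature
    scores -= scores.max()          # shift for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

# Toy example for "the movie was great": stopwords survive large
# perturbations, while the sentiment-bearing "great" flips the
# prediction under small noise, so it dominates the target.
tolerance = [2.0, 0.8, 1.5, 0.1]    # made-up per-word noise tolerances
target = attention_target_from_noise_tolerance(tolerance)
print(target.round(3))
```

In a full training loop one would then penalize the divergence between the model's attention distribution and this target, so the refinement requires no human annotation.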


Related research

08/07/2019 · Self-supervised Attention Model for Weakly Labeled Audio Event Classification
We describe a novel weakly labeled Audio Event Classification approach b...

05/19/2023 · Zero-Shot Text Classification via Self-Supervised Tuning
Existing solutions to zero-shot text classification either conduct promp...

06/15/2022 · Self-Supervised Implicit Attention: Guided Attention by The Model Itself
We propose Self-Supervised Implicit Attention (SSIA), a new approach tha...

05/13/2022 · Interlock-Free Multi-Aspect Rationalization for Text Classification
Explanation is important for text classification tasks. One prevalent ty...

04/15/2021 · Consistency Training with Virtual Adversarial Discrete Perturbation
We propose an effective consistency training framework that enforces a t...

11/16/2022 · Disentangling Task Relations for Few-shot Text Classification via Self-Supervised Hierarchical Task Clustering
Few-Shot Text Classification (FSTC) imitates humans to learn a new text ...

06/03/2021 · Exploring Distantly-Labeled Rationales in Neural Network Models
Recent studies strive to incorporate various human rationales into neura...
