Stacked DeBERT: All Attention in Incomplete Data for Text Classification

01/01/2020
by   Gwenaelle Cunha Sergio, et al.

In this paper, we propose Stacked DeBERT, short for Stacked Denoising Bidirectional Encoder Representations from Transformers. This novel model improves robustness on incomplete data, compared to existing systems, by introducing a novel encoding scheme into BERT, a powerful language representation model based solely on attention mechanisms. Incomplete data in natural language processing refers to text with missing or incorrect words, and its presence can hinder the performance of current models, which were not designed to withstand such noise yet are still expected to perform well under it. This is because current approaches are built for and trained with clean, complete data, and thus cannot extract features that adequately represent incomplete data. Our proposed approach obtains intermediate input representations by applying an embedding layer to the input tokens, followed by vanilla transformers. These intermediate features are fed to novel denoising transformers, which are responsible for obtaining richer input representations. The proposed approach uses stacks of multilayer perceptrons to reconstruct the embeddings of missing words by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers to improve the embedding representation. We consider two datasets for training and evaluation: the Chatbot Natural Language Understanding Evaluation Corpus and Kaggle's Twitter Sentiment Corpus. Our model shows improved F1-scores and better robustness on the informal/incorrect text found in tweets and on text with Speech-to-Text errors, in both the sentiment and intent classification tasks.
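The following is a minimal PyTorch sketch of the pipeline described in the abstract (embedding layer, vanilla transformers, denoising MLP stack that reconstructs token embeddings, and a second transformer stack feeding a classifier). It assumes a bottleneck-autoencoder form for the denoising MLPs; all layer counts, dimensions, and names such as DenoisingMLP and StackedDenoisingEncoder are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch of the Stacked DeBERT idea described above.
# Layer sizes, bottleneck dimensions, and all class/variable names are
# illustrative assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn


class DenoisingMLP(nn.Module):
    """Stack of multilayer perceptrons that compresses and reconstructs
    token embeddings, forcing a more abstract hidden representation that
    can recover missing/incorrect words (assumed bottleneck autoencoder)."""

    def __init__(self, hidden_size=768, bottleneck=128):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(hidden_size, 512), nn.ReLU(),
            nn.Linear(512, bottleneck), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Linear(bottleneck, 512), nn.ReLU(),
            nn.Linear(512, hidden_size),
        )

    def forward(self, x):
        return self.decode(self.encode(x))


class StackedDenoisingEncoder(nn.Module):
    """Vanilla transformer encoder -> denoising MLP stack -> second
    (bidirectional) transformer encoder -> classification head."""

    def __init__(self, vocab_size=30522, hidden_size=768, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # Vanilla transformers produce the intermediate (noisy) representations.
        self.vanilla = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_size, nhead=12,
                                       batch_first=True),
            num_layers=6)
        # Denoising MLPs reconstruct richer embeddings from them.
        self.denoise = DenoisingMLP(hidden_size)
        # A second transformer stack refines the reconstructed embeddings.
        self.refine = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_size, nhead=12,
                                       batch_first=True),
            num_layers=6)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, token_ids):
        intermediate = self.vanilla(self.embed(token_ids))
        reconstructed = self.denoise(intermediate)
        refined = self.refine(reconstructed)
        # Use the first token's vector as the sentence representation.
        return self.classifier(refined[:, 0, :])


if __name__ == "__main__":
    model = StackedDenoisingEncoder()
    logits = model(torch.randint(0, 30522, (2, 16)))  # batch of 2 short texts
    print(logits.shape)  # torch.Size([2, 2])

In the sketch, classification loss would be applied to the logits while a reconstruction loss on the denoising MLP output could encourage recovery of embeddings for missing words; the exact training objective follows the paper, not this example.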
