CNN-Trans-Enc: A CNN-Enhanced Transformer-Encoder On Top Of Static BERT representations for Document Classification

09/13/2022
by Charaf Eddine Benarab, et al.

Although BERT achieves remarkable results in text classification tasks, it is not yet fully exploited, since only the last layer is typically used as the representation output for downstream classifiers. Recent studies of the linguistic features learned by BERT suggest that different layers focus on different kinds of linguistic features. We propose a CNN-Enhanced Transformer-Encoder model that is trained on top of fixed BERT [CLS] representations from all layers, employing Convolutional Neural Networks to generate the QKV feature maps inside the Transformer-Encoder instead of linear projections of the input into the embedding space. CNN-Trans-Enc is relatively small as a downstream classifier and does not require any fine-tuning of BERT: it makes effective use of the [CLS] representations from all layers, leveraging different linguistic features through more meaningful and generalizable QKV representations of the input. Using BERT with CNN-Trans-Enc retains 98.9% and 94.8% of current state-of-the-art performance on the IMDB and SST-5 datasets respectively, while obtaining a new state of the art on YELP-5 with 82.23 (an 8.9% improvement) and on Amazon-Polarity with 0.98% (a 0.2% improvement) (k-fold cross-validation on a 1M-sample subset from both datasets). On the AG News dataset, CNN-Trans-Enc reaches 99.94% of the current state-of-the-art, and it sets a new top performance with an average accuracy of 99.51% on DBPedia-14.

Index terms: Text Classification, Natural Language Processing, Convolutional Neural Networks, Transformers, BERT
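The abstract describes the core idea: [CLS] vectors are collected from every layer of a frozen BERT model, and a compact Transformer-Encoder block computes its Q, K, and V feature maps with convolutions rather than linear projections. The following is a minimal PyTorch sketch of that setup; the module and parameter names (CNNQKVAttention, CNNTransEncClassifier, kernel_size, the single-block classifier head) are illustrative assumptions, not the authors' exact architecture or hyperparameters.

    # Minimal sketch: frozen BERT provides [CLS] vectors from every layer,
    # and a small encoder block derives Q/K/V with 1-D convolutions over the
    # layer axis instead of linear projections. Illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer


    class CNNQKVAttention(nn.Module):
        """Self-attention whose Q, K, V maps come from Conv1d feature extractors."""

        def __init__(self, dim: int, kernel_size: int = 3):
            super().__init__()
            pad = kernel_size // 2  # keep the sequence (layer) length unchanged
            self.q_conv = nn.Conv1d(dim, dim, kernel_size, padding=pad)
            self.k_conv = nn.Conv1d(dim, dim, kernel_size, padding=pad)
            self.v_conv = nn.Conv1d(dim, dim, kernel_size, padding=pad)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, layers, dim); Conv1d expects (batch, dim, layers)
            h = x.transpose(1, 2)
            q = self.q_conv(h).transpose(1, 2)
            k = self.k_conv(h).transpose(1, 2)
            v = self.v_conv(h).transpose(1, 2)
            scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
            return F.softmax(scores, dim=-1) @ v


    class CNNTransEncClassifier(nn.Module):
        def __init__(self, dim: int = 768, num_layers: int = 13, num_classes: int = 5):
            super().__init__()
            self.attn = CNNQKVAttention(dim)
            self.norm = nn.LayerNorm(dim)
            self.head = nn.Linear(dim * num_layers, num_classes)

        def forward(self, cls_stack: torch.Tensor) -> torch.Tensor:
            # cls_stack: (batch, num_layers, dim) of frozen [CLS] vectors
            h = self.norm(cls_stack + self.attn(cls_stack))
            return self.head(h.flatten(1))


    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    bert = AutoModel.from_pretrained("bert-base-uncased").eval()  # kept frozen

    with torch.no_grad():
        enc = tokenizer(["a great movie"], return_tensors="pt")
        out = bert(**enc, output_hidden_states=True)
        # [CLS] vector from the embedding layer plus all 12 encoder layers
        cls_stack = torch.stack([h[:, 0] for h in out.hidden_states], dim=1)

    logits = CNNTransEncClassifier()(cls_stack)
    print(logits.shape)  # torch.Size([1, 5])

Because BERT stays frozen, the [CLS] stacks can be precomputed once per dataset, and only the small classifier above needs to be trained.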

Related research

How to Fine-Tune BERT for Text Classification? (05/14/2019)
Language model pre-training has proven to be useful in learning universa...

Text Summarization with Pretrained Encoders (08/22/2019)
Bidirectional Encoder Representations from Transformers (BERT) represent...

Pretraining-Based Natural Language Generation for Text Summarization (02/25/2019)
In this paper, we propose a novel pretraining-based encoder-decoder fram...

Hierarchical Neural Network Approaches for Long Document Classification (01/18/2022)
Text classification algorithms investigate the intricate relationships b...

BERT-JAM: Boosting BERT-Enhanced Neural Machine Translation with Joint Attention (11/09/2020)
BERT-enhanced neural machine translation (NMT) aims at leveraging BERT-e...

Hierarchical Transformers for Long Document Classification (10/23/2019)
BERT, which stands for Bidirectional Encoder Representations from Transf...

Explaining Translationese: why are Neural Classifiers Better and what do they Learn? (10/24/2022)
Recent work has shown that neural feature- and representation-learning, ...
