DeepEmotex: Classifying Emotion in Text Messages using Deep Transfer Learning

06/12/2022
by   Maryam Hasan, et al.

Transfer learning has been widely used in natural language processing through deep pretrained language models, such as Bidirectional Encoder Representations from Transformers (BERT) and Universal Sentence Encoder (USE). Despite this success, language models overfit when applied to small datasets and are prone to forgetting when fine-tuned with a classifier. To remedy this problem of forgetting when transferring deep pretrained language models from one domain to another, existing efforts explore fine-tuning methods that forget less. We propose DeepEmotex, an effective sequential transfer learning method to detect emotion in text. To avoid the forgetting problem, the fine-tuning step is instrumented with a large amount of emotion-labeled data collected from Twitter. We conduct an experimental study using both curated Twitter datasets and benchmark datasets. DeepEmotex models achieve over 91% accuracy for multi-class emotion classification on the test dataset. We evaluate the performance of the fine-tuned DeepEmotex models in classifying emotion in the EmoInt and Stimulus benchmark datasets. The models correctly classify emotion in 73% of the instances in the benchmark datasets. The proposed DeepEmotex-BERT model outperforms the Bi-LSTM result on the benchmark datasets by 23%. We also study the effect of the size of the fine-tuning dataset on the accuracy of our models. Our evaluation results show that fine-tuning with a large set of emotion-labeled data improves both the robustness and effectiveness of the resulting target-task model.
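The sequential transfer learning described above (a pretrained encoder, a classification head, then fine-tuning the whole stack on emotion-labeled data) can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the tiny randomly initialized `PretrainedEncoder` below is a hypothetical stand-in for BERT or USE, and the random token/label tensors stand in for the Twitter fine-tuning data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class PretrainedEncoder(nn.Module):
    """Stand-in for a pretrained sentence encoder (BERT/USE in the paper)."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        # Mean-pooled token embeddings play the role of a sentence embedding.
        self.embed = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, token_ids):
        return self.embed(token_ids)

class EmotionClassifier(nn.Module):
    """Encoder plus a linear head for multi-class emotion classification."""
    def __init__(self, encoder, dim=64, num_emotions=4):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(dim, num_emotions)

    def forward(self, token_ids):
        return self.head(self.encoder(token_ids))

# Fine-tune encoder and head jointly on "emotion-labeled tweets"
# (random ids/labels here, purely to exercise the training loop).
model = EmotionClassifier(PretrainedEncoder())
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
x = torch.randint(0, 1000, (32, 10))   # 32 token-id sequences of length 10
y = torch.randint(0, 4, (32,))         # 32 emotion labels

losses = []
for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(losses[0] > losses[-1])  # loss drops as the stack fine-tunes
```

The paper's point about forgetting is reflected in the design choice here: rather than freezing the encoder, all parameters are updated, which is why a large fine-tuning set matters for keeping the update stable.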

Related research:
01/18/2018 · Fine-tuned Language Models for Text Classification
Transfer learning has revolutionized computer vision, but existing appro...
04/19/2019 · An Evaluation of Transfer Learning for Classifying Sales Engagement Emails at Large Scale
This paper conducts an empirical investigation to evaluate transfer lear...
09/14/2023 · PerPLM: Personalized Fine-tuning of Pretrained Language Models via Writer-specific Intermediate Learning and Prompts
The meanings of words and phrases depend not only on where they are used...
09/08/2019 · Transfer Learning Robustness in Multi-Class Categorization by Fine-Tuning Pre-Trained Contextualized Language Models
This study compares the effectiveness and robustness of multi-class cate...
02/09/2021 · Transfer Learning Approach for Arabic Offensive Language Detection System – BERT-Based Model
Developing a system to detect online offensive language is very importan...
09/15/2023 · Large Language Models for Failure Mode Classification: An Investigation
In this paper we present the first investigation into the effectiveness ...
08/01/2021 · Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning
Masked language models (MLMs) are pretrained with a denoising objective ...