Fortunately, Discourse Markers Can Enhance Language Models for Sentiment Analysis

01/06/2022
by Liat Ein-Dor, et al.

In recent years, pretrained language models have revolutionized the NLP world, achieving state-of-the-art performance on various downstream tasks. However, in many cases these models do not perform well when labeled data is scarce and the model is expected to operate in the zero- or few-shot setting. Recently, several works have shown that continual pretraining, or a second phase of pretraining (inter-training) that is better aligned with the downstream task, can lead to improved results, especially in the scarce-data setting. Here, we propose to leverage sentiment-carrying discourse markers to generate large-scale weakly-labeled data, which in turn can be used to adapt language models for sentiment analysis. Extensive experimental results show the value of our approach on various benchmark datasets, including the finance domain. Code, models and data are available at https://github.com/ibm/tslm-discourse-markers.
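To make the weak-labeling idea concrete, below is a minimal sketch of how sentiment-carrying discourse markers could be used to harvest weakly-labeled examples from an unlabeled corpus. The marker lexicon, the sentence-initial matching heuristic, and the `weak_label` helper are illustrative assumptions, not the paper's exact procedure; see the linked repository for the authors' implementation.

```python
# Sketch: weak labeling via sentiment-carrying discourse markers.
# The marker lexicon and stripping heuristic are illustrative assumptions.
import re

# Hypothetical marker-to-label lexicon (assumption for illustration).
MARKERS = {
    "fortunately": "positive",
    "luckily": "positive",
    "thankfully": "positive",
    "unfortunately": "negative",
    "sadly": "negative",
    "regrettably": "negative",
}

# Match a marker at the start of the sentence, with optional comma.
_PATTERN = re.compile(
    r"^(%s)\s*,?\s*" % "|".join(MARKERS), flags=re.IGNORECASE
)


def weak_label(sentence: str):
    """Return (text_without_marker, weak_label) if the sentence opens
    with a sentiment-carrying discourse marker, else None."""
    text = sentence.strip()
    match = _PATTERN.match(text)
    if match is None:
        return None
    label = MARKERS[match.group(1).lower()]
    # Strip the marker so the adapted model cannot rely on it directly.
    return text[match.end():], label


if __name__ == "__main__":
    corpus = [
        "Fortunately, the quarterly results exceeded expectations.",
        "Unfortunately, the merger talks collapsed last week.",
        "The meeting was rescheduled to Monday.",
    ]
    for item in filter(None, (weak_label(s) for s in corpus)):
        print(item)
```

Per the abstract, pairs harvested this way would then serve as data for a second pretraining phase (inter-training) of the language model before fine-tuning on the target sentiment task.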

