Pretraining and Fine-Tuning Strategies for Sentiment Analysis of Latvian Tweets

10/23/2020
by Gaurish Thakkar, et al.

In this paper, we present various pre-training strategies that aid in improving the accuracy of the sentiment classification task. We first pre-train language representation models using these strategies and then fine-tune them on the downstream task. Experimental results on a time-balanced tweet evaluation set show an improvement over the previous technique. We achieve 76% accuracy, a substantial improvement over previous work.
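As a rough illustration of the two-stage recipe the abstract describes (continued pre-training of a language representation model, then fine-tuning on the sentiment task), the sketch below uses the Hugging Face transformers and datasets libraries. The base checkpoint, the corpus and label files, and the three-class label scheme are assumptions for illustration only, not details taken from the paper.

```python
# Hedged sketch: continued masked-LM pre-training followed by sentiment
# fine-tuning. Checkpoint name, file paths, and label count are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "bert-base-multilingual-cased"  # assumed multilingual base model
tokenizer = AutoTokenizer.from_pretrained(BASE)


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)


# Stage 1: continued masked-LM pre-training on unlabelled Latvian tweets.
# "latvian_tweets.txt" is a hypothetical one-tweet-per-line corpus.
unlabelled = load_dataset("text", data_files={"train": "latvian_tweets.txt"})
unlabelled = unlabelled.map(tokenize, batched=True, remove_columns=["text"])

mlm_trainer = Trainer(
    model=AutoModelForMaskedLM.from_pretrained(BASE),
    args=TrainingArguments(output_dir="lv-tweet-mlm", num_train_epochs=1),
    train_dataset=unlabelled["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("lv-tweet-mlm")  # adapted encoder is reused below

# Stage 2: fine-tune the adapted encoder on labelled sentiment data.
# "tweets_labelled.csv" with "text" and integer "label" columns is assumed;
# three labels = negative / neutral / positive (an assumption).
labelled = load_dataset("csv", data_files={"train": "tweets_labelled.csv"})
labelled = labelled.map(tokenize, batched=True)

clf_trainer = Trainer(
    model=AutoModelForSequenceClassification.from_pretrained(
        "lv-tweet-mlm", num_labels=3
    ),
    args=TrainingArguments(output_dir="lv-tweet-sentiment", num_train_epochs=3),
    train_dataset=labelled["train"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
clf_trainer.train()
```

The key design point the abstract relies on is that Stage 2 initializes the classifier from the Stage 1 checkpoint rather than from the generic base model, so the encoder has already adapted to tweet-domain Latvian before the (randomly initialized) classification head is trained.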

Related research

10/11/2022 · Transfer Learning with Joint Fine-Tuning for Multimodal Sentiment Analysis
Most existing methods focus on sentiment analysis of textual data. Howev...

07/24/2020 · FiSSA at SemEval-2020 Task 9: Fine-tuned For Feelings
In this paper, we present our approach for sentiment classification on S...

07/26/2020 · Reed at SemEval-2020 Task 9: Fine-Tuning and Bag-of-Words Approaches to Code-Mixed Sentiment Analysis
We explore the task of sentiment analysis on Hinglish (code-mixed Hindi-...

04/17/2022 · Vision-Language Pre-Training for Multimodal Aspect-Based Sentiment Analysis
As an important task in sentiment analysis, Multimodal Aspect-Based Sent...

11/06/2019 · SentiLR: Linguistic Knowledge Enhanced Language Representation for Sentiment Analysis
Most of the existing pre-trained language representation models neglect ...

05/31/2021 · On the Interplay Between Fine-tuning and Composition in Transformers
Pre-trained transformer language models have shown remarkable performanc...

06/05/2022 · Speech Detection Task Against Asian Hate: BERT the Central, While Data-Centric Studies the Crucial
With the epidemic continuing, hatred against Asians is intensifying in c...
