FiSSA at SemEval-2020 Task 9: Fine-tuned For Feelings

07/24/2020
by Bertelt Braaksma, et al.

In this paper, we present our approach to sentiment classification of Spanish-English code-mixed social media data in SemEval-2020 Task 9. We investigate the performance of various pre-trained Transformer models using different fine-tuning strategies. We explore both monolingual and multilingual models with the standard fine-tuning method. Additionally, we propose a custom model that we fine-tune in two steps: once with a language modeling objective and once with a task-specific objective. Although two-step fine-tuning improves sentiment classification performance over the base model, the large multilingual XLM-RoBERTa model achieves the best weighted F1-score: 0.537 on the development data and 0.739 on the test data. With this score, our team, jupitter, placed tenth overall in the competition.
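To make the two-step procedure concrete, here is a minimal sketch using the Hugging Face Transformers library. This is an illustrative reading of the abstract, not the authors' released code: the checkpoint name, hyperparameters, and the dataset variables (unlabeled_dataset, labeled_train_dataset, labeled_dev_dataset) are placeholder assumptions.

```python
# Hypothetical sketch of two-step fine-tuning: (1) masked language modeling
# on unlabeled code-mixed text, then (2) task-specific sentiment fine-tuning.
# Dataset variables are placeholders for pre-tokenized datasets.Dataset objects.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "xlm-roberta-base"  # assumption: any RoBERTa-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Step 1: adapt the pre-trained encoder to code-mixed tweets with a
# masked language modeling objective (15% of tokens masked).
mlm_model = AutoModelForMaskedLM.from_pretrained(model_name)
mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="mlm-step", num_train_epochs=1),
    train_dataset=unlabeled_dataset,  # placeholder: unlabeled code-mixed text
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("mlm-step")

# Step 2: fine-tune the adapted encoder on labeled sentiment data
# (positive / neutral / negative, hence num_labels=3); the classification
# head is freshly initialized on top of the adapted weights.
clf_model = AutoModelForSequenceClassification.from_pretrained("mlm-step", num_labels=3)
clf_trainer = Trainer(
    model=clf_model,
    args=TrainingArguments(output_dir="clf-step", num_train_epochs=3),
    train_dataset=labeled_train_dataset,  # placeholder: labeled training split
    eval_dataset=labeled_dev_dataset,     # placeholder: development split
)
clf_trainer.train()
```

The design intuition behind the first step is domain adaptation: exposing the encoder to unlabeled code-mixed text before classification lets it adjust to mixed-language token distributions that differ from its pre-training corpora.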


research
04/23/2020

UHH-LT LT2 at SemEval-2020 Task 12: Fine-Tuning of Pre-Trained Transformer Networks for Offensive Language Detection

Fine-tuning of pre-trained transformer networks such as BERT yields state...
research
08/22/2020

HinglishNLP: Fine-tuned Language Models for Hinglish Sentiment Detection

Sentiment analysis for code-mixed social media text continues to be an u...
research
10/23/2020

Pretraining and Fine-Tuning Strategies for Sentiment Analysis of Latvian Tweets

In this paper, we present various pre-training strategies that aid in im...
research
07/26/2020

Reed at SemEval-2020 Task 9: Fine-Tuning and Bag-of-Words Approaches to Code-Mixed Sentiment Analysis

We explore the task of sentiment analysis on Hinglish (code-mixed Hindi-...
research
05/10/2022

Human Language Modeling

Natural language is generated by people, yet traditional language modeli...
research
04/19/2019

Suggestion Mining from Online Reviews using ULMFiT

In this paper we present our approach and the system description for Sub...
