Improving Results on Russian Sentiment Datasets

07/28/2020
by Anton Golubev, et al.

In this study, we test standard neural network architectures (CNN, LSTM, BiLSTM) and recently introduced BERT-based architectures on existing Russian sentiment evaluation datasets. We compare two variants of Russian BERT and show that the conversational variant performs better on all sentiment tasks in this study. The best results were achieved by the BERT-NLI model, which treats sentiment classification as a natural language inference task; a minimal sketch of this reformulation follows below. On one of the datasets, this model nearly reaches human-level performance.
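The NLI reformulation is commonly implemented by pairing the input text with one auxiliary hypothesis per sentiment class and choosing the class whose hypothesis receives the highest entailment score. The sketch below follows that general recipe, not the authors' exact setup: the checkpoint name, the hypothesis templates, and the index of the entailment logit are all placeholder assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder: an NLI-fine-tuned Russian BERT checkpoint; the paper does not
# name a public model here, so substitute your own.
MODEL_NAME = "your-russian-bert-nli-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# Assumed hypothesis templates, one per sentiment class
# ("The attitude to the target is positive/negative/neutral." in Russian).
HYPOTHESES = {
    "positive": "Отношение к объекту положительное.",
    "negative": "Отношение к объекту отрицательное.",
    "neutral":  "Отношение к объекту нейтральное.",
}

ENTAILMENT_INDEX = 0  # assumption: position of the "entailment" logit


def classify(text: str) -> str:
    """Return the sentiment whose hypothesis is most strongly entailed."""
    scores = {}
    for label, hypothesis in HYPOTHESES.items():
        # Encode the (premise, hypothesis) pair as one sequence.
        inputs = tokenizer(text, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs = torch.softmax(logits, dim=-1)
        scores[label] = probs[0, ENTAILMENT_INDEX].item()
    return max(scores, key=scores.get)


print(classify("Отличный сервис, всем рекомендую!"))  # expected: "positive"
```

One appeal of this formulation is that a single entailment head can serve datasets with different label inventories, since each class is expressed as a hypothesis rather than a fixed output neuron.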


