Transformers for Headline Selection for Russian News Clusters

06/19/2021
by   Pavel Voropaev, et al.

In this paper, we explore various multilingual and Russian pre-trained transformer-based models for the Dialogue Evaluation 2021 shared task on headline selection. Our experiments show that the combined approach is superior to individual multilingual and monolingual models. We present an analysis of several ways to obtain sentence embeddings and to learn a ranking model on top of them. We achieve a score of 87.28 on the private test set.
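The pipeline the abstract describes (sentence embeddings for cluster texts, then ranking candidate headlines against them) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes token embeddings and attention masks have already been produced by some transformer encoder, and uses mask-aware mean pooling and cosine similarity, both common choices for deriving sentence embeddings.

```python
import numpy as np

def mean_pool(token_embs: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mask-aware mean pooling: average token vectors, ignoring padding.

    token_embs: (seq_len, dim) token embeddings from an encoder.
    mask: (seq_len,) attention mask, 1 for real tokens, 0 for padding.
    """
    m = mask[:, None].astype(token_embs.dtype)          # (seq_len, 1)
    return (token_embs * m).sum(axis=0) / max(m.sum(), 1e-9)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sentence embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_headlines(cluster_emb: np.ndarray, headline_embs: list) -> list:
    """Return candidate indices sorted from best to worst match."""
    scores = [cosine(cluster_emb, h) for h in headline_embs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Toy example with hand-made 2-d "embeddings": the first candidate
# points in the same direction as the cluster, so it ranks first.
cluster = np.array([1.0, 0.0])
candidates = [np.array([0.9, 0.1]), np.array([0.0, 1.0])]
order = rank_headlines(cluster, candidates)  # → [0, 1]
```

A learned ranker (as analysed in the paper) would replace the raw cosine score with a trained scoring model over embedding pairs, but the pooling-then-ranking structure stays the same.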


