Sentence Embeddings for Russian NLU

10/29/2019
by Dmitry Popov, et al.

We investigate the performance of sentence embedding models on several tasks for the Russian language. Our comparison covers multiple choice question answering, next sentence prediction, and paraphrase identification. We employ FastText embeddings as a baseline and compare them to ELMo and BERT embeddings. We conduct two series of experiments, using both unsupervised (i.e., based on a similarity measure only) and supervised approaches for the tasks. Finally, we present datasets for multiple choice question answering and next sentence prediction in Russian.
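The unsupervised approach mentioned in the abstract can be illustrated with a minimal sketch: given precomputed sentence vectors (from FastText, ELMo, or BERT), a multiple choice answer is selected by cosine similarity to the question. The function names and the toy 3-dimensional vectors below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two sentence vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pick_answer(question_vec, candidate_vecs):
    # unsupervised selection: return the index of the candidate
    # whose embedding is most similar to the question embedding
    sims = [cosine(question_vec, c) for c in candidate_vecs]
    return int(np.argmax(sims))

# toy 3-d vectors standing in for real sentence embeddings
q = np.array([1.0, 0.0, 0.0])
candidates = [
    np.array([0.0, 1.0, 0.0]),   # orthogonal to the question
    np.array([0.9, 0.1, 0.0]),   # nearly parallel to the question
    np.array([-1.0, 0.0, 0.0]),  # opposite direction
]
best = pick_answer(q, candidates)
print(best)  # → 1
```

In practice the quality of this selection depends entirely on the embedding model, which is what the paper's comparison of FastText, ELMo, and BERT measures.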


