Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language

05/17/2019
by Yuri Kuratov, et al.

The paper introduces methods for adapting multilingual masked language models to a specific language. Pre-trained bidirectional language models show state-of-the-art performance on a wide range of tasks, including reading comprehension, natural language inference, and sentiment analysis. At the moment there are two alternative approaches to training such models: monolingual and multilingual. While language-specific models show superior performance, multilingual models allow transfer from one language to another and can solve tasks for different languages simultaneously. This work shows that transfer learning from a multilingual model to a monolingual model yields significant performance gains on tasks such as reading comprehension, paraphrase detection, and sentiment analysis. Furthermore, multilingual initialization of a monolingual model substantially reduces training time. Pre-trained models for the Russian language are open sourced.
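To make the adaptation recipe concrete, below is a minimal sketch of the multilingual-initialization idea using PyTorch and the HuggingFace transformers library. It is not the authors' released code: the Russian vocabulary file `ru_vocab.txt` is an assumed input (e.g. a WordPiece vocabulary trained separately on Russian text), and only the embedding-seeding step is shown. Subword embeddings for the new monolingual vocabulary are copied from multilingual BERT wherever a token overlaps; the Transformer encoder keeps its multilingual weights and is subsequently trained further on Russian data.

```python
import torch
from transformers import BertModel, BertTokenizer

# Donor model: multilingual BERT, the starting point described in the paper.
multilingual = BertModel.from_pretrained("bert-base-multilingual-cased")
ml_tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

# Assumed input: a monolingual Russian WordPiece vocabulary file.
ru_tokenizer = BertTokenizer("ru_vocab.txt")

ml_emb = multilingual.embeddings.word_embeddings.weight.data
new_emb = torch.empty(len(ru_tokenizer.vocab), ml_emb.size(1)).normal_(std=0.02)

# Copy embeddings for every subword shared between the two vocabularies;
# tokens unique to the Russian vocabulary keep their random initialization.
shared = 0
for token, idx in ru_tokenizer.vocab.items():
    ml_idx = ml_tokenizer.vocab.get(token)
    if ml_idx is not None:
        new_emb[idx] = ml_emb[ml_idx]
        shared += 1
print(f"Seeded {shared}/{len(ru_tokenizer.vocab)} tokens from multilingual BERT")

# Install the new embedding matrix; the encoder layers retain their
# multilingual weights and are then fine-tuned on Russian text.
multilingual.embeddings.word_embeddings = torch.nn.Embedding.from_pretrained(
    new_emb, freeze=False
)
multilingual.config.vocab_size = len(ru_tokenizer.vocab)
```

The pre-trained Russian model released with the paper, RuBERT, is available on the HuggingFace model hub as `DeepPavlov/rubert-base-cased`.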


