Masked ELMo: An evolution of ELMo towards fully contextual RNN language models

10/08/2020
by Gregory Senay, et al.

This paper presents Masked ELMo, a new RNN-based model for language model pre-training that evolves from the ELMo language model. Unlike ELMo, which uses only independent left-to-right and right-to-left contexts, Masked ELMo learns fully bidirectional word representations. To achieve this, we use the same masked language model objective as BERT. Additionally, thanks to optimizations of the LSTM cell, the integration of mask accumulation, and bidirectional truncated backpropagation through time, we substantially increase the model's training speed. These improvements make it possible to pre-train a better language model than ELMo while maintaining a low computational cost. We evaluate Masked ELMo by comparing it to ELMo under the same protocol on the GLUE benchmark, where our model significantly outperforms ELMo and is competitive with transformer-based approaches.

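To illustrate the core idea, here is a minimal sketch (not the authors' implementation) of a BERT-style masked language model objective on top of a bidirectional LSTM encoder, in the spirit of Masked ELMo. The class and function names, layer sizes, masking rate, and the mask_id placeholder are assumptions for illustration; the paper's LSTM optimizations, mask accumulation, and bidirectional truncated BPTT are not shown.

import torch
import torch.nn as nn

class MaskedBiLSTMLM(nn.Module):
    """Illustrative sketch: bidirectional LSTM trained with a masked-LM objective."""
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, mask_id=0):
        super().__init__()
        self.mask_id = mask_id  # assumed id of the [MASK] token
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # A bidirectional LSTM gives every position access to both its left
        # and right context via the forward and backward directions.
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, token_ids):
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)  # (batch, seq_len, vocab_size)

def masked_lm_step(model, token_ids, mask_prob=0.15):
    # BERT-style masked LM objective: corrupt a random subset of tokens with
    # the [MASK] id and predict the original tokens at those positions.
    mask = torch.rand(token_ids.shape, device=token_ids.device) < mask_prob
    corrupted = token_ids.masked_fill(mask, model.mask_id)
    logits = model(corrupted)
    # The loss is computed only on masked positions, so the fully
    # bidirectional context can be used without the prediction being trivial.
    return nn.functional.cross_entropy(logits[mask], token_ids[mask])

By contrast, ELMo pre-trains two independent unidirectional language models (forward and backward) and combines their representations, so no single predictor sees both contexts during pre-training; the masked objective above is what makes the representations fully bidirectional.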

