MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling

12/14/2022
by Nathan Godey, et al.

Static subword tokenization algorithms have been an essential component of recent work on language modeling. However, their static nature results in significant flaws that degrade the models' downstream performance and robustness. In this work, we propose MANTa, a Module for Adaptive Neural TokenizAtion. MANTa is a differentiable tokenizer trained end-to-end with the language model. The resulting system offers a trade-off between the expressiveness of byte-level models and the speed of models trained using subword tokenization. In addition, our tokenizer is highly explainable since it produces an explicit segmentation of sequences into blocks. We evaluate our pre-trained model on several English datasets from different domains, as well as on synthetic noise. We find that MANTa improves robustness to character perturbations and out-of-domain data. We then show that MANTa performs comparably to other models on the general-domain GLUE benchmark. Finally, we show that it is considerably faster than strictly byte-level models.
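To make the idea of a differentiable tokenizer concrete, below is a minimal, self-contained PyTorch sketch of gradient-based byte-to-block pooling in the spirit of MANTa. It is an illustration under stated assumptions, not the paper's architecture: the module name SoftBlockPooler, the frontier predictor, the distance-based pooling kernel, and all dimensions are hypothetical choices made for exposition.

# Illustrative sketch of gradient-based tokenization in the spirit of MANTa.
# All names, dimensions, and the pooling kernel are hypothetical assumptions,
# not the authors' code or exact parameterization.
import torch
import torch.nn as nn

class SoftBlockPooler(nn.Module):
    """Pools a byte sequence into a shorter sequence of "block" embeddings.

    A small predictor assigns each byte a probability of starting a new
    block. The cumulative sum of these probabilities gives each byte a soft
    (fractional) block index, and bytes contribute to nearby blocks with
    weights that decay with the distance between their soft index and the
    block's integer index. Everything is differentiable, so the segmentation
    can be learned end-to-end from the language modeling loss.
    """

    def __init__(self, d_model=64, n_blocks=16, byte_vocab=256):
        super().__init__()
        self.embed = nn.Embedding(byte_vocab, d_model)
        self.frontier = nn.Linear(d_model, 1)  # P(byte starts a new block)
        self.n_blocks = n_blocks

    def forward(self, byte_ids):                                   # (batch, seq)
        h = self.embed(byte_ids)                                   # (batch, seq, d)
        p_frontier = torch.sigmoid(self.frontier(h)).squeeze(-1)   # (batch, seq)
        # Soft block index of each byte: expected number of frontiers so far.
        soft_index = torch.cumsum(p_frontier, dim=1)               # (batch, seq)
        # Weight of byte t in block k decays with |soft_index[t] - k|.
        block_ids = torch.arange(self.n_blocks, device=byte_ids.device)
        dist = (soft_index.unsqueeze(-1) - block_ids).abs()        # (batch, seq, n_blocks)
        weights = torch.softmax(-dist, dim=1)                      # normalize over bytes
        # Weighted average of byte embeddings gives one embedding per block.
        blocks = torch.einsum("bsk,bsd->bkd", weights, h)          # (batch, n_blocks, d)
        return blocks

pooler = SoftBlockPooler()
bytes_in = torch.randint(0, 256, (2, 128))   # two sequences of 128 bytes
print(pooler(bytes_in).shape)                # torch.Size([2, 16, 64])

Because the block embeddings are a smooth function of the frontier probabilities, gradients from the language model flow back into the segmentation, which is what allows the tokenizer to adapt to noisy or out-of-domain input rather than being fixed in advance.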

Related research

07/20/2017
Syllable-aware Neural Language Models: A Failure to Beat Character-aware Ones
Syllabification does not seem to improve word-level RNN language modelin...

04/08/2020
CALM: Continuous Adaptive Learning for Language Modeling
Training large language representation models has become a standard in t...

06/23/2021
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization
State-of-the-art models in natural language processing rely on separate ...

04/21/2019
Few-shot NLG with Pre-trained Language Model
Natural language generation (NLG) from structured data or knowledge is e...

05/24/2021
Neural Language Models for Nineteenth-Century English
We present four types of neural language models trained on a large histo...

09/29/2020
Improving Low Compute Language Modeling with In-Domain Embedding Initialisation
Many NLP applications, such as biomedical data and technical support, ha...

10/25/2021
Paradigm Shift in Language Modeling: Revisiting CNN for Modeling Sanskrit Originated Bengali and Hindi Language
Though there has been a large body of recent works in language modeling ...
