What is the best recipe for character-level encoder-only modelling?

05/09/2023
by Kris Cao et al.

This paper aims to benchmark recent progress in language understanding models that output contextualised representations at the character level. Many such modelling architectures and methods to train those architectures have been proposed, but it is currently unclear what the relative contributions of the architecture vs. the pretraining objective are to final model performance. We explore the design space of such models, comparing architectural innovations and a variety of different pretraining objectives on a suite of evaluation tasks with a fixed training procedure in order to find the currently optimal way to build and train character-level BERT-like models. We find that our best performing character-level model exceeds the performance of a token-based model trained with the same settings on the same data, suggesting that character-level models are ready for more widespread adoption. Unfortunately, the best method to train character-level models still relies on a subword-level tokeniser during pretraining, and final model performance is highly dependent on tokeniser quality. We believe our results demonstrate the readiness of character-level models for multilingual language representation, and encourage NLP practitioners to try them as drop-in replacements for token-based models.
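To make the "drop-in replacement" suggestion concrete, below is a minimal sketch of swapping a subword-tokenised encoder for a publicly available character-level encoder (Google's CANINE, loaded through the Hugging Face transformers library). Note the assumptions: CANINE is not the model trained in this paper, the checkpoint names are just public examples, and the sketch assumes the downstream code only consumes the encoder's contextualised hidden states.

```python
# Illustrative sketch only: this uses the public "google/canine-c" checkpoint as an
# example character-level encoder, not the model trained in the paper.
import torch
from transformers import AutoTokenizer, AutoModel

texts = [
    "Character-level models need no subword vocabulary.",
    "Noisy or misspelled inputs tend to degrade subword tokenisers.",
]

# Token-level baseline: a BERT-style encoder with a subword (WordPiece) tokeniser.
bert_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Character-level alternative: CANINE operates directly on Unicode code points,
# so its "tokeniser" only maps characters to ids and pads the batch.
canine_tok = AutoTokenizer.from_pretrained("google/canine-c")
canine = AutoModel.from_pretrained("google/canine-c")

with torch.no_grad():
    bert_inputs = bert_tok(texts, padding=True, return_tensors="pt")
    bert_reprs = bert(**bert_inputs).last_hidden_state        # (batch, subwords, hidden)

    canine_inputs = canine_tok(texts, padding=True, return_tensors="pt")
    canine_reprs = canine(**canine_inputs).last_hidden_state  # (batch, characters, hidden)

# Downstream heads (classification, tagging, etc.) that consume contextualised
# hidden states can use either tensor; only the granularity of the sequence axis
# changes (subword positions vs. character positions).
print(bert_reprs.shape, canine_reprs.shape)
```

The only interface difference in this sketch is the tokeniser: the character-level path has no learned vocabulary, which is exactly what makes such models attractive as multilingual, noise-robust replacements.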

Related research

09/04/2021
Frustratingly Simple Pretraining Alternatives to Masked Language Modeling
Masked language modeling (MLM), a self-supervised pretraining objective,...

05/04/2021
HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish
BERT-based models are currently used for solving nearly all Natural Lang...

01/27/2021
KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding
A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional ...

07/05/2018
A Boo(n) for Evaluating Architecture Performance
We point out important problems with the common practice of using the be...

03/12/2019
Character Eyes: Seeing Language through Character-Level Taggers
Character-level models have been used extensively in recent years in NLP...

08/13/2018
Neural Semi-Markov Conditional Random Fields for Robust Character-Based Part-of-Speech Tagging
Character-level models of tokens have been shown to be effective at deal...

05/26/2023
An Investigation of Noise in Morphological Inflection
With a growing focus on morphological inflection systems for languages w...
