Char2Subword: Extending the Subword Embedding Space from Pre-trained Models Using Robust Character Compositionality

10/24/2020
by Gustavo Aguilar, et al.

Byte-pair encoding (BPE) is a ubiquitous algorithm in the subword tokenization process of language models. BPE provides multiple benefits, such as handling the out-of-vocabulary problem and reducing vocabulary sparsity. However, this process is defined by the statistics of the pre-training data, making tokenization in other domains susceptible to infrequent spelling sequences (e.g., misspellings, as in social media, or character-level adversarial attacks). On the other hand, pure character-level models, though robust to misspellings, often lead to unreasonably long sequences and make it harder for the model to learn that contiguous characters form meaningful units. To alleviate these challenges, we propose a character-based subword transformer module (char2subword) that learns the subword embedding table of pre-trained models such as BERT. Our char2subword module builds representations for the entries of the subword vocabulary from their characters, and it can be used as a drop-in replacement for the subword embedding table. The module is robust to character-level alterations such as misspellings, word inflection, casing, and punctuation. We further integrate it with BERT through pre-training while keeping the BERT transformer parameters fixed. We demonstrate our method's effectiveness by outperforming a vanilla multilingual BERT on the Linguistic Code-switching Evaluation (LinCE) benchmark.
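
To make the idea concrete, below is a minimal PyTorch sketch of a character-to-subword module of this kind: a small character-level transformer encoder that maps the character sequence of each subword to a vector with the same dimensionality as BERT's subword embeddings, so its output can stand in for the embedding-table lookup. The class name, hyperparameters, and pooling choice (Char2SubwordSketch, char_dim, mean pooling, etc.) are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch, not the authors' code: a character-level transformer that
    # composes one vector per subword, sized to match BERT's hidden dimension.
    import torch
    import torch.nn as nn

    class Char2SubwordSketch(nn.Module):  # hypothetical name
        def __init__(self, num_chars=1000, char_dim=256, bert_dim=768,
                     num_layers=2, num_heads=4, max_chars_per_subword=20):
            super().__init__()
            self.char_embed = nn.Embedding(num_chars, char_dim, padding_idx=0)
            self.pos_embed = nn.Embedding(max_chars_per_subword, char_dim)
            layer = nn.TransformerEncoderLayer(d_model=char_dim, nhead=num_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
            # Project the pooled character representation to BERT's hidden size
            # so it can replace a row of the subword embedding table.
            self.proj = nn.Linear(char_dim, bert_dim)

        def forward(self, char_ids):
            # char_ids: (num_subwords, max_chars_per_subword), 0 = padding.
            positions = torch.arange(char_ids.size(1), device=char_ids.device)
            x = self.char_embed(char_ids) + self.pos_embed(positions)
            pad_mask = char_ids.eq(0)
            h = self.encoder(x, src_key_padding_mask=pad_mask)
            # Mean-pool over non-padding character positions.
            lengths = (~pad_mask).sum(dim=1, keepdim=True).clamp(min=1)
            pooled = h.masked_fill(pad_mask.unsqueeze(-1), 0.0).sum(dim=1) / lengths
            return self.proj(pooled)  # (num_subwords, bert_dim)

Under the drop-in-replacement framing above, one plausible training signal (an assumption, not spelled out in the abstract) is to regress the module's output for each vocabulary entry onto the corresponding row of the pre-trained subword embedding table, optionally adding character-level perturbations of the inputs, before plugging the module in and continuing pre-training with the BERT transformer weights kept frozen.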


