Tokenization with Factorized Subword Encoding

06/13/2023
by David Samuel, et al.

In recent years, language models have grown increasingly large and complex, yet their input representations still rely on simple, greedy subword tokenization methods. In this paper, we propose a novel tokenization method that factorizes subwords into discrete triplets using a VQ-VAE model. The effectiveness of the proposed tokenization method, referred to as the Factorizer, is evaluated on language modeling and morpho-syntactic tasks for 7 diverse languages. Results indicate that this method is more appropriate and robust for morphological tasks than the commonly used byte-pair encoding (BPE) tokenization algorithm.
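For intuition, here is a minimal PyTorch sketch of the core idea: a small VQ-VAE that encodes a subword's characters into three latent slots, snaps each slot to its nearest codebook entry, and decodes the resulting discrete triplet back into characters. The class name SubwordTripletVQVAE, the GRU encoder/decoder, the shared codebook, and all sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of factorizing a subword into a triplet of discrete codes
# with a VQ-VAE. All names, layer choices, and sizes are illustrative
# assumptions; the paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubwordTripletVQVAE(nn.Module):
    def __init__(self, n_chars=256, dim=64, codebook_size=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        # Project the encoder summary into three latent slots (the "triplet").
        self.to_triplet = nn.Linear(dim, 3 * dim)
        self.codebook = nn.Embedding(codebook_size, dim)  # shared across slots
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.to_chars = nn.Linear(dim, n_chars)

    def quantize(self, z):
        # z: (batch, 3, dim). Snap each slot to its nearest codebook vector.
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))  # (batch, 3, K)
        codes = dists.argmin(-1)                                   # (batch, 3)
        z_q = self.codebook(codes)
        # Standard VQ-VAE losses: move codebook vectors toward the encoder
        # outputs, and commit the encoder to the codes it selected.
        vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach())
        # Straight-through estimator: gradients skip the discrete argmin.
        z_q = z + (z_q - z).detach()
        return z_q, codes, vq_loss

    def forward(self, char_ids):
        # char_ids: (batch, seq_len) byte ids spelling out one subword each.
        x = self.char_emb(char_ids)
        _, h = self.encoder(x)                         # h: (1, batch, dim)
        z = self.to_triplet(h.squeeze(0)).view(-1, 3, x.size(-1))
        z_q, codes, vq_loss = self.quantize(z)
        # Reconstruct the characters from the quantized triplet.
        dec_in = z_q.mean(dim=1, keepdim=True).repeat(1, char_ids.size(1), 1)
        out, _ = self.decoder(dec_in)
        recon = F.cross_entropy(self.to_chars(out).transpose(1, 2), char_ids)
        return codes, recon + vq_loss

# Usage: map the subword "token" (as byte ids) to its discrete triplet.
model = SubwordTripletVQVAE()
subword = torch.tensor([[ord(c) for c in "token"]])
codes, loss = model(subword)
print(codes.shape)  # torch.Size([1, 3]) -- three codebook indices per subword
```

The straight-through estimator is the standard VQ-VAE trick for training through the non-differentiable nearest-neighbor lookup; after training, each subword maps to a fixed triplet of codebook indices, which can then serve as its input representation.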

Related research

05/15/2021 · A Cognitive Regularizer for Language Modeling
The uniform information density (UID) hypothesis, which posits that spea...

03/16/2022 · KinyaBERT: a Morphology-aware Kinyarwanda Language Model
Pre-trained language models such as BERT have been successful at tacklin...

03/02/2018 · Syntax-Aware Language Modeling with Recurrent Neural Networks
Neural language models (LMs) are typically trained using only lexical fe...

04/25/2021 · Reranking Machine Translation Hypotheses with Structured and Web-based Language Models
In this paper, we investigate the use of linguistically motivated and co...

04/29/2020 · Evaluating the Role of Language Typology in Transformer-Based Multilingual Text Classification
As NLP tools become ubiquitous in today's technological landscape, they ...

06/16/2023 · How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in Japanese
This paper investigates the effect of tokenizers on the downstream perfo...

05/20/2023 · Analogy in Contact: Modeling Maltese Plural Inflection
Maltese is often described as having a hybrid morphological system resul...
