How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in Japanese

06/16/2023
by Takuro Fujii, et al.

This paper investigates the effect of tokenizers on the downstream performance of pretrained language models (PLMs) in scriptio continua languages, i.e., languages written without explicit spaces between words, using Japanese as a case study. The tokenizer for such languages typically consists of a morphological analyzer followed by a subword tokenizer, which calls for a comprehensive study of all possible pairs; previous studies, however, lack this comprehensiveness. We therefore train extensive sets of tokenizers, build a PLM with each, and measure the downstream performance on a wide range of tasks. Our results demonstrate that each downstream task has a different optimal morphological analyzer, and that Byte-Pair Encoding (BPE) or Unigram is a better choice of subword tokenizer than WordPiece, regardless of the type of task.
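The two-stage pipeline the abstract describes can be made concrete with a short sketch: a morphological analyzer first segments the unspaced text into words, and a subword tokenizer then splits those words further. Below is a minimal Python illustration, assuming the fugashi (MeCab wrapper; needs a dictionary such as unidic-lite) and sentencepiece packages are installed; the sentences, vocabulary size, and model settings are hypothetical placeholders, not the paper's actual experimental setup.

```python
# Minimal sketch of a morphological-analyzer + subword-tokenizer pipeline.
# All settings below are illustrative, not the paper's configuration.
import io

import sentencepiece as spm
from fugashi import Tagger

# Stage 1: morphological analysis segments the unspaced text into words.
tagger = Tagger()
raw_sentences = [
    "日本語の文には単語の間に空白がありません。",
    "形態素解析器が文を単語に分割します。",
    "サブワード分割器が単語をさらに細かく分割します。",
]
segmented = [" ".join(w.surface for w in tagger(s)) for s in raw_sentences]
print(segmented[0])  # e.g. "日本語 の 文 に は 単語 の 間 に 空白 が ..."

# Stage 2: a subword tokenizer (Unigram here; "bpe" is the other option the
# paper recommends over WordPiece) is trained on the pre-segmented text, so
# the spaces inserted in stage 1 act as word boundaries.
model = io.BytesIO()
spm.SentencePieceTrainer.train(
    sentence_iterator=iter(segmented * 50),  # repeat the toy corpus
    model_writer=model,
    vocab_size=60,          # tiny value, sized for this toy corpus
    model_type="unigram",   # or "bpe"
)
sp = spm.SentencePieceProcessor(model_proto=model.getvalue())
print(sp.encode(segmented[0], out_type=str))
```

Swapping model_type between "unigram" and "bpe", or replacing fugashi with another morphological analyzer, yields the kind of analyzer/subword pairings the paper compares.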


