Improving Tokenisation by Alternative Treatment of Spaces

04/08/2022
by Edward Gow-Smith, et al.

Tokenisation is the first step in almost all NLP tasks, and state-of-the-art transformer-based language models all use subword tokenisation algorithms to process input text. Existing algorithms have problems, often producing tokenisations of limited linguistic validity, and representing equivalent strings differently depending on their position within a word. We hypothesise that these problems hinder the ability of transformer-based models to handle complex words, and suggest that these problems are a result of allowing tokens to include spaces. We thus experiment with an alternative tokenisation approach where spaces are always treated as individual tokens. Specifically, we apply this modification to the BPE and Unigram algorithms. We find that our modified algorithms lead to improved performance on downstream NLP tasks that involve handling complex words, whilst having no detrimental effect on performance in general natural language understanding tasks. Intrinsically, we find our modified algorithms give more morphologically correct tokenisations, in particular when handling prefixes. Given the results of our experiments, we advocate for always treating spaces as individual tokens as an improved tokenisation method.
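The core idea can be illustrated with a short sketch. The code below is not the authors' implementation (the paper modifies the BPE and Unigram training algorithms themselves); it only approximates the idea with the HuggingFace `tokenizers` library by pre-tokenising on single spaces with behaviour "isolated", so that no learned token can span a space and every space surfaces as its own token, in contrast to byte-level BPE, which folds the leading space into the following word.

from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Tiny illustrative corpus (hypothetical, for demonstration only).
corpus = ["an undesirable outcome", "desirable results", "the outcome was undesirable"]

# Conventional byte-level BPE: the leading space is attached to the following
# word, so "undesirable" is represented differently at the start of a string
# than after a space.
default_tok = Tokenizer(models.BPE(unk_token="[UNK]"))
default_tok.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=True)
default_tok.train_from_iterator(
    corpus, trainers.BpeTrainer(vocab_size=200, special_tokens=["[UNK]"])
)

# Space-isolating scheme: every single space becomes its own pre-token, so BPE
# merges never cross a space and " " is always an individual token.
space_tok = Tokenizer(models.BPE(unk_token="[UNK]"))
space_tok.pre_tokenizer = pre_tokenizers.Split(pattern=" ", behavior="isolated")
space_tok.train_from_iterator(
    corpus, trainers.BpeTrainer(vocab_size=200, special_tokens=["[UNK]"])
)

print(default_tok.encode("an undesirable outcome").tokens)
print(space_tok.encode("an undesirable outcome").tokens)

Under the space-isolating scheme, a word such as "undesirable" receives the same tokenisation whether or not it follows a space, which is the positional consistency the abstract argues for.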

