ByT5: Towards a token-free future with pre-trained byte-to-byte models

05/28/2021
by Linting Xue, et al.

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
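The abstract contrasts subword tokenization with operating directly on raw bytes. As a rough illustration of the idea, the short Python sketch below maps a string to UTF-8 byte IDs and back, and shows why byte sequences are longer than character or subword sequences. This is a minimal sketch, not the authors' released code: the function names and the choice of reserving the first three IDs for special tokens are assumptions made here for illustration.

# Minimal sketch of byte-level text encoding in the spirit of ByT5-style models.
# Assumption: IDs 0-2 are reserved for special tokens (e.g. pad/eos/unk), and
# each UTF-8 byte is shifted by that offset to produce its ID.

SPECIAL_TOKEN_OFFSET = 3  # illustrative choice, not taken from the released code

def encode_bytes(text: str) -> list[int]:
    """Encode text as a sequence of byte IDs; no tokenizer or vocabulary file is needed."""
    return [b + SPECIAL_TOKEN_OFFSET for b in text.encode("utf-8")]

def decode_bytes(ids: list[int]) -> str:
    """Invert the encoding, skipping any reserved special-token IDs."""
    raw = bytes(i - SPECIAL_TOKEN_OFFSET for i in ids if i >= SPECIAL_TOKEN_OFFSET)
    return raw.decode("utf-8", errors="ignore")

if __name__ == "__main__":
    sample = "Token-free models handle any language, e.g. こんにちは."
    ids = encode_bytes(sample)
    # Byte sequences are longer than character or subword sequences:
    # each non-ASCII character expands to several UTF-8 bytes.
    print(len(sample), "characters ->", len(ids), "byte IDs")
    assert decode_bytes(ids) == sample

Because the "vocabulary" is just the 256 possible byte values plus a handful of special IDs, no separate tokenizer artifact has to be trained, shipped, or kept in sync with the model, which is the source of the robustness and simplicity benefits the abstract describes.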
