Charformer: Fast Character Transformers via Gradient-based Subword Tokenization

06/23/2021
by Yi Tay, et al.

State-of-the-art models in natural language processing rely on separate rigid subword tokenization algorithms, which limit their generalization ability and adaptation to new settings. In this paper, we propose a new model inductive bias that learns a subword tokenization end-to-end as part of the model. To this end, we introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. Concretely, GBST enumerates candidate subword blocks and learns to score them in a position-wise fashion using a block scoring network. We additionally introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level. Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par with and sometimes outperforming subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28%-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end.
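The abstract describes GBST only at a high level: candidate subword blocks are enumerated, pooled, scored position-wise by a block scoring network, and softly mixed into latent subword representations before the Transformer stack. The sketch below is a minimal, illustrative reading of that description in PyTorch, not the authors' released implementation; the block sizes, mean pooling, linear scoring head, and downsampling rate are all assumed choices.

```python
# Minimal GBST-style sketch (illustrative only; hyperparameters are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftSubwordBlockScorer(nn.Module):
    def __init__(self, dim, max_block_size=4, downsample_rate=2):
        super().__init__()
        self.block_sizes = list(range(1, max_block_size + 1))
        self.score = nn.Linear(dim, 1)        # block scoring network (assumed: linear head)
        self.downsample_rate = downsample_rate

    def forward(self, char_embeds):           # (batch, seq_len, dim) character/byte embeddings
        b, n, d = char_embeds.shape
        candidates = []
        for size in self.block_sizes:
            # Mean-pool non-overlapping blocks of `size` characters.
            pad = (size - n % size) % size
            x = F.pad(char_embeds, (0, 0, 0, pad))
            pooled = x.view(b, -1, size, d).mean(dim=2)            # (b, ceil(n/size), d)
            # Broadcast each block back to its character positions so every
            # position has one candidate representation per block size.
            upsampled = pooled.repeat_interleave(size, dim=1)[:, :n]
            candidates.append(upsampled)
        cand = torch.stack(candidates, dim=2)                      # (b, n, num_sizes, d)
        # Position-wise scores over block sizes, softmaxed into soft block weights.
        weights = F.softmax(self.score(cand).squeeze(-1), dim=-1)  # (b, n, num_sizes)
        mixed = (weights.unsqueeze(-1) * cand).sum(dim=2)          # (b, n, d)
        # Downsample to shorten the sequence fed to the Transformer layers.
        return F.avg_pool1d(mixed.transpose(1, 2), self.downsample_rate).transpose(1, 2)

# Usage sketch: embed raw bytes, then softly "re-tokenize" before attention.
embed = nn.Embedding(256, 64)                      # byte vocabulary (assumed size)
gbst = SoftSubwordBlockScorer(dim=64)
byte_ids = torch.randint(0, 256, (2, 128))
latent_subwords = gbst(embed(byte_ids))            # shape (2, 64, 64): half the length
```

Because the block weights come from a softmax, the whole module stays differentiable, which is what lets the "tokenization" be learned end-to-end together with the downstream Transformer rather than fixed in advance.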

Related research

04/10/2021 - Non-autoregressive Transformer-based End-to-end ASR using BERT
Transformer-based models have led to a significant innovation in various...

12/14/2022 - MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling
Static subword tokenization algorithms have been an essential component ...

08/13/2018 - Neural Semi-Markov Conditional Random Fields for Robust Character-Based Part-of-Speech Tagging
Character-level models of tokens have been shown to be effective at deal...

02/26/2020 - Sparse Sinkhorn Attention
We propose Sparse Sinkhorn Attention, a new efficient and sparse method ...

03/07/2022 - HyperMixer: An MLP-based Green AI Alternative to Transformers
Transformer-based architectures are the model of choice for natural lang...

03/27/2023 - An Information Extraction Study: Take In Mind the Tokenization!
Current research on the advantages and trade-offs of using characters, i...

11/17/2022 - Efficient Transformers with Dynamic Token Pooling
Transformers achieve unrivalled performance in modelling language, but r...
