Training and Evaluation of a Multilingual Tokenizer for GPT-SW3

04/28/2023
by Felix Stollenwerk, et al.

This paper provides a detailed discussion of the multilingual tokenizer used for GPT-SW3. It was trained on the Nordic Pile using the SentencePiece library and the BPE algorithm. We outline the tokenizer's most important features and share details on its learned vocabulary. In addition, we systematically analyze the properties and evaluate the performance of the tokenizer with regard to the different languages present in the data.
