BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages

10/05/2017
by Benjamin Heinzerling, et al.

We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb.
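
For illustration, here is a minimal usage sketch in Python. It assumes the bpemb package distributed from the repository linked above (pip install bpemb); the BPEmb class and its encode/embed methods follow that project's README and should be checked against the version you install.

    # Minimal sketch: load pre-trained English BPE subword embeddings and embed a word.
    # Assumes the `bpemb` package from the repository above; the SentencePiece model
    # and embedding files are downloaded automatically on first use.
    from bpemb import BPEmb

    # 50-dimensional English embeddings with a 10,000-subword BPE vocabulary.
    bpemb_en = BPEmb(lang="en", vs=10000, dim=50)

    # Segment raw text into BPE subword units -- no language-specific tokenizer needed.
    subwords = bpemb_en.encode("Stratford")
    print(subwords)              # e.g. ['▁strat', 'ford']

    # One embedding vector per subword unit.
    vectors = bpemb_en.embed("Stratford")
    print(vectors.shape)         # e.g. (2, 50)

Because the BPE segmentation operates directly on raw text, the same pipeline applies across all 275 languages by changing the lang argument.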
