Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks

Tosin P. Adewumi et al.
Luleå University of Technology

Word2Vec is a prominent model for Natural Language Processing (NLP) tasks, and similar inspiration is found in the distributed embeddings of state-of-the-art (SotA) deep neural networks. However, the wrong combination of hyper-parameters can produce poor-quality vectors. The objective of this work is to show that an optimal combination of hyper-parameters exists and to evaluate various combinations. We compare them with the original model released by Mikolov. Both intrinsic and extrinsic (downstream) evaluations were carried out, including Named Entity Recognition (NER) and Sentiment Analysis (SA). The downstream tasks reveal that the best model is task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that the same applies to more data. Increasing the vector dimension size beyond a point leads to poor quality or performance. If ethical considerations to save time, energy and the environment are made, then reasonably smaller corpora may do just as well, or even better in some cases. Moreover, using a small corpus, we obtain better human-assigned WordSim scores, corresponding Spearman correlation and better downstream (NER and SA) performance compared with Mikolov's model, trained on a 100-billion-word corpus.
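The hyper-parameters discussed in the abstract (vector dimension, context window, number of negative samples, learning rate, epochs) are the knobs exposed by the original word2vec C tool and by re-implementations such as gensim. As a rough illustration only, not the authors' setup, here is a minimal pure-Python sketch of skip-gram with negative sampling; the function name `train_sgns` and its defaults are hypothetical:

```python
import math
import random

def train_sgns(sentences, dim=50, window=4, negative=5, alpha=0.025,
               epochs=5, seed=0):
    """Minimal skip-gram with negative sampling, exposing the kind of
    hyper-parameters the paper tunes (dim, window, negative, alpha, epochs).
    Illustrative only: no subsampling, no unigram noise distribution."""
    rng = random.Random(seed)
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    # input (target) vectors start small and random; output vectors start at 0
    W_in = [[(rng.random() - 0.5) / dim for _ in range(dim)] for _ in range(V)]
    W_out = [[0.0] * dim for _ in range(V)]

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-max(-20.0, min(20.0, x))))

    for _ in range(epochs):
        for sent in sentences:
            ids = [idx[w] for w in sent]
            for pos, target in enumerate(ids):
                lo = max(0, pos - window)
                for ctx in ids[lo:pos] + ids[pos + 1:pos + 1 + window]:
                    # one positive pair plus `negative` uniformly drawn negatives
                    pairs = [(ctx, 1.0)]
                    pairs += [(rng.randrange(V), 0.0) for _ in range(negative)]
                    for out, label in pairs:
                        dot = sum(a * b for a, b in zip(W_in[target], W_out[out]))
                        g = alpha * (label - sigmoid(dot))
                        for k in range(dim):
                            # simultaneous gradient step on both embedding tables
                            W_in[target][k], W_out[out][k] = (
                                W_in[target][k] + g * W_out[out][k],
                                W_out[out][k] + g * W_in[target][k],
                            )
    return {w: W_in[idx[w]] for w in vocab}
```

In practice one would use gensim's `Word2Vec` (whose `vector_size`, `window`, `negative`, `sg`/`hs` and `epochs` arguments correspond to these knobs) rather than a hand-rolled loop; the sketch only makes visible where each hyper-parameter enters the training procedure.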


