Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks

03/23/2020
by Tosin P. Adewumi, et al.

Word2Vec is a prominent model for Natural Language Processing (NLP) tasks, and similar inspiration is found in the distributed embeddings of state-of-the-art (SotA) deep neural networks. However, the wrong combination of hyper-parameters can produce poor-quality vectors. The objective of this work is to show that optimal combinations of hyper-parameters exist and to evaluate various combinations. We compare them with the original model released by Mikolov. Both intrinsic and extrinsic (downstream) evaluations were carried out, including Named Entity Recognition (NER) and Sentiment Analysis (SA). The downstream tasks reveal that the best model is task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that the same applies to more data. Increasing the vector dimension size beyond a point leads to poor quality or performance. If ethical considerations to save time, energy, and the environment are made, then reasonably smaller corpora may do just as well or even better in some cases. Moreover, using a small corpus, we obtain better human-assigned WordSim scores, corresponding Spearman correlations, and better downstream (NER and SA) performance compared to Mikolov's model, which was trained on a 100-billion-word corpus.
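To make the kind of hyper-parameter sweep described above concrete, the following is a minimal sketch using the Gensim library (version >= 4.0 assumed; the abstract does not name the toolkit) together with Gensim's bundled WordSim-353 and Google analogy test files for the intrinsic scores. The functions `train_and_score` and `sweep`, and the particular grid values, are hypothetical illustrations, not the authors' exact setup.

```python
from gensim.models import Word2Vec
from gensim.test.utils import datapath


def train_and_score(corpus, vector_size=300, window=5, sg=1, hs=0, negative=5):
    """Train one Word2Vec configuration and return intrinsic scores.

    `corpus` must be an iterable of tokenised sentences (placeholder here).
    """
    model = Word2Vec(
        sentences=corpus,
        vector_size=vector_size,  # embedding dimension
        window=window,            # context window size
        sg=sg,                    # 1 = skip-gram, 0 = CBOW
        hs=hs,                    # 1 = hierarchical softmax, 0 = off
        negative=negative,        # > 0 enables negative sampling
        min_count=5,
        epochs=5,
        workers=4,
    )
    # WordSim-353: Spearman correlation against human similarity ratings.
    _, spearman, _ = model.wv.evaluate_word_pairs(datapath("wordsim353.tsv"))
    # Google analogy set: accuracy on "a : b :: c : ?" questions.
    analogy, _ = model.wv.evaluate_word_analogies(datapath("questions-words.txt"))
    return spearman[0], analogy


def sweep(corpus):
    """Score a small grid of architecture / loss / dimension combinations."""
    results = {}
    for sg in (0, 1):                      # CBOW vs skip-gram
        for hs, neg in ((1, 0), (0, 5)):   # hierarchical softmax vs negative sampling
            for dim in (100, 300):         # vector dimension sizes
                results[(sg, hs, dim)] = train_and_score(
                    corpus, vector_size=dim, sg=sg, hs=hs, negative=neg)
    return results
```

Keeping each axis of the grid explicit (architecture, training loss, dimension) mirrors the paper's point that these choices interact: a configuration that maximises analogy scores need not be the one that maximises downstream F1.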


research
07/23/2020

Exploring Swedish English fastText Embeddings with the Transformer

In this paper, our main contributions are that embeddings from relativel...
research
11/10/2019

TENER: Adapting Transformer Encoder for Named Entity Recognition

The Bidirectional long short-term memory networks (BiLSTM) have been wid...
research
11/06/2020

Corpora Compared: The Case of the Swedish Gigaword Wikipedia Corpora

In this work, we show that the difference in performance of embeddings f...
research
04/05/2019

Alternative Weighting Schemes for ELMo Embeddings

ELMo embeddings (Peters et al., 2018) had a huge impact on the NLP commu...
research
06/02/2023

Data-Efficient French Language Modeling with CamemBERTa

Recent advances in NLP have significantly improved the performance of la...
research
07/11/2023

Vacaspati: A Diverse Corpus of Bangla Literature

Bangla (or Bengali) is the fifth most spoken language globally; yet, the...
