Discovering linguistic (ir)regularities in word embeddings through max-margin separating hyperplanes

03/07/2020
by Noel Kennedy, et al.

We experiment with new methods for learning how related words are positioned relative to each other in word embedding spaces. Previous approaches learned constant vector offsets: vectors that point from source tokens to target tokens, under the assumption that these offsets are parallel to each other. We show that the offsets between related tokens are closer to orthogonal than parallel, and that they have low cosine similarities. We proceed by making a different assumption: target tokens are linearly separable from source and unlabeled tokens. We show that a max-margin hyperplane can separate the target tokens, and that vectors orthogonal to this hyperplane represent the relationship between sources and targets. We find that this representation of the relationship obtains the best results in discovering linguistic regularities. We experiment with vector space models trained by a variety of algorithms (Word2vec CBOW/skip-gram, fastText, and GloVe) and with various word-context choices, such as linear word order and syntactic dependency grammars, with and without knowledge of word position. These experiments show that our model, SVMCos, is robust to a range of experimental choices made when training word embeddings.
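To make the core idea concrete, here is a minimal sketch in Python (numpy and scikit-learn). Everything in it is a hypothetical stand-in: the vocabulary, the random toy embeddings, the relation pairs, and the scoring rule are illustrative only, since the paper's actual data and the exact SVMCos scoring are not reproduced here. The sketch first probes the constant-offset assumption by measuring pairwise cosine similarities between offsets, then fits a max-margin (linear SVM) hyperplane separating target tokens and uses its normal vector as the relation direction.

    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC

    # Hypothetical toy setup: in practice these vectors would come from a
    # trained Word2vec/fastText/GloVe model, not a random generator.
    rng = np.random.default_rng(0)
    vocab = ["paris", "berlin", "rome", "france", "germany", "italy",
             "apple", "run", "blue", "seven"]
    emb = {w: rng.normal(size=100) for w in vocab}

    # Known (source, target) pairs for one relation, e.g. capital -> country.
    pairs = [("paris", "france"), ("berlin", "germany"), ("rome", "italy")]

    def unit(v):
        return v / np.linalg.norm(v)

    # 1. Test the constant-offset assumption: if the offsets were parallel,
    #    their pairwise cosine similarities would be close to 1.
    offsets = [emb[t] - emb[s] for s, t in pairs]
    cosines = [unit(a) @ unit(b) for a, b in combinations(offsets, 2)]
    print("pairwise offset cosines:", np.round(cosines, 3))

    # 2. Alternative assumption: target tokens are linearly separable from
    #    source and unlabeled tokens. Fit a soft-margin linear SVM.
    targets = {t for _, t in pairs}
    X = np.stack([emb[w] for w in vocab])
    y = np.array([1 if w in targets else 0 for w in vocab])
    svm = LinearSVC(C=1.0).fit(X, y)

    # The SVM weight vector is orthogonal to the separating hyperplane;
    # treat it as the direction that encodes the relation.
    relation = unit(svm.coef_[0])

    # 3. Score candidate tokens by cosine similarity to the relation
    #    direction (a guess at the "Cos" part of SVMCos; the paper's
    #    exact scoring rule may differ).
    scores = {w: float(unit(emb[w]) @ relation) for w in vocab}
    for w, s in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
        print(w, round(s, 3))

With real embeddings rather than random vectors, the intent is that step 1 would reproduce the observation that offsets are closer to orthogonal than parallel, while the hyperplane normal from step 2 would generalize to unseen source tokens.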


Related research

04/24/2015
Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space
There is rising interest in vector-space word embeddings and their use i...

10/26/2020
Robust and Consistent Estimation of Word Embedding for Bangla Language by fine-tuning Word2Vec Model
Word embedding or vector representation of word holds syntactical and se...

06/04/2022
Comparing Performance of Different Linguistically-Backed Word Embeddings for Cyberbullying Detection
In most cases, word embeddings are learned only from raw tokens or in so...

06/09/2015
WordRank: Learning Word Embeddings via Robust Ranking
Embedding words in a vector space has gained a lot of attention in recen...

06/28/2016
Hierarchical Neural Language Models for Joint Representation of Streaming Documents and their Content
We consider the problem of learning distributed representations for docu...

08/16/2021
IsoScore: Measuring the Uniformity of Vector Space Utilization
The recent success of distributed word representations has led to an inc...

11/02/2022
Boosting word frequencies in authorship attribution
In this paper, I introduce a simple method of computing relative word fr...
