Outperforming Word2Vec on Analogy Tasks with Random Projections

12/20/2014
by Abram Demski, et al.

We present a distributed vector representation based on a simplification of the BEAGLE system, designed in the context of the Sigma cognitive architecture. Our method does not require gradient-based training of neural networks, matrix decompositions as in LSA, or convolutions as in BEAGLE. All that is involved is a sum of random vectors and their pointwise products. Despite the simplicity of this technique, it gives state-of-the-art results on analogy problems, outperforming Word2Vec in most cases. To explain this success, we interpret the method as a dimension reduction via random projection.
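The abstract gives only the outline of the construction, so the following Python sketch is illustrative rather than the paper's exact algorithm. It assumes, in the BEAGLE style, a fixed random "environment" vector per word; a word's representation is the sum of its neighbors' environment vectors (the "sum of random vectors"), plus a term binding each neighbor to its relative position via an elementwise product (the "pointwise products", standing in for BEAGLE's convolutions). The function name, the window size, and the per-offset position vectors are all assumptions.

import numpy as np

def random_projection_vectors(corpus, dim=1024, window=2, seed=0):
    """Hypothetical sketch of the abstract's recipe: sum random vectors
    of context words, plus pointwise products encoding word order."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for sentence in corpus for w in sentence})
    # Fixed random vectors: one per vocabulary word, one per relative offset.
    env = {w: rng.standard_normal(dim) / np.sqrt(dim) for w in vocab}
    pos = {k: rng.standard_normal(dim) / np.sqrt(dim)
           for k in range(-window, window + 1) if k != 0}
    mem = {w: np.zeros(dim) for w in vocab}
    for sentence in corpus:
        for i, w in enumerate(sentence):
            for k in range(-window, window + 1):
                j = i + k
                if k == 0 or j < 0 or j >= len(sentence):
                    continue
                mem[w] += env[sentence[j]]           # sum of random vectors
                mem[w] += env[sentence[j]] * pos[k]  # pointwise product (order)
    return mem

# Usage sketch: analogies would then be answered by vector arithmetic,
# e.g. argmax over w of cos(mem[w], mem["king"] - mem["man"] + mem["woman"]).
corpus = [["the", "king", "rules"], ["the", "queen", "rules"]]
vecs = random_projection_vectors(corpus, dim=256)

Note that, for fixed random vectors, the context term is linear in the co-occurrence counts, so the construction can be read as multiplying a very high-dimensional count vector by a fixed random matrix; this is presumably the sense in which the abstract interprets the method as dimension reduction via random projection.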

