Neural Random Projections for Language Modelling

07/02/2018
by Davide Nunes, et al.

Neural network-based language models deal with data sparsity problems by mapping the large discrete space of words into a smaller continuous space of real-valued vectors. Because the model learns distributed vector representations for words, each training sample informs it about a combinatorial number of other patterns. We exploit the sparsity in natural language even further by encoding each unique input word using a reduced sparse random representation. In this paper, we propose an encoder for discrete inputs that uses random projections, allowing language models to be learned with significantly smaller parameter spaces than comparable neural network architectures. Random projections also eliminate the dependency between a neural network architecture and the size of a pre-established dictionary. We investigate the properties of our encoding mechanism empirically by evaluating its performance on the widely used Penn Treebank corpus, using several configurations of baseline feedforward neural network models. We show that guaranteeing approximately equidistant inner products between representations of unique discrete inputs is sufficient to give the neural network model enough information to learn useful distributed representations for these inputs. By not requiring prior enumeration of the lexicon, random projections allow us to accommodate the dynamic, open-ended character of natural languages.
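The encoding mechanism lends itself to a short illustration. Below is a minimal sketch, not the paper's exact encoder: each word is mapped on demand to a sparse ternary random vector by seeding a random generator with a hash of the word, so the lexicon never needs to be enumerated in advance. The dimensionality K, the number of nonzero entries S, and the hash-based seeding are illustrative assumptions.

```python
# Minimal sketch of a sparse random word encoder (illustrative; not the
# authors' exact scheme). Each word deterministically receives a sparse
# ternary vector without any pre-established dictionary.
import hashlib
import numpy as np

K = 1000  # dimensionality of the projection space (assumed)
S = 10    # number of nonzero +/-1 entries per word code (assumed)

def word_code(word: str, k: int = K, s: int = S) -> np.ndarray:
    """Deterministic sparse random representation for a word."""
    # Seed the RNG from a hash of the word: no vocabulary enumeration needed.
    seed = int.from_bytes(hashlib.sha256(word.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    vec = np.zeros(k)
    idx = rng.choice(k, size=s, replace=False)  # positions of nonzeros
    vec[idx] = rng.choice([-1.0, 1.0], size=s)  # random signs
    return vec

a, b = word_code("language"), word_code("model")
print(a @ a, b @ b)  # self inner products are constant (both equal S)
print(a @ b)         # cross inner product is near 0 with high probability
```

Because self inner products are constant and cross inner products concentrate near zero, distinct word codes are approximately equidistant, which is the property the abstract identifies as sufficient for learning useful distributed representations.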
