HyperEmbed: Tradeoffs Between Resources and Performance in NLP Tasks with Hyperdimensional Computing enabled Embedding of n-gram Statistics

by Pedro Alonso et al.

Recent advances in deep learning have led to significant performance gains on several NLP tasks; however, the models have become increasingly computationally demanding. This paper therefore targets computationally efficient algorithms for NLP tasks. In particular, it investigates distributed representations of the n-gram statistics of texts, formed using a hyperdimensional-computing-enabled embedding. These representations then serve as features that are fed to standard classifiers. We investigate the applicability of the embedding on one large and three small standard classification datasets using nine classifiers. The embedding achieved on-par F1 scores while reducing time and memory requirements several-fold compared to conventional n-gram statistics; e.g., for one classifier on a small dataset, memory was reduced 6.18 times, while training and testing were sped up 4.62 and 3.84 times, respectively. For many classifiers on the large dataset, memory was reduced about 100 times, and training and testing were sped up more than 100 times. More importantly, distributed representations formed via hyperdimensional computing break the strict dependency between the dimensionality of the representation and the parameters of the n-gram statistics, thus opening room for tradeoffs.
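To make the idea concrete, a common way to embed n-gram statistics with hyperdimensional computing is to assign each character a random bipolar hypervector, bind the shifted character vectors of each n-gram together, and bundle (sum) all n-gram vectors into one fixed-size representation. The sketch below illustrates this general scheme under those assumptions; the function name, parameters, and defaults are illustrative and not taken from the paper:

```python
import numpy as np

def hd_embed(text, n=3, dim=1000, seed=0):
    """Minimal sketch of HD-computing embedding of character n-gram statistics.

    Each character gets a random bipolar (+1/-1) hypervector; an n-gram
    vector is formed by binding (elementwise product) cyclically shifted
    character vectors, and all n-gram vectors are bundled by summation.
    The dimensionality `dim` is fixed regardless of how many distinct
    n-grams occur, which is the source of the memory savings.
    """
    rng = np.random.default_rng(seed)
    item_memory = {}  # lazily assigned random hypervector per character

    def item(ch):
        if ch not in item_memory:
            item_memory[ch] = rng.choice([-1, 1], size=dim)
        return item_memory[ch]

    embedding = np.zeros(dim)
    for i in range(len(text) - n + 1):
        ngram = np.ones(dim)
        for j, ch in enumerate(text[i:i + n]):
            # Encode the character's position inside the n-gram by rolling.
            ngram *= np.roll(item(ch), j)
        embedding += ngram  # bundle (superpose) all n-gram vectors
    return embedding
```

The resulting vector can be fed directly to a standard classifier in place of an explicit (and much higher-dimensional) n-gram count vector; `dim` can be tuned independently of the n-gram order `n`, which is the tradeoff the abstract refers to.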




