Efficient softmax approximation for GPUs

09/14/2016 · by Edouard Grave, et al.

We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computation time. Our approach further reduces the computational time by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. The code of our method is available at https://github.com/facebookresearch/adaptive-softmax.
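To make the clustered-softmax idea above concrete, here is a minimal sketch using PyTorch's built-in AdaptiveLogSoftmaxWithLoss module, which implements this kind of frequency-based approximation. The vocabulary size, hidden dimension, and cutoff values below are illustrative placeholders, not numbers from the paper.

# Sketch of using a clustered ("adaptive") softmax via PyTorch's
# nn.AdaptiveLogSoftmaxWithLoss. Sizes and cutoffs are illustrative.
import torch
import torch.nn as nn

vocab_size = 100_000   # illustrative vocabulary size
hidden_size = 512      # illustrative hidden dimension of the language model

# Cutoffs partition the vocabulary by frequency: the most frequent words form
# a small head cluster, while rarer words fall into larger tail clusters whose
# projections are shrunk (div_value), reducing the expected computation.
adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_size,
    n_classes=vocab_size,
    cutoffs=[2_000, 10_000, 50_000],
    div_value=4.0,
)

# Fake batch of hidden states from a language model and target word ids.
hidden = torch.randn(32, hidden_size)
targets = torch.randint(0, vocab_size, (32,))

output, loss = adaptive_softmax(hidden, targets)  # output: log-probs of the targets
loss.backward()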

Code Repositories

adaptive-softmax

Implements an efficient softmax approximation as described in the paper "Efficient softmax approximation for GPUs" (http://arxiv.org/abs/1609.04309)

tf-adaptive-softmax-lstm-lm

Experimental results of LSTM language models on PTB (Penn Treebank) and GBW (Google Billion Word) using AdaptiveSoftmax in TensorFlow.

adaptive-softmax-pytorch

Adaptive Softmax implementation for PyTorch

AdaptiveSoftmax

An implementation of Adaptive Softmax in PyTorch.