Pointer Sentinel Mixture Models

by Stephen Merity, et al.

Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words, even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models, which can either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state-of-the-art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. To evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora, we also introduce the freely available WikiText corpus.
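The core idea in the abstract can be sketched in a few lines: a single softmax is taken over the pointer attention scores plus a learned sentinel score, so the sentinel's share of that softmax acts as the gate toward the standard vocabulary softmax. The sketch below is a simplified NumPy illustration under stated assumptions (the score shapes, variable names, and the helper `softmax` are ours, not from the paper); the real model computes these scores with an LSTM and trains everything end to end.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_sentinel_mixture(vocab_logits, ptr_scores, sentinel_score,
                             context_ids, vocab_size):
    """Mix a pointer distribution over recent context words with a
    standard softmax classifier, gated by a sentinel.

    vocab_logits:   (vocab_size,) logits of the softmax classifier
    ptr_scores:     (L,) attention scores over the last L context words
    sentinel_score: scalar score for the sentinel "use the vocab softmax"
    context_ids:    (L,) vocabulary ids of the last L context words
    """
    # One joint softmax over [pointer scores; sentinel score].
    joint = softmax(np.append(ptr_scores, sentinel_score))
    g = joint[-1]            # gate: probability mass on the sentinel
    ptr_probs = joint[:-1]   # pointer mass over context positions

    # Scatter pointer mass onto the words that appear in the context
    # (repeated words accumulate their probability).
    p_ptr = np.zeros(vocab_size)
    np.add.at(p_ptr, context_ids, ptr_probs)

    # Mixture: g + sum(ptr_probs) = 1, so the result is a distribution.
    return g * softmax(vocab_logits) + p_ptr
```

Because the sentinel participates in the same softmax as the pointer scores, the gate g is computed jointly with the pointer attention rather than by a separate switch, which is what lets the model defer to the vocabulary softmax only when no context word is a good match.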


Self-organized Hierarchical Softmax

We propose a new self-organizing hierarchical softmax formulation for ne...

Nonparametric Masked Language Modeling

Existing language models (LMs) predict tokens with a softmax over a fini...

Pointing the Unknown Words

The problem of rare and unknown words is an important issue that can pot...

Breaking the Softmax Bottleneck: A High-Rank RNN Language Model

We formulate language modeling as a matrix factorization problem, and sh...

emLam -- a Hungarian Language Modeling baseline

This paper aims to make up for the lack of documented baselines for Hung...

Improved Language Modeling by Decoding the Past

Highly regularized LSTMs that model the auto-regressive conditional fact...

Zero-th Order Algorithm for Softmax Attention Optimization

Large language models (LLMs) have brought about significant transformati...

Code Repositories


Chainer implementation of Pointer Sentinel Mixture Models

