Exploring the Limits of Language Modeling

02/07/2016
by Rafal Jozefowicz, et al.

In this work we explore recent advances in Recurrent Neural Networks for large-scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and the complex, long-term structure of language. We perform an exhaustive study of techniques such as character-level Convolutional Neural Networks and Long Short-Term Memory networks on the One Billion Word Benchmark. Our best single model significantly improves the state-of-the-art perplexity from 51.3 down to 30.0 (while reducing the number of parameters by a factor of 20), and an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
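The architecture family the abstract refers to, a character-level CNN that builds word representations which are then fed into an LSTM predicting the next word, can be sketched compactly. The snippet below is a minimal illustration rather than the released model: the class name, all layer sizes, and the toy batch are assumptions, and a plain full softmax stands in for the cheaper softmax approximations needed at One Billion Word scale.

```python
# Minimal sketch (assumed sizes, not the paper's implementation) of a
# character-CNN + LSTM language model: characters -> word vector -> LSTM ->
# softmax over the word vocabulary.
import math
import torch
import torch.nn as nn


class CharCNNLSTMLanguageModel(nn.Module):
    def __init__(self, num_chars, num_words, char_dim=16, cnn_filters=128,
                 kernel_size=5, hidden_dim=256):
        super().__init__()
        self.char_embed = nn.Embedding(num_chars, char_dim, padding_idx=0)
        # Convolution over the characters of each word.
        self.char_cnn = nn.Conv1d(char_dim, cnn_filters, kernel_size,
                                  padding=kernel_size // 2)
        self.lstm = nn.LSTM(cnn_filters, hidden_dim, batch_first=True)
        # Full softmax over the word vocabulary (kept simple for the sketch).
        self.decoder = nn.Linear(hidden_dim, num_words)

    def forward(self, char_ids):
        # char_ids: (batch, seq_len, max_word_len) integer character ids.
        b, t, w = char_ids.shape
        x = self.char_embed(char_ids.view(b * t, w))      # (b*t, w, char_dim)
        x = self.char_cnn(x.transpose(1, 2))              # (b*t, filters, w)
        x = torch.max(torch.relu(x), dim=2).values        # max-over-time pooling
        x = x.view(b, t, -1)                              # word vectors (b, t, filters)
        h, _ = self.lstm(x)                               # (b, t, hidden)
        return self.decoder(h)                            # (b, t, num_words)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = CharCNNLSTMLanguageModel(num_chars=64, num_words=1000)
    chars = torch.randint(1, 64, (2, 12, 10))   # toy batch: 2 sentences, 12 words
    targets = torch.randint(0, 1000, (2, 12))   # next-word ids to predict
    logits = model(chars)
    loss = nn.functional.cross_entropy(logits.view(-1, 1000), targets.view(-1))
    # Perplexity, the metric quoted in the abstract, is exp of the mean
    # per-word cross-entropy in nats.
    print("perplexity:", math.exp(loss.item()))
```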

Related Research

06/30/2019
Multiplicative Models for Recurrent Language Modeling
Recently, there has been interest in multiplicative recurrent neural net...

08/22/2017
Long-Short Range Context Neural Networks for Language Modeling
The goal of language modeling techniques is to capture the statistical a...

05/19/2023
Extending Memory for Language Modelling
Breakthroughs in deep learning and memory networks have made major advan...

12/23/2016
Language Modeling with Gated Convolutional Networks
The pre-dominant approach to language modeling to date is based on recur...

05/29/2021
Predictive Representation Learning for Language Modeling
To effectively perform the task of next-word prediction, long short-term...

08/27/2018
Predefined Sparseness in Recurrent Sequence Models
Inducing sparseness while training neural networks has been shown to yie...

Code Repositories

deepmark
THE Deep Learning Benchmarks