Online Representation Learning in Recurrent Neural Language Models

08/16/2015
by Marek Rei, et al.

We investigate an extension of continuous online learning in recurrent neural network language models. The model keeps a separate vector representation of the current unit of text being processed and adaptively adjusts it after each prediction. Initial experiments give promising results, indicating that the method can increase language modelling accuracy while also reducing the number of parameters needed to store the model and the computation required at each step.
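The adaptive unit-level vector described in the abstract can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes the context vector is concatenated with the word embedding at every step and updated online by a single gradient step on the per-token loss; the class name, dimensions, and the learning rate `ctx_lr` are hypothetical choices made for the sketch.

```python
# Minimal sketch (assumed design, not the paper's code): an RNN language model
# that keeps a separate context vector for the current text unit and adapts it
# after each prediction.
import torch
import torch.nn as nn

class OnlineContextRNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, ctx_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The recurrent cell sees the word embedding concatenated with the context vector.
        self.rnn = nn.GRUCell(embed_dim + ctx_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.hidden_dim = hidden_dim
        self.ctx_dim = ctx_dim

    def forward_step(self, token, hidden, ctx):
        x = torch.cat([self.embed(token), ctx], dim=-1)
        hidden = self.rnn(x, hidden)
        logits = self.out(hidden)
        return logits, hidden

def score_sentence(model, tokens, ctx_lr=0.1):
    """Accumulate the negative log-likelihood of a token sequence while
    adapting the context vector online after each prediction."""
    hidden = torch.zeros(1, model.hidden_dim)
    ctx = torch.zeros(1, model.ctx_dim, requires_grad=True)
    total_nll = 0.0
    for t in range(len(tokens) - 1):
        inp = torch.tensor([tokens[t]])
        target = torch.tensor([tokens[t + 1]])
        logits, hidden = model.forward_step(inp, hidden, ctx)
        nll = nn.functional.cross_entropy(logits, target)
        total_nll += nll.item()
        # Online update: one gradient step on the context vector only
        # (an assumed update rule for illustration).
        grad_ctx, = torch.autograd.grad(nll, ctx)
        ctx = (ctx - ctx_lr * grad_ctx).detach().requires_grad_(True)
        hidden = hidden.detach()
    return total_nll

# Example usage with a hypothetical vocabulary and token ids:
model = OnlineContextRNNLM(vocab_size=1000)
print(score_sentence(model, [5, 42, 7, 99, 3]))
```

Because only the small context vector is adjusted at prediction time, the per-step adaptation cost stays low compared with updating the full recurrent weights, which is consistent with the abstract's claim about reduced computation.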


