Input-to-Output Gate to Improve RNN Language Models

09/26/2017
by Sho Takase et al.

This paper proposes a reinforcing method that refines the output layers of existing Recurrent Neural Network (RNN) language models. We refer to our proposed method as the Input-to-Output Gate (IOG). IOG has an extremely simple structure and can thus be easily combined with any RNN language model. Our experiments on the Penn Treebank and WikiText-2 datasets demonstrate that IOG consistently boosts the performance of several different types of current topline RNN language models.
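The abstract does not spell out the gate's equations, but its description (a gate driven by the input that refines the output layer of an existing RNN language model) admits a minimal sketch: compute a gate vector from the embedding of the current input word and multiply it element-wise into the RNN's output before the softmax. The PyTorch module and names below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class IOG(nn.Module):
    # Sketch of an input-to-output gate, assuming the gate is an affine
    # transform of the current input embedding followed by a sigmoid.
    def __init__(self, embed_dim, hidden_dim):
        super().__init__()
        # The only added parameters: a map from the input embedding
        # to a gate vector matching the RNN's hidden size.
        self.gate = nn.Linear(embed_dim, hidden_dim)

    def forward(self, input_embedding, rnn_output):
        # g_t = sigmoid(W_g x_t + b_g); refined output = g_t * h_t
        g = torch.sigmoid(self.gate(input_embedding))
        return g * rnn_output

Because the module adds only one linear layer on top of an already-trained model, it could be used with a hypothetical pretrained RNN LM roughly as follows:

# emb = embedding(tokens)          # (batch, seq, embed_dim)
# h, _ = rnn(emb)                  # (batch, seq, hidden_dim)
# logits = decoder(iog(emb, h))    # gate refines h before the softmax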


Related research

11/27/2017 · Slim Embedding Layers for Recurrent Neural Language Models
Recurrent neural language models are the state-of-the-art models for lan...

02/08/2021 · The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point
This paper discusses the current critique against neural network-based N...

04/21/2017 · Improving Context Aware Language Models
Increased adaptability of RNN language models leads to improved predicti...

12/09/2022 · Decomposing a Recurrent Neural Network into Modules for Enabling Reusability and Replacement
Can we take a recurrent neural network (RNN) trained to translate betwee...

08/16/2015 · Online Representation Learning in Recurrent Neural Language Models
We investigate an extension of continuous online learning in recurrent n...

11/28/2016 · Input Switched Affine Networks: An RNN Architecture Designed for Interpretability
There exist many problem domains where the interpretability of neural ne...

03/08/2023 · Automatically Auditing Large Language Models via Discrete Optimization
Auditing large language models for unexpected behaviors is critical to p...
