Adaptive Noise Injection: A Structure-Expanding Regularization for RNN

07/25/2019
by   Rui Li, et al.

The vanilla LSTM has become one of the most promising architectures for word-level language modeling, but, like other recurrent neural networks, it is prone to overfitting, which remains a key barrier to its effectiveness. Existing noise-injection regularization methods introduce random noise of fixed intensity, which inhibits the learning of the RNN throughout the training process. In this paper, we propose a new structure-expanding regularization method called Adaptive Noise Injection (ANI), which treats the output of an extra RNN branch as a form of adaptive noise and injects it into the output of the main-branch RNN. Because the adaptive noise improves as training proceeds, its negative effect is weakened and can even turn into a positive effect that further improves the expressiveness of the main-branch RNN. As a result, ANI regularizes the RNN in the early stage of training and further promotes its training performance in the later stage. We conduct experiments on three widely used corpora: PTB, WT2, and WT103; the results verify both the regularization effect of ANI and its ability to promote training performance. Furthermore, we design a series of simulation experiments to explore the reasons behind the regularization effect of ANI, and we find that during training an LSTM equipped with ANI is more robust to parameter-update errors.
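To make the branch-as-noise idea concrete, here is a minimal sketch in PyTorch of the structure the abstract describes: a main-branch LSTM plus an extra LSTM branch whose output is injected into the main-branch output. The abstract does not specify how the two outputs are combined or how the branch is configured, so the additive injection, the single-layer LSTMs, the shared embedding, and all names and sizes below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Adaptive Noise Injection (ANI), assuming additive injection.
import torch
import torch.nn as nn

class ANILanguageModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=400, hidden_dim=1150):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Main-branch LSTM: the model whose predictions we ultimately use.
        self.main_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Extra branch whose output plays the role of adaptive "noise".
        self.branch_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embedding(tokens)          # (batch, seq, emb_dim)
        main_out, _ = self.main_rnn(emb)      # main-branch hidden states
        noise_out, _ = self.branch_rnn(emb)   # adaptive noise from the extra branch
        # Inject the branch output into the main-branch output (additive here).
        combined = main_out + noise_out
        return self.decoder(combined)         # next-token logits

# Usage example with illustrative shapes:
# model = ANILanguageModel(vocab_size=10000)
# logits = model(torch.randint(0, 10000, (32, 35)))  # -> (32, 35, 10000)
```

Early in training the branch output behaves like noise added to the main branch, acting as a regularizer; as the branch is trained jointly, its output becomes informative rather than purely disruptive, which matches the abstract's claim that the negative effect weakens and can turn positive.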


