# Tikhonov Regularization for Long Short-Term Memory Networks

It is a well-known fact that adding noise to the input data often improves network performance. While dropout may cause memory loss when applied to recurrent connections, Tikhonov regularization, which can be regarded as training with additive noise, naturally avoids this issue, though it requires deriving a regularizer for each architecture. For feedforward neural networks this derivation is straightforward, while for networks with recurrent connections and complicated layers it leads to some difficulties. In this paper, a Tikhonov regularizer is derived for Long Short-Term Memory (LSTM) networks. Although it is independent of time for simplicity, it accounts for the interaction between the weights of the LSTM unit, which in theory makes it possible to regularize a unit with complicated dependencies by using only one parameter that measures the input data perturbation. The proposed regularizer has three parameters: one to control the regularization process, and the other two to maintain computational stability while the network is being trained. The theory developed in this paper can be applied to derive such regularizers for other recurrent neural networks with Hadamard products and Lipschitz continuous functions.

## 1 Introduction

A recurrent neural network with the many-to-one architecture can be viewed as a mapping $M(I(x(t), l); \theta)$ with a set of parameters $\theta$, where $l$ is the set of indexes, each of which is regarded as the time when some input $x(t)$, $t \in l$, was taken. In this formulation, the input data are the set of inputs $\{x(t) : t \in l\}$; the output data are $y(t)$.

One way to construct the mapping is to use LSTM units. The concept was introduced in [6] as a remedy for the vanishing gradient problem and refined in [3] and in later papers (see, for instance, [4, 7]). The LSTM unit has three gates: the input, output, and forget gates, which are used to control the data flow through the unit. Its input and gates have parameters that must be trained with some regularization, which often improves network performance and prevents overfitting. Despite the tremendous performance gain for many applications and the abundance of techniques to regularize networks, including dropout [5, 11] and weight decay ($L_2$ regularization, the standard regularization approach), recurrent neural networks in general, and LSTMs in particular, may suffer from overfitting. Using these techniques with feedforward neural networks is straightforward, but their application to RNNs leads to some difficulties. First, when dropout is applied to recurrent connections, it may cause the memory loss problem that the authors of [2, 10, 13] tried to avoid. Second, though $L_2$ regularization can be used, it is not obvious how it should be applied: with one regularization parameter, or with several parameters that regularize the parts of the unit differently. Note that the latter case is more computationally intensive than the former, which leads to slower training, since the optimal values must be found.

It is feasible to address these problems by deriving a regularizer for the LSTM unit based on the Tikhonov regularization technique. In [1] it was shown that adding noise to the input data is equivalent to Tikhonov regularization. At almost the same time, the authors of [12] showed that the concept can be applied to recurrent neural networks, since the regularizer can be obtained by calculating the upper bound of the squared output disturbance $\sigma_y^2(t) = \|\hat{y}(t) - y(t)\|^2$, where $\hat{y}(t)$ is the output for the perturbed input $\hat{x}(t) = x(t) + \varepsilon(t)$ and $\varepsilon(t)$ is independent random noise with zero mean and a fixed variance.

In this paper, the upper bound is calculated to obtain a regularizer for LSTM networks in the case of a regression task with the sum-of-squares objective, though a regularizer can be derived for any other loss.

The paper is organized as follows. Section 2 describes the network architecture and the regularizer, which is derived by assessing the upper bound of the output perturbation. Section 3 provides the theoretical justification for the form of the LSTM regularizer. Section 4 describes the learning procedure with the regularizer derived previously and the relaxed optimization problem. Section 5 concludes the paper.

## 2 The Output Perturbation

It is assumed that a layered network topology with LSTM units is used. Since the particular output layer is not important for further analysis, the standard dense layer with the sigmoid function $\sigma$ is considered:

 $M(I(x(t), l); \theta) = y(t) = \sigma(W_{hy} h(t))$ (1)

The objective is to assess the upper bound of the output perturbation $\sigma_y^2(t)$, which is the result of the input perturbation $\sigma_x^2(t)$. Thus, it is possible to write the following equation for the output perturbation:

 $\sigma_y^2(t) = \|\sigma(W_{hy}\hat{h}(t)) - \sigma(W_{hy} h(t))\|^2$ (2)

Obviously, the upper bound for (2) can be found by applying the mean value theorem, so the result can be written as follows:

 $\sigma_y^2(t) = \|\operatorname{diag}(\sigma'(c))\, W_{hy}(\hat{h}(t) - h(t))\|^2,$ (3)

where $c$ denotes a vector of points, each lying somewhere between the corresponding components of $W_{hy}\hat{h}(t)$ and $W_{hy} h(t)$.

Considering that $\sigma'(\xi) \le 1/4 = \alpha$ for any point $\xi$ and that $\|W_{hy} v\| \le \|W_{hy}\|\,\|v\|$, it is possible to write the following inequality.

 $\sigma_y^2(t) \le \alpha^2 \|W_{hy}\|^2 \|\hat{h}(t) - h(t)\|^2.$ (4)

Thus, the output perturbation depends only on the LSTM layer perturbation and on the dense layer parameters. Therefore, the upper bound of the LSTM layer perturbation must be assessed to obtain the regularizer for the network.
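The bound (4) is easy to sanity-check numerically. Below is a minimal NumPy sketch (the weight shapes, the random perturbation, and the helper `sigmoid` are illustrative assumptions, not part of the paper); it verifies that the squared output disturbance never exceeds $\alpha^2\|W_{hy}\|^2\|\hat{h}(t) - h(t)\|^2$ with $\alpha = 1/4$ and the spectral norm taken for $\|W_{hy}\|$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = 0.25  # max of sigmoid'(xi) over all xi

W_hy = rng.normal(size=(4, 8))          # dense-layer weights (illustrative shapes)
h = rng.normal(size=8)                  # LSTM layer output h(t)
h_hat = h + 0.01 * rng.normal(size=8)   # perturbed output \hat h(t)

# Left side of (4): squared output disturbance
lhs = np.sum((sigmoid(W_hy @ h_hat) - sigmoid(W_hy @ h)) ** 2)
# Right side of (4): alpha^2 * ||W_hy||^2 * ||h_hat - h||^2 (spectral norm)
rhs = alpha ** 2 * np.linalg.norm(W_hy, 2) ** 2 * np.sum((h_hat - h) ** 2)
assert lhs <= rhs
```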

## 3 The LSTM unit output perturbation

The LSTM unit [3] can be described by the following equations:

 $h(t) = \tanh(s(t)) \odot o(t) = c_o(t) \odot o(t),$ (5)
 $c_o(t) = \tanh(s(t)),$ (6)
 $o(t) = \sigma(W_{ox} x(t) + W_{oh} h(t-1) + b_o) = \sigma(net_o(t)),$ (7)
 $s(t) = s(t-1) \odot f(t) + i(t) \odot c_i(t),$ (8)
 $f(t) = \sigma(W_{fx} x(t) + W_{fh} h(t-1) + b_f) = \sigma(net_f(t)),$ (9)
 $i(t) = \sigma(W_{ix} x(t) + W_{ih} h(t-1) + b_i) = \sigma(net_i(t)),$ (10)
 $c_i(t) = \tanh(W_{c_i x} x(t) + W_{c_i h} h(t-1) + b_{c_i}) = \tanh(net_{c_i}(t)),$ (11)

where $W_{uv}$, $u \in \{i, o, f, c_i\}$, $v \in \{x, h\}$, are the weight matrices, $b_u$, $u \in \{i, o, f, c_i\}$, are the biases, and $\odot$ denotes the Hadamard (element-wise) product.

Considering the equations (1) and (5)-(11), it can be concluded that the objective model has the following set of parameters:

 $\theta = \{W_{hy}\} \cup \{W_{uv} : u \in \{i, o, f, c_i\},\ v \in \{x, h\}\} \cup \{b_u : u \in \{i, o, f, c_i\}\}.$ (12)

### 3.1 The upper bound of the recurrent connection perturbation

Before finding the upper bound of $\sigma_h^2(t) = \|\hat{h}(t) - h(t)\|^2$, it is necessary to prove the following

###### Proposition 1.

Suppose that $S$ is an open set in $\mathbb{R}^2$ that contains the line segment from $[a_i, b_i]^T$ to $[\hat{a}_i, \hat{b}_i]^T$ for every $i$, and $f_1$ and $f_2$ are differentiable real-valued functions; then the upper bound of the difference of Hadamard products can be found as follows:

 $\|f_1(\hat{a}) \odot f_2(\hat{b}) - f_1(a) \odot f_2(b)\|^2 \le \|\operatorname{diag}(\|\nabla F(c)\|)\|^2 (\sigma_a^2 + \sigma_b^2),$ (13)

where $F(c_i) = f_1(c_{i,1}) f_2(c_{i,2})$ for some points $c_i \in S$, and the perturbations are $\sigma_a^2 = \|\hat{a} - a\|^2$ and $\sigma_b^2 = \|\hat{b} - b\|^2$.

###### Proof.

Applying the mean value theorem to the $i$-th component of the vector on the left side of Equation (13), it is possible to write that

 $f_1(\hat{a}_i) f_2(\hat{b}_i) - f_1(a_i) f_2(b_i) = \nabla F(c_i) \cdot ([\hat{a}_i, \hat{b}_i]^T - [a_i, b_i]^T).$

Based on this fact, the squared norm of the difference of Hadamard products of the two functions $f_1$ and $f_2$ can be rewritten as follows:

 $\|f_1(\hat{a}) \odot f_2(\hat{b}) - f_1(a) \odot f_2(b)\|^2 = \sum_{i=1}^n (f_1(\hat{a}_i) f_2(\hat{b}_i) - f_1(a_i) f_2(b_i))^2$ (14)

Therefore,

 $\|f_1(\hat{a}) \odot f_2(\hat{b}) - f_1(a) \odot f_2(b)\|^2 = \sum_{i=1}^n (\nabla F(c_i) \cdot ([\hat{a}_i, \hat{b}_i]^T - [a_i, b_i]^T))^2.$ (15)

Applying the Cauchy inequality, one can get

 $\|f_1(\hat{a}) \odot f_2(\hat{b}) - f_1(a) \odot f_2(b)\|^2 \le \sum_{i=1}^n \|\nabla F(c_i)\|^2 \|[\hat{a}_i, \hat{b}_i]^T - [a_i, b_i]^T\|^2$
 $= \sum_{i=1}^n \|\nabla F(c_i)\|^2 (\hat{a}_i - a_i)^2 + \|\nabla F(c_i)\|^2 (\hat{b}_i - b_i)^2$
 $= \|\operatorname{diag}(\|\nabla F(c)\|)(\hat{a} - a)\|^2 + \|\operatorname{diag}(\|\nabla F(c)\|)(\hat{b} - b)\|^2.$ (16)

Applying the Cauchy inequality again to the previously obtained equation, it is possible to get the desired result. ∎
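Proposition 1 can also be checked numerically for the pair $f_1 = \tanh$, $f_2 = \sigma$ that appears in (5). The sketch below (the random vectors, the perturbation scale, and the crude gradient bound $\|\nabla F(c)\|^2 \le 1 + 1/16$ are assumptions for illustration) confirms inequality (13):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n = 16
a, b = rng.normal(size=n), rng.normal(size=n)
a_hat = a + 0.05 * rng.normal(size=n)
b_hat = b + 0.05 * rng.normal(size=n)

# Left side of (13) with f1 = tanh, f2 = sigma
lhs = np.sum((np.tanh(a_hat) * sigmoid(b_hat) - np.tanh(a) * sigmoid(b)) ** 2)

# Crude bound on ||grad F(c)||^2: tanh' <= 1, sigma <= 1, tanh <= 1, sigma' <= 1/4
grad_bound_sq = 1.0 + 0.25 ** 2  # = 17/16
rhs = grad_bound_sq * (np.sum((a_hat - a) ** 2) + np.sum((b_hat - b) ** 2))
assert lhs <= rhs
```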

Considering the equations (5), (6), (7), and Proposition 1, it is possible to write the inequality for $\sigma_h^2(t)$ as follows:

 $\sigma_h^2(t) \le \beta^2 (\sigma_s^2(t) + \sigma_{net_o}^2(t)),$ (17)

where $\beta$ is assessed as

 $\beta^2 = \max_\xi \|\operatorname{diag}(\|\nabla G(\xi)\|)\|^2 \le 1 + 1/16 = 17/16,$ (18)

since $G'_{\xi_1}(\xi) = \tanh'(\xi_1)\,\sigma(\xi_2)$ and $G'_{\xi_2}(\xi) = \tanh(\xi_1)\,\sigma'(\xi_2)$; here $G(\xi) = \tanh(\xi_1)\,\sigma(\xi_2)$ and $\xi \in \Xi$, where $\Xi$ is the line segment from $[s_i(t), net_{o,i}(t)]^T$ to $[\hat{s}_i(t), \widehat{net}_{o,i}(t)]^T$.

Considering the equation (17), it is possible to state that in order to minimize $\sigma_h^2(t)$, it is necessary to assess the following two perturbations:

 $\sigma_{net_o}^2(t) = \|\widehat{net}_o(t) - net_o(t)\|^2$ (19)

and

 $\sigma_s^2(t) = \|\hat{s}(t) - s(t)\|^2.$ (20)

### 3.2 The upper bound of the output gate perturbation

The upper bound of the output gate perturbation can be assessed by using the following

###### Proposition 2.

It holds that

 $\sigma_{net_o}(t) \le \dfrac{\beta \|W_{oh}\|\, \sigma_s(t) + \|W_{ox}\|\, \sigma_x(t)}{\exp(1 - \beta \|W_{oh}\|)},$

where $\sigma_x(t) = \|\hat{x}(t) - x(t)\|$ is the input perturbation and $\beta$ is defined in (18).

###### Proof.

Let $\delta_x(t)$ and $\delta_h(t-1)$ be the following differences: $\delta_x(t) = \hat{x}(t) - x(t)$, $\delta_h(t-1) = \hat{h}(t-1) - h(t-1)$. Then, applying the equation (7) to (19), we can get the following result:

 $\sigma_{net_o}^2(t) = \|W_{ox}\delta_x(t) + W_{oh}\delta_h(t-1)\|^2$
 $= \|W_{ox}\delta_x(t) + W_{oh}(\hat{c}_o(t-1) \odot \hat{o}(t-1) - c_o(t-1) \odot o(t-1))\|^2$
 $= \|W_{ox}\delta_x(t) + W_{oh}(\hat{c}_o(t-1) \odot \sigma(\widehat{net}_o(t-1)) - c_o(t-1) \odot \sigma(net_o(t-1)))\|^2$ (21)

Considering the equation (7) and the following fact:

 $\dfrac{d\, net_o(t)}{dt} = \lim_{\tau \to 0} \dfrac{net_o(t+\tau) - net_o(t)}{\tau},$ (22)

one can write the following dynamic equation for some $\tau$ [12]:

 $\dfrac{d\, net_o(t)}{dt} = \dfrac{W_{oh} h(t) - net_o(t) + W_{ox} x(t') + b_o}{\tau},$ (23)

where $t' = t + \tau$.

If $\sigma_{net_o}^2(t) = \|\widehat{net}_o(t) - net_o(t)\|^2$,

then its derivative can be estimated as follows:

 $\dfrac{d\sigma_{net_o}^2(t)}{dt} = 2[\widehat{net}_o(t) - net_o(t)]^T \cdot \left[\dfrac{d\,\widehat{net}_o(t)}{dt} - \dfrac{d\, net_o(t)}{dt}\right]$ (24)

Therefore, considering that the bias $b_o$ cancels out and denoting $\delta_{net_o}(t) = \widehat{net}_o(t) - net_o(t)$, it is possible to get the following equation for the derivative of the difference:

 $\dfrac{d\delta_{net_o}(t)}{dt} = \lim_{\tau \to 0} \dfrac{W_{oh}\delta_h(t) - \delta_{net_o}(t) + W_{ox}\delta_x(t')}{\tau},$ (25)

where $t' = t + \tau$.

Thus, applying the equation (25) to (24), the latter can be rewritten as

 $\dfrac{d\sigma_{net_o}^2(t)}{dt} = \lim_{\tau \to 0} \dfrac{2(A + B - C)}{\tau},$ (26)

where $A = \delta_{net_o}(t)^T W_{oh}\delta_h(t)$, $B = \delta_{net_o}(t)^T W_{ox}\delta_x(t')$, and $C = \|\delta_{net_o}(t)\|^2 = \sigma_{net_o}^2(t)$.

The upper bound of (26) can be found in the following three steps. First, considering the equation (5), Proposition 1, Hölder's inequality, and the fact that $\sqrt{x + y} \le \sqrt{x} + \sqrt{y}$, the upper bound of $A$ can be obtained as

 $\|A\| \le \beta \|W_{oh}\|\, \sigma_{net_o}(t)(\sigma_s(t) + \sigma_{net_o}(t)).$ (27)

Second, the upper bound of $B$ is

 $\|B\| \le \|W_{ox}\|\, \sigma_{net_o}(t)\, \sigma_x(t').$ (28)

Thus, after some simplifications, the upper bound of (26) can be written as

 $\dfrac{d\sigma_{net_o}^2(t)}{dt} \le 2\sigma_{net_o}(t)\big(a\,\sigma_{net_o}(t) + b\,\sigma_s(t) + c\,\sigma_x(t')\big),$ (29)

where $a = (\beta\|W_{oh}\| - 1)/\tau$, $b = \beta\|W_{oh}\|/\tau$, and $c = \|W_{ox}\|/\tau$ for some small $\tau > 0$.

Therefore,

 $\dfrac{d\sigma_{net_o}(t)}{dt} \le a\,\sigma_{net_o}(t) + b\,\sigma_s(t) + c\,\sigma_x(t').$ (30)

Let us assume that the input perturbation $\sigma_x(t)$ and the memory perturbation $\sigma_s(t)$ are either constants or change more slowly than $\sigma_{net_o}(t)$. Thus, the equation (30) can be rewritten as follows:

 $\dfrac{d\sigma_{net_o}(t)}{dt} \le a\,\sigma_{net_o}(t) + d,$ (31)

where $d = b\,\sigma_s(t) + c\,\sigma_x(t')$.

Applying the Grönwall inequality (see, for instance, [9]) to the equation (31) for $t \in [0, \tau]$, one can get the upper bound of $\sigma_{net_o}(t)$:

 $\sigma_{net_o}(t) \le \tau\big(b\,\sigma_s(t) + c\,\sigma_x(t)\big)\exp(\tau a).$ (32)

Substituting the previously defined constants, we end up with the upper bound of $\sigma_{net_o}(t)$:

 $\sigma_{net_o}(t) \le \dfrac{\beta\|W_{oh}\|\,\sigma_s(t) + \|W_{ox}\|\,\sigma_x(t)}{\exp(1 - \beta\|W_{oh}\|)},$ (33)

which proves the proposition. ∎

It should be noted that, for the purpose of computational stability, it is assumed that $\beta\|W_{oh}\| \le 1$.
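The Grönwall step in the proof replaces the differential inequality (31) with the explicit solution of the corresponding extremal ODE. A small forward-Euler simulation (the constants $a$, $d$, and the initial value are illustrative, with $a < 0$ mirroring the stability assumption $\beta\|W_{oh}\| \le 1$) confirms that the closed-form bound matches the extremal trajectory:

```python
import math

# Differential inequality d(sigma)/dt <= a*sigma + d, with a < 0
# (this mirrors the stability assumption beta*||W_oh|| <= 1).
a, d = -0.5, 0.3      # illustrative constants
sigma0 = 0.1          # initial perturbation sigma(0)
dt, T = 1e-3, 2.0     # Euler step and time horizon

# Forward-Euler integration of the extremal ODE d(sigma)/dt = a*sigma + d.
sigma, t = sigma0, 0.0
while t < T - 1e-12:
    sigma += dt * (a * sigma + d)
    t += dt

# Gronwall-type closed form: sigma(T) <= sigma0*exp(a*T) + (d/a)*(exp(a*T) - 1).
bound = sigma0 * math.exp(a * T) + (d / a) * (math.exp(a * T) - 1.0)
assert abs(sigma - bound) < 1e-3  # Euler tracks the extremal solution
```

Any trajectory satisfying only the inequality stays below this closed form, which is how (32) is obtained from (31).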

### 3.3 The upper bound of the memory perturbation

In order to minimize the output gate perturbation, it is necessary to take into account the memory perturbation (20). This perturbation is the result of applying the following functions to the input data of the unit: the forget gate function $f(t)$, the input gate function $i(t)$, and the unit input function $c_i(t)$. Thus, it is possible to write the following equations for the memory perturbation:

 $\sigma_s^2(t) = \|\hat{s}(t) - s(t)\|^2$ (34)
 $\hat{s}(t) = s(t-1) \odot \hat{f}(t) + \hat{i}(t) \odot \hat{c}_i(t)$ (35)
 $s(t) = s(t-1) \odot f(t) + i(t) \odot c_i(t)$ (36)

Therefore, the memory perturbation can be assessed by finding the upper bound of the following norm:

 $\sigma_s^2(t) = \|s(t-1) \odot \delta_f(t) + \delta_{i,c_i}(t)\|^2,$ (37)

where $\delta_f(t) = \hat{f}(t) - f(t)$ and $\delta_{i,c_i}(t) = \hat{i}(t) \odot \hat{c}_i(t) - i(t) \odot c_i(t)$.

Considering the equation (37), it is possible to write the following

###### Proposition 3.

It holds that

 $\sigma_s(t) \le \gamma_x\,\sigma_x(t) + \gamma_h\,\sigma_h(t),$ (38)

where $\gamma_x = \gamma_f\|W_{fx}\| + \gamma_{i,c_i}(\|W_{ix}\| + \|W_{c_i x}\|)$ and $\gamma_h = \gamma_f\|W_{fh}\| + \gamma_{i,c_i}(\|W_{ih}\| + \|W_{c_i h}\|)$.

###### Proof.

As in Proposition 2, it is possible to write the following equation for the derivative of $\sigma_s^2(t)$:

 $\dfrac{d\sigma_s^2(t)}{dt} = 2[\hat{s}(t) - s(t)]^T\left[\dfrac{d\hat{s}(t)}{dt} - \dfrac{ds(t)}{dt}\right]$ (39)

First, it is necessary to rewrite the second factor of the equation by using the following ones:

 $\dfrac{d\hat{s}(t)}{dt} = \lim_{\tau \to 0} \dfrac{s(t) \odot \hat{f}(t') + \hat{i}(t') \odot \hat{c}_i(t') - \hat{s}(t)}{\tau}$ (40)
 $\dfrac{ds(t)}{dt} = \lim_{\tau \to 0} \dfrac{s(t) \odot f(t') + i(t') \odot c_i(t') - s(t)}{\tau}$ (41)

Therefore, the second factor of (39) can be rewritten by using the equations (40) and (41) as follows:

 $\dfrac{d\delta_s(t)}{dt} = \dfrac{1}{\tau}\big(s(t) \odot \delta_f(t') + \delta_{i,c_i}(t') - \delta_s(t)\big),$ (42)

where $t' = t + \tau$, $\delta_s(t) = \hat{s}(t) - s(t)$, and $\delta_f$, $\delta_{i,c_i}$ are defined as in (37).

Considering the equation (42), the equation (39) can be rewritten as follows:

 $\dfrac{d\sigma_s^2(t)}{dt} = \dfrac{2}{\tau}\,\delta_s(t)^T\big(s(t) \odot \delta_f(t') + \delta_{i,c_i}(t')\big) - \dfrac{2}{\tau}\,\sigma_s^2(t)$ (43)

Due to the boundedness of the memory state $s(t)$, it is possible to apply Lemma 2 from [8] to get the following inequality:

 $\dfrac{d\sigma_s^2(t)}{dt} \le \dfrac{2}{\tau}\,\delta_s(t)^T\big(\delta_f(t') + \delta_{i,c_i}(t')\big) - \dfrac{2}{\tau}\,\sigma_s^2(t)$ (44)

Applying the mean value theorem, we can find an upper bound of the first difference as follows:

 $\|\delta_f(t')\| \le \gamma_f\big(\|W_{fx}\|\,\sigma_x(t') + \|W_{fh}\|\,\sigma_h(t)\big),$ (45)

where $\sigma_h(t) = \|\hat{h}(t) - h(t)\|$ is the recurrent connection perturbation and $\gamma_f = \max_\xi \sigma'(\xi) = 1/4$.

Applying Proposition 1, the upper bound of the second difference squared can be assessed as

 $\|\delta_{i,c_i}(t')\|^2 \le \gamma_{i,c_i}^2\big(\sigma_{net_i}^2(t') + \sigma_{net_{c_i}}^2(t')\big),$ (46)

where $\gamma_{i,c_i}$ is the constant given by Proposition 1 for the pair $f_1 = \sigma$, $f_2 = \tanh$.

By applying a previously used inequality ($\sqrt{x + y} \le \sqrt{x} + \sqrt{y}$), it is possible to write that

 $\|\delta_{i,c_i}(t')\| \le \gamma_{i,c_i}\big(\sigma_x(t')(\|W_{ix}\| + \|W_{c_i x}\|) + \sigma_h(t)(\|W_{ih}\| + \|W_{c_i h}\|)\big)$ (47)

Therefore, the upper bound of (39) is

 $\dfrac{d\sigma_s^2(t)}{dt} \le \dfrac{2}{\tau}\big(\sigma_s(t)(\gamma_x\,\sigma_x(t') + \gamma_h\,\sigma_h(t)) - \sigma_s^2(t)\big),$ (48)

where $\gamma_x = \gamma_f\|W_{fx}\| + \gamma_{i,c_i}(\|W_{ix}\| + \|W_{c_i x}\|)$ and $\gamma_h = \gamma_f\|W_{fh}\| + \gamma_{i,c_i}(\|W_{ih}\| + \|W_{c_i h}\|)$.

Applying Lemma 2 from [8], the upper bound of $d\sigma_s(t)/dt$ can be calculated as follows:

 $\dfrac{d\sigma_s(t)}{dt} \le \dfrac{1}{\tau}\big(\gamma_s - \sigma_s(t)\big) \le \dfrac{1}{\tau}\gamma_s,$ (49)

where $\gamma_s = \gamma_x\,\sigma_x(t') + \gamma_h\,\sigma_h(t)$.

Assuming that $\sigma_x(t)$ and $\sigma_h(t)$ are either constants or change more slowly than $\sigma_s(t)$, one can get the following result for the interval $[t, t + \tau]$:

 $\sigma_s(t) \le \gamma_s,$ (50)

which proves the proposition. ∎

### 3.4 The upper bound of the output perturbation

Based on Proposition 3, it is possible to rewrite the inequality (17) as follows:

 $\sigma_h^2(t) \le \rho_x^2(\theta)\,\sigma_x^2(t) + \rho_h^2(\theta)\,\sigma_h^2(t),$ (51)

where $\rho_x(\theta)$ and $\rho_h(\theta)$ are independent of the time variable and can be calculated from the parameters of the model only:

 $\rho_x(\theta) = \sqrt{2}\left(\bar\gamma_x + \dfrac{\sqrt{2}\,(\beta\bar\gamma_x\|W_{oh}\| + \|W_{ox}\|)}{\exp(1 - \beta\|W_{oh}\|)}\right),$ (52)
 $\rho_h(\theta) = \sqrt{2}\left(\bar\gamma_h + \dfrac{\beta\bar\gamma_h\|W_{oh}\|}{\exp(1 - \beta\|W_{oh}\|)}\right),$ (53)

where $\bar\gamma_x$ and $\bar\gamma_h$ are the constants obtained from $\gamma_x$, $\gamma_h$, and $\beta$ by combining (17) and (38).

After some simplifications, it is possible to conclude that

 $\sigma_h^2(t) \le \dfrac{\rho_x^2(\theta)}{1 - \rho_h^2(\theta)}\,\sigma_x^2(t), \quad \text{s.t. } \rho_h^2(\theta) < 1,\ \beta\|W_{oh}\| \le 1.$ (54)

## 4 The Learning Procedure

Considering (54) and the upper bound of the output perturbation

 $\sigma_y^2(t) \le \|W_{hy}\|^2\,\dfrac{\rho_x^2(\theta)}{1 - \rho_h^2(\theta)}\,\sigma_x^2(t),$ (55)

the regularizer can be written as follows:

 $R(\theta) = \lambda\,\|W_{hy}\|^2\,\dfrac{\rho_x^2(\theta)}{1 - \rho_h^2(\theta)},$ (56)

where $\lambda$ is a constant that measures the degree of the input perturbation.

It is possible to relax this problem to get the following objective function:

 $L(\theta) = \mathrm{MSE}(\theta) + R(\theta) + \lambda_1\big(\rho_h^2(\theta) - 1\big)_+ + \lambda_2\big(\beta\|W_{oh}\| - 1\big)_+,$ (57)

where $(x)_+ = \max(0, x)$, and $\lambda$, $\lambda_1$, and $\lambda_2$ are the parameters that must be assessed during the training procedure based on the model evaluation criterion.

The complex regularizer, which is the right part of (57), has three parameters: $\lambda$, which is the main parameter of the regularization, and $\lambda_1$, $\lambda_2$, which are used to maintain computational stability during the training.
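A minimal NumPy sketch of evaluating the relaxed objective (57) is given below. It is an illustration under stated assumptions, not the paper's reference implementation: it takes the spectral norm for every matrix norm, uses the bound value from (18) for $\beta$, treats the $\bar\gamma$ constants as precomputed inputs, and all function names and shapes are hypothetical:

```python
import numpy as np

BETA = np.sqrt(17.0) / 4.0  # beta from (18): beta^2 <= 17/16 (assumed value)

def spec(W):
    """Spectral norm, assumed here for every matrix norm in the bounds."""
    return np.linalg.norm(W, 2)

def rho_x(W_oh, W_ox, gx_bar):
    """rho_x(theta) from (52); gx_bar plays the role of gamma_bar_x."""
    e = np.exp(1.0 - BETA * spec(W_oh))
    return np.sqrt(2.0) * (gx_bar + np.sqrt(2.0)
                           * (BETA * gx_bar * spec(W_oh) + spec(W_ox)) / e)

def rho_h(W_oh, gh_bar):
    """rho_h(theta) from (53); gh_bar plays the role of gamma_bar_h."""
    e = np.exp(1.0 - BETA * spec(W_oh))
    return np.sqrt(2.0) * (gh_bar + BETA * gh_bar * spec(W_oh) / e)

def objective(mse, W_hy, W_oh, W_ox, gx, gh, lam, lam1, lam2):
    """Relaxed objective (57): MSE + R(theta) + hinge penalties for the constraints."""
    rx, rh = rho_x(W_oh, W_ox, gx), rho_h(W_oh, gh)
    reg = lam * spec(W_hy) ** 2 * rx ** 2 / max(1.0 - rh ** 2, 1e-12)
    pen1 = lam1 * max(rh ** 2 - 1.0, 0.0)            # keeps rho_h^2 < 1
    pen2 = lam2 * max(BETA * spec(W_oh) - 1.0, 0.0)  # keeps beta*||W_oh|| <= 1
    return mse + reg + pen1 + pen2
```

Minimizing such an objective with any gradient-based optimizer would realize the training procedure described above, with only $\lambda$, $\lambda_1$, and $\lambda_2$ left to tune.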

## 5 Conclusion

In this paper, a Tikhonov regularizer is derived for the LSTM unit by finding the upper bound of the output perturbation, which is the difference between the actual output of the network and the output observed when noise is added to the inputs of the network. The regularizer has three parameters: the first one measures the degree of the input perturbation and thus controls the regularization process, while the other two are used to maintain the computational stability of the regularization. The regularizer can be used to approach the overfitting problem in LSTM networks by taking into account not only the weights of the gates independently, but also the interaction between them as parts of the complex LSTM structure. The mathematical justification of the proposed derivation is provided, which makes it possible to obtain regularizers for other architectures.

## References

• [1] Chris M. Bishop. Training with Noise is Equivalent to Tikhonov Regularization. Neural Computation, 7(1):108–116, jan 1995.
• [2] Yarin Gal and Zoubin Ghahramani. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. 2016.
• [3] Felix A. Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to Forget: Continual Prediction with LSTM. Neural Computation, 12(10):2451–2471, oct 2000.
• [4] Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602–610, jul 2005.
• [5] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. jul 2012.
• [6] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, nov 1997.
• [7] R. Jozefowicz, W. Zaremba, and I. Sutskever. An empirical exploration of recurrent network architectures. 2015.
• [8] B. G. Pachpatte. Comparison Theorems Related to a Certain Inequality Used in the Theory of Differential Equations. Soochow Journal of Mathematics, 22(3):383–394, 1996.
• [9] B. G. Pachpatte. Inequalities for differential and integral equations. Academic Press, 1998.
• [10] Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent Dropout without Memory Loss. 2016.
• [11] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
• [12] Lizhong Wu and John Moody. A Smoothing Regularizer for Feedforward and Recurrent Neural Networks. Neural Computation, 8(3):461–489, apr 1996.
• [13] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent Neural Network Regularization. 27(3):100, 2014.