 # Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise

The Neural Ordinary Differential Equation (Neural ODE) has been proposed as a continuous approximation to the ResNet architecture. Some commonly used regularization mechanisms in discrete neural networks (e.g., dropout, Gaussian noise) are missing in current Neural ODE networks. In this paper, we propose a new continuous neural network framework called the Neural Stochastic Differential Equation (Neural SDE) network, which naturally incorporates various commonly used regularization mechanisms based on random noise injection. Our framework can model various types of noise injection frequently used in discrete networks for regularization purposes, such as dropout and additive/multiplicative noise in each block. We provide theoretical analysis explaining the improved robustness of Neural SDE models against input perturbations/adversarial attacks. Furthermore, we demonstrate that the Neural SDE network can achieve better generalization than the Neural ODE and is more resistant to adversarial and non-adversarial input perturbations.


## 1 Introduction

Residual neural networks (ResNet)  are composed of multiple residual blocks transforming the hidden states according to:

$$h_{n+1} = h_n + f(h_n; w_n), \qquad (1)$$

where $h_n$ is the input to the $n$-th layer and $f(h_n; w_n)$ is a non-linear function parameterized by $w_n$. Recently, a continuous approximation to the ResNet architecture has been proposed , where the evolution of the hidden states can be described as a dynamic system obeying the equation:

$$h_t = h_s + \int_s^t f(h_\tau, \tau; w)\, d\tau, \qquad (2)$$

where $f(h_\tau, \tau; w)$ is the continuous form of the nonlinear function $f(h_n; w_n)$; $h_s$ and $h_t$ are hidden states at two different times $s$ and $t$. A standard ODE solver can be used to solve for all the hidden states and final states (the output of the neural network), starting from an initial state (the input to the neural network). The continuous neural network described in (2) exhibits several advantages over its discrete counterpart described in (1), in terms of memory efficiency, parameter efficiency, explicit control of the numerical error of the final output, etc.
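To make the correspondence between (1) and (2) concrete, here is a minimal sketch of a fixed-step Euler solver for (2); the dynamics `f`, the weight matrix `w`, and the step count are hypothetical stand-ins, not the architecture used later in the paper.

```python
import numpy as np

def f(h, t, w):
    # Hypothetical shared dynamics: one tanh "residual branch",
    # reused at every time t as in Neural ODE.
    return np.tanh(w @ h)

def odenet_forward(h0, w, t0=0.0, t1=1.0, n_steps=100):
    # Fixed-step Euler discretization of (2):
    #   h_{t+dt} = h_t + dt * f(h_t, t; w),
    # which recovers the ResNet update (1) when dt = 1.
    dt = (t1 - t0) / n_steps
    h, t = h0.copy(), t0
    for _ in range(n_steps):
        h = h + dt * f(h, t, w)
        t += dt
    return h

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)) / 2
h0 = rng.normal(size=4)
out = odenet_forward(h0, w)
```

An adaptive solver would replace the fixed `dt` with error-controlled steps; the point here is only the ResNet/ODE correspondence.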

One missing component in the current Neural ODE network is the various regularization mechanisms commonly employed in discrete neural networks. These regularization techniques have been demonstrated to be crucial in reducing generalization errors, and in improving the robustness of neural networks to adversarial attacks. Many of these regularization techniques are based on stochastic noise injection. For instance, dropout  is widely adopted to prevent overfitting; injecting Gaussian random noise during the forward propagation is effective in improving generalization [4, 5] as well as robustness to adversarial attacks [6, 7]. However, these regularization methods in discrete neural networks are not directly applicable to Neural ODE network, because Neural ODE network is a deterministic system.

Our work attempts to incorporate the above-mentioned stochastic noise injection based regularization mechanisms to the current Neural ODE network, to improve the generalization ability and the robustness of the network. In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE) network, which models stochastic noise injection by stochastic differential equations (SDE). In this new framework, we can employ existing techniques from the stability theory of SDE to study the robustness of neural networks. Our results provide theoretical insights to understanding why introducing stochasticity during neural network training and testing leads to improved robustness against adversarial attacks. Furthermore, we demonstrate that, by incorporating the noise injection regularization mechanism to the continuous neural network, we can reduce overfitting and achieve lower generalization error. For instance, on the CIFAR-10 dataset, we observe that the new Neural SDE can improve the test accuracy of the Neural ODE from 81.63% to 84.55%, with other factors unchanged. Our contributions can be summarized as follows:


• We propose a new Stochastic Differential Equation (SDE) framework to incorporate randomness in continuous neural networks. The proposed random noise injection can be used as a drop-in component in any continuous neural network. Our Neural SDE framework can model various types of noise widely used for regularization purposes in discrete networks, such as dropout (Bernoulli type) and Gaussian noise.

• Training the new SDE network requires a different backpropagation approach from the Neural ODE network. We develop a new efficient backpropagation method, rooted in stochastic control theory, to calculate the gradient and to train the Neural SDE network in a scalable way.

• We carry out a theoretical analysis of the stability conditions of the Neural SDE network, to prove that the randomness introduced in the Neural SDE network can stabilize the dynamical system, which helps improve the robustness and generalization ability of the neural network.

• We verify by numerical experiments that stochastic noise injection in the SDE network can successfully regularize the continuous neural network models, and the proposed Neural SDE network achieves better robustness and improves generalization performance.

#### Notations:

Throughout this paper, we use $h_t$ to denote the hidden states in a neural network, where $h_0$ is the input (also called the initial condition) and $y$ is the label. The residual block with parameters $w$ can be written as a nonlinear transform $f(h_t, t; w)$. We assume the integration is always taken from $0$ to $T$. $B_t$ is an $m$-dimensional Brownian motion. $G(h_t, t; v)$ is the diffusion matrix parameterized by $v$. Unless stated explicitly, we use $\|\cdot\|$ to represent the $\ell_2$-norm for vectors and the Frobenius norm for matrices.

## 2 Related work

Our work is inspired by the success of the recent Neural ODE network, and we seek to improve the generalization and robustness of Neural ODE, by adding regularization mechanisms crucial to the success of discrete networks. Regularization mechanisms such as dropout cannot be easily incorporated in the Neural ODE due to its deterministic nature.

#### Neural ODE

The basic idea of Neural ODE was discussed in the previous section; here we briefly review the relevant literature. The idea of formulating ResNet as a dynamic system was discussed in . A framework was proposed to link existing deep architectures with discretized numerical ODE solvers , and was shown to be parameter efficient. These networks adopt a layer-wise architecture – each layer is parameterized by different independent weights. The Neural ODE model  computes hidden states in a different way: it directly models the dynamics of hidden states by an ODE solver, with the dynamics parameterized by a shared model. A memory-efficient approach to compute the gradients by the adjoint method was developed, making it possible to train large, multi-scale generative networks [10, 11]. Our work can be regarded as an extension of this framework, with the purpose of incorporating a variety of noise-injection based regularization mechanisms. Stochastic differential equations in the context of neural networks have been studied before, focusing either on understanding how dropout shapes the loss landscape , or on using stochastic differential equations as a universal function approximation tool to learn the solution of high-dimensional PDEs . Instead, our work tries to explain why adding random noise boosts the stability of deep neural networks, and demonstrates the improved generalization and robustness.

#### Noisy Neural Networks

Adding random noise to different layers is a technique commonly employed in training neural networks. Dropout  randomly disables some neurons to avoid overfitting, which can be viewed as multiplying hidden states with Bernoulli random variables. Stochastic depth  randomly drops some residual blocks of a residual neural network during training. Another successful regularization for ResNet is Shake-Shake regularization , which sets a binary random variable to randomly switch between two residual blocks during training. More recently, DropBlock  was designed specifically for convolutional layers: unlike dropout, it drops small contiguous regions rather than isolated entries of the hidden states. All of the above regularization techniques are proposed to improve generalization performance, and all of them fix the network at testing time. Another line of research focuses on improving robustness to perturbations/adversarial attacks by noise injection. Among these, random self-ensemble [6, 7] adds Gaussian noise to hidden states during both training and testing time: at training time it works as a regularizer to prevent overfitting; at testing time the random noise is also helpful, which will be explained in this paper.

## 3 Neural Stochastic Differential Equation

Figure 1: Toy example. By comparing the simulations under σ=0 and σ=2.8, we see adding noise to the system can be an effective way to control x_t. Average over multiple runs is used to cancel out the volatility during the early stage.

In this section, we first introduce our proposed Neural SDE to improve the robustness of Neural ODE. Informally speaking, Neural SDE can be viewed as using randomness as a drop-in augmentation for Neural ODE, and it can include some widely used randomization layers such as dropout and the Gaussian noise layer. However, solving a Neural SDE is non-trivial, so we derive the gradients of the loss with respect to the model weights. Finally, we theoretically analyze the stability conditions of Neural SDE.

Before delving into the multi-dimensional SDE, let’s first look at a 1-d toy example to see how an SDE can solve the instability issue of an ODE. Suppose we have a simple SDE, $dx_t = x_t\,dt + \sigma x_t\,dB_t$, with $B_t$ being the standard Brownian motion. We provide a numerical simulation in Figure 1 for this process under different values of $\sigma$.

When we set $\sigma = 0$, the SDE becomes an ODE with solution $x_t = Ce^t$, where $C$ is an integration constant. If $x_0 \neq 0$ we can see that $|x_t| \to \infty$ as $t \to \infty$. Furthermore, a small perturbation in $x_0$ will be amplified exponentially through time. This clearly shows the instability of the ODE. On the other hand, if we instead make $\sigma$ large enough that $\sigma^2/2 > 1$ (so the system is a genuine SDE), we have $x_t \to 0$ almost surely.

Figure 2: Our model architecture. The initial value of the SDE is the output of a convolutional layer, and the value at time T is passed to a linear classifier after average pooling.

The toy example in Figure 1 reveals that the behavior of solution paths can change significantly after adding a stochastic term. This example is inspiring because we can control the impact of perturbations on the output by adding a stochastic term to neural networks.
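The toy example can be reproduced with the Euler–Maruyama scheme. The sketch below assumes the 1-d dynamics $dx_t = x_t\,dt + \sigma x_t\,dB_t$ (consistent with the $\sigma$ values quoted in Figure 1); step sizes, horizon, and path counts are illustrative choices.

```python
import numpy as np

def simulate(sigma, x0=1.0, T=5.0, n=5000, n_paths=200, seed=0):
    # Euler–Maruyama for dx_t = x_t dt + sigma * x_t dB_t:
    # x_{k+1} = x_k + x_k * dt + sigma * x_k * dB_k, dB_k ~ N(0, dt).
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.full(n_paths, x0)
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + x * dt + sigma * x * dB
    return x

# sigma = 0: deterministic exponential growth, x_T ≈ x0 * e^T.
# sigma = 2.8: sigma^2/2 > 1, so paths collapse toward 0 almost surely.
ode_end = simulate(0.0, n_paths=1)[0]
sde_end = np.median(np.abs(simulate(2.8)))
```

The median over paths plays the role of the multiple-run averaging mentioned in the Figure 1 caption, suppressing the early-stage volatility of individual paths.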

Figure 2 shows a sample Neural SDE model architecture, and it is the one used in our experiments. It consists of three parts: the first is a single convolution block, followed by a Neural SDE network (we will explain the details of Neural SDE in Section 3.1), and lastly a linear classifier. We put most of the trainable parameters into the second part (the Neural SDE), whereas the first/third parts mainly increase/reduce the dimension as desired. Recall that both Neural ODE and Neural SDE are dimension preserving.

### 3.1 Modeling randomness in neural networks

In the Neural ODE system (2), a slightly perturbed input state will be amplified in deep layers (as shown in Figure 1), which makes the system unstable to input perturbation and prone to overfitting. Randomness is an important component in discrete networks (e.g., dropout for regularization) to tackle this issue; however, to our knowledge, there is no existing work on adding randomness to continuous neural networks. Encoding randomness in a continuous neural network such as Neural ODE is non-trivial, as we need to consider both how to add randomness so as to guarantee robustness, and how to solve the resulting continuous system efficiently. To address these challenges, motivated by [9, 12], we propose to add a single diffusion term into the Neural ODE:

$$dh_t = f(h_t, t; w)\,dt + G(h_t, t; v)\,dB_t, \qquad (3)$$

where $B_t$ is the standard Brownian motion, a continuous-time stochastic process whose increment $B_{t+s} - B_s$ follows a Gaussian distribution with mean $0$ and variance $t$; $G(h_t, t; v)$ is a transformation parameterized by $v$. This formula is quite general, and can include many existing randomness-injection models with residual connections under different forms of $G$. As examples, we briefly list some of them below.

#### Gaussian noise injection:

Consider a simple example of (3) in which $G$ is a diagonal matrix. We can then model both additive noise, $G(h_t, t; v) = \Sigma_t$, and multiplicative noise, $G(h_t, t; v) = \Sigma_t \operatorname{diag}(h_t)$, (4) where $\Sigma_t$ is a diagonal matrix whose diagonal elements control the variance of the noise added to the hidden states. This can be viewed as a continuous approximation of noise-injection techniques in discrete neural networks. For example, the discrete version of the additive noise can be written as

$$h_{n+1} = h_n + f(h_n; w_n) + \Sigma_n z_n, \quad \text{with } \Sigma_n = \sigma_n I, \; z_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1), \qquad (5)$$

which injects Gaussian noise after each residual block. It has been shown that injecting small Gaussian noise can be viewed as a regularization in neural networks [4, 5]. Furthermore, [6, 7] recently showed that adding a slightly larger noise in one or all residual blocks can improve the adversarial robustness of neural networks. We will provide the stability analysis of (3) in Section 3.3, which provides a theoretical explanation towards the robustness of Neural SDE.
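A minimal discrete sketch of the additive-noise update (5), with a hypothetical tanh residual branch standing in for a real block:

```python
import numpy as np

def noisy_residual_block(h, f, w, sigma, rng):
    # Discrete additive-noise update (5):
    #   h_{n+1} = h_n + f(h_n; w_n) + sigma * z_n,  z_n ~ N(0, I),
    # i.e. Gaussian noise injected after the residual branch.
    z = rng.normal(size=h.shape)
    return h + f(h, w) + sigma * z

f = lambda h, w: np.tanh(w @ h)   # hypothetical residual branch
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)) / 4
h = rng.normal(size=8)
h_next = noisy_residual_block(h, f, w, sigma=0.1, rng=rng)
```

Setting `sigma=0` recovers the deterministic ResNet update (1).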

#### Dropout:

Our framework can also model the dropout layer which randomly disables some neurons in the residual blocks. Let us see how to unify dropout under our Neural SDE framework. First we notice that in the discrete case

$$h_{n+1} = h_n + f(h_n; w_n) \odot \frac{\gamma_n}{p} = h_n + f(h_n; w_n) + f(h_n; w_n) \odot \Big(\frac{\gamma_n}{p} - \mathbf{1}\Big), \qquad (6)$$

where $\gamma_n$ is a vector of i.i.d. Bernoulli($p$) random variables, $\mathbf{1}$ is the all-ones vector, and $\odot$ indicates the Hadamard (elementwise) product. Note that we divide by $p$ in (6) to maintain the same expectation. Furthermore, we have

$$\frac{\gamma_n}{p} - \mathbf{1} = \sqrt{\frac{1-p}{p}} \cdot \boxed{\sqrt{\frac{p}{1-p}}\Big(\frac{\gamma_n}{p} - \mathbf{1}\Big)} \approx \sqrt{\frac{1-p}{p}}\, z_n, \quad z_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1). \qquad (7)$$

The boxed part above is approximated by a standard normal distribution (by matching the first and second moments). The final SDE with dropout can be obtained by combining (6) with (7):

$$dh_t = f(h_t, t; w)\,dt + \sqrt{\frac{1-p}{p}}\, f(h_t, t; w) \odot dB_t. \qquad (8)$$
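The dropout-style SDE (8) can be discretized with one Euler–Maruyama step per solver interval; the sketch below is a minimal version with hypothetical drift, weights, and hyperparameters.

```python
import numpy as np

def dropout_sde_step(h, t, f, w, p, dt, rng):
    # One Euler–Maruyama step of the dropout-style SDE (8):
    #   dh_t = f(h_t, t; w) dt + sqrt((1-p)/p) * f(h_t, t; w) ⊙ dB_t,
    # where each coordinate of dB_t is N(0, dt).
    drift = f(h, t, w)
    dB = rng.normal(0.0, np.sqrt(dt), size=h.shape)
    return h + drift * dt + np.sqrt((1.0 - p) / p) * drift * dB

rng = np.random.default_rng(0)
f = lambda h, t, w: np.tanh(w @ h)   # hypothetical drift network
w = rng.normal(size=(4, 4)) / 2
h = rng.normal(size=4)
h_next = dropout_sde_step(h, 0.0, f, w, p=0.9, dt=0.01, rng=rng)
```

Note that `p = 1` (keep everything) zeroes the diffusion coefficient and recovers a plain Euler step of the Neural ODE.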

#### Others:

Some other stochastic layers can also be formulated under the Neural SDE framework, including Shake-Shake regularization  and stochastic depth . Both are regularization techniques that work very similarly to dropout.

### 3.2 Back-propagating through SDE integral

To optimize the parameters $(w, v)$, we need to back-propagate through the Neural SDE system. A straightforward solution is to rely on the autograd method derived from the chain rule. However, for a Neural SDE the chain can be fairly long: if the SDE solver discretizes the range $[0, T]$ into $N$ intervals, then the chain has $N$ nodes and the memory cost is $O(N)$. The challenging part of backpropagation for Neural SDE is thus to calculate the gradient through the SDE solver without this high memory cost. To solve this issue, we first consider the expected loss conditioned on the initial value $h_0$, denoted $L(h_0) = \mathbb{E}[\ell(h_{t_1}) \mid h_0]$. Then our goal is to calculate $\partial L/\partial w$. In fact, we have the following theorem (also called the path-wise gradient [18, 19]).

###### Theorem 3.1.

For a continuously differentiable loss $\ell$, we can obtain an unbiased gradient estimator as

$$\widehat{\frac{\partial L}{\partial w}} = \frac{\partial \ell(h_{t_1})}{\partial w} = \frac{\partial \ell(h_{t_1})}{\partial h_{t_1}} \cdot \frac{\partial h_{t_1}}{\partial w}. \qquad (9)$$

Moreover, if we define $\beta_t = \frac{\partial h_t}{\partial w}$, then $\beta_t$ follows another SDE

$$d\beta_t = \Big(\frac{\partial f(h_t, t; w)}{\partial w} + \frac{\partial f(h_t, t; w)}{\partial h_t}\beta_t\Big)\,dt + \Big(\frac{\partial G(h_t, t; v)}{\partial w} + \frac{\partial G(h_t, t; v)}{\partial h_t}\beta_t\Big)\,dB_t. \qquad (10)$$

It is easy to check that if $G \equiv 0$, then our Monte-Carlo gradient estimator (9) falls back to the exact gradient computed by back-propagation.

Similar to the adjoint method in Neural ODE, we solve (10) jointly with the original SDE dynamics (3). This process can be done iteratively without memorizing the intermediate states, which makes it more memory efficient than autograd ($O(1)$ vs. $O(N)$ memory, where $N$ is the number of steps determined by the SDE solver).
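As a sanity check of Theorem 3.1, the sketch below simulates a scalar linear model $dh = w h\,dt + v h\,dB_t$ (an assumption made purely for checkability, since its path-wise sensitivity is known in closed form) jointly with the sensitivity SDE (10), sharing one Brownian path:

```python
import numpy as np

def pathwise_grad(h0, w, v, T=1.0, n=20000, seed=1):
    # Jointly simulate the state SDE (3) and the sensitivity SDE (10) for the
    # scalar model dh = w*h dt + v*h dB. Here beta_t := dh_t/dw follows
    #   d beta = (h + w*beta) dt + v*beta dB,
    # and both equations are driven by the same Brownian increments.
    rng = np.random.default_rng(seed)
    dt = T / n
    h, beta = h0, 0.0
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        h, beta = (h + w * h * dt + v * h * dB,
                   beta + (h + w * beta) * dt + v * beta * dB)
    return h, beta

hT, betaT = pathwise_grad(1.0, 0.5, 0.3)
```

For this linear model the exact solution $h_T = h_0 \exp((w - v^2/2)T + vB_T)$ gives $\partial h_T/\partial w = T\,h_T$ pathwise, so `betaT` can be checked against `T * hT` without any autograd machinery.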

### 3.3 Robustness of Neural SDE

In this section, we theoretically analyze the stability of Neural SDE, showing that the randomness term can indeed improve the robustness of the model against small input perturbations. This also explains why noise injection can improve the robustness of discrete networks, which has been observed in the literature [6, 7]. First we need to show the existence and uniqueness of the solution to (3); we pose the following assumptions on the drift $f$ and diffusion $G$.

###### Assumption 1.

$f$ and $G$ grow at most linearly, i.e. $\|f(h, t; w)\| + \|G(h, t; v)\| \le c_1(1 + \|h\|)$ for some constant $c_1 > 0$ and all $h$, $t$.

###### Assumption 2.

$f$ and $G$ are $c_2$-Lipschitz: $\|f(h, t; w) - f(\tilde h, t; w)\| + \|G(h, t; v) - G(\tilde h, t; v)\| \le c_2\|h - \tilde h\|$ for all $h$, $\tilde h$ and $t$.

Based on the above assumptions, we can show that the SDE (3) has a unique solution $h_t$. We remark that the assumption on $f$ is quite natural and is also enforced on the original Neural ODE model; as to the diffusion matrix $G$, we have seen that for dropout, Gaussian noise injection and other random models, both assumptions are automatically satisfied as long as $f$ possesses the same regularities.

We now analyze the dynamics of the perturbation. Our analysis applies not only to the Neural SDE model but also to the Neural ODE model, by setting the diffusion term to zero. First of all, we consider initializing our Neural SDE (3) at two slightly different values $h_0$ and $h^e_0 = h_0 + \varepsilon_0$, where $\varepsilon_0$ is the perturbation with $\|\varepsilon_0\|$ small. Under the perturbed initialization $h^e_0$, the hidden state $h^e_t$ at time $t$ follows the same SDE as in (3),

$$dh^e_t = f(h^e_t, t; w)\,dt + G(h^e_t, t; v)\,dB'_t, \quad \text{with } h^e_0 = h_0 + \varepsilon_0, \qquad (11)$$

where $B'_t$ is the Brownian motion for the SDE associated with initialization $h^e_0$. It is then natural to analyze how the perturbation $\varepsilon_t = h^e_t - h_t$ evolves in the long run. Subtracting (3) from (11),

$$d\varepsilon_t = [f(h^e_t, t; w) - f(h_t, t; w)]\,dt + [G(h^e_t, t; v) - G(h_t, t; v)]\,dB_t = f_\Delta(\varepsilon_t, t; w)\,dt + G_\Delta(\varepsilon_t, t; v)\,dB_t. \qquad (12)$$

Here we made the implicit assumption that the Brownian motions $B_t$ and $B'_t$ have the same sample path for both initializations $h_0$ and $h^e_0$, i.e. $B_t = B'_t$ w.p. 1. In other words, we focus on the difference of the two random processes $h_t$ and $h^e_t$ driven by the same underlying Brownian motion, so it is valid to subtract the diffusion terms.

An important property of (12) is that it admits a trivial solution $\varepsilon_t \equiv 0$ when $\varepsilon_0 = 0$. Indeed, both the drift ($f_\Delta$) and diffusion ($G_\Delta$) are zero under this solution:

$$f_\Delta(0, t; w) = f(h_t + 0, t; w) - f(h_t, t; w) = 0, \qquad G_\Delta(0, t; v) = G(h_t + 0, t; v) - G(h_t, t; v) = 0. \qquad (13)$$

The implication of the zero solution is clear: for a neural network, if we do not perturb the input data, then the output will never change. However, the zero solution can be highly unstable, in the sense that an arbitrarily small perturbation at initialization can change the output arbitrarily badly. On the other hand, as shown below, by choosing the diffusion term properly, we can always control $\varepsilon_t$ within a small range.

In general, we cannot get a closed-form solution to a multidimensional SDE, but we can still analyze the asymptotic stability through the dynamics $f_\Delta$ and $G_\Delta$. This is essentially an extension of Lyapunov stability theory to a stochastic system. First we define the notion of stability in the stochastic case. Let $(\Omega, \mathcal{F}, \Pr)$ be a complete probability space with filtration $\{\mathcal{F}_t\}_{t \ge 0}$, and let $B_t$ be an $m$-dimensional Brownian motion defined on this probability space. We consider the SDE in Eq. (12) with initial value $\varepsilon_0$:

$$d\varepsilon_t = f_\Delta(\varepsilon_t, t)\,dt + G_\Delta(\varepsilon_t, t)\,dB_t. \qquad (14)$$

For simplicity we dropped the dependency on the parameters $w$ and $v$. We further assume $f_\Delta$ and $G_\Delta$ are both Borel measurable. We can show that if Assumptions 1 and 2 hold for $f$ and $G$, then they hold for $f_\Delta$ and $G_\Delta$ as well (see Lemma A.1 in the Appendix), so the SDE (14) admits a unique solution $\varepsilon_t$. We have the following Lyapunov stability notions.

###### Definition 3.1 (Lyapunov stability of SDE).

The solution $\varepsilon_t \equiv 0$ of (14):

• is stochastically stable if for any $\delta \in (0, 1)$ and $r > 0$, there exists a $\theta > 0$ such that $\Pr\{\|\varepsilon_t\| < r \text{ for all } t \ge 0\} \ge 1 - \delta$ whenever $\|\varepsilon_0\| < \theta$. Moreover, if for any $\delta \in (0, 1)$ there exists a $\theta > 0$ such that $\Pr\{\lim_{t \to \infty} \varepsilon_t = 0\} \ge 1 - \delta$ whenever $\|\varepsilon_0\| < \theta$, it is said to be stochastically asymptotically stable;

• is almost surely exponentially stable if $\limsup_{t \to \infty} \frac{1}{t}\log\|\varepsilon_t\| < 0$ almost surely (a.s.) for all $\varepsilon_0 \in \mathbb{R}^d$.

Note that for the first part of Definition 3.1, it is hard to quantify how strong the stability is and how fast the solution reaches equilibrium. In addition, under Assumptions 1 and 2, we have a straightforward result: $\Pr\{\varepsilon_t \ne 0 \text{ for all } t \ge 0\} = 1$ whenever $\varepsilon_0 \ne 0$, as shown in the Appendix (see Lemma A.2). That is, almost all sample paths starting from a non-zero initialization can never reach zero, due to the Brownian motion. By contrast, almost sure exponential stability implies that almost all sample paths of the solution approach zero exponentially fast. We present the following theorem on almost sure exponential stability.

###### Theorem 3.2.

If there exists a non-negative real-valued function $V(\varepsilon, t)$ defined on $\mathbb{R}^d \times [0, \infty)$ that has continuous partial derivatives

$$V_1(\varepsilon, t) \coloneqq \frac{\partial V(\varepsilon, t)}{\partial \varepsilon}, \quad V_2(\varepsilon, t) \coloneqq \frac{\partial V(\varepsilon, t)}{\partial t}, \quad V_{1,1}(\varepsilon, t) \coloneqq \frac{\partial^2 V(\varepsilon, t)}{\partial \varepsilon\, \partial \varepsilon^\top},$$

and constants $p > 0$, $c_1 > 0$, $c_2 \in \mathbb{R}$, $c_3 \ge 0$ such that the following inequalities hold:

1. $c_1\|\varepsilon\|^p \le V(\varepsilon, t)$;

2. $\mathcal{L}V(\varepsilon, t) \le c_2 V(\varepsilon, t)$, where $\mathcal{L}V \coloneqq V_2 + V_1 f_\Delta + \frac{1}{2}\operatorname{tr}\big[G_\Delta^\top V_{1,1} G_\Delta\big]$;

3. $\|V_1(\varepsilon, t)\,G_\Delta(\varepsilon, t)\|^2 \ge c_3 V(\varepsilon, t)^2$,

for all $\varepsilon \ne 0$ and $t \ge 0$. Then for all $\varepsilon_0 \in \mathbb{R}^d$,

$$\limsup_{t \to \infty} \frac{1}{t}\log\|\varepsilon_t\| \le -\frac{c_3 - 2c_2}{2p} \quad \text{a.s.} \qquad (15)$$

In particular, if $c_3 > 2c_2$, the solution is almost surely exponentially stable.

We now consider a special case, when the noise is multiplicative: $G_\Delta(\varepsilon_t, t) = \sigma\varepsilon_t$ with $\sigma > 0$. The corresponding SDE of the perturbation has the following form:

$$d\varepsilon_t = f_\Delta(\varepsilon_t, t; w)\,dt + \sigma\varepsilon_t\,dB_t. \qquad (16)$$

Note that in the deterministic case of (16), obtained by setting $\sigma = 0$, the solution may not be stable (see Figure 1). Whereas for the general case $\sigma \ne 0$, the following corollary shows that by setting $\sigma$ properly, we achieve an (almost surely) exponentially stable system.

###### Corollary 3.2.1.

For (16), if $f_\Delta$ is $L$-Lipschitz continuous w.r.t. $\varepsilon$, then (16) has a unique solution with the property $\limsup_{t \to \infty} \frac{1}{t}\log\|\varepsilon_t\| \le -\frac{\sigma^2 - 2L}{2}$ almost surely, for any $\varepsilon_0 \in \mathbb{R}^d$. In particular, if $\sigma^2 > 2L$, the solution is almost surely exponentially stable.
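The corollary can be checked numerically in the extremal linear case $f_\Delta(\varepsilon) = L\varepsilon$, where the Lyapunov exponent is exactly $L - \sigma^2/2 = -(\sigma^2 - 2L)/2$. The sketch below estimates it by simulating $\log\|\varepsilon_t\|$ directly (horizon and step count are arbitrary choices):

```python
import numpy as np

def lyapunov_exponent(L, sigma, T=50.0, n=50000, seed=0):
    # Estimate (1/T) log|eps_T| for d eps = L*eps dt + sigma*eps dB,
    # the extremal case f_Delta(eps) = L*eps of an L-Lipschitz drift.
    # By Ito's formula, d log|eps_t| = (L - sigma^2/2) dt + sigma dB_t,
    # so summing the increments is numerically stable even as eps_t -> 0.
    rng = np.random.default_rng(seed)
    dt = T / n
    increments = (L - sigma**2 / 2) * dt \
        + sigma * rng.normal(0.0, np.sqrt(dt), size=n)
    return increments.sum() / T

# With L = 1 and sigma = 2 we have sigma^2 > 2L, so the estimated
# exponent should be negative (theory: 1 - 4/2 = -1).
rate = lyapunov_exponent(L=1.0, sigma=2.0)
```

With `sigma = 0` the same routine returns the deterministic growth rate $L$, matching the unstable ODE regime of Figure 1.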

## 4 Experimental Results

In this section we show the effectiveness of our Neural SDE framework in terms of generalization, non-adversarial robustness and adversarial robustness. We use the SDE model architecture illustrated in Figure 2 in all experiments. Throughout our experiments, we set the drift $f(h, t; w)$ to be a neural network with several convolution blocks. As to the diffusion $G(h, t; v)$, we have the following choices:

• Neural ODE: drop the diffusion term entirely, $G \equiv 0$.

• Additive noise: the diffusion term is independent of $h_t$; here we simply set it to be diagonal, $G = \sigma_t I$.

• Multiplicative noise: the diffusion term is proportional to $h_t$, i.e. $G = \sigma_t \operatorname{diag}(h_t)$.

• Dropout noise: the diffusion term is proportional to the drift term $f$, i.e. $G = \sqrt{\frac{1-p}{p}} \operatorname{diag}(f(h_t, t; w))$.

Note that the last three are our proposed Neural SDE with different types of randomness, as explained in Section 3.1.
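The four diffusion choices can be summarized as a small factory returning the (diagonal, elementwise) diffusion coefficient; the function names and hyperparameter values below are hypothetical:

```python
import numpy as np

def make_diffusion(kind, f=None, sigma=0.1, p=0.9):
    # Return G(h, t) as an elementwise (diagonal) diffusion coefficient,
    # to be multiplied coordinate-wise with the Brownian increment dB.
    if kind == "ode":            # G = 0: plain Neural ODE
        return lambda h, t: np.zeros_like(h)
    if kind == "additive":       # G independent of h_t (diagonal sigma*I)
        return lambda h, t: sigma * np.ones_like(h)
    if kind == "multiplicative": # G proportional to h_t
        return lambda h, t: sigma * h
    if kind == "dropout":        # G proportional to the drift f(h, t)
        return lambda h, t: np.sqrt((1.0 - p) / p) * f(h, t)
    raise ValueError(kind)
```

Any Euler–Maruyama solver step then reads `h + f(h, t) * dt + G(h, t) * dB` regardless of which variant is selected.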

### 4.1 Generalization Performance

In the first experiment, we show that small noise helps generalization. Note, however, that our noise injection differs from randomization layers in the discrete case: for instance, a dropout layer adds Bernoulli noise at training time but is then fixed at testing time, whereas our Neural SDE model keeps the randomness at testing time and averages over multiple forward propagations.

As for datasets, we choose CIFAR-10, STL-10 and Tiny-ImageNet (downloaded from https://tiny-imagenet.herokuapp.com/) to include various sizes and numbers of classes. The experimental results are shown in Table 1. We see that for all datasets, Neural SDE consistently outperforms Neural ODE; the reason is that adding moderate noise to the models at training time acts as a regularizer and thus improves testing accuracy. On top of that, if we further keep the testing-time noise and ensemble the outputs, we obtain even better results.
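Test-time ensembling as described above is simply an average over stochastic forward passes; the sketch below uses a hypothetical noisy softmax model in place of a trained Neural SDE:

```python
import numpy as np

def ensemble_predict(stochastic_forward, x, n_samples=10):
    # Average predicted class probabilities over several stochastic forward
    # passes, mirroring Neural SDE test-time ensembling. `stochastic_forward`
    # is any callable whose output differs run to run.
    probs = np.mean([stochastic_forward(x) for _ in range(n_samples)], axis=0)
    return int(np.argmax(probs)), probs

# Hypothetical noisy "network": softmax over logits plus Gaussian noise.
rng = np.random.default_rng(0)

def noisy_net(x):
    logits = x + rng.normal(0.0, 0.5, size=x.shape)
    e = np.exp(logits - logits.max())
    return e / e.sum()

label, avg_probs = ensemble_predict(noisy_net, np.array([0.0, 1.0, 3.0]),
                                    n_samples=50)
```

Averaging probabilities (rather than hard labels) keeps the ensemble output a valid distribution.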

### 4.2 Robustness to Non-adversarial Corruptions

In this experiment, we aim at evaluating the robustness of models under non-adversarial corruptions, following the idea of . The corrupted datasets contain tens of defects in photography, including motion blur, Gaussian noise, fog, etc. For each noise type, we run Neural ODE and Neural SDE with dropout noise and gather the testing accuracy. The final results are reported as mean accuracy (mAcc) in Table 2, varying the level of corruption. Both models are trained on completely clean data, which means the corrupted images are not visible to them during the training stage, nor could they augment the training set with the same types of corruptions. From the table, we can see that Neural SDE performs better than Neural ODE in 8 out of 10 cases; for the remaining two, both models perform very closely. This shows that our proposed Neural SDE can improve the robustness of Neural ODE under non-adversarially corrupted data.

Figure 3: Comparing the robustness against ℓ2-norm constrained adversarial perturbations, on CIFAR-10 (left), STL-10 (middle) and Tiny-ImageNet (right) data. We evaluate testing accuracy with three models, namely Neural ODE, Neural SDE with multiplicative noise and dropout noise.

### 4.3 Robustness Under Adversarial Attacks

Next, we consider the performance of Neural SDE models under adversarial perturbations. Clearly, this scenario is strictly harder than the previous cases: by design, adversarial perturbations are (ignoring the suboptimality of optimization algorithms) the worst case within a small neighborhood, crafted through a constrained loss-maximization procedure, so they represent worst-case performance. In our experiment, we adopt the multi-step $\ell_2$-PGD attack , although other strong white-box attacks such as C&W  are also suitable. The experimental results are shown in Figure 3. As we can see, both Neural SDE with multiplicative noise and with dropout noise are more resistant to adversarial attacks than Neural ODE, and dropout noise outperforms multiplicative noise.
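For reference, an $\ell_2$-constrained PGD attack of the kind used here can be sketched on a toy logistic model with an analytic input gradient (the model, radius, and step size are hypothetical; a real attack would differentiate through the full network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l2_pgd(x, y, w, eps=0.5, alpha=0.1, steps=20):
    # l2-constrained PGD on a logistic model p(y=1|x) = sigmoid(w @ x):
    # repeatedly step along the normalized gradient of the loss w.r.t. the
    # input, then project back onto the l2 ball of radius eps around x.
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv)
        grad = (p - y) * w          # d(cross-entropy)/dx for this model
        x_adv = x_adv + alpha * grad / (np.linalg.norm(grad) + 1e-12)
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > eps:              # projection onto the eps-ball
            x_adv = x + delta * (eps / norm)
    return x_adv
```

The projection step is what makes the perturbation "ℓ2-norm constrained" as in Figure 3.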

### 4.4 Visualizing the perturbations of hidden states

Figure 4: Comparing the perturbations of hidden states, εt, on both ODE and SDE (we choose dropout-style noise).

In this experiment, we take a look at the perturbation $\varepsilon_t$ at each time $t$. Recall from the 1-d toy example in Figure 1 that the perturbation can be well suppressed by adding a strong diffusion term, which is also confirmed by Theorem 3.2. However, it is still questionable whether the same phenomenon exists in a deep neural network, since we cannot add very large noise during training or testing: if the noise is too large, it will also wipe out the useful features, so it is important to verify that this does not happen to our models. To this end, we first sample an input $x$ from CIFAR-10 and gather all the hidden states $h_t$ at a grid of times $t$. Then we perform a regular PGD attack  to find a perturbation $\delta$ such that $x + \delta$ is an adversarial image, feed the new data into the network again, and obtain $h^e_t$ at the same time stamps as $h_t$. Finally we plot the error $\|\varepsilon_t\| = \|h^e_t - h_t\|$ w.r.t. time $t$ (also called “network depth”), shown in Figure 4. We observe that by adding a diffusion term (dropout-style noise), the error accumulates much more slowly than in the ordinary Neural ODE model.

## 5 Conclusion

To conclude, we introduced the Neural SDE model, which stabilizes the prediction of Neural ODE by injecting stochastic noise. Our model achieves better generalization and improves robustness to both adversarial and non-adversarial input perturbations.

## Acknowledgement

We acknowledge the support by NSF IIS1719097, Intel, Google Cloud and AITRICS.

## Appendix A Proofs

We present the proofs of theorems on stability of SDE. The proofs are adapted from . We start with two crucial lemmas.

###### Lemma A.1.

If $f$ and $G$ satisfy Assumption 2, then $f_\Delta$ and $G_\Delta$ satisfy Assumptions 1 and 2.

###### Proof.

By Assumption 2 on $f$ and $G$, we obtain that for any $\varepsilon$, $\tilde\varepsilon$ and $t$,

$$\|f_\Delta(\varepsilon, t)\| + \|G_\Delta(\varepsilon, t)\| \le c_2\|\varepsilon\| \le c_2(1 + \|\varepsilon\|),$$
$$\|f_\Delta(\varepsilon, t) - f_\Delta(\tilde\varepsilon, t)\| + \|G_\Delta(\varepsilon, t) - G_\Delta(\tilde\varepsilon, t)\| \le c_2\|\varepsilon - \tilde\varepsilon\|.$$

This guarantees the uniqueness of the solution of (14). ∎

###### Lemma A.2.

For (14), whenever $\varepsilon_0 \ne 0$, we have $\Pr\{\varepsilon_t \ne 0 \text{ for all } t \ge 0\} = 1$.

###### Proof.

We prove it by contradiction. Let $\tau = \inf\{t \ge 0 : \varepsilon_t = 0\}$. If the claim is not true, there exists some $T > 0$ such that $\Pr\{\tau \le T\} > 0$. Therefore, we can find a sufficiently large constant $\theta > 1$ such that the event $\{\tau \le T\}$ intersected with $\{\|\varepsilon_t\| \le \theta \text{ for all } t \le \tau\}$ has positive probability. By Assumption 2 on $f_\Delta$ and $G_\Delta$, there exists a positive constant $K_\theta$ such that

$$\|f_\Delta(\varepsilon, t)\| + \|G_\Delta(\varepsilon, t)\| \le K_\theta\|\varepsilon\|, \quad \text{for all } \|\varepsilon\| \le \theta \text{ and } 0 \le t \le T. \qquad (17)$$

Let $V(\varepsilon, t) = \|\varepsilon\|^{-1}$. Then, for any $0 < \|\varepsilon\| \le \theta$ and $0 \le t \le T$, we have

$$\mathcal{L}V(\varepsilon, t) = -\|\varepsilon\|^{-3}\varepsilon^\top f_\Delta(\varepsilon, t) + \frac{1}{2}\big\{-\|\varepsilon\|^{-3}\|G_\Delta(\varepsilon, t)\|^2 + 3\|\varepsilon\|^{-5}\|\varepsilon^\top G_\Delta(\varepsilon, t)\|^2\big\}$$
$$\le \|\varepsilon\|^{-2}\|f_\Delta(\varepsilon, t)\| + \|\varepsilon\|^{-3}\|G_\Delta(\varepsilon, t)\|^2 \le K_\theta\|\varepsilon\|^{-1} + K_\theta^2\|\varepsilon\|^{-1} = K_\theta(1 + K_\theta)V(\varepsilon, t), \qquad (18)$$

where the first inequality comes from Cauchy–Schwarz and the last one comes from (17). For any $\delta \in (0, \|\varepsilon_0\|)$, we define the stopping time $\nu_\delta = \inf\{t \ge 0 : \|\varepsilon_t\| \notin (\delta, \theta)\}$, truncated at $T$. By Itô’s formula,

$$\mathbb{E}\big[e^{-K_\theta(1 + K_\theta)\nu_\delta}V(\varepsilon_{\nu_\delta}, \nu_\delta)\big] = V(\varepsilon_0, 0) + \mathbb{E}\int_0^{\nu_\delta} e^{-K_\theta(1 + K_\theta)s}\big[-K_\theta(1 + K_\theta)V(\varepsilon_s, s) + \mathcal{L}V(\varepsilon_s, s)\big]\,ds \le \|\varepsilon_0\|^{-1}. \qquad (19)$$

Since $V(\varepsilon_{\nu_\delta}, \nu_\delta) = \delta^{-1}$ on the event $\{\nu_\delta \le T,\, \|\varepsilon_{\nu_\delta}\| = \delta\}$ and $e^{-K_\theta(1 + K_\theta)\nu_\delta} \ge e^{-K_\theta(1 + K_\theta)T}$, (19) implies

$$\Pr\{\nu_\delta \le T,\, \|\varepsilon_{\nu_\delta}\| = \delta\} \le \delta\, e^{K_\theta(1 + K_\theta)T}\,\|\varepsilon_0\|^{-1}. \qquad (20)$$

Thus, $\Pr\{\nu_\delta \le T,\, \|\varepsilon_{\nu_\delta}\| = \delta\} \to 0$ as $\delta \to 0$. Letting $\delta \to 0$, we obtain $\Pr\{\tau \le T\} = 0$, which leads to a contradiction. ∎

### Proof of Theorem 3.2

We now prove Theorem 3.2. Clearly, (15) holds for $\varepsilon_0 = 0$ since then $\varepsilon_t \equiv 0$. For any $\varepsilon_0 \ne 0$, we have $\varepsilon_t \ne 0$ for all $t \ge 0$ almost surely by Lemma A.2. Thus, by applying Itô’s formula and condition (2), we can show that for $t \ge 0$,

$$\log V(\varepsilon_t, t) \le \log V(\varepsilon_0, 0) + c_2 t + M(t) - \frac{1}{2}\int_0^t \frac{\|V_1(\varepsilon_s, s)G_\Delta(\varepsilon_s, s)\|^2}{V(\varepsilon_s, s)^2}\,ds, \qquad (21)$$

where $M(t) = \int_0^t \frac{V_1(\varepsilon_s, s)G_\Delta(\varepsilon_s, s)}{V(\varepsilon_s, s)}\,dB_s$ is a continuous martingale with initial value $M(0) = 0$. By the exponential martingale inequality, for any arbitrary $\alpha \in (0, 1)$ and $n = 1, 2, \ldots$, we have

$$\Pr\Big\{\sup_{0 \le t \le n}\Big[M(t) - \frac{\alpha}{2}\int_0^t \frac{\|V_1(\varepsilon_s, s)G_\Delta(\varepsilon_s, s)\|^2}{V(\varepsilon_s, s)^2}\,ds\Big] > \frac{2}{\alpha}\log n\Big\} \le \frac{1}{n^2}. \qquad (22)$$

Applying the Borel–Cantelli lemma, we get that for almost all $\omega \in \Omega$, there exists an integer $n_0 = n_0(\omega)$ such that if $n \ge n_0$,

$$M(t) \le \frac{2}{\alpha}\log n + \frac{\alpha}{2}\int_0^t \frac{\|V_1(\varepsilon_s, s)G_\Delta(\varepsilon_s, s)\|^2}{V(\varepsilon_s, s)^2}\,ds, \quad \forall\, 0 \le t \le n. \qquad (23)$$

Combining (21), (23) and condition (3), we can obtain that

$$\log V(\varepsilon_t, t) \le \log V(\varepsilon_0, 0) - \frac{1}{2}\big[(1 - \alpha)c_3 - 2c_2\big]t + \frac{2}{\alpha}\log n \qquad (24)$$

for all $0 \le t \le n$ and $n \ge n_0$, almost surely. Therefore, for almost all $\omega \in \Omega$, if $n - 1 \le t \le n$ and $n \ge n_0$, we have

$$\frac{1}{t}\log V(\varepsilon_t, t) \le -\frac{1}{2}\big[(1 - \alpha)c_3 - 2c_2\big] + \frac{\log V(\varepsilon_0, 0) + \frac{2}{\alpha}\log n}{n - 1}, \qquad (25)$$

which consequently implies

$$\limsup_{t \to \infty} \frac{1}{t}\log V(\varepsilon_t, t) \le -\frac{1}{2}\big[(1 - \alpha)c_3 - 2c_2\big] \quad \text{a.s.} \qquad (26)$$

With condition (1) and the arbitrary choice of $\alpha \in (0, 1)$ (letting $\alpha \to 0$), we obtain (15).

### Proof of Corollary 3.2.1

We apply Theorem 3.2 to establish the stability of (16). Note that $f_\Delta$ is $L$-Lipschitz continuous w.r.t. $\varepsilon$ and $f_\Delta(0, t) = 0$. Then (16) has a unique solution, with $f_\Delta$ and $G_\Delta$ satisfying Assumptions 1 and 2:

$$\|f_\Delta(\varepsilon_t, t)\| + \|G_\Delta(\varepsilon_t, t)\| \le \max\{L, \sigma\}\|\varepsilon_t\| \le \max\{L, \sigma\}(1 + \|\varepsilon_t\|),$$
$$\|f_\Delta(\varepsilon_t, t) - f_\Delta(\tilde\varepsilon_t, t)\| + \|G_\Delta(\varepsilon_t, t) - G_\Delta(\tilde\varepsilon_t, t)\| \le \max\{L, \sigma\}\|\varepsilon_t - \tilde\varepsilon_t\|.$$

To apply Theorem 3.2, let $V(\varepsilon, t) = \|\varepsilon\|^2$. Then,

$$\mathcal{L}V(\varepsilon, t) = 2\varepsilon^\top f_\Delta(\varepsilon, t) + \sigma^2\|\varepsilon\|^2 \le (2L + \sigma^2)\|\varepsilon\|^2 = (2L + \sigma^2)V(\varepsilon, t),$$
$$\|V_1(\varepsilon, t)G_\Delta(\varepsilon, t)\|^2 = 4\sigma^2 V(\varepsilon, t)^2.$$

Let $p = 2$, $c_1 = 1$, $c_2 = 2L + \sigma^2$ and $c_3 = 4\sigma^2$. By Theorem 3.2, $\limsup_{t \to \infty}\frac{1}{t}\log\|\varepsilon_t\| \le -\frac{c_3 - 2c_2}{2p} = -\frac{\sigma^2 - 2L}{2}$ a.s., which finishes the proof.