On the overfly algorithm in deep learning of neural networks

07/27/2018
by   Alexei Tsygvintsev, et al.

In this paper we investigate the supervised backpropagation training of multilayer neural networks from a dynamical systems point of view. We discuss some links with the qualitative theory of differential equations and introduce the overfly algorithm to tackle the local minima problem. Our approach is based on the existence of first integrals of the generalised gradient system with built-in dissipation.


1. Introduction. The dynamics of gradient flow. Neural networks and backpropagation.

Let $f$ be a smooth function defined in some open domain $\Omega \subset \mathbb{R}^n$. We equip $\mathbb{R}^n$ with the topology induced by the standard Euclidean norm $\|\cdot\|$ defined by the canonical scalar product $\langle\cdot,\cdot\rangle$. The gradient vector field defined in $\Omega$ by $f$ is given by $\nabla f = \left(\partial f/\partial x_1,\dots,\partial f/\partial x_n\right)$, where $x_1,\dots,x_n$ are canonical coordinates in $\mathbb{R}^n$. The critical points of $f$ are the solutions of $\nabla f(x) = 0$, $x \in \Omega$. Let $\mathcal{C}$ be the set of all critical points of $f$ in $\Omega$ (which can be unbounded and/or contain non–isolated points).

The following theorem [10], [19] is a classical result describing the asymptotic behaviour of solutions of the gradient differential system

(1.1) $\dot{x} = -\nabla f(x), \qquad x \in \Omega.$
Theorem 1.1.

Let $x_0 \in \Omega$ be the initial condition of (1.1). Then every solution $x(t)$ with $x(0) = x_0$ either leaves all compact subsets of $\Omega$ or approaches as $t \to +\infty$ the critical set $\mathcal{C}$, i.e.

(1.2) $\lim_{t \to +\infty} \operatorname{dist}(x(t), \mathcal{C}) = 0.$

In particular, at regular points the trajectories of (1.1) cross the level surfaces of $f$ orthogonally, and isolated minima of $f$ (which is a Lyapunov function [14]) are asymptotically stable equilibrium points.

Under the additional analyticity condition the above convergence result can be made stronger:

Theorem 1.2.

(Absil, Kurdyka [3]) Let $f$ be real analytic in $\Omega$. Then $x_0$ is a local minimum of $f$ if and only if it is an asymptotically stable equilibrium point of (1.1).

It should be noticed that the gradient system (1.1) cannot have any non–constant periodic or recurrent solutions, homoclinic orbits or heteroclinic cycles. Thus, trajectories of gradient dynamical systems have quite simple asymptotic behaviour.

Nevertheless, the localisation of the basin of attraction of any equilibrium point (stable or saddle) belonging to $\mathcal{C}$ is a non-trivial problem.

Supervised machine learning in multi–layered neural networks can be considered as an application of the gradient descent method to a non–convex optimisation problem. The corresponding cost (or error) functions are of the general form

(1.3)

with a data set and a certain highly non–linear function containing the weights. The main problem of machine learning is to minimise the cost function by a suitable choice of weights. The gradient method described above, called backpropagation in the context of neural network training, can get stuck in local minima or take a very long time to converge. This is due to the fact that the general properties of the cost surface are usually unknown and only trial-and-error numerical methods are available (see [4], [12], [16], [9], [17], [18], [5]). No theoretical approach is known to provide initial weights for backpropagation with guaranteed convergence to the global minimum of the cost function. One of the most powerful techniques used in backpropagation is adaptive learning rate selection [8], where the step size of the iterations is gradually raised in order to escape a local minimum. Another approach is based on random initialisation [15] of the weights, in the hope of selecting them close to the values that give the global minimum of the cost function. A deterministic approach, called global descent, was proposed in [7], where optimisation was formulated in terms of the flow of a special deterministic dynamical system.

The present work seeks to integrate ideas from the theory of ordinary differential equations to enrich the theoretical framework and to assist in better understanding the nature of convergence in the training of multi–layered neural networks. The principal contribution is a natural extension of the classical gradient descent method, obtained by adding new degrees of freedom and reformulating the problem in an extended phase space of higher dimension. We argue that this brings a deeper insight into the convergence problem, since the new equations become algebraically simpler and admit a family of known first integrals. While this proposal may seem radical, we believe that it offers a number of advantages at both the theoretical and numerical levels, as our experiments clearly show. Intuitively, embedding the gradient flow into the phase space of a more general dynamical system is advantageous, since lifting the cost surface to a higher-dimensional space opens new possibilities to improve convergence and to escape local minima.

The study is divided into three parts. In Section 2 we begin by recalling how the gradient descent method is applied to train the simplest possible neural network, consisting of an output layer only. This corresponds to the conventional backpropagation algorithm, known for its simplicity and frequently used in deep learning. Next we introduce a natural extension of the gradient system, obtained by replacing the weights of the individual neurones of the output layer by their nonlinear outputs. This brings more complexity to the iterative method, since the number of parameters rises considerably, but at the same time the training data becomes built into the network in a natural way. The generalised gradient system obtained in this way is then converted into an observer system (see [6]). The aim is to turn the constant level sets of the known first integrals into an attractor. We explain how the Euler iterative method applied to the observer system, called the overfly algorithm, is used to achieve convergence to the global minimum of the cost function. Sections 3 and 4 discuss the application of this algorithm to the training of 1–hidden layer and multilayer networks. The objective is to explain how to extend the backpropagation algorithm to its overfly version by modifying the weight-updating procedure for the first layer of the network only. In Section 5 we provide concrete numerical examples illustrating the efficacy of the overfly algorithm in the training of some particular neural networks.

2. Neural network without hidden layers

In this section we give an elementary algebraic description of the simplest neural network, with no hidden layers, also called a perceptron (see [11]).

We define the sigmoid function

(2.1) $\sigma(x) = \dfrac{1}{1 + e^{-x}}, \qquad x \in \mathbb{R},$

as a particular solution of the logistic algebraic differential equation

(2.2) $\sigma' = \sigma(1 - \sigma).$

In particular, $\sigma$ is an increasing map which converges rapidly to its limit values $0$ and $1$ as $x \to -\infty$ and $x \to +\infty$ respectively.

Let $w \in \mathbb{R}^n$ and $x \in \mathbb{R}^n$ be two vectors, called respectively the weight vector and the input vector. The analytic map defined by

(2.3) $F(x) = \sigma(\langle w, x \rangle)$

is called a no hidden layer neural network.
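For concreteness, here is a minimal NumPy sketch of the sigmoid (2.1) and of the no hidden layer network (2.3); the function names are hypothetical and not taken from the paper.

import numpy as np

def sigmoid(s):
    # Sigmoid (2.1); it satisfies the logistic equation (2.2): sigmoid' = sigmoid * (1 - sigmoid).
    return 1.0 / (1.0 + np.exp(-s))

def perceptron(w, x):
    # No hidden layer network (2.3): the sigmoid of the scalar product <w, x>.
    return sigmoid(np.dot(w, x))

# Example with n = 3 inputs.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.0, 0.5])
print(perceptron(w, x))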

Let

(2.4) $\{(x_i, y_i),\ i = 1,\dots,N\}$

be the training set of (2.3), containing the input data vectors $x_1,\dots,x_N \in \mathbb{R}^n$ and the corresponding scalar output values $y_1,\dots,y_N$. We want to determine the weight vector $w$ so that the values $F(x_i)$ match the outputs $y_i$ as well as possible. This can be achieved by minimising the so called cost function

(2.5) $E(w) = \dfrac{1}{2}\sum_{i=1}^{N}\big(F(x_i) - y_i\big)^{2},$

or, after the substitution of (2.3):

(2.6) $E(w) = \dfrac{1}{2}\sum_{i=1}^{N}\big(\sigma(\langle w, x_i\rangle) - y_i\big)^{2}.$

In general, $E$ is not coercive and not necessarily a convex map.

To apply the gradient descent method one considers the following system of differential equations

(2.7) $\dot{w} = -\nabla E(w).$

Since $E$ is always decreasing along the trajectories of (2.7), it is natural to solve this system starting from some initial point $w(0) = w_0$ and to use the solution $w(t)$ to minimise $E$. The solution can converge (in the ideal case) to the global minimum of $E$ or, in the less favourable case, to a local minimum or a saddle point.

The backpropagation method [11] for a neural network can be viewed as the Euler numerical method [13] for solving the gradient system (2.7).

Here one approximates the time derivative by its discrete version

(2.8) $\dot{w}(t) \approx \dfrac{w(t+h) - w(t)}{h}$

for some small step $h > 0$, so that the approximate solution of (2.7) at time $t_k = kh$ can be obtained by the iterations:

(2.9) $w_{k+1} = w_k - h\,\nabla E(w_k), \qquad k = 0, 1, 2, \dots$
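A minimal sketch of this Euler/backpropagation iteration for the network (2.3), assuming the quadratic cost (2.5); the helper names are hypothetical and the code is an illustration, not the authors' implementation.

import numpy as np

def cost_and_gradient(w, X, y):
    # Quadratic cost (2.5) for the no hidden layer network and its gradient in w.
    # X has shape (N, n), one training input per row; y has shape (N,).
    z = 1.0 / (1.0 + np.exp(-X @ w))          # outputs sigma(<w, x_i>)
    r = z - y                                  # residuals
    E = 0.5 * np.sum(r ** 2)
    grad = X.T @ (r * z * (1.0 - z))           # uses sigma' = sigma(1 - sigma), cf. (2.2)
    return E, grad

def backpropagation(w0, X, y, h=0.1, n_iter=10_000):
    # Euler iterations (2.9) applied to the gradient system (2.7).
    w = w0.copy()
    for _ in range(n_iter):
        _, grad = cost_and_gradient(w, X, y)
        w -= h * grad
    return w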

We write (2.7) in a simpler algebraic form by introducing the additional variables

(2.10) $z_i = \sigma(\langle w, x_i\rangle), \qquad i = 1,\dots,N,$

representing the nonlinear outputs of the network for the given inputs of the training set. Using the equations (2.7) to compute the derivatives $\dot{z}_i$, one obtains the following system of differential equations

(2.11)

with $G$, $G_{ij} = \langle x_i, x_j\rangle$ – the symmetric Gram matrix of the training inputs. We call (2.11) the generalised gradient system.

Let $X$ be the $N \times n$ matrix whose rows are the input vectors $x_1,\dots,x_N$. Then $G = XX^{T}$ and, as known from elementary linear algebra, $\operatorname{rank}(G) = \operatorname{rank}(X) \le n$ and $\dim\ker(G) = N - \operatorname{rank}(X)$. Since the number $N$ of training vectors usually exceeds the total number $n$ of weights of the network, we can assume that $N > n$.

Thus, since $\operatorname{rank}(G) \le n$, we have $\dim\ker(G) \ge N - n > 0$.

Let $u$ be a non–zero vector from the null space of $G$ and let $z = (z_1,\dots,z_N)$ denote the vector of variables (2.10). As seen from the equations (2.11), the open cube $(0,1)^N$ is invariant under the flow of the system. Indeed, $\{z_i = 0\}$ and $\{z_i = 1\}$ are invariant hypersurfaces.
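The null space of the Gram matrix, which generates the first integrals below and, later, the dissipation term, can be computed once from the training inputs. A small NumPy sketch (hypothetical names) is the following.

import numpy as np

def gram_null_space(X, tol=1e-10):
    # X is the (N, n) matrix whose rows are the training inputs x_i.
    # Returns G = X X^T and an orthonormal basis of ker(G); since G u = 0 exactly when
    # sum_i u_i x_i = 0, the dimension of ker(G) is N - rank(X) (rank-nullity theorem).
    G = X @ X.T
    U, s, _ = np.linalg.svd(X)
    rank = int(np.sum(s > tol * s.max()))
    null_basis = U[:, rank:]                   # shape (N, N - rank)
    return G, null_basis

# Example: N = 5 training vectors in R^3, so dim ker(G) = 2 for generic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
G, U0 = gram_null_space(X)
print(U0.shape, np.allclose(G @ U0, 0.0))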

Theorem 2.1.

The function

(2.12)

is a real analytic first integral of the system (2.11).

There exist $\dim\ker(G)$ functionally independent first integrals of the above form.

Proof.

The first statement can be checked straightforwardly by differentiating (2.12) along the solutions of (2.11). The real analyticity of the first integral follows from its definition on the invariant domain of the system. The linearity and functional independence of the integrals associated with linearly independent vectors of $\ker(G)$ follow directly from the definition (2.12). ∎

In the rest of the paper we will always assume that $\operatorname{rank}(X) = n$, i.e. the set of training inputs contains sufficiently many independent vectors.

Let $u_1,\dots,u_d$, $d = N - n$, be a basis of $\ker(G)$. Using the vector notation

(2.13)

the family of first integrals given by Theorem 2.1 can be written simply as

(2.14)

Let $\Phi$ be the map defined by

(2.15)
Lemma 2.1.

$\Phi$ is a submersion.

Proof.

This follows directly from the fact that $u_1,\dots,u_d$ are linearly independent vectors, together with (2.14). ∎

Thus, for all $c \in \mathbb{R}^d$ the level set $M_c = \Phi^{-1}(c)$ is an $n$–dimensional invariant manifold for the system (2.11).

Lemma 2.2.

$M_c$ is diffeomorphic to $\mathbb{R}^n$.

Proof.

Let . We define the map by

(2.16)

Then, and so .

To show that the map is invertible, let us fix $z \in M_c$. Since $\sigma$ is one-to-one, there exists a unique vector $v \in \mathbb{R}^N$ such that $z_i = \sigma(v_i)$ for $i = 1,\dots,N$, and

(2.17)

which holds by substitution into (2.13).

We now look for the solution of the linear system $\langle w, x_i\rangle = v_i$, $i = 1,\dots,N$, which can be written in the vector form $Xw = v$. The linear map $w \mapsto Xw$ is injective since $\operatorname{rank}(X) = n$. Moreover, its image is the orthogonal complement of $\ker(G)$, where the orthogonality is defined by the scalar product in $\mathbb{R}^N$: the inclusion is checked by direct verification, and the dimensions agree by the rank–nullity theorem. Hence the map is a linear bijection onto this subspace and the linear equation admits a unique solution, as follows from (2.17). The proof is done. ∎
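As a small illustration of the inverse construction used in Lemma 2.2, and under the assumption that the variables have the form (2.10), the weight vector can be recovered from the nonlinear outputs by inverting the sigmoid and solving the resulting linear system; the sketch below uses hypothetical names.

import numpy as np

def weights_from_outputs(z, X):
    # Recover w from the outputs z_i = sigma(<w, x_i>) (cf. Lemma 2.2).
    # Here log(z / (1 - z)) is the inverse of the sigmoid, and the linear system
    # <w, x_i> = sigma^{-1}(z_i) is solved in the least-squares sense; it is
    # consistent when z lies on the corresponding invariant manifold.
    v = np.log(z / (1.0 - z))                  # requires 0 < z_i < 1
    w, *_ = np.linalg.lstsq(X, v, rcond=None)
    return w

# Round-trip check on hypothetical data.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
w_true = rng.normal(size=3)
z = 1.0 / (1.0 + np.exp(-X @ w_true))
print(np.allclose(weights_from_outputs(z, X), w_true))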

The system (2.11) can be written in the vector form $\dot{z} = V(z)$, where $V$ is a complete vector field in a bounded open invariant set. Let $\epsilon > 0$ and let

(2.18)

be the $\epsilon$–neighbourhood of the invariant manifold $M_c$. Together with (2.11), consider the following observer system

(2.19)

where

(2.20)

Here,

(2.21)

The matrix appearing in (2.20) is invertible and positive definite. Thus, the vector field of the observer system is well defined in the neighbourhood (2.18).

Theorem 2.2.

Let $z_0$ belong to the neighbourhood (2.18) and let $z(t)$ be the solution of the observer system (2.19) with the initial condition $z(0) = z_0$. Then

(2.22)

with the neighbourhood defined in (2.18). In particular, the solution remains in this neighbourhood for all positive times, and (2.18) is an invariant set containing $M_c$ as an attractor.

Proof.

Firstly, we write the map $\Phi$ introduced in (2.15) in the compact matrix form

We now follow the idea of the proof of the Main Lemma from [6], p. 377, and differentiate with respect to time along the solution of (2.19) to obtain a simple differential equation:

(2.23)

which can be easily solved to get (2.22). ∎

We notice that our choice of the additional term in (2.19) is different from the one proposed in [6].
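To illustrate how such an additional term can be built from the first integrals, here is one standard construction consistent with the exponential decay (2.22)–(2.23); it is only a sketch under the assumption that the first integrals are collected into a submersion (Lemma 2.1), and it does not reproduce the paper's exact formula (2.20).

import numpy as np

def observer_field(z, V, Phi, J, c, eps):
    # V(z)   : right-hand side of the generalised gradient system (2.11)
    # Phi(z) : vector of first integrals (2.14), constant along the solutions of (2.11)
    # J(z)   : Jacobian of Phi at z (shape d x N); c : target value of Phi; eps > 0
    # Along the solutions one gets d/dt Phi = J V - eps (Phi - c) = -eps (Phi - c),
    # because J V = 0 for first integrals, so the residual Phi - c decays exponentially,
    # in the spirit of Theorem 2.2.
    Jz = J(z)
    r = Phi(z) - c
    # The matrix Jz Jz^T is positive definite because Phi is a submersion.
    correction = Jz.T @ np.linalg.solve(Jz @ Jz.T, eps * r)
    return V(z) - correction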

Lemma 2.3.

The function

(2.24)

is a Lyapunov function and is non-increasing along every solution of (2.11).

Proof.

It is sufficient to differentiate (2.24) along the solutions of (2.11) and to use the positivity of the Gram matrix $G$. ∎

Now we shall explain the role of the observer system (2.19) in the problem of minimisation of the cost function (2.5).

Firstly, instead of dealing with the system (2.7) of the standard gradient descent method, one can solve the observer equations (2.19) with some initial condition and then use Lemma 2.2 to compute the weight vector corresponding to the value of the solution at some sufficiently large time. It is well known that applying the Euler method (2.8) to solve (2.7), i.e. following the conventional backpropagation algorithm, leads to the accumulation of a global error proportional to the step size. At the same time, the numerical integration of the observer system (2.19) is much more stable numerically, due to the existence of the attractor set: the solution is attracted by the integral manifold (see [6] for more details and examples).

The second improvement brought by the observer system (2.19) is more promising. Imagine that we start the integration of (2.19) with a perturbed initial condition lying off the invariant manifold. Then, according to Theorem 2.2, the solution is attracted back to the manifold and, as follows from Lemma 2.3, the cost function is decreasing in a neighbourhood of the manifold. This can be seen as the coexistence of two dynamics: the local dynamics of the observer system in the neighbourhood (2.18), pushing the solution towards the invariant manifold, and the dynamics of the gradient system (2.7) on the manifold, forcing the solution to approach the set of critical points (see Figure 3).

One can expect that this kind of double dynamics considerably increases the chances of convergence to the global minimum of the cost function (2.5). We call overfly the training of the neural network (2.3) performed by solving the observer system (2.19) with the help of the first–order Euler method, starting from some initial point.
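A minimal sketch of the resulting overfly iteration for the network (2.3) follows; it combines the Euler step with the dissipation construction above. The explicit formulas are reconstructed under the assumptions already stated (quadratic cost (2.5), variables (2.10)) and are not quoted from the paper; all names are hypothetical.

import numpy as np

def overfly_train(z0, X, y, U0, h=0.1, eps=1.0, n_iter=10_000):
    # z0 : initial point in (0, 1)^N, possibly perturbed off the invariant manifold
    # X  : (N, n) training inputs;  y : (N,) target outputs
    # U0 : (N, d) orthonormal basis of ker(G), G = X X^T (cf. gram_null_space above)
    G = X @ X.T
    z = z0.copy()
    for _ in range(n_iter):
        s = z * (1.0 - z)                      # sigma' expressed through z, cf. (2.2)
        r = (z - y) * s
        V = -s * (G @ r)                       # reconstructed generalised gradient field (2.11)
        # Dissipation: drive the first integrals Phi(z) = U0^T sigma^{-1}(z) to zero,
        # the value they take on outputs produced by actual weight vectors.
        Phi = U0.T @ np.log(z / (1.0 - z))
        J = U0.T / s                           # Jacobian of Phi
        D = J.T @ np.linalg.solve(J @ J.T, eps * Phi)
        z = np.clip(z + h * (V - D), 1e-12, 1.0 - 1e-12)
    return z

The weight vector corresponding to the final state can then be recovered as in the sketch following Lemma 2.2.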

3. The 1–hidden layer network case

In this section we describe the generalised gradient system of differential equations appearing in the supervised backpropagation training of a 1–hidden layer network. As in the previous section, let $(x_i, y_i)$ belong to the training set (2.4). Let $w_1,\dots,w_m$ be the weight vectors of the hidden layer and let $a$ be the weight vector of the output layer.

The 1–hidden layer neural network is a real analytic map defined as follows

(3.1)

where $\sigma(\langle w_k, x\rangle)$, $k = 1,\dots,m$, are the outputs of the first layer. We want to minimise the same cost function

(3.2)

where $(x_i, y_i)$, $i = 1,\dots,N$, is the training set. To solve the optimisation problem one can define the gradient system analogous to (2.7) with respect to the vector variables $w_1,\dots,w_m$ and $a$:

(3.3)

Let us introduce the following scalar variables:

(3.4)

The function (3.2), expressed in the new variables, takes the following form

(3.5)

The differential equations describing the generalised gradient system for the neural network (3.1) are obtained by differentiating (3.4) with the help of (3.3):

(3.6)

where $G$ is the Gram matrix defined by the training set (2.4).

The next theorem is a generalisation of Theorem 2.1. Let and , .

Theorem 3.1.

The generalised gradient system (3.6) admits functionally independent first integrals

(3.7)

The cost function defined by (3.5) is a Lyapunov function for (3.6).

Proof.

One verifies directly, by simple differentiation, that each function (3.7) is a first integral of (3.6). A rather tedious but elementary calculation shows that the cost function decreases along the solutions of (3.6) (see also Theorem 4.1 for the general proof). ∎

The observer system, analogous to (2.19), written for the generalised gradient system (3.6), can be obtained straightforwardly by replacing the first equation of (3.6) with

(3.8)

where the additional term is defined in a way similar to (2.20), with the help of the first integrals defined by Theorem 3.1.

Indeed, consider the two matrices defined by

(3.9)

To prove a result similar to Theorem 2.2 one can define the additional term in (3.8) as follows

(3.10)

where the constant matrix is the same as in (2.20) and “⊗” is the Kronecker matrix product.

Indeed, the first integrals defined by (3.7) can be written in a matrix form. Then, differentiating the corresponding matrix residual, measured in the Frobenius matrix norm, along a solution of (3.8), one gets

(3.11)

and so

(3.12)

The practical implementation of the overfly algorithm in the 1–hidden layer case is analogous to the one described in Section 2. Instead of modifying the weights of the first layer at every step of the gradient descent, one updates the values of the splitting variables and of the output weights by applying the Euler method to solve the observer equations (3.8).

For the sake of simplicity, we provide below the explicit matrix form of the system (3.8), which is better adapted to numerical implementations. We introduce the following diagonal matrices:

(3.13)

and the –vector

(3.14)

Let $a$ be the vector of weights of the output layer. The observer system (3.8) can be written in the following compact form

(3.15)

where .

4. General multilayer case

We want to analyse a general multilayer neural network of arbitrary architecture. Here $n$ is the number of inputs and $m$ is the number of neurones in the very first layer. The network has only one output and in every layer the same sigmoid function (2.1) is used. The training set is defined by (2.4). Let $w_1,\dots,w_m$ be the weight vectors of the neurones of the first layer; the weights of all the other layers of the network are collected separately. Let $x$ be the input vector. The generic multilayer neural network can be written as the composition of two maps:

(4.1)

where the outer map of the composition is defined jointly by all the layers different from the first one and

(4.2)

is the output vector of the first layer.

Using the chain rule one obtains for every $k = 1,\dots,m$:

(4.3)

where, according to (4.2),

(4.4)

Thus, combining (4.3) and (4.4) we obtain:

(4.5)

We can now compute the partial derivatives of the cost function

(4.6)

with respect to the weights of the first layer:

(4.7)

The equation of the gradient system corresponding to the weight vector $w_k$ can be written as

(4.8)

Introducing the variables

(4.9)

called the splitting weights, whose derivatives can be found with the help of (4.8), we deduce from (4.7) the following differential equations

(4.10)

where .

The above equations can also be written as

(4.11)

Indeed, the right-hand sides of (4.11) are functions of the splitting weights and of the weights of the other layers only. Moreover, the same holds for the cost function defined in (4.6) and its gradient: they can be written as functions of the splitting weights and of the weights of the other layers.

Let $d$ be the dimension of the null space of the Gram matrix $G$ and fix a basis of this null space.

Theorem 4.1.

Let

(4.12)

be the generalised gradient system written for the multilayer network (4.1) with the training set (2.4). Then (4.12) admits independent first integrals of the form

(4.13)

The cost function (4.6) is a Lyapunov function for (4.12).

Proof.

It is straightforward to verify that the functions (4.13) are functionally independent first integrals of (4.12). According to (4.1), (4.2) and (4.9), the cost function (4.6), written in the new variables, is given by

(4.14)

in view of (4.3), (4.2) and (4.9). Let us consider a solution of (4.12). Then

(4.15)

where the standard scalar products are defined respectively in the spaces whose dimensions are the total number of splitting weights and the total number of weights of the remaining layers of the neural network (4.1). One writes, with the help of (4.11):

(4.16)

Since the Gram matrix $G$ is a positive matrix, the last equality implies that the corresponding term is non-positive. Together with (4.15) this yields that the cost function is a Lyapunov function of (4.12). ∎

The observer system, defined by analogy with (2.19) for the generalised gradient system (4.12), can be written in the following form

(4.17)

where the additional vector field, called the dissipation term, is defined by the first integrals (4.13) and given by the same formula (3.10).

The overfly algorithm for neural network training, already described in the previous sections, can be easily adapted to the general multilayer case. The only difference from conventional backpropagation applied to the network (4.1) consists in replacing the weights of the first layer by the splitting weights, while the weights of the other layers continue to be updated according to the usual backpropagation algorithm. At each iteration step, the evolution of the parameters is governed by the Euler discretisation of the observer system (4.17).
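To make this structure concrete, here is a reconstructed sketch of one overfly iteration for a multilayer network: the splitting weights of the first layer follow the Euler discretisation of the observer system (4.17), while all other weights follow a plain backpropagation step. The explicit formulas for the first-layer part are derived under the assumptions stated in Section 2 and are not the authors' code; the gradients are assumed to be supplied by an ordinary backpropagation pass through the layers above the first one.

import numpy as np

def overfly_step(Z, theta, grad_E_Z, grad_E_theta, X, U0, h=0.1, eps=1.0):
    # Z            : (N, m) splitting weights (4.9), Z[i, k] = sigma(<w_k, x_i>)
    # theta        : array collecting all remaining weights of the network
    # grad_E_Z     : callable (Z, theta) -> dE/dZ, shape (N, m)
    # grad_E_theta : callable (Z, theta) -> dE/dtheta, shape of theta
    # X            : (N, n) training inputs;  U0 : (N, d) basis of ker(X X^T)
    G = X @ X.T
    S = Z * (1.0 - Z)                                  # sigma' expressed through Z, cf. (2.2)
    # Generalised gradient field for the splitting weights (reconstructed, columnwise).
    V = -S * (G @ (grad_E_Z(Z, theta) * S))
    # Dissipation: pull each column of the first-integral matrix U0^T sigma^{-1}(Z)
    # towards zero, its value on outputs produced by actual first-layer weights.
    R = U0.T @ np.log(Z / (1.0 - Z))
    D = np.zeros_like(Z)
    for k in range(Z.shape[1]):
        Jk = U0.T / S[:, k]                            # Jacobian of the k-th column of integrals
        D[:, k] = Jk.T @ np.linalg.solve(Jk @ Jk.T, eps * R[:, k])
    Z_new = np.clip(Z + h * (V - D), 1e-12, 1.0 - 1e-12)
    # All other layers are updated by the usual backpropagation (gradient descent) step.
    theta_new = theta - h * grad_E_theta(Z, theta)
    return Z_new, theta_new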

5. Conclusion and numerical results

In this section we compare the usual backpropagation and the overfly methods for some particular neural networks. We start with the simple no hidden layer case (2.3).

We fix the input dimension and the number of training vectors. The input values are defined by

(5.1)

with the corresponding output vector :

(5.2)

The couple defines the training set (2.4).

Analysing the equation $\nabla E(w) = 0$, with $E$ defined in (2.5), one calculates, with the help of Maple 10's RootFinding routine, two local minima of the cost function (see Figure 1), attained at two distinct points, one of them being the global minimum of $E$.

The gradient system (2.7) was solved using the Euler method (2.9), starting from a fixed initial point. After the iterations one obtains a weight vector whose cost value shows that the backpropagation network converges to the local minimum rather than to the global one.

To calculate the corresponding vector of variables (2.10), one can apply Lemma 2.2 to find

(5.3)

Now, following the overfly approach, we consider the observer system (2.19) with the initial condition perturbed by the vector defined by

(5.4)

The Euler method applied to (2.19) provides, after the same number of iterations, a value of the cost function sufficiently close to the global minimum; we conclude that the overfly network converges to the global minimum rather than to the local one. So the benefits of the overfly training are immediately visible.

We have also tested numerically the overfly method for a neural network of the form (3.1), with several inputs and one hidden layer. Both the hidden and the output layer have biases. The entries of the input data set are arranged into the following matrix:

The columns of the input matrix were chosen randomly and have zero mean. The output target vector is of the form

(5.5)

and corresponds to a highly deviated data set. In particular:

(5.6)

Firstly, the standard neural network (3.1) was trained on the above data set using the usual backpropagation method (BM), with the weights chosen randomly in a fixed interval, a fixed number of iterations and a fixed step size.

Then the overfly algorithm was applied, as described in Section 3, with randomly chosen initial splitting weights, the same step size and number of iterations, and a fixed value of the dissipation parameter. The observer system was solved by the Euler method. At each iteration we computed the cost function value for both methods: using the formula (3.2) for BM and the expression (3.5) for the overfly method (OM). The final cost value for OM was considerably smaller than the one for BM; thus the overfly algorithm significantly outperforms the conventional backpropagation for this particular problem. Figure 2 contains the graphs of both cost functions in logarithmic scale. We notice that our example is quite generic, since our numerical experiments show that statistically OM gives more precise results than BM for output data sets with large deviation.
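For reproducing this kind of comparison, a minimal sketch for recording and plotting the two cost curves in logarithmic scale, as in Figure 2, could look as follows (hypothetical names, assuming the per-iteration cost values have already been collected).

import matplotlib.pyplot as plt

def compare_costs(cost_bm, cost_om):
    # cost_bm, cost_om: sequences of cost values recorded at each iteration
    # for backpropagation (BM) and for the overfly method (OM).
    plt.semilogy(cost_bm, label="backpropagation (BM)")
    plt.semilogy(cost_om, label="overfly (OM)")
    plt.xlabel("iteration")
    plt.ylabel("cost (logarithmic scale)")
    plt.legend()
    plt.show()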

We notice that there is an obvious resemblance between conventional backpropagation and overfly approaches. Below we summarise briefly the principal steps of the proposed method.

Step 1: Splitting. Assuming that the training data are given, it is first necessary to compute the generating vectors of the null space of the Gram matrix $G$, i.e. to determine a basis of $\ker(G)$. Secondly, one introduces the splitting weights (4.9) to replace the weights of the neurones of the first layer. In practice, the number of training examples can be considerably larger than the input size of the network, so the splitting brings additional parameters to be stored in memory.
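A minimal sketch of this splitting step, under the assumptions of Section 2 (the splitting weights being the sigmoids of the scalar products of the first-layer weight vectors with the training inputs); the names are hypothetical.

import numpy as np

def split_first_layer(X, W1):
    # Step 1 (Splitting): replace the first-layer weight vectors w_k (rows of W1,
    # shape (m, n)) by the splitting weights (4.9), stored as an (N, m) matrix
    # Z[i, k] = sigma(<w_k, x_i>). This uses N*m parameters instead of n*m, which
    # is the additional memory cost mentioned above when N is much larger than n.
    return 1.0 / (1.0 + np.exp(-X @ W1.T))

In Section 5 the initial splitting weights are instead chosen randomly; the map above is simply the definition (4.9) applied to given first-layer weights.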
Step 2: Dissipation. Using the vectors spanning $\ker(G)$, one creates a procedure computing the dissipation term defined by (2.20). The matrix inversion in (2.20) can be done, in the beginning, using the conjugate gradient algorithm [