1 Introduction
The theory of Hamiltonian systems is one of the dominant tools for describing dynamical phenomena in fields such as physics and economics [1]. For example, in physics, mechanical and electrical systems are usually represented through energy functions, which at the same time define Hamiltonian systems. Indeed, a Hamiltonian system can reflect the laws of energy conservation and dissipation [1, 2].
A deterministic Hamiltonian system can be given as
(1.1) dx(t)/dt = H_y(t, x(t), y(t)),  dy(t)/dt = -H_x(t, x(t), y(t)),  t \in [0, T],
where $H(t, x, y)$ is a given real function called the Hamiltonian, and $H_x$ and $H_y$ are the partial derivatives of $H$ with respect to $x$ and $y$, respectively. When a terminal condition on $y(T)$ is imposed through a given function, (1.1) becomes a boundary problem.
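To make system (1.1) concrete, the following minimal sketch integrates the Hamiltonian equations for an assumed harmonic-oscillator Hamiltonian $H(x, y) = (x^2 + y^2)/2$ (an illustrative choice, not an example from this paper) with the symplectic Euler scheme, so that the energy-conservation law mentioned above becomes visible numerically:

```python
def H(x, y):
    # Illustrative Hamiltonian of a harmonic oscillator: H(x, y) = (x^2 + y^2) / 2,
    # so that H_y = y and H_x = x in (1.1).
    return 0.5 * (x * x + y * y)

def symplectic_euler(x0, y0, dt, n_steps):
    """Integrate dx/dt = H_y, dy/dt = -H_x with the symplectic Euler scheme."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y - dt * x   # update y using the current x (H_x = x)
        x = x + dt * y   # update x using the updated y (H_y = y)
    return x, y

x_T, y_T = symplectic_euler(1.0, 0.0, 1e-3, 10_000)
# The energy H is approximately conserved along the discrete trajectory
assert abs(H(x_T, y_T) - H(1.0, 0.0)) < 1e-3
```

The near-constancy of $H$ along the trajectory is exactly the conservation law that structure-preserving integrators are designed to respect.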
In more complex environments, where the physical system cannot be represented in a deterministic form, the Hamiltonian system is usually combined with a stochastic process, and a stochastic Hamiltonian system with boundary conditions can be given as
(1.2) dx_t = H_y(t, x_t, y_t, z_t) dt + H_z(t, x_t, y_t, z_t) dW_t,
      dy_t = -H_x(t, x_t, y_t, z_t) dt + z_t dW_t,
      x_0 = a,  y_T = \Phi_x(x_T),
which is essentially a fully coupled forward-backward stochastic differential equation (FBSDE for short). Many research works have studied the solutions of FBSDEs and the eigenvalues of Hamiltonian systems [3, 4, 5, 6, 7, 8, 9, 10, 11]. The significance of studying this kind of Hamiltonian system is twofold: on the one hand, it can be applied to solving stochastic optimal control problems via the well-known stochastic maximum principle [12, 13]; on the other hand, it helps to obtain solutions of nonlinear partial differential equations (PDEs for short) through the connection between FBSDEs and PDEs [11].

In most cases, it is difficult to obtain the explicit solution of the Hamiltonian system (1.2), so numerical methods have to be studied. As (1.2) is essentially an FBSDE, an intuitive way is to solve (1.2) from the perspective of FBSDEs. Therefore, numerical methods for solving FBSDEs can be applied [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 8], such as the PDE methods, the probabilistic methods, etc. However, most traditional numerical methods cannot deal with high-dimensional problems. Moreover, it is worth pointing out that solving fully coupled FBSDEs numerically is a much more challenging problem than solving general FBSDEs, even in low-dimensional cases. Recently, with the application of deep learning techniques in a wide range of areas, numerical methods based on deep neural networks have been proposed to solve high-dimensional backward stochastic differential equations (BSDEs for short) and FBSDEs, and have achieved remarkable success. Among them, a breakthrough work was developed by [24, 25]; the main idea is to reformulate the BSDE into a stochastic optimal control problem by rewriting the backward process in a forward form and taking the terminal error as the cost functional, and then to approximate the solution of the BSDE by a deep neural network. Other deep learning algorithms have been proposed to solve BSDEs and related PDEs [26, 27, 28, 29], which also focus on approximating the solution of the BSDE (or PDE) with a deep neural network. For solving coupled and fully coupled FBSDEs, [30, 31] developed numerical algorithms that are also inspired by [24, 25].

In this paper, we propose a novel method to solve the Hamiltonian system (1.2) via deep learning. As equation (1.2) is at the same time a fully coupled FBSDE, this method is also suitable for solving fully coupled FBSDEs. However, different from the above-mentioned deep learning methods, which aim to solve the FBSDEs directly, we first look for the corresponding stochastic optimal control problem of the Hamiltonian system, such that the Hamiltonian system of the stochastic control problem is exactly the one we need to solve. Then we approximate the optimal control with deep neural networks. In order to solve the optimal control problem, two different cases are considered, which correspond to two different algorithms. The first algorithm (Algorithm 1) deals with the case where the function $f$ defined in (2.5) has an explicit form. For the case where it cannot be expressed explicitly, the original control problem is transformed into a double-objective optimization problem, and we develop the second algorithm (Algorithm 2) to solve it. Finally, the numerical solutions of (1.2) are obtained by calculating the solution of the extended Hamiltonian system for the optimal control according to the stochastic maximum principle.
We also compare the results of our newly proposed algorithms with those of the algorithm referred to as Algorithm 1 in our previous work [31] (called the Deep FBSDE method here), which can be used to solve the Hamiltonian system from the viewpoint of FBSDEs. Compared with the Deep FBSDE method, our proposed algorithms have two advantages. The first is that fewer iteration steps are required to achieve convergent results: even when the Deep FBSDE method converges, it needs more iterations to do so. The second is that our proposed algorithms converge more stably: for some Hamiltonian systems, the Deep FBSDE method is more prone to divergence under the same piecewise decay learning rate as our two proposed algorithms. Details can be found in the numerical results in section 4.
This paper is organized as follows. In section 2, we describe the Hamiltonian system that we aim to solve and introduce its corresponding stochastic optimal control problem. In section 3, we introduce two schemes for solving the stochastic control problem, according to whether the function $f$ defined in (2.5) has an explicit form, and then give the corresponding neural network architectures. The numerical results for different examples are shown in section 4, and a brief conclusion is given in section 5.
2 Problem formulation
In this section, we first describe a class of stochastic Hamiltonian systems, and then we show that solving such systems is equivalent to solving a stochastic optimal control problem.
2.1 The stochastic Hamiltonian system
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{0 \le t \le T}, P)$ be a filtered probability space, on which $\{W_t\}_{0 \le t \le T}$ is a $d$-dimensional standard Brownian motion and $\{\mathcal{F}_t\}_{0 \le t \le T}$ is the natural filtration generated by the Brownian motion. Suppose that the probability space is complete, $\mathcal{F}_0$ contains all the null sets of $\mathcal{F}$, and the filtration is right continuous. The space of all mean-square-integrable, adapted, $\mathbb{R}^n$-valued processes will be denoted by $M^2(0, T; \mathbb{R}^n)$, which is a Hilbert space with the norm
$\|v(\cdot)\| = \big( E \int_0^T |v_t|^2 \, dt \big)^{1/2}.$
Let
(2.1) H = H(t, x, y, z) : [0, T] \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^{n \times d} \to \mathbb{R}
be a real function of $(t, x, y, z)$, called a Hamiltonian, and let
(2.2) \Phi = \Phi(x) : \mathbb{R}^n \to \mathbb{R}
be a real function of $x$. In our context, unless otherwise stated, we always assume that the Hamiltonian $H$ is strictly convex with respect to $y$ and $z$.
Consider the following stochastic Hamiltonian system:
(2.3) dx_t = H_y(t, x_t, y_t, z_t) dt + H_z(t, x_t, y_t, z_t) dW_t,
      dy_t = -H_x(t, x_t, y_t, z_t) dt + z_t dW_t,
      x_0 = a,  y_T = \Phi_x(x_T),
where $H_x$, $H_y$, $H_z$ are the gradients of $H$ with respect to $x$, $y$, $z$, respectively. It is worth pointing out that the above system is essentially a special kind of FBSDE.
Set
and
Definition 2.1.
Assumption 1.
For any and ,

there exists a constant , such that
and

there exists a constant , such that the following monotonic conditions hold.
Theorem 1 (Theorem 3.1 in [3]).
Recently, numerical algorithms for solving BSDEs and FBSDEs with deep learning methods [24, 25, 30, 31] have been proposed and have demonstrated remarkable performance. The main idea is to reformulate the BSDE into a stochastic optimal control problem, where the solution of the BSDE is regarded as a control and approximated with a deep neural network, and the terminal error is taken as the cost functional. Other numerical algorithms have also been developed for solving FBSDEs and the related PDEs [26, 27, 28, 29], which also regard the solution of the FBSDE (or PDE) as a control and approximate it with an appropriate loss function.
2.2 A novel method to solve the stochastic Hamiltonian system
As noted in the previous sections, the stochastic Hamiltonian system (2.3) is essentially a fully coupled FBSDE and can be solved through methods for solving FBSDEs. In this paper, we develop a novel method for solving the Hamiltonian system (2.3). Different from the above-mentioned algorithms for solving BSDEs or FBSDEs, our main idea is to find the corresponding stochastic optimal control problem of the stochastic Hamiltonian system and then directly apply the deep learning method to solve the control problem.
For $(t, x, u, v) \in [0, T] \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^{n \times d}$, set
(2.4) 
and
(2.5) f(t, x, u, v) = \sup_{y, z} \big[ \langle u, y \rangle + \langle v, z \rangle - H(t, x, y, z) \big].
Here $f$ is the Legendre-Fenchel transform of $H$ with respect to $(y, z)$. Due to the differentiability and strict convexity of the Hamiltonian $H$, $f$ is also differentiable and strictly convex with respect to $u$ and $v$ [32].
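As a concrete illustration of the Legendre-Fenchel transform defining $f$, take $n = d = 1$ and an assumed quadratic Hamiltonian $H(t, x, y, z) = \frac{1}{2} y^2 + \frac{1}{2} z^2 + x y$ (our own example, not one from this paper). The supremum

f(t, x, u, v) = \sup_{y, z} \big[ u y + v z - H(t, x, y, z) \big]

is attained at $y = u - x$ and $z = v$, which yields the explicit form

f(t, x, u, v) = \tfrac{1}{2} (u - x)^2 + \tfrac{1}{2} v^2.

Explicit expressions of this kind correspond to the case handled by Algorithm 1 in section 3.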
Consider the following control system,
(2.6) dx_t = u_t dt + v_t dW_t, \quad x_0 = a,
and the cost functional is given as
(2.7) J(u(\cdot), v(\cdot)) = E \Big[ \int_0^T f(t, x_t, u_t, v_t) dt + \Phi(x_T) \Big],
where the controls $u(\cdot)$ and $v(\cdot)$ belong to $M^2(0, T; \mathbb{R}^n)$ and $M^2(0, T; \mathbb{R}^{n \times d})$, respectively. The set of all admissible controls is denoted by $\mathcal{A}$. Any $(u^*(\cdot), v^*(\cdot)) \in \mathcal{A}$ satisfying
(2.8) J(u^*(\cdot), v^*(\cdot)) = \inf_{(u(\cdot), v(\cdot)) \in \mathcal{A}} J(u(\cdot), v(\cdot))
is called an optimal control. The corresponding state trajectory $x^*(\cdot)$ is called an optimal trajectory, and the corresponding triple $(x^*(\cdot), u^*(\cdot), v^*(\cdot))$ is called an optimal triple.
In the following, we prove that solving the stochastic Hamiltonian system (2.3) is equivalent to solving the stochastic optimal control problem (2.6)-(2.7).
We need the following assumption.
Assumption 2.
is continuously differentiable with respect to , , , and
for some given .
Theorem 2.
Let $H$ be a given real function, strictly convex with respect to $y$ and $z$. The derivatives of $H$ and $\Phi$ satisfy Assumption 1, and $f$ satisfies Assumption 2. Suppose that $(x^*(\cdot), u^*(\cdot), v^*(\cdot))$ is the optimal triple of the optimal control problem (2.6)-(2.7). Then the Hamiltonian system (2.3) is uniquely solved, where $y^*$ can be given as
(2.9) 
and $z^*$ can be obtained by solving the following BSDE
(2.10) 
Proof.
Set
(2.11) 
Under our assumptions, for any optimal triple of the optimal control problem (2.6)-(2.7), we have the following extended stochastic Hamiltonian system through the well-known stochastic maximum principle (SMP for short) (e.g., Theorem 4.1 in [12]),
(2.12) 
and
(2.13) 
The solution of the extended stochastic Hamiltonian system (2.12)-(2.13) is a 5-tuple.
By Theorem 12.2 in [32], we have the inverse Legendre-Fenchel transform of (2.5):
(2.14) 
Because the function being maximized in (2.13) is strictly concave, the maximum point of (2.13) is uniquely determined, by the implicit function theorem:
(2.15)  
and are differentiable functions. By (2.14),
(2.16) 
which leads to
(2.17)  
Thus, the derivatives of $f$ with respect to $u$ and $v$ are
(2.18)  
It can be verified that
(2.19)  
which implies that $(y^*, z^*)$ solves the BSDE (2.10). Taking the conditional expectation in the backward equation of (2.10), we see that (2.9) holds.
The following proposition helps us construct our algorithms in the next section.
Proposition 3.
Proof.
By the definition above and the SMP,
(2.25)  
Then, we have
(2.26) 
The definition of shows that
(2.27)  
where is the minimum point.
In Theorem 2 and Proposition 3, we choose the stochastic optimal control problem (2.6)-(2.7), in which the drift and diffusion terms of the state equation are simply the controls $u$ and $v$. In fact, to simplify the linear terms of $y$ and $z$ in the Hamiltonian $H$, we can also choose other forms of the drift and diffusion that are linear with respect to $u$ and $v$. In these cases, the transformations (2.5) and (2.14) still hold. We show an example in the numerical results, in subsection 4.2.
Besides, we can still solve the Hamiltonian system (2.3) even if the coefficients do not satisfy the monotonic conditions (Assumption 1 (ii)) in Theorem 2 and Proposition 3. For example, the articles [8, 9] studied the solvability of FBSDEs under relatively loose conditions. In this situation, as long as the optimal controls reach the optimal values, the Hamiltonian system (2.3) can be solved; however, the solution is not necessarily unique.
3 Numerical method for solving Hamiltonian systems
In Section 2, we presented the idea of the stochastic optimal control method for solving the Hamiltonian system (2.3). According to Theorem 2, we only need to find the optimal triple of the stochastic control problem (2.6)-(2.7); then the solution can be obtained by taking the conditional expectation in the backward SDE of (2.10). Therefore, an effective approximation method is needed to obtain the optimal triple of (2.6)-(2.7), especially in high-dimensional cases.
Deep neural networks are usually used to approximate functions defined on finite-dimensional spaces, and the approximation relies on the composition of layers of simple functions. On the basis of the universal approximation theorem [33, 34], neural networks have proven to be an effective tool and have achieved great success in numerous practical applications. In this paper, inspired by [35], we solve the stochastic optimal control problem (2.6)-(2.7) in a direct way with deep neural networks and develop two numerical algorithms suitable for different cases.
Let $0 = t_0 < t_1 < \cdots < t_N = T$ be a partition of the time interval $[0, T]$. Define $\Delta t_n = t_{n+1} - t_n$ and $\Delta W_n = W_{t_{n+1}} - W_{t_n}$ for $n = 0, 1, \ldots, N - 1$, and suppose $\max_n \Delta t_n$ is small enough. Then the Euler-Maruyama scheme of the state equation (2.6) can be written as
(3.1) x_{t_{n+1}} = x_{t_n} + u_{t_n} \Delta t_n + v_{t_n} \Delta W_n, \quad x_{t_0} = a,
and the corresponding cost functional is given as
(3.2) J \approx \frac{1}{M} \sum_{j=1}^{M} \Big[ \sum_{n=0}^{N-1} f(t_n, x_{t_n}^j, u_{t_n}^j, v_{t_n}^j) \Delta t_n + \Phi(x_{t_N}^j) \Big],
where $M$ represents the number of Monte Carlo samples and $j$ indexes the sample paths.
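The Euler-Maruyama discretization of the controlled state equation and the Monte Carlo estimate of the cost can be sketched as follows, in one dimension and with illustrative choices of the running cost $f$ and terminal cost $\Phi$ (neither is taken from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(t, x, u, v):
    # Assumed running cost for illustration: f(t, x, u, v) = 0.5*(u - x)^2 + 0.5*v^2
    return 0.5 * (u - x) ** 2 + 0.5 * v ** 2

def Phi(x):
    # Assumed terminal cost for illustration
    return 0.5 * x ** 2

def cost_estimate(u_fn, v_fn, x0=1.0, T=1.0, N=50, M=10_000):
    """Monte Carlo estimate of E[ sum_n f(t_n, x_n, u_n, v_n) dt + Phi(x_N) ]
    under the Euler-Maruyama step x_{n+1} = x_n + u_n dt + v_n dW_n."""
    dt = T / N
    x = np.full(M, x0)
    J = np.zeros(M)
    for n in range(N):
        t = n * dt
        u, v = u_fn(t, x), v_fn(t, x)
        J += f(t, x, u, v) * dt
        x = x + u * dt + v * rng.normal(0.0, np.sqrt(dt), size=M)
    return float(np.mean(J + Phi(x)))

# Evaluate the naive control u = 0, v = 0 (the paths then stay at x0)
J0 = cost_estimate(lambda t, x: np.zeros_like(x), lambda t, x: np.zeros_like(x))
assert abs(J0 - 1.0) < 1e-9
```

In a full algorithm, `u_fn` and `v_fn` would be the neural network approximations of the controls, and this estimate would serve as the training loss.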
We introduce a feedforward neural network of the form
(3.3) \varphi(x; \theta) = A_L \circ \sigma \circ A_{L-1} \circ \cdots \circ \sigma \circ A_1(x),
where
- $L$ is a positive integer specifying the depth of the neural network;
- $A_l$, $l = 1, \ldots, L$, are affine functions of the form $A_l(x) = W_l x + b_l$, where the matrix weights $W_l$ and bias vectors $b_l$ are trainable parameters, $\theta$ denotes the whole set of trainable parameters, and $d_l$ is the number of neurons at layer $l$;
- $\sigma$ denotes the nonlinear activation functions, such as the sigmoid, the rectified linear unit (ReLU), the exponential linear unit (ELU), etc.
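A minimal numpy sketch of such a feedforward network follows; the layer sizes, He-style initialization, and ReLU activation are illustrative choices, not those of the paper:

```python
import numpy as np

def init_params(layer_dims, rng):
    """Create the trainable weights W_l and biases b_l for each affine map A_l."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / n), np.zeros(m))
            for n, m in zip(layer_dims[:-1], layer_dims[1:])]

def relu(x):
    return np.maximum(x, 0.0)

def forward(params, x):
    """Compose the affine maps A_l(x) = W_l x + b_l with the activation in between;
    no activation is applied on the output layer."""
    for W, b in params[:-1]:
        x = relu(W @ x + b)
    W, b = params[-1]
    return W @ x + b

rng = np.random.default_rng(1)
# depth L = 3 affine layers: input dim 3, two hidden layers of 16 neurons, output dim 2
params = init_params([3, 16, 16, 2], rng)
out = forward(params, np.ones(3))
assert out.shape == (2,)
```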
We approximate the controls $u$ and $v$ with two different neural networks of the form (3.3):
(3.4) 
The two neural networks have the same input dimension but different output dimensions. Figure 1 shows an example of the whole network architecture. In this paper, the parameters of each neural network are shared across all the time points, i.e., a single network is used to simulate each control, and the time point is regarded as an input of the neural network.
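The parameter-sharing strategy described above can be sketched as follows: a single network takes the time point together with the state as input and is reused at every time step (the fixed weights below are a hypothetical stand-in for a trained network of the form (3.3)):

```python
import numpy as np

def mlp(t_and_x):
    # Hypothetical fixed-weight network for illustration; in practice the weights
    # are trainable parameters shared across all time points.
    W = np.array([[0.5, -0.25], [1.0, 0.75]])
    return np.tanh(W @ t_and_x)

x = np.array([0.3])
# The same network is evaluated at different times t by appending t to its input
outputs = [mlp(np.concatenate(([t], x))) for t in (0.0, 0.5, 1.0)]
assert len(outputs) == 3 and all(o.shape == (2,) for o in outputs)
```

Sharing one network across time avoids training a separate set of parameters for every $t_n$.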
3.1 Case 1: the function $f$ has an explicit form
When the function $f$ defined in (2.5) has an explicit form, the discrete cost functional (3.2) can be approximated directly with
(3.5) L(\theta) = \frac{1}{M} \sum_{j=1}^{M} \Big[ \sum_{n=0}^{N-1} f(t_n, x_{t_n}^j, u_{t_n}^j, v_{t_n}^j) \Delta t_n + \Phi(x_{t_N}^j) \Big],
which is also the loss function we need to minimize over the whole neural network; $u_{t_n}$ and $v_{t_n}$ are the outputs of the two neural networks at time $t_n$. Both neural networks contain one input layer and three hidden layers, and they differ in the dimensions of their output layers. In order to simplify the representation, here we use $\theta$ to represent the training parameters of both neural networks.
To minimize the loss function (3.5) and learn the optimal parameters, basic optimization algorithms such as stochastic gradient descent (SGD), AdaGrad, RMSProp, and Adam, which are already implemented in TensorFlow, can be used. In this paper, the Adam method [36] is adopted as the optimizer.
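The Adam update rule can be sketched in a few lines of numpy; here it is applied to a toy quadratic loss standing in for (3.5), with a piecewise decay learning rate of the kind mentioned in section 1 (all numerical choices are illustrative):

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected moving averages of the gradient (m)
    and of its elementwise square (v) drive the parameter step."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Toy loss L(theta) = ||theta - 3||^2, whose gradient is 2 * (theta - 3)
theta = np.zeros(2)
state = (np.zeros(2), np.zeros(2), 0)
for i in range(900):
    lr = 0.1 * (0.1 ** (i // 300))  # piecewise decay: 0.1, then 0.01, then 0.001
    theta, state = adam_step(theta, 2.0 * (theta - 3.0), state, lr=lr)
assert np.allclose(theta, 3.0, atol=0.05)
```

In the actual algorithms, the gradient of (3.5) with respect to the network parameters is computed by automatic differentiation in TensorFlow rather than by hand.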