Linear convergence of a policy gradient method for finite horizon continuous time stochastic control problems

by Christoph Reisinger et al.

Despite its popularity in the reinforcement learning community, a provably convergent policy gradient method for general continuous space-time stochastic control problems has been elusive. This paper closes this gap by proposing a proximal gradient algorithm for feedback controls of finite-time horizon stochastic control problems. The state dynamics are continuous-time nonlinear diffusions with controlled drift and possibly degenerate noise, and the objectives are nonconvex in the state and nonsmooth in the control. We prove under suitable conditions that the algorithm converges linearly to a stationary point of the control problem, and is stable with respect to policy updates by approximate gradient steps. The convergence result justifies the recent reinforcement learning heuristic that adding entropy regularization to the optimization objective accelerates the convergence of policy gradient methods. The proof exploits careful regularity estimates of backward stochastic differential equations.
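To illustrate the kind of iteration the abstract describes, here is a minimal sketch of a proximal gradient step on a toy finite-dimensional problem: a smooth quadratic objective plus a nonsmooth l1 penalty, updated via phi_{k+1} = prox_{tau*g}(phi_k - tau*grad f(phi_k)). This is not the paper's continuous-time algorithm (which acts on feedback controls and uses BSDE-based gradients); the function names `prox_l1` and `proximal_gradient` and the toy data are purely illustrative.

```python
# Toy proximal gradient iteration on f(phi) + g(phi) with
# f(phi) = 0.5*||A phi - b||^2 (smooth, nonconvex objectives in the
# paper; convex here for simplicity) and g(phi) = lam*||phi||_1
# (nonsmooth, standing in for a nonsmooth control cost).
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, tau, iters=200):
    phi = np.zeros(A.shape[1])
    history = []
    for _ in range(iters):
        grad = A.T @ (A @ phi - b)                 # gradient of the smooth part
        phi = prox_l1(phi - tau * grad, tau * lam)  # proximal (backward) step
        obj = 0.5 * np.sum((A @ phi - b) ** 2) + lam * np.sum(np.abs(phi))
        history.append(obj)
    return phi, history

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) + 2 * np.eye(20, 5)
b = rng.standard_normal(20)
tau = 1.0 / np.linalg.norm(A.T @ A, 2)  # step size 1/L, L = Lipschitz const.
phi, hist = proximal_gradient(A, b, lam=0.1, tau=tau)
# With step size 1/L the objective is non-increasing along the iterates.
```

With the step size bounded by the reciprocal of the gradient's Lipschitz constant, the objective decreases monotonically toward a stationary value, mirroring (in a much simpler setting) the stability-of-updates property the paper establishes.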




A Policy Gradient Framework for Stochastic Optimal Control Problems with Global Convergence Guarantee

In this work, we consider the stochastic optimal control problem in cont...

Neural ODEs as Feedback Policies for Nonlinear Optimal Control

Neural ordinary differential equations (Neural ODEs) model continuous ti...

Convergence of policy gradient methods for finite-horizon stochastic linear-quadratic control problems

We study the global linear convergence of policy gradient (PG) methods f...

Reinforcement learning for linear-convex models with jumps via stability analysis of feedback controls

We study finite-time horizon continuous-time linear-convex reinforcement...

Convergence of a robust deep FBSDE method for stochastic control

In this paper we propose a deep learning based numerical scheme for stro...

Numerics for Stochastic Distributed Parameter Control Systems: a Finite Transposition Method

In this chapter, we present some recent progresses on the numerics for s...

Global Convergence of Policy Gradient Methods to (Almost) Locally Optimal Policies

Policy gradient (PG) methods are a widely used reinforcement learning me...
