Thompson Sampling Efficiently Learns to Control Diffusion Processes

Diffusion processes that evolve according to linear stochastic differential equations are an important family of continuous-time dynamic decision-making models. Optimal policies for them are well studied under full certainty about the drift matrices. However, little is known about data-driven control of diffusion processes with uncertain drift matrices, because conventional discrete-time analysis techniques are not applicable. In addition, while the task can be viewed as a reinforcement learning problem involving an exploration-exploitation trade-off, ensuring system stability is a fundamental component of designing optimal policies. We establish that the popular Thompson sampling algorithm learns optimal actions quickly, incurring regret that grows only as the square root of time, and also stabilizes the system within a short time period. To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem. We validate our theoretical results through empirical simulations with real parameter matrices from two settings: airplane control and blood glucose control. Moreover, we observe that Thompson sampling significantly improves worst-case regret compared to state-of-the-art algorithms, suggesting that it explores in a more guarded fashion. Our theoretical analysis involves the characterization of a certain optimality manifold that ties the local geometry of the drift parameters to the optimal control of the diffusion process. We expect this technique to be of broader interest.
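To make the setting concrete, the loop described above can be sketched for a scalar diffusion dx = (a x + b u) dt + dW with unknown drift parameters (a, b): maintain a Gaussian posterior over (a, b) via Bayesian linear regression on Euler-discretized increments, draw a sample at the start of each epoch, and play the certainty-equivalent linear-quadratic control for the sampled parameters. This is a minimal illustrative sketch, not the paper's algorithm; all numerical values (cost weights, epoch lengths, the guard on the sampled input gain) are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# True (unknown to the learner) scalar drift parameters: dx = (a*x + b*u) dt + dW
a_true, b_true = 0.5, 1.0      # hypothetical values; a_true > 0 means open-loop unstable
q, r = 1.0, 1.0                # quadratic cost weights (assumed)
dt, n_epochs, steps = 0.01, 50, 200

def lqr_gain(a, b):
    # Scalar continuous-time Riccati equation: (b**2/r)*P**2 - 2*a*P - q = 0
    P = r * (a + np.sqrt(a**2 + q * b**2 / r)) / b**2
    return b * P / r           # optimal feedback: u = -K x

# Gaussian posterior over theta = (a, b) from the increments dx ~ theta^T z dt + dW,
# with regressor z = (x, u): precision accumulates z z^T dt, mean solves Lam mu = V.
Lam = np.eye(2)                # prior precision
V = np.zeros(2)                # accumulated z * dx
mu = np.zeros(2)               # posterior mean

x = 1.0
for epoch in range(n_epochs):
    # Thompson sampling: draw drift parameters from the current posterior
    a_s, b_s = rng.multivariate_normal(mu, np.linalg.inv(Lam))
    if abs(b_s) < 0.1:         # crude guard against a near-zero sampled input gain
        b_s = 0.1
    K = lqr_gain(a_s, b_s)
    for _ in range(steps):
        u = -K * x
        z = np.array([x, u])
        dW = np.sqrt(dt) * rng.standard_normal()
        dx = (a_true * x + b_true * u) * dt + dW
        Lam += np.outer(z, z) * dt      # Bayesian linear-regression update
        V += z * dx
        mu = np.linalg.solve(Lam, V)
        x += dx

print(f"posterior mean: a~{mu[0]:.2f}, b~{mu[1]:.2f}, final state x={x:.3f}")
```

Within an epoch the regressor (x, u) is collinear because u = -K x; identifiability comes from the gain K changing across epochs as new posterior samples are drawn, which is one informal reason randomized (Thompson) exploration can help here.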
