Two-Timescale Stochastic Gradient Descent in Continuous Time with Applications to Joint Online Parameter Estimation and Optimal Sensor Placement

by Louis Sharrock et al.

In this paper, we establish the almost sure convergence of two-timescale stochastic gradient descent algorithms in continuous time under general noise and stability conditions, extending well-known results in discrete time. We analyse algorithms with both additive and non-additive noise. In the non-additive case, our analysis is carried out under the assumption that the noise is a continuous-time Markov process controlled by the algorithm states. The algorithms that we consider can be used to solve a broad class of unconstrained bilevel optimisation problems, which involve the joint optimisation of two interdependent objective functions. We study one such problem in detail, namely, the problem of joint online parameter estimation and optimal sensor placement for a continuous-time hidden Markov model. We demonstrate rigorously how this problem can be formulated as a bilevel optimisation problem, and propose a solution in the form of a two-timescale stochastic gradient descent algorithm in continuous time. Furthermore, under suitable conditions on the process consisting of the latent signal, the filter, and the filter derivatives, we establish almost sure convergence of the online parameter estimates and the optimal sensor placements to the stationary points of the asymptotic log-likelihood and the asymptotic filter covariance, respectively. We also provide two numerical examples, illustrating the application of the proposed methodology to a partially observed one-dimensional Beneš equation and a partially observed stochastic advection-diffusion equation.
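To illustrate the two-timescale idea on a toy problem (not the paper's hidden Markov model setting), the sketch below runs an Euler discretisation of two coupled noisy gradient flows. The fast variable y tracks the minimiser y*(x) = x of the inner objective f(x, y) = (y - x)²/2, while the slow variable x descends a noisy gradient x + y - 1; with y frozen at y*(x), the slow drift becomes 2x - 1, the gradient of h(x) = x² - x, so both variables should approach 1/2. The objectives, step-size schedules, and noise scale are all illustrative choices; the only structural requirement from the two-timescale theory is that the slow step size decays faster than the fast one (eta/gamma → 0).

```python
import numpy as np

rng = np.random.default_rng(0)

x, y = 2.0, -1.0  # arbitrary initial slow / fast states
for n in range(200_000):
    gamma = 0.5 / (1 + n) ** 0.6  # fast step size
    eta = 0.5 / (1 + n) ** 0.9    # slow step size; eta / gamma -> 0
    # Fast update: noisy gradient of f(x, y) = (y - x)^2 / 2 in y.
    y -= gamma * ((y - x) + 0.1 * rng.standard_normal())
    # Slow update: noisy gradient x + y - 1; at y = y*(x) = x this is 2x - 1.
    x -= eta * ((x + y - 1) + 0.1 * rng.standard_normal())

print(x, y)  # both should be close to 0.5
```

The separation of step sizes is what lets the analysis treat the fast variable as if it had already equilibrated at y*(x) when studying the slow one, mirroring the role of the filter (fast) relative to the parameter and sensor-placement updates (slow) in the paper.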

