Naive Exploration is Optimal for Online LQR

01/27/2020 ∙ by Max Simchowitz, et al.

We consider the problem of online adaptive control of the linear quadratic regulator, where the true system parameters are unknown. We prove new upper and lower bounds demonstrating that the optimal regret scales as Θ(√(d_u^2 d_x T)), where T is the number of time steps, d_u is the dimension of the input space, and d_x is the dimension of the system state. Notably, our lower bounds rule out the possibility of a poly(log T)-regret algorithm, which had been conjectured due to the apparent strong convexity of the problem. Our upper bounds are attained by a simple variant of certainty equivalence control, where the learner selects control inputs according to the optimal controller for their estimate of the system while injecting exploratory random noise. While this approach was shown to achieve √T-regret by Mania et al. (2019), we show that if the learner continually refines their estimates of the system matrices, the method attains optimal dimension dependence as well. Central to our upper and lower bounds is a new approach for controlling perturbations of Riccati equations, which we call the self-bounding ODE method. The approach enables regret upper bounds which hold for any stabilizable instance, require no foreknowledge of the system except for a single stabilizing controller, and scale with natural control-theoretic quantities.
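The control scheme the abstract describes — act according to the optimal LQR controller for the current system estimate, inject exploratory noise, and periodically refit the system matrices by least squares — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the toy system (a noisy double integrator), the noise level `sigma`, and the doubling-epoch update schedule are all assumptions made for the example.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Hypothetical true system (unknown to the learner): a noisy double integrator.
A_star = np.array([[1.0, 0.1], [0.0, 1.0]])
B_star = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

def ce_gain(A_hat, B_hat):
    # Certainty equivalence: solve the discrete-time Riccati equation for the
    # *estimated* system and return the corresponding optimal LQR gain.
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    return np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

T, sigma = 500, 0.5
x = np.zeros(2)
X, U, Xn = [], [], []          # logged transitions (x_t, u_t, x_{t+1})
K = np.zeros((1, 2))           # initial (here trivially chosen) controller
for t in range(T):
    u = -K @ x + sigma * rng.standard_normal(1)        # CE input + exploration noise
    x_next = A_star @ x + B_star @ u + 0.1 * rng.standard_normal(2)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next
    if t > 10 and (t & (t + 1)) == 0:                  # refit at doubling epochs
        Z = np.hstack([np.array(X), np.array(U)])      # regressors [x_t, u_t]
        Theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
        A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
        K = ce_gain(A_hat, B_hat)                      # refined CE controller
```

Continually refitting (rather than estimating once and committing) is the refinement the paper shows yields the optimal dimension dependence in the regret.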





