Online Linear Quadratic Control

06/19/2018
by Alon Cohen, et al.

We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses. We present the first efficient online learning algorithms in this setting that guarantee O(√(T)) regret under mild assumptions, where T is the time horizon. Our algorithms rely on a novel SDP relaxation for the steady-state distribution of the system. Crucially, and in contrast to previously proposed relaxations, the feasible solutions of our SDP all correspond to "strongly stable" policies that mix exponentially fast to a steady state.


