The Power of Linear Controllers in LQR Control
The Linear Quadratic Regulator (LQR) framework considers the problem of regulating a linear dynamical system perturbed by environmental noise. We compute the policy regret between three distinct control policies: i) the optimal online policy, whose linear structure is given by the Riccati equations; ii) the optimal offline linear policy, which is the best linear state feedback policy given the noise sequence; and iii) the optimal offline policy, which selects the globally optimal control actions given the noise sequence. We fully characterize the optimal offline policy and show that it has a recursive form in terms of the optimal online policy and future disturbances. We also show that the cost of the optimal offline linear policy converges to the cost of the optimal online policy as the time horizon grows large, and consequently the optimal offline linear policy incurs linear regret relative to the optimal offline policy, even in the optimistic setting where the noise is drawn i.i.d. from a known distribution. Although we focus on the setting where the noise is stochastic, our results also imply new lower bounds on the policy regret achievable when the noise is chosen by an adaptive adversary.
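The three policies compared above can be illustrated numerically. The sketch below (not from the paper; the scalar dynamics, horizon, and unit costs are illustrative assumptions) rolls out the optimal online policy from a finite-horizon Riccati recursion on a scalar system, then computes the optimal offline policy by minimizing the same quadratic cost with the full noise sequence in hand. The offline optimum lower-bounds the cost of any causal policy on the same realization, which is the gap the regret bounds quantify.

```python
import numpy as np

rng = np.random.default_rng(0)
T, a = 50, 0.9                       # illustrative horizon and scalar dynamics
w = rng.normal(size=T)               # i.i.d. noise: x_{t+1} = a x_t + u_t + w_t

# Optimal online policy: finite-horizon Riccati recursion with unit costs
# (stage cost x_t^2 + u_t^2, terminal cost x_T^2, x_0 = 0).
P = np.zeros(T + 1)
P[T] = 1.0
K = np.zeros(T)
for t in range(T - 1, -1, -1):
    K[t] = a * P[t + 1] / (1.0 + P[t + 1])       # feedback gain u_t = -K_t x_t
    P[t] = 1.0 + a * P[t + 1] * (a - K[t])        # Riccati backward step

def rollout_online():
    """Cost of the Riccati (online) policy on this noise realization."""
    x, cost = 0.0, 0.0
    for t in range(T):
        u = -K[t] * x
        x = a * x + u + w[t]
        cost += u**2 + x**2
    return cost

# Optimal offline policy: sees the whole noise sequence, so it solves the
# quadratic program min_u ||x||^2 + ||u||^2 with x = G (u + w) stacked in time,
# where G[i, s] = a^(i-s) for s <= i propagates inputs/noise to future states.
G = np.array([[a**(i - s) if s <= i else 0.0 for s in range(T)]
              for i in range(T)])
u_off = np.linalg.solve(G.T @ G + np.eye(T), -(G.T @ G) @ w)
x_off = G @ (u_off + w)
cost_off = x_off @ x_off + u_off @ u_off

print(f"online cost:  {rollout_online():.3f}")
print(f"offline cost: {cost_off:.3f}")
```

Because the offline policy minimizes the identical objective with the noise known, its cost is never larger than the online rollout's on the same realization; averaging the gap over noise draws gives an empirical handle on policy regret.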