Imposing Robust Structured Control Constraint on Reinforcement Learning of Linear Quadratic Regulator
This paper addresses learning a structured feedback controller that achieves sufficient robustness to exogenous inputs for linear dynamical systems with an unknown state matrix. Structural constraints on the controller are necessary for many cyber-physical systems, and our approach supports any generic structure, paving the way for distributed learning control. The methodology combines ideas from reinforcement learning (RL) with control-theoretic sufficient conditions for stability and performance. First, a model-based framework is formulated using dynamic programming to embed the structural constraint in the linear quadratic regulator (LQR) setting along with sufficient robustness conditions. Thereafter, we translate these conditions into a data-driven, learning-based framework - robust structured reinforcement learning (RSRL) - that enjoys control-theoretic guarantees on stability and convergence. We validate our theoretical results with a simulation of a multi-agent network with 6 agents.
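To illustrate the model-based side of the problem, the sketch below shows one way a structural constraint can be embedded in LQR design: a projected gradient step on the feedback gain, where a binary mask encodes the permitted sparsity pattern. This is an illustrative sketch only, not the paper's RSRL algorithm; the system matrices, mask, and step size are hypothetical, and it assumes the state matrix is known (the model-based setting) with a stabilizing structured initial gain.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lqr_cost(A, B, K, Q, R):
    """LQR cost trace(P) for a stabilizing gain K (continuous-time)."""
    Acl = A - B @ K
    # P solves the closed-loop Lyapunov equation Acl^T P + P Acl + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return np.trace(P)

def structured_lqr_step(A, B, K, Q, R, mask, lr=1e-2):
    """One gradient step on the LQR cost, projected onto a sparsity pattern."""
    Acl = A - B @ K
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # L: closed-loop Gramian, solves Acl L + L Acl^T + I = 0
    L = solve_continuous_lyapunov(Acl, -np.eye(A.shape[0]))
    grad = 2.0 * (R @ K - B.T @ P) @ L  # standard LQR policy gradient
    # Zero out gradient entries outside the allowed structure
    return K - lr * (grad * mask)
```

A diagonal mask, for instance, restricts each control input to its own state, mimicking a decentralized structure; the data-driven RSRL framework in the paper replaces the explicit use of the state matrix with learned quantities.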