Episodic Linear Quadratic Regulators with Low-rank Transitions

by Tianyu Wang, et al.

Linear Quadratic Regulators (LQRs) have achieved enormous success in real-world applications. Recently, attention has turned to efficient learning algorithms for LQRs whose dynamics are unknown. Existing methods learn to control the unknown system using a number of episodes that depends polynomially on the system parameters, including the ambient dimension of the states. These approaches, however, become inefficient in common scenarios, e.g., when the states are high-resolution images. In this paper, we propose an algorithm that exploits the system's intrinsic low-rank structure for efficient learning. For problems of rank m, our algorithm achieves a K-episode regret bound of order O(m^{3/2} K^{1/2}). Consequently, the sample complexity of our algorithm depends only on the rank, m, rather than the ambient dimension, d, which can be orders of magnitude larger.
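To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of a discrete-time LQR whose transition matrix has rank m far below the ambient dimension d, with the optimal feedback gain computed by iterating the finite-horizon Riccati recursion. All names, dimensions, and scalings here are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the paper):
# ambient state dimension d, intrinsic rank m << d.
rng = np.random.default_rng(0)
d, m = 50, 3

# Low-rank transition: A = U @ V.T with U, V in R^{d x m}, so rank(A) <= m.
# The 1/sqrt(d) scaling keeps the system stable for this toy example.
U = rng.normal(size=(d, m)) / np.sqrt(d)
V = rng.normal(size=(d, m)) / np.sqrt(d)
A = U @ V.T
B = rng.normal(size=(d, m)) / np.sqrt(d)  # m control inputs, for simplicity
Q = np.eye(d)                             # state cost x^T Q x
R = np.eye(m)                             # control cost u^T R u

# Riccati recursion (value iteration), run to approximate convergence:
#   P <- Q + A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A
P = Q.copy()
for _ in range(200):
    G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ G)

# Optimal linear feedback u_t = -K_gain @ x_t for the converged P.
K_gain = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

The point of the sketch is only that the transition dynamics live in an m-dimensional subspace even though the state is d-dimensional; the paper's contribution is learning such a system with sample complexity governed by m rather than d.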



