
Task-Optimal Exploration in Linear Dynamical Systems

by   Andrew Wagenmaker, et al.

Exploration in unknown environments is a fundamental problem in reinforcement learning and control. In this work, we study task-guided exploration and determine precisely what an agent must learn about its environment in order to complete a particular task. Formally, we study a broad class of decision-making problems in the setting of linear dynamical systems, a class that includes the linear quadratic regulator (LQR) problem. We provide instance- and task-dependent lower bounds which explicitly quantify the difficulty of completing a task of interest. Motivated by our lower bound, we propose a computationally efficient, experiment-design-based exploration algorithm. We show that it optimally explores the environment, collecting precisely the information needed to complete the task, and provide finite-time bounds guaranteeing that it achieves the instance- and task-optimal sample complexity, up to constant factors. Through several examples of the LQR problem, we show that performing task-guided exploration provably improves on exploration schemes which do not take the task of interest into account. Along the way, we establish that certainty-equivalence decision making is instance- and task-optimal, and obtain the first instance-optimal algorithm for the linear quadratic regulator problem. We conclude with several experiments illustrating the effectiveness of our approach in practice.
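The certainty-equivalence pipeline the abstract refers to — explore to estimate the dynamics, then act as if the estimate were exact — can be sketched for LQR as follows. This is a minimal illustration with made-up system matrices, not the paper's algorithm: in particular, the task-guided experiment design is replaced here by simple isotropic Gaussian excitation.

```python
import numpy as np

rng = np.random.default_rng(0)

# True (unknown) system: x_{t+1} = A x_t + B u_t + w_t
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)  # LQR cost matrices (the "task")

# --- Exploration: excite the system (here task-agnostic, for illustration) ---
T = 2000
xs, us = [np.zeros(2)], []
for _ in range(T):
    u = rng.normal(size=1)  # isotropic excitation; the paper optimizes this
    x_next = A @ xs[-1] + B @ u + 0.1 * rng.normal(size=2)
    us.append(u)
    xs.append(x_next)

# --- System identification: least squares on (x_t, u_t) -> x_{t+1} ---
Z = np.hstack([np.array(xs[:-1]), np.array(us)])  # regressors
Y = np.array(xs[1:])                              # targets
Theta = np.linalg.lstsq(Z, Y, rcond=None)[0].T    # estimate of [A B]
A_hat, B_hat = Theta[:, :2], Theta[:, 2:]

# --- Certainty equivalence: solve the Riccati equation on the estimate ---
def dare(A, B, Q, R, iters=500):
    """Fixed-point iteration for the discrete algebraic Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P

P_hat = dare(A_hat, B_hat, Q, R)
K_hat = np.linalg.solve(R + B_hat.T @ P_hat @ B_hat, B_hat.T @ P_hat @ A_hat)
# Certainty-equivalence controller: u_t = -K_hat @ x_t
```

With enough excitation, the estimated controller stabilizes the true system; the paper's contribution is choosing the exploration inputs so that the estimate is accurate precisely in the directions the downstream task requires.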




ExTra: Transfer-guided Exploration

In this work we present a novel approach for transfer-guided exploration...

Active Learning for Identification of Linear Dynamical Systems

We propose an algorithm to actively estimate the parameters of a linear ...

Online greedy identification of linear dynamical systems

This work addresses the problem of exploration in an unknown environment...

Regret Analysis of Certainty Equivalence Policies in Continuous-Time Linear-Quadratic Systems

This work studies theoretical performance guarantees of a ubiquitous rei...

Thompson Sampling Achieves Õ(√(T)) Regret in Linear Quadratic Control

Thompson Sampling (TS) is an efficient method for decision-making under ...

Model-Based Task Transfer Learning

A model-based task transfer learning (MBTTL) method is presented. We con...

Explore the Context: Optimal Data Collection for Context-Conditional Dynamics Models

In this paper, we learn dynamics models for parametrized families of dyn...