Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles

09/18/2023
by Noah Golowich, et al.

The key assumption underlying linear Markov Decision Processes (MDPs) is that the learner has access to a known feature map ϕ(x, a) that maps state-action pairs to d-dimensional vectors, and that the rewards and transitions are linear functions in this representation. But where do these features come from? In the absence of expert domain knowledge, a tempting strategy is to use the “kitchen sink” approach and hope that the true features are included in a much larger set of potential features. In this paper we revisit linear MDPs from the perspective of feature selection. In a k-sparse linear MDP, there is an unknown subset S ⊂ [d] of size k containing all the relevant features, and the goal is to learn a near-optimal policy in only poly(k, log d) interactions with the environment. Our main result is the first polynomial-time algorithm for this problem. In contrast, earlier works either made prohibitively strong assumptions that obviated the need for exploration, or required solving computationally intractable optimization problems. Along the way, we introduce the notion of an emulator: a succinct approximate representation of the transitions that suffices for computing certain Bellman backups. Since linear MDPs are a non-parametric model, it is not even obvious whether polynomial-sized emulators exist. We show that they do exist and can be computed efficiently via convex programming. As a corollary of our main result, we give an algorithm for learning a near-optimal policy in block MDPs whose decoding function is a low-depth decision tree; the algorithm runs in quasi-polynomial time and takes a polynomial number of samples. This can be seen as a reinforcement learning analogue of classic results in computational learning theory. Furthermore, it gives a natural model in which improving the sample complexity via representation learning is computationally feasible.
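To make the model concrete, below is a minimal NumPy sketch of a k-sparse linear MDP on a toy tabular state space. All concrete choices (the dimensions, the random feature map phi, and the parameters theta and mu) are illustrative assumptions rather than constructions from the paper; the sketch only enforces the structure the abstract describes: rewards and transitions are linear in ϕ(x, a), and the underlying parameters are supported on an unknown subset S of size k.

```python
# Minimal sketch of a k-sparse linear MDP on a toy tabular state space.
# All concrete choices (dimensions, random phi, theta, mu) are illustrative
# assumptions, not constructions from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 3                         # ambient feature dimension and sparsity
n_states, n_actions = 5, 4

# Unknown relevant subset S of [d] with |S| = k (hidden from the learner).
S = rng.choice(d, size=k, replace=False)

# Known feature map phi(x, a) in R^d.  To keep the toy transitions well
# defined, the restriction of phi to S is normalized to a probability
# vector; the remaining d - k coordinates play the role of irrelevant
# "kitchen sink" features.
phi = rng.random((n_states, n_actions, d))
phi[:, :, S] /= phi[:, :, S].sum(axis=2, keepdims=True)

# Reward r(x, a) = <phi(x, a), theta> with theta supported on S.
theta = np.zeros(d)
theta[S] = rng.random(k)
reward = phi @ theta                 # shape (n_states, n_actions)

# Transitions P(x' | x, a) = <phi(x, a), mu(x')> with each mu(x') supported
# on S; choosing the columns of mu over S to be distributions over next
# states keeps every P(. | x, a) normalized.
mu = np.zeros((n_states, d))
mu[:, S] = rng.dirichlet(np.ones(n_states), size=k).T
P = np.einsum('xad,yd->xay', phi, mu)   # P[x, a, x'] sums to 1 over x'
assert np.allclose(P.sum(axis=2), 1.0)

# The learner observes phi and samples rewards/transitions, but never sees
# S, theta, or mu; the goal is a near-optimal policy from poly(k, log d)
# episodes of interaction.
```

A learner that ignores the sparse structure would pay a sample complexity scaling with the ambient dimension d; the point of the k-sparse setting is that only the k relevant coordinates matter, so poly(k, log d) interactions suffice, and the paper's contribution is achieving this in polynomial time.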
