On the Power of Multitask Representation Learning in Linear MDP

06/15/2021
by Rui Lu, et al.

While multitask representation learning has become a popular approach in reinforcement learning (RL), theoretical understanding of why and when it works remains limited. This paper presents an analysis of the statistical benefit of multitask representation learning in linear Markov Decision Processes (MDPs) under a generative model. We consider an agent that learns a representation function ϕ from a function class Φ using T source tasks with N samples per task, and then uses the learned ϕ̂ to reduce the number of samples required for a new task. We first identify a Least-Activated-Feature-Abundance (LAFA) criterion, denoted κ, with which we prove that a straightforward least-squares algorithm learns a policy that is Õ(H^2√(𝒞(Φ)^2 κ d/NT + κ d/n)) sub-optimal. Here H is the planning horizon, 𝒞(Φ) is the complexity measure of Φ, d is the dimension of the representation (usually d ≪ 𝒞(Φ)), and n is the number of samples for the new task. Thus the required n is O(κ d H^4) for the sub-optimality gap to approach zero, which is much smaller than the O(𝒞(Φ)^2 κ d H^4) required in the setting without multitask representation learning, where the sub-optimality gap is Õ(H^2√(κ 𝒞(Φ)^2 d/n)). This theoretically explains the power of multitask representation learning in reducing sample complexity. Further, we note that to ensure high sample efficiency, the LAFA criterion κ must be small. In fact, κ can vary widely in magnitude depending on the sampling distribution used for the new task, which indicates that adaptive sampling is important for making κ depend only on d. Finally, we provide empirical results on a noisy grid-world environment to corroborate our theoretical findings.
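To make the pipeline described above concrete, the following is a minimal sketch, not the paper's exact algorithm: it assumes the function class Φ is restricted to linear representations ϕ(x) = Bᵀx with a shared matrix B ∈ R^{D×d}, fits B by alternating least squares over the T source tasks, and then solves the new task by estimating only a d-dimensional weight vector on top of the learned features. The function names (`learn_shared_representation`, `solve_new_task`), the alternating-minimisation scheme, and the ridge regularisation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def learn_shared_representation(Xs, ys, d, n_iters=50, reg=1e-6, seed=0):
    """Alternating least squares for a shared linear representation (sketch).

    Xs: list of T arrays of shape (N, D) -- raw features from the source tasks.
    ys: list of T arrays of shape (N,)   -- regression targets per task
        (e.g. empirical value estimates under a generative model).
    d:  dimension of the shared representation (d << D).
    Returns B (D, d) such that phi(x) = B.T @ x is the learned feature map.
    """
    D = Xs[0].shape[1]
    rng = np.random.default_rng(seed)
    B = np.linalg.qr(rng.standard_normal((D, d)))[0]  # random orthonormal init
    for _ in range(n_iters):
        # Step 1: with B fixed, fit each task's weight vector by (regularised)
        # least squares in the d-dimensional represented space.
        Ws = []
        for X, y in zip(Xs, ys):
            Z = X @ B                                            # (N, d)
            Ws.append(np.linalg.solve(Z.T @ Z + reg * np.eye(d), Z.T @ y))
        # Step 2: with all w_t fixed, refit the shared map B. Using the
        # row-major vectorisation beta = B.ravel(), each task contributes
        # kron(X^T X, w w^T) to the Hessian and kron(X^T y, w) to the gradient.
        # (Dense D*d x D*d solve; fine for the small D used in an illustration.)
        A = reg * np.eye(D * d)
        g = np.zeros(D * d)
        for X, y, w in zip(Xs, ys, Ws):
            A += np.kron(X.T @ X, np.outer(w, w))
            g += np.kron(X.T @ y, w)
        B = np.linalg.solve(A, g).reshape(D, d)
    return B

def solve_new_task(X_new, y_new, B, reg=1e-6):
    """Few-shot fit on the new task: only a d-dimensional weight is learned."""
    Z = X_new @ B
    d = B.shape[1]
    return np.linalg.solve(Z.T @ Z + reg * np.eye(d), Z.T @ y_new)
```

The point of the sketch is the sample-complexity asymmetry in the abstract: the T source tasks jointly pay for estimating the high-complexity representation (the 𝒞(Φ)^2 κ d/NT term), while the new task only needs enough samples to fit d coefficients on the learned features (the κ d/n term), provided its sampling distribution keeps κ small.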


