
Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP

by Amy Zhang, et al.

Multi-task reinforcement learning is a rich paradigm in which information from previously seen environments can be leveraged for better performance and improved sample efficiency in new environments. In this work, we exploit the common structure underlying a family of Markov decision processes (MDPs) to improve performance in the few-shot regime. We combine the structural assumptions of Hidden-Parameter MDPs and Block MDPs to propose a new framework, HiP-BMDP, and an approach for learning a common representation and universal dynamics model. We provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks rather than on the number of tasks, a significant improvement over prior work. To demonstrate the efficacy of the proposed method, we empirically compare against other multi-task and meta-reinforcement learning baselines and show improvements.
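The core idea described in the abstract can be sketched in code: every task shares a single state encoder and a single dynamics model, and tasks differ only through a low-dimensional hidden parameter. The sketch below is an illustrative assumption, not the paper's actual architecture; the layer sizes, the linear maps, and all names (`encode`, `predict_next_latent`, `thetas`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, LATENT_DIM, ACTION_DIM, THETA_DIM, NUM_TASKS = 8, 4, 2, 3, 5

# Shared encoder phi: observation -> latent state. This reflects the
# Block MDP assumption that many observations map to the same
# underlying latent state.
W_enc = rng.standard_normal((LATENT_DIM, STATE_DIM)) * 0.1

# Shared ("universal") dynamics weights, applied to the latent state,
# the action, and a task embedding together.
W_dyn = rng.standard_normal(
    (LATENT_DIM, LATENT_DIM + ACTION_DIM + THETA_DIM)) * 0.1

# One hidden parameter theta_k per task: the HiP-MDP assumption that
# a low-dimensional parameter captures how tasks differ.
thetas = rng.standard_normal((NUM_TASKS, THETA_DIM)) * 0.1

def encode(obs):
    """Map a raw observation to the shared latent state."""
    return np.tanh(W_enc @ obs)

def predict_next_latent(obs, action, task_id):
    """Universal dynamics: next latent depends on (phi(s), a, theta_k)."""
    z = encode(obs)
    x = np.concatenate([z, action, thetas[task_id]])
    return np.tanh(W_dyn @ x)

obs = rng.standard_normal(STATE_DIM)
action = rng.standard_normal(ACTION_DIM)

# The same (observation, action) pair yields different predictions on
# different tasks, because only theta_k varies across tasks.
z0 = predict_next_latent(obs, action, task_id=0)
z1 = predict_next_latent(obs, action, task_id=1)
```

Under this decomposition, adapting to a new task in the few-shot regime only requires estimating its `theta_k`, since the encoder and dynamics weights are already shared, which is one intuition behind sample complexity bounds that depend on the aggregate number of samples across tasks.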

