Scaling Distributed Multi-task Reinforcement Learning with Experience Sharing
Recently, DARPA launched the ShELL program, which aims to explore how experience sharing can benefit distributed lifelong learning agents in adapting to new challenges. In this paper, we address this issue by conducting both theoretical and empirical research on distributed multi-task reinforcement learning (RL), where a group of N agents collaboratively solves M tasks without prior knowledge of their identities. We formulate the problem as a set of linearly parameterized contextual Markov decision processes (MDPs), where each task is represented by a context that specifies its transition dynamics and rewards. To tackle this problem, we propose an algorithm, DistMT-LSVI, in which the agents first identify the tasks and then exchange information through a central server to derive ϵ-optimal policies for all tasks. Our analysis shows that, to achieve ϵ-optimal policies for all M tasks, a single agent running DistMT-LSVI needs at most 𝒪̃(d^3H^6(ϵ^-2+c_sep^-2)·M/N) episodes, where c_sep>0 is a constant measuring task separability, H is the horizon of each episode, and d is the feature dimension of the dynamics and rewards. Notably, DistMT-LSVI improves on the sample complexity of the non-distributed setting by a factor of 1/N, since without experience sharing each agent must independently learn ϵ-optimal policies for all M tasks using 𝒪̃(d^3H^6Mϵ^-2) episodes. Additionally, numerical experiments on OpenAI Gym Atari environments validate our theoretical findings.
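To make the improvement explicit, the two bounds can be written side by side; the labels T_dist and T_single below are illustrative (they are not notation from the paper), and the final ratio assumes the regime ϵ ≲ c_sep, so that the ϵ^-2 term dominates the bound:

T_{\mathrm{dist}} = \tilde{\mathcal{O}}\!\left(d^{3}H^{6}\left(\epsilon^{-2}+c_{\mathrm{sep}}^{-2}\right)\cdot \tfrac{M}{N}\right),
\qquad
T_{\mathrm{single}} = \tilde{\mathcal{O}}\!\left(d^{3}H^{6}M\epsilon^{-2}\right),
\qquad
\frac{T_{\mathrm{dist}}}{T_{\mathrm{single}}} = \tilde{\mathcal{O}}\!\left(\tfrac{1}{N}\right).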