
COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning

by Avi Singh, et al.
UC Berkeley

Reinforcement learning has been applied to a wide variety of robotics problems, but most such applications involve collecting data from scratch for each new task. Since the amount of robot data we can collect for any single task is limited by time and cost considerations, the learned behavior is typically narrow: the policy can only execute the task in the handful of scenarios it was trained on. What if there was a way to incorporate a large amount of prior data, either from previously solved tasks or from unsupervised or undirected environment interaction, to extend and generalize learned behaviors? While most prior work on extending robotic skills using pre-collected data focuses on building explicit hierarchies or skill decompositions, we show in this paper that we can reuse prior data to extend new skills simply through dynamic programming. We show that even when the prior data does not actually succeed at solving the new task, it can still be utilized for learning a better policy, by providing the agent with a broader understanding of the mechanics of its environment. We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets to solve a new task, with our hardest experimental setting involving composing four robotic skills in a row: picking, placing, drawer opening, and grasping, where a +1/0 sparse reward is provided only on task completion. We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands, and present results in both simulated and real-world domains. Additional materials and source code can be found on our project website.
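The key idea above, that reward-free prior data can still improve a policy via dynamic programming, can be illustrated with a toy sketch. This is not the paper's algorithm (COG trains an image-based Q-function with conservative Q-learning); it is a minimal tabular analogue, with all states, datasets, and hyperparameters assumed for illustration. "Prior" transitions traverse the early part of a chain without ever seeing reward, while the "task" data contains only the final step with a sparse +1 reward; Bellman backups over the merged dataset propagate value backward through the prior transitions.

```python
# Toy tabular offline Q-learning sketch (assumed setup, not COG itself).
# A 5-state chain: prior data covers 0->1->2->3 with zero reward; task data
# covers only 3->4, which yields the sparse +1 reward on completion.

N_STATES = 5          # state 4 is terminal (task complete)
GAMMA = 0.9           # discount factor
LR = 0.5              # learning rate

# Offline dataset of (state, action, reward, next_state); action 0 = "advance".
prior_data = [(s, 0, 0.0, s + 1) for s in range(3)]   # never reaches the goal
task_data = [(3, 0, 1.0, 4)]                          # sparse +1 only here
dataset = prior_data + task_data

Q = [[0.0] for _ in range(N_STATES)]  # one action per state

for _ in range(200):  # sweep Bellman backups over the fixed dataset
    for s, a, r, s2 in dataset:
        # No bootstrapping past the terminal state.
        target = r if s2 == N_STATES - 1 else r + GAMMA * max(Q[s2])
        Q[s][a] += LR * (target - Q[s][a])

# Value propagates backward through the reward-free prior transitions, so the
# agent prefers "advance" even from state 0, which no rewarded trajectory
# ever visited.
print([round(max(q), 3) for q in Q])  # -> [0.729, 0.81, 0.9, 1.0, 0.0]
```

The point of the sketch is that the prior data contributes no reward signal at all; it only supplies the transition structure that lets dynamic programming chain the new skill onto previously seen behaviors.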



