Future-conditioned Unsupervised Pretraining for Decision Transformer

05/26/2023
by Zhihui Xie, et al.

Recent research in offline reinforcement learning (RL) has demonstrated that return-conditioned supervised learning is a powerful paradigm for decision-making problems. While promising, return conditioning is limited to training data labeled with rewards and therefore struggles to learn from unsupervised data. In this work, we aim to utilize generalized future conditioning to enable efficient unsupervised pretraining from reward-free and sub-optimal offline data. We propose Pretrained Decision Transformer (PDT), a conceptually simple approach to unsupervised RL pretraining. PDT leverages future trajectory information as a privileged context to predict actions during training. The ability to make decisions based on both present and future factors enhances PDT's capability for generalization. Moreover, this feature can be easily incorporated into a return-conditioned framework for online finetuning by assigning return values to possible futures and sampling future embeddings based on their respective values. Empirically, PDT outperforms or performs on par with its supervised pretraining counterpart, especially when dealing with sub-optimal data. Further analysis reveals that PDT can extract diverse behaviors from offline data and controllably sample high-return behaviors through online finetuning. Code is available here.
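To make the abstract's setup concrete, below is a minimal PyTorch sketch of future-conditioned action prediction as described: a future trajectory segment is encoded into an embedding that serves as a privileged context for predicting the current action, and a value head scores future embeddings for return-conditioned finetuning. All names (FutureEncoder, FutureConditionedPolicy, FutureValueHead) and the specific architectures are illustrative assumptions, not the paper's implementation; PDT itself uses a causal transformer over trajectories.

```python
# A minimal sketch of future-conditioned pretraining (hypothetical names and
# architectures; PDT's actual model is a causal transformer, not this MLP/GRU).
import torch
import torch.nn as nn

class FutureEncoder(nn.Module):
    """Encodes a future trajectory segment into a compact embedding z."""
    def __init__(self, obs_dim, act_dim, embed_dim=64):
        super().__init__()
        self.net = nn.GRU(obs_dim + act_dim, embed_dim, batch_first=True)

    def forward(self, future_obs, future_act):
        # future_obs: (B, T, obs_dim), future_act: (B, T, act_dim)
        x = torch.cat([future_obs, future_act], dim=-1)
        _, h = self.net(x)          # h: (1, B, embed_dim)
        return h.squeeze(0)         # (B, embed_dim)

class FutureConditionedPolicy(nn.Module):
    """Predicts the current action from the current state and a future
    embedding. An MLP is used here only to keep the sketch short."""
    def __init__(self, obs_dim, act_dim, embed_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

class FutureValueHead(nn.Module):
    """Scores a future embedding. For online finetuning, the abstract
    describes assigning return values to futures and sampling embeddings
    by value; a head like this could supply those scores (an assumption)."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.v = nn.Linear(embed_dim, 1)

    def forward(self, z):
        return self.v(z).squeeze(-1)

def pretrain_loss(encoder, policy, obs_t, act_t, future_obs, future_act):
    """Reward-free pretraining step: behavior cloning of act_t conditioned
    on the segment that follows time step t (the privileged future)."""
    z = encoder(future_obs, future_act)
    pred = policy(obs_t, z)
    return ((pred - act_t) ** 2).mean()

# Shape-only usage example with random tensors:
B, T, obs_dim, act_dim = 8, 20, 17, 6
enc = FutureEncoder(obs_dim, act_dim)
pol = FutureConditionedPolicy(obs_dim, act_dim)
loss = pretrain_loss(enc, pol,
                     torch.randn(B, obs_dim), torch.randn(B, act_dim),
                     torch.randn(B, T, obs_dim), torch.randn(B, T, act_dim))
```

Because the pretraining loss never touches rewards, this objective can be trained on reward-free, sub-optimal offline data; rewards enter only at finetuning time, when futures are scored and sampled by value.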


Related research

02/11/2021 · Representation Matters: Offline Pretraining for Sequential Decision Making
The recent success of supervised learning methods on ever larger offline...

06/26/2023 · Supervised Pretraining Can Learn In-Context Reinforcement Learning
Large transformer models trained on diverse datasets have shown a remark...

06/24/2023 · Waypoint Transformer: Reinforcement Learning via Supervised Learning with Intermediate Targets
Despite the recent advancements in offline reinforcement learning via su...

10/24/2022 · Dichotomy of Control: Separating What You Can Control from What You Cannot
Future- or return-conditioned supervised learning is an emerging paradig...

05/26/2023 · Emergent Agentic Transformer from Chain of Hindsight Experience
Large transformer models powered by diverse data and model scale have do...

09/12/2023 · ACT: Empowering Decision Transformer with Dynamic Programming via Advantage Conditioning
Decision Transformer (DT), which employs expressive sequence modeling te...

05/31/2022 · You Can't Count on Luck: Why Decision Transformers Fail in Stochastic Environments
Recently, methods such as Decision Transformer that reduce reinforcement...
