Forecast-MAE: Self-supervised Pre-training for Motion Forecasting with Masked Autoencoders

08/19/2023
by   Jie Cheng, et al.

This study explores the application of self-supervised learning (SSL) to motion forecasting, an area that has received little attention despite the widespread success of SSL in computer vision and natural language processing. To address this gap, we introduce Forecast-MAE, an extension of the masked autoencoders (MAE) framework designed specifically for self-supervised pre-training on the motion forecasting task. Our approach relies on a novel masking strategy that exploits the strong interconnection between agents' trajectories and the road network: for each agent, either the history or the future trajectory is masked (complementary masking), while lane segments are masked at random. Experiments on the challenging Argoverse 2 motion forecasting benchmark show that Forecast-MAE, which uses standard Transformer blocks with minimal inductive bias, achieves competitive performance against state-of-the-art methods that rely on supervised learning and sophisticated designs, and outperforms the previous self-supervised learning method by a significant margin. Code is available at https://github.com/jchengai/forecast-mae.
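To make the masking strategy concrete, below is a minimal PyTorch sketch of the two masking operations the abstract describes: complementary masking of each agent's history or future trajectory, and random masking of lane-segment tokens. This is not the authors' implementation; all function names, tensor shapes, and the 0.5 ratios are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of the masking strategy described
# above. Shapes and ratios are illustrative assumptions.
import torch

def complementary_agent_mask(num_agents: int, p_history: float = 0.5):
    """For each agent, mask EITHER its history OR its future trajectory
    (complementary masking), so the model must reconstruct one from the other."""
    # True = mask the history; False = mask the future.
    mask_history = torch.rand(num_agents) < p_history
    return mask_history, ~mask_history

def random_lane_mask(num_lanes: int, mask_ratio: float = 0.5):
    """Randomly mask a fixed ratio of lane-segment tokens, MAE-style."""
    num_masked = int(num_lanes * mask_ratio)
    perm = torch.randperm(num_lanes)
    lane_mask = torch.zeros(num_lanes, dtype=torch.bool)
    lane_mask[perm[:num_masked]] = True
    return lane_mask

# Usage: drop the masked tokens before the Transformer encoder, then ask the
# decoder to reconstruct them from the visible tokens.
hist_masked, fut_masked = complementary_agent_mask(num_agents=8)
lane_masked = random_lane_mask(num_lanes=64, mask_ratio=0.5)
```

The complementary scheme ensures every agent contributes a reconstruction target in each sample (either its past or its future), while the random lane masking follows the standard MAE recipe of dropping a fraction of input tokens.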
