
Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective

by Yuejiang Liu et al.

Learning behavioral patterns from observational data has been the de-facto approach to motion forecasting. Yet, the current paradigm suffers from two shortcomings: it is brittle under covariate shift and inefficient at knowledge transfer. In this work, we propose to address these challenges from a causal representation perspective. We first introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables, namely invariant mechanisms, style confounders, and spurious features. We then introduce a learning framework that treats each group separately: (i) unlike the common practice of merging datasets collected from different locations, we exploit their subtle distinctions by means of an invariance loss that encourages the model to suppress spurious correlations; (ii) we devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph; (iii) we introduce a style consistency loss that not only enforces the structure of style representations but also serves as a self-supervisory signal for test-time refinement on the fly. Experimental results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations, outperforming prior state-of-the-art motion forecasting models for out-of-distribution generalization and low-shot transfer.



