
The Atlas Benchmark: an Automated Evaluation Framework for Human Motion Prediction

by   Andrey Rudenko, et al.

Human motion trajectory prediction, an essential task for autonomous systems in many domains, has attracted growing interest in recent years. With a multitude of new methods proposed by different communities, the lack of standardized benchmarks and objective comparisons is increasingly becoming a major limitation on assessing progress and guiding further research. Existing benchmarks are limited in their scope and in their flexibility to conduct relevant experiments and to account for contextual cues of agents and environments. In this paper we present Atlas, a benchmark to systematically evaluate human motion trajectory prediction algorithms in a unified framework. Atlas offers data preprocessing functions and hyperparameter optimization, comes with popular datasets, and has the flexibility to set up and conduct underexplored yet relevant experiments for analyzing a method's accuracy and robustness. In an example application of Atlas, we compare five popular model- and learning-based predictors and find that, when properly applied, early physics-based approaches are still remarkably competitive. Such results confirm the necessity of benchmarks like Atlas.
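To make the evaluation setting concrete, the following is a minimal sketch of the kind of comparison such a benchmark automates: a constant-velocity model (a typical early physics-based baseline of the sort the abstract refers to) is scored with the Average and Final Displacement Error (ADE/FDE) metrics that are standard in the trajectory prediction literature. The function names and the toy data are illustrative assumptions, not the Atlas API.

```python
import numpy as np

def constant_velocity_predict(observed, horizon):
    """Extrapolate the last observed velocity for `horizon` future steps.

    observed: (T_obs, 2) array of past xy positions.
    Returns a (horizon, 2) array of predicted positions.
    """
    velocity = observed[-1] - observed[-2]        # last single-step displacement
    steps = np.arange(1, horizon + 1)[:, None]    # column vector 1..horizon
    return observed[-1] + steps * velocity

def ade_fde(predicted, ground_truth):
    """ADE: mean per-step Euclidean error; FDE: error at the final step."""
    errors = np.linalg.norm(predicted - ground_truth, axis=1)
    return errors.mean(), errors[-1]

# Toy track: an agent walking along the x-axis at constant speed,
# so the constant-velocity baseline should be exact here.
observed = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
truth = np.array([[1.5, 0.0], [2.0, 0.0], [2.5, 0.0], [3.0, 0.0]])
pred = constant_velocity_predict(observed, horizon=4)
ade, fde = ade_fde(pred, truth)
print(ade, fde)  # → 0.0 0.0 on this perfectly linear track
```

A benchmark framework wraps exactly this loop: load a dataset, run each predictor over observation windows, and aggregate ADE/FDE (and robustness metrics) across tracks, which is what makes properly tuned physics-based baselines directly comparable to learned ones.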

