MIMO Is All You Need: A Strong Multi-In-Multi-Out Baseline for Video Prediction

12/09/2022
by Shuliang Ning, et al.

Most existing approaches to video prediction build their models on a Single-In-Single-Out (SISO) architecture, which takes the current frame as input and predicts the next frame in a recursive manner. This recursion often leads to severe performance degradation when extrapolating further into the future, limiting the practical use of the prediction model. Alternatively, a Multi-In-Multi-Out (MIMO) architecture that outputs all the future frames in one shot naturally breaks the recursion and therefore prevents error accumulation. However, only a few MIMO models have been proposed for video prediction, and they have achieved only inferior performance to date. The real strength of the MIMO model in this area has not been well recognized and remains largely under-explored. Motivated by this, we conduct a comprehensive investigation in this paper to thoroughly explore how far a simple MIMO architecture can go. Surprisingly, our empirical studies reveal that a simple MIMO model can outperform the state-of-the-art work by a much larger margin than expected, especially in dealing with long-term error accumulation. After exploring a number of designs, we propose a new MIMO architecture, MIMO-VP, which extends the pure Transformer with local spatio-temporal blocks and a new multi-output decoder, to establish a new standard in video prediction. We evaluate our model on four highly competitive benchmarks (Moving MNIST, Human3.6M, Weather, KITTI). Extensive experiments show that our model wins 1st place on all the benchmarks with remarkable performance gains and surpasses the best SISO model in all aspects, including efficiency, quantitative accuracy, and qualitative results. We believe our model can serve as a new baseline to facilitate future research on video prediction. The code will be released.
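For readers unfamiliar with the distinction the abstract draws, the following is a minimal, illustrative sketch (not the authors' MIMO-VP implementation) contrasting recursive SISO rollout with one-shot MIMO prediction; the function names, tensor shapes, and the dummy stand-in models are assumptions made purely for illustration.

```python
# Minimal sketch (assumption, not the paper's code): SISO vs. MIMO inference.
import torch
import torch.nn as nn


def predict_siso(step_model: nn.Module, context: torch.Tensor, horizon: int) -> torch.Tensor:
    """Recursive Single-In-Single-Out rollout.

    `step_model` maps one frame (B, C, H, W) to the next frame; every predicted
    frame is fed back as input, so errors accumulate over the horizon.
    """
    frames = list(context.unbind(dim=1))          # context: (B, T_in, C, H, W)
    for _ in range(horizon):
        frames.append(step_model(frames[-1]))
    return torch.stack(frames[-horizon:], dim=1)  # (B, horizon, C, H, W)


def predict_mimo(seq_model: nn.Module, context: torch.Tensor) -> torch.Tensor:
    """Multi-In-Multi-Out prediction.

    `seq_model` maps the whole context (B, T_in, C, H, W) to all future frames
    (B, T_out, C, H, W) in one shot, so no prediction is ever re-used as input.
    """
    return seq_model(context)


if __name__ == "__main__":
    context = torch.randn(2, 10, 1, 64, 64)       # e.g. 10 context frames of 64x64

    # Trivial stand-in models, only to make the sketch runnable.
    siso_step = nn.Identity()                     # "predicts" the last frame again
    mimo_net = nn.Identity()                      # echoes the context as the "future"

    print(predict_siso(siso_step, context, horizon=10).shape)  # torch.Size([2, 10, 1, 64, 64])
    print(predict_mimo(mimo_net, context).shape)               # torch.Size([2, 10, 1, 64, 64])
```

The only point of the sketch is that MIMO inference never feeds a prediction back into the model, which is why it side-steps the error accumulation the abstract attributes to SISO rollouts.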


