Comparing Correspondences: Video Prediction with Correspondence-wise Losses

04/19/2021
by Daniel Geng, et al.

Today's image prediction methods struggle to change the locations of objects in a scene, producing blurry images that average over the many positions they might occupy. In this paper, we propose a simple change to existing image similarity metrics that makes them more robust to positional errors: we match the images using optical flow, then measure the visual similarity of corresponding pixels. This change leads to crisper and more perceptually accurate predictions, and can be used with any image prediction network. We apply our method to predicting future frames of a video, where it obtains strong performance with simple, off-the-shelf architectures.
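For intuition, the sketch below shows one way a correspondence-wise loss can be assembled in PyTorch. It is a minimal illustration, not the authors' implementation: the target frame is matched to the prediction with an optical flow estimator (the `flow_net` interface is assumed here), warped into register, and only then compared pixel by pixel with L1. The paper's actual loss may differ in the choice of flow method and base similarity metric.

```python
import torch
import torch.nn.functional as F


def warp(image, flow):
    """Backward-warp `image` by a dense flow field of shape (B, 2, H, W), in pixels."""
    _, _, h, w = image.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=image.device),
        torch.arange(w, device=image.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).float()               # (2, H, W): x, y
    coords = base.unsqueeze(0) + flow                         # (B, 2, H, W)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                      # (B, H, W, 2)
    return F.grid_sample(image, grid, mode="bilinear", align_corners=True)


def correspondencewise_l1(pred, target, flow_net):
    """Match pixels between prediction and target via optical flow,
    then take an L1 loss over corresponding (rather than co-located) pixels."""
    with torch.no_grad():
        flow = flow_net(pred, target)    # assumed interface: flow from pred to target
    matched_target = warp(target, flow)  # bring each matched target pixel into register
    return F.l1_loss(pred, matched_target)
```

In practice `flow_net` would be an off-the-shelf pretrained flow estimator (e.g., RAFT), and the L1 term could be swapped for any differentiable image metric; both choices are assumptions of this sketch rather than details given in the abstract.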

Related research

Novel Video Prediction for Large-scale Scene using Optical Flow (05/30/2018)
Making predictions of future frames is a critical challenge in autonomou...

Predicting Scene Parsing and Motion Dynamics in the Future (11/09/2017)
The ability of predicting the future is important for intelligent system...

Learning Optical Flow with Adaptive Graph Reasoning (02/08/2022)
Estimating per-pixel motion between video frames, known as optical flow,...

Predicting Chroma from Luma in AV1 (11/10/2017)
Chroma from luma (CfL) prediction is a new and promising chroma-only int...

Learning the Matching Function (02/02/2015)
The matching function for the problem of stereo reconstruction or optica...

High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks (11/05/2019)
Predicting future video frames is extremely challenging, as there are ma...

Visual Rhythm Prediction with Feature-Aligning Network (01/29/2019)
In this paper, we propose a data-driven visual rhythm prediction method,...
