Flow-Guided Video Inpainting with Scene Templates

08/29/2021
by Dong Lao, et al.

We consider the problem of filling in missing spatio-temporal regions of a video. We provide a novel flow-based solution by introducing a generative model of images in relation to the scene (without missing regions) and mappings from the scene to images. We use the model to jointly infer the scene template, a 2D representation of the scene, and the mappings. This ensures that the generated frame-to-frame flows are consistent with the underlying scene, reducing geometric distortions in flow-based inpainting. The template is mapped to the missing regions in the video by a new L2-L1 interpolation scheme, creating crisp inpaintings and reducing common blur and distortion artifacts. We show on two benchmark datasets that our approach outperforms the state of the art both quantitatively and in user studies.
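The core idea above, filling missing pixels by mapping a shared 2D scene template into each frame, can be sketched in a few lines. The helper names (`bilinear_sample`, `inpaint_from_template`) and the affine warp are illustrative assumptions, not the paper's actual model or L2-L1 scheme; this is only a minimal bilinear-warping sketch of template-to-frame inpainting.

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample a 2D image at real-valued coords (ys, xs) with bilinear interpolation."""
    h, w = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy, dx = ys - y0, xs - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

def inpaint_from_template(frame, mask, template, warp):
    """Fill masked pixels of `frame` by sampling `template` at warp(y, x).

    `warp` maps frame coordinates to template coordinates; `mask` is True
    where the frame is missing.
    """
    ys, xs = np.nonzero(mask)
    ty, tx = warp(ys.astype(float), xs.astype(float))
    out = frame.copy()
    out[ys, xs] = bilinear_sample(template, ty, tx)
    return out

# Toy example (hypothetical data): the template is a horizontal gradient;
# the frame is a shifted crop of it with a hole punched in the middle.
template = np.tile(np.arange(8, dtype=float), (8, 1))
warp = lambda ys, xs: (ys + 1.0, xs + 2.0)   # frame -> template coords
frame = template[1:7, 2:8].copy()            # 6x6 crop consistent with warp
mask = np.zeros_like(frame, dtype=bool)
mask[2:4, 2:4] = True
frame[mask] = 0.0                            # simulate the missing region
restored = inpaint_from_template(frame, mask, template, warp)
```

In the paper's full pipeline the template and the per-frame mappings are inferred jointly rather than given; the sketch only shows the final fill step once both are known.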



Related research

- Deep Flow-Guided Video Inpainting (05/08/2019): Video inpainting, which aims at filling in missing regions of a video, r...
- RGB-D Image Inpainting Using Generative Adversarial Network with a Late Fusion Approach (10/14/2021): Diminished reality is a technology that aims to remove objects from vide...
- Progressive Temporal Feature Alignment Network for Video Inpainting (04/08/2021): Video inpainting aims to fill spatio-temporal "corrupted" regions with p...
- An Internal Learning Approach to Video Inpainting (09/17/2019): We propose a novel video inpainting algorithm that simultaneously halluc...
- Efficient Flow-Guided Multi-frame De-fencing (01/25/2023): Taking photographs "in-the-wild" is often hindered by fence obstructions...
- Triple Correlations-Guided Label Supplementation for Unbiased Video Scene Graph Generation (07/30/2023): Video-based scene graph generation (VidSGG) is an approach that aims to ...
