ProPainter: Improving Propagation and Transformer for Video Inpainting

09/07/2023
by Shangchen Zhou, et al.

Flow-based propagation and spatiotemporal Transformers are two mainstream mechanisms in video inpainting (VI). Despite their effectiveness, both components still suffer from limitations that hold back performance. Previous propagation-based approaches operate separately in either the image domain or the feature domain. Global image propagation, isolated from learning, can cause spatial misalignment due to inaccurate optical flow. Moreover, memory and computational constraints limit the temporal range of feature propagation and of the video Transformer, preventing the exploration of correspondence information from distant frames. To address these issues, we propose an improved framework, called ProPainter, which combines enhanced ProPagation with an efficient Transformer. Specifically, we introduce dual-domain propagation, which combines the advantages of image warping and feature warping to exploit global correspondences reliably. We also propose a mask-guided sparse video Transformer, which achieves high efficiency by discarding unnecessary and redundant tokens. With these components, ProPainter outperforms prior art by a large margin of 1.46 dB in PSNR while maintaining appealing efficiency.
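
The following is a minimal, hypothetical PyTorch sketch of the two ideas described above: image-domain propagation via optical-flow warping, and mask-guided token sparsification that keeps only the Transformer tokens whose patches overlap the inpainting hole. All function names, tensor shapes, and the patch-selection rule are illustrative assumptions, not ProPainter's actual implementation.

    # Illustrative sketch only -- not ProPainter's code.
    import torch
    import torch.nn.functional as F


    def flow_warp(frame, flow):
        """Warp `frame` (B, C, H, W) toward the reference view using backward flow (B, 2, H, W)."""
        b, _, h, w = frame.shape
        # Build a pixel-coordinate grid and shift it by the flow.
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack((xs, ys), dim=0).float().to(frame.device)      # (2, H, W)
        coords = grid.unsqueeze(0) + flow                                 # (B, 2, H, W)
        # Normalize coordinates to [-1, 1] as expected by grid_sample.
        coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
        coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
        grid_norm = torch.stack((coords_x, coords_y), dim=-1)             # (B, H, W, 2)
        return F.grid_sample(frame, grid_norm, align_corners=True)


    def select_mask_tokens(tokens, mask, patch_size=8):
        """Keep only tokens whose patch overlaps the hole mask.

        tokens: (B, N, D) patch embeddings in raster order.
        mask:   (B, 1, H, W) binary hole mask (1 = missing pixels).
        """
        # Pool the mask to patch resolution; a patch is kept if any of its pixels is masked.
        patch_mask = F.max_pool2d(mask, kernel_size=patch_size)           # (B, 1, H/p, W/p)
        keep = patch_mask.flatten(1) > 0                                  # (B, N) boolean
        # Gather the kept tokens per sample (variable length per batch element).
        kept_tokens = [t[k] for t, k in zip(tokens, keep)]
        return kept_tokens, keep


    if __name__ == "__main__":
        frame = torch.rand(1, 3, 64, 64)
        flow = torch.zeros(1, 2, 64, 64)       # zero flow: warp returns the same frame
        warped = flow_warp(frame, flow)

        tokens = torch.rand(1, (64 // 8) ** 2, 128)
        mask = torch.zeros(1, 1, 64, 64)
        mask[:, :, 16:32, 16:32] = 1.0         # a 16x16 hole
        kept, keep = select_mask_tokens(tokens, mask)
        print(warped.shape, kept[0].shape)     # kept[0] holds only the patches touching the hole

In this sketch, sparsity comes purely from the hole mask; a full method would also retain some context tokens, but the example shows how discarding patches far from the missing region shrinks the attention cost.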

