Exploiting Optical Flow Guidance for Transformer-Based Video Inpainting

01/24/2023
by Kaidong Zhang, et al.

Transformers have been widely used for video processing owing to the multi-head self-attention (MHSA) mechanism. However, MHSA encounters an intrinsic difficulty in video inpainting: the features associated with the corrupted regions are degraded and lead to inaccurate self-attention. This problem, termed query degradation, can be mitigated by first completing the optical flows and then using them to guide the self-attention, as verified in our previous work, the flow-guided transformer (FGT). We further exploit the flow guidance and propose FGT++ for more effective and efficient video inpainting. First, we design a lightweight flow completion network using local aggregation and an edge loss. Second, to address query degradation, we propose a flow guidance feature integration module, which uses motion discrepancy to enhance the features, together with a flow-guided feature propagation module that warps the features according to the flows. Third, we decouple the transformer along the temporal and spatial dimensions, where the flows select tokens through a temporally deformable MHSA mechanism, and global tokens are combined with window-local tokens through a dual-perspective MHSA mechanism. Experiments show that FGT++ outperforms existing video inpainting networks both qualitatively and quantitatively.
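As a rough illustration of the flow-guided feature propagation idea mentioned in the abstract, the sketch below warps a neighboring frame's feature map along a completed optical flow so that valid content can be propagated toward the corrupted frame. It assumes PyTorch; the function name warp_features, the bilinear grid_sample warping, and the tensor shapes are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch of flow-guided feature warping, assuming PyTorch.
# Names and shapes are hypothetical; this is not the paper's exact module.
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """Warp a feature map `feat` (B, C, H, W) using an optical flow
    `flow` (B, 2, H, W) given in pixel units."""
    b, _, h, w = feat.shape
    # Base sampling grid of pixel coordinates (x, y).
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0)          # (1, 2, H, W)
    coords = grid + flow                                      # displaced coordinates
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)   # (B, H, W, 2)
    return F.grid_sample(feat, sample_grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage: propagate features from a neighboring frame into the current one.
feat_neighbor = torch.randn(1, 64, 60, 108)
completed_flow = torch.randn(1, 2, 60, 108)
feat_propagated = warp_features(feat_neighbor, completed_flow)

In FGT++ the warped features would then be fused with the current frame's features (e.g., gated by the corruption mask), but that fusion step is beyond this simple sketch.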


Related research

08/14/2022 · Flow-Guided Transformer for Video Inpainting
We propose a flow-guided transformer, which innovatively leverages the mo...

09/07/2023 · ProPainter: Improving Propagation and Transformer for Video Inpainting
Flow-based propagation and spatiotemporal Transformer are two mainstream...

07/17/2023 · Deficiency-Aware Masked Transformer for Video Inpainting
Recent video inpainting methods have made remarkable progress by utilizi...

09/28/2022 · DeViT: Deformed Vision Transformers in Video Inpainting
This paper proposes a novel video inpainting method. We make three main ...

01/06/2022 · Flow-Guided Sparse Transformer for Video Deblurring
Exploiting similar and sharper scene patches in spatio-temporal neighbor...

07/21/2022 · Error Compensation Framework for Flow-Guided Video Inpainting
The key to video inpainting is to use correlation information from as ma...

11/21/2022 · FlowLens: Seeing Beyond the FoV via Flow-guided Clip-Recurrent Transformer
Limited by hardware cost and system size, camera's Field-of-View (FoV) i...
