MagicProp: Diffusion-based Video Editing via Motion-aware Appearance Propagation

09/02/2023
by Hanshu Yan, et al.

This paper addresses the problem of modifying the visual appearance of a video while preserving its motion. A novel framework, MagicProp, is proposed that disentangles video editing into two stages: appearance editing and motion-aware appearance propagation. In the first stage, MagicProp selects a single frame from the input video and applies image-editing techniques to modify its content and/or style; the flexibility of these techniques enables the editing of arbitrary regions within the frame. In the second stage, MagicProp uses the edited frame as an appearance reference and generates the remaining frames with an autoregressive rendering approach. To this end, a diffusion-based conditional generation model, PropDPM, is developed that synthesizes each target frame by conditioning on the reference appearance, the target frame's motion, and the appearance of the previously rendered frame. This autoregressive scheme ensures temporal consistency in the resulting video. Overall, MagicProp combines the flexibility of image-editing techniques with the temporal consistency of autoregressive modeling, enabling flexible edits to object types and aesthetic styles in arbitrary regions of an input video while maintaining consistency across frames. Extensive experiments across diverse video editing scenarios demonstrate the effectiveness of MagicProp.
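To make the two-stage flow concrete, below is a minimal, hypothetical sketch of the pipeline in plain Python/NumPy. All names here (edit_reference_frame, extract_motion, prop_dpm_step, magicprop) and the blending arithmetic are illustrative stand-ins rather than the authors' implementation: the real second stage runs PropDPM, a conditional diffusion model, and the motion signal would plausibly be a per-frame structure cue such as a depth or edge map rather than the dummy used here.

import numpy as np

def edit_reference_frame(frame):
    """Stage 1 placeholder: any off-the-shelf image editor could modify the
    content or style of this single frame; identity keeps the sketch runnable."""
    return frame

def extract_motion(frame):
    """Placeholder for the per-frame motion/structure cue that conditions the
    renderer (assumed to be something like a depth map); a grayscale mean
    stands in here."""
    return frame.mean(axis=-1, keepdims=True)

def prop_dpm_step(reference, motion, previous):
    """Stand-in for one PropDPM pass: synthesize the target frame conditioned
    on the reference appearance, the target motion, and the previously
    rendered frame. The real model denoises with a diffusion process; this
    sketch just blends the three inputs."""
    return 0.5 * previous + 0.3 * reference + 0.2 * np.repeat(motion, 3, axis=-1)

def magicprop(video, ref_index=0):
    """Two-stage pipeline: edit one frame, then propagate it autoregressively.
    For simplicity this sketch assumes ref_index == 0."""
    edited_ref = edit_reference_frame(video[ref_index])   # Stage 1
    frames, previous = [edited_ref], edited_ref
    for t in range(1, len(video)):                        # Stage 2, frame by frame
        motion = extract_motion(video[t])                 # motion cue of target frame
        previous = prop_dpm_step(edited_ref, motion, previous)
        frames.append(previous)
    return np.stack(frames)

# Usage on a dummy 8-frame RGB clip:
video = np.random.rand(8, 64, 64, 3).astype(np.float32)
edited = magicprop(video)
print(edited.shape)  # (8, 64, 64, 3)

The key design point the sketch captures is the autoregressive conditioning: each new frame sees both the edited reference (so the appearance edit propagates) and the previously rendered frame (so adjacent frames stay temporally consistent).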


Related Research

08/18/2023 · StableVideo: Text-driven Consistency-aware Diffusion Video Editing
Diffusion-based methods can generate realistic images and videos, but th...

01/30/2023 · Shape-aware Text-driven Layered Video Editing
Temporal consistency is essential for video editing applications. Existi...

03/14/2023 · Edit-A-Video: Single Video Editing with Object-Aware Consistency
Despite the fact that text-to-video (TTV) model has recently achieved re...

07/24/2023 · Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields
Recently, the editing of neural radiance fields (NeRFs) has gained consi...

03/30/2023 · PAIR-Diffusion: Object-Level Image Editing with Structure-and-Appearance Paired Diffusion Models
Image editing using diffusion models has witnessed extremely fast-paced ...

05/14/2021 · Automatic Non-Linear Video Editing Transfer
We propose an automatic approach that extracts editing styles in a sourc...

05/22/2023 · ControlVideo: Training-free Controllable Text-to-Video Generation
Text-driven diffusion models have unlocked unprecedented abilities in im...
