UniFaceGAN: A Unified Framework for Temporally Consistent Facial Video Editing

08/12/2021
by Meng Cao, et al.

Recent research has seen advances in facial image editing tasks, including face swapping and face reenactment. However, these methods are confined to one specific task at a time. Moreover, for facial video editing, previous methods either apply transformations frame by frame or use multiple frames in a concatenated or iterative fashion, which leads to noticeable visual flickering. In this paper, we propose UniFaceGAN, a unified framework for temporally consistent facial video editing. Built on a 3D reconstruction model and a simple yet efficient dynamic training sample selection mechanism, our framework handles face swapping and face reenactment simultaneously. To enforce temporal consistency, a novel 3D temporal loss constraint is introduced based on barycentric coordinate interpolation. In addition, we propose a region-aware conditional normalization layer that replaces the traditional AdaIN or SPADE to synthesize more context-harmonious results. Compared with state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.
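The abstract names a region-aware conditional normalization layer as a replacement for AdaIN or SPADE. As a rough illustration of that idea only, the sketch below shows a SPADE-style layer whose scale and shift parameters are predicted per facial region and blended with soft region masks; the class name RegionAwareNorm, its arguments, and the mask-blending scheme are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAwareNorm(nn.Module):
    """Hypothetical sketch of a region-aware conditional normalization layer.

    Unlike SPADE, which predicts one spatial modulation map from the full
    condition, each facial region (e.g. skin, eyes, mouth) gets its own
    predicted scale/shift, blended together by soft region masks.
    """
    def __init__(self, num_features, num_regions, cond_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.num_regions = num_regions
        # Shared trunk over the conditioning input (e.g. a rendered 3D face).
        self.shared = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One set of (gamma, beta) channels per region.
        self.gamma = nn.Conv2d(hidden, num_features * num_regions, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features * num_regions, kernel_size=3, padding=1)

    def forward(self, x, cond, region_masks):
        # x:            (B, C, H, W)  feature map to normalize
        # cond:         (B, cond_channels, H', W')  conditioning input
        # region_masks: (B, R, H'', W'')  soft masks, one channel per region
        b, c, h, w = x.shape
        normalized = self.norm(x)

        cond = F.interpolate(cond, size=(h, w), mode="bilinear", align_corners=False)
        masks = F.interpolate(region_masks, size=(h, w), mode="bilinear", align_corners=False)

        feat = self.shared(cond)
        gamma = self.gamma(feat).view(b, self.num_regions, c, h, w)
        beta = self.beta(feat).view(b, self.num_regions, c, h, w)

        # Blend per-region modulation parameters with the soft masks.
        masks = masks.unsqueeze(2)            # (B, R, 1, H, W)
        gamma = (gamma * masks).sum(dim=1)    # (B, C, H, W)
        beta = (beta * masks).sum(dim=1)

        return normalized * (1 + gamma) + beta
```

Compared with a single global modulation, per-region parameters let distinct facial parts receive different feature statistics, which is one plausible reading of the "context-harmonious" synthesis the abstract describes.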


