Instruct-Video2Avatar: Video-to-Avatar Generation with Instructions

06/05/2023
by   Shaoxu Li, et al.

We propose a method for synthesizing edited, photo-realistic digital avatars from text instructions. Given a short monocular RGB video and a text instruction, our method edits one head image with an image-conditioned diffusion model and propagates that edit to the remaining head images with a video stylization method. Through iterative training and dataset updates (three or more rounds), it synthesizes an edited, photo-realistic, animatable 3D neural head avatar using a deformable neural radiance field. In quantitative and qualitative studies on various subjects, our method outperforms state-of-the-art methods.
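The pipeline described above can be sketched as a simple loop. This is a minimal illustrative sketch only, not the authors' implementation: every function below (edit_keyframe, stylize_video, train_avatar) is a hypothetical stand-in for the corresponding stage named in the abstract.

```python
def edit_keyframe(frame, instruction):
    # Stand-in for the image-conditioned diffusion edit
    # applied to a single head image.
    return f"edited({frame}|{instruction})"

def stylize_video(frames, edited_key):
    # Stand-in for the video stylization step that propagates
    # the keyframe edit to the remaining head images.
    return [f"stylized({f})" for f in frames]

def train_avatar(frames):
    # Stand-in for training the deformable neural radiance
    # field head avatar on the (edited) frame set.
    return {"trained_on": list(frames)}

def instruct_video2avatar(frames, instruction, rounds=3):
    """Iteratively edit a keyframe, propagate the edit to all
    frames, and retrain the avatar; the abstract reports three
    or more rounds of this update cycle."""
    avatar = None
    for _ in range(rounds):
        edited_key = edit_keyframe(frames[0], instruction)
        frames = stylize_video(frames, edited_key)
        avatar = train_avatar(frames)
    return avatar
```

The key design point conveyed by the abstract is that only one frame is edited directly by the diffusion model per round; consistency across the rest of the video comes from the stylization step, and 3D consistency comes from retraining the radiance field.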


Related research

07/18/2023  OPHAvatars: One-shot Photo-realistic Head Avatars
We propose a method for synthesizing photo-realistic digital avatars fro...

06/19/2023  Instruct-NeuralTalker: Editing Audio-Driven Talking Radiance Fields with Instructions
Recent neural talking radiance field methods have shown great success in...

06/04/2019  Text-based Editing of Talking-head Video
Editing talking-head video to change the speech content or to remove fil...

11/22/2022  Instant Volumetric Head Avatars
We present Instant Volumetric Head Avatars (INSTA), a novel approach for...

06/01/2023  AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars
Capturing and editing full head performances enables the creation of vir...

08/01/2023  Context-Aware Talking-Head Video Editing
Talking-head video editing aims to efficiently insert, delete, and subst...

11/25/2022  Dynamic Neural Portraits
We present Dynamic Neural Portraits, a novel approach to the problem of ...
