ICface: Interpretable and Controllable Face Reenactment Using GANs

04/03/2019
by   Soumya Tripathy, et al.

This paper presents a generic face animator that can control the pose and expressions of a given face image. The animation is driven by human-interpretable control signals consisting of head pose angles and Action Unit (AU) values. The control information can be obtained from multiple sources, including external driving videos and manual controls. Due to the interpretable nature of the driving signal, one can easily mix information from multiple sources (e.g. pose from one image and expression from another) and apply selective post-production editing. The proposed face animator is implemented as a two-stage neural network model that is learned in a self-supervised manner from a large video collection. The proposed Interpretable and Controllable face reenactment network (ICface) is compared with state-of-the-art neural-network-based face animation techniques on multiple tasks. The results indicate that ICface produces better visual quality while being more versatile than most of the comparison methods. The introduced model could provide a lightweight and easy-to-use tool for a multitude of advanced image and video editing tasks.
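To make the driving signal concrete, the following minimal sketch illustrates how such an interpretable control vector could be assembled, mixed between sources, and selectively edited before being passed to a generator. It is not the authors' code: the vector layout (3 normalized head-pose angles plus 17 AU intensities), the helper name make_control_vector, and the AU index mapping are assumptions for illustration.

```python
# Minimal sketch (assumptions noted above) of an ICface-style interpretable
# driving vector: head-pose angles + Action Unit (AU) intensities.
import numpy as np

NUM_AUS = 17  # assumed number of AU intensities in the driving signal


def make_control_vector(pose_deg, au_values):
    """Concatenate normalized head-pose angles and AU intensities.

    pose_deg  : (pitch, yaw, roll) in degrees, assumed range [-90, 90]
    au_values : AU activations, assumed range [0, 1]
    """
    pose = np.asarray(pose_deg, dtype=np.float32) / 90.0            # -> [-1, 1]
    aus = np.clip(np.asarray(au_values, dtype=np.float32), 0.0, 1.0)
    assert pose.shape == (3,) and aus.shape == (NUM_AUS,)
    return np.concatenate([pose, aus])


# Mixing sources: pose taken from one driving frame, expression from another.
pose_from_frame_a = (10.0, -25.0, 0.0)
aus_from_frame_b = np.random.rand(NUM_AUS)   # stand-in for detected AU values
mixed = make_control_vector(pose_from_frame_a, aus_from_frame_b)

# Selective post-production edit: boost AU12 ("lip corner puller", a smile).
# The position of AU12 within the vector is an assumed ordering.
AU12_INDEX = 3 + 8
mixed[AU12_INDEX] = 1.0

print(mixed.shape)  # (20,) -> fed to the generator together with the source face
```

Because the driving vector is this small and human-readable, the same kind of edit can be applied per frame of a driving video, which is what enables the selective post-production editing described above.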


Related research:

- X2Face: A network for controlling face generation by using images, audio, and pose codes (07/27/2018)
- POCE: Pose-Controllable Expression Editing (04/18/2023)
- StyleRig: Rigging StyleGAN for 3D Control over Portrait Images (03/31/2020)
- Controllable One-Shot Face Video Synthesis With Semantic Aware Prior (04/27/2023)
- Continuously Controllable Facial Expression Editing in Talking Face Videos (09/17/2022)
- Instruct-NeuralTalker: Editing Audio-Driven Talking Radiance Fields with Instructions (06/19/2023)
- Vid2Game: Controllable Characters Extracted from Real-World Videos (04/17/2019)
