EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

05/26/2019
by   Mohamed Elgharib, et al.

Face performance capture and reenactment techniques typically use multiple cameras and sensors positioned at a distance from the face or mounted on heavy wearable devices, which limits their use in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters, and careful adversarial training turns the synthetic rendering of these parameters into a videorealistic animation. The problem is challenging because the human visual system is sensitive to the smallest facial irregularities in the final result, and this sensitivity is even stronger for video. Our solution is trained in a pre-processing stage in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetric expressions. It works under varying illumination, backgrounds, and head movements, handles people of different ethnicities, and can operate in real time.
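The pipeline described above can be sketched in three stages. The code below is a minimal illustration only: the dimensions, names, and the linear stand-ins for the trained CNNs and the adversarially-trained generator are all assumptions, not the paper's actual networks.

```python
import numpy as np

EXPR_DIM = 64                # assumed size of the expression-parameter space
IMG_SHAPE = (128, 128, 3)    # assumed frame resolution

rng = np.random.default_rng(0)

# Stage 1: egocentric frame -> low-dimensional expression parameters.
# Stand-in for the trained regression network: a fixed linear projection.
W_enc = rng.standard_normal((np.prod(IMG_SHAPE), EXPR_DIM)) * 1e-3

def encode(frame: np.ndarray) -> np.ndarray:
    """Project an egocentric RGB frame into expression-parameter space."""
    return frame.reshape(-1) @ W_enc

# Stage 2: expression parameters -> coarse front-view rendering.
# Placeholder for the parametric face model's synthetic rendering.
W_dec = rng.standard_normal((EXPR_DIM, np.prod(IMG_SHAPE))) * 1e-3

def render(params: np.ndarray) -> np.ndarray:
    """Decode expression parameters into a coarse synthetic rendering."""
    return (params @ W_dec).reshape(IMG_SHAPE)

# Stage 3: coarse rendering -> videorealistic frame. In the paper this is
# a generator trained adversarially; here only a range-clamping placeholder.
def refine(coarse: np.ndarray) -> np.ndarray:
    """Map a coarse rendering to a valid RGB frame (placeholder refiner)."""
    return np.clip(coarse, 0.0, 1.0)

frame = rng.random(IMG_SHAPE)           # dummy egocentric input frame
params = encode(frame)                  # shape (64,)
output = refine(render(params))         # shape (128, 128, 3)
```

The key design point is the low-dimensional bottleneck: regressing a compact parameter vector from the distorted egocentric view, then synthesizing the frontal face from it, decouples capture from rendering and keeps the setup lightweight.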


