Blendshapes GHUM: Real-time Monocular Facial Blendshape Prediction

09/11/2023
by Ivan Grishchenko, et al.

We present Blendshapes GHUM, an on-device ML pipeline that predicts 52 facial blendshape coefficients at 30+ FPS on modern mobile phones from a single monocular RGB image, enabling facial motion capture applications such as virtual avatars. Our main contributions are: i) an annotation-free offline method for obtaining blendshape coefficients from real-world human scans, and ii) a lightweight real-time model that predicts blendshape coefficients from facial landmarks.
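For context, blendshape coefficients are the weights of a standard linear blendshape model: an animated mesh is the neutral face plus a weighted sum of per-shape vertex offsets. Below is a minimal Python (NumPy) sketch of that model, paired with a toy two-layer regressor standing in for the landmark-to-coefficient predictor. The landmark count (478), mesh resolution, and the MLP architecture are illustrative assumptions, not the authors' implementation; only the 52-coefficient output count comes from the abstract.

```python
import numpy as np

NUM_BLENDSHAPES = 52   # coefficient count from the paper
NUM_LANDMARKS = 478    # assumed landmark count (MediaPipe-style face mesh)
NUM_VERTICES = 5000    # placeholder avatar mesh resolution

def apply_blendshapes(neutral, deltas, weights):
    """Linear blendshape model: V = V0 + sum_k w_k * D_k.

    neutral: (NUM_VERTICES, 3) rest-pose mesh.
    deltas:  (NUM_BLENDSHAPES, NUM_VERTICES, 3) per-shape offsets from neutral.
    weights: (NUM_BLENDSHAPES,) coefficients, clipped to [0, 1].
    """
    weights = np.clip(weights, 0.0, 1.0)
    # Contract the blendshape axis: (52,) x (52, V, 3) -> (V, 3)
    return neutral + np.tensordot(weights, deltas, axes=1)

def predict_coefficients(landmarks, w1, b1, w2, b2):
    """Toy two-layer MLP mapping flattened 3D landmarks to 52 coefficients.

    This is only a stand-in for the paper's lightweight real-time model,
    whose actual architecture is not described in the abstract.
    """
    x = landmarks.reshape(-1)                    # (NUM_LANDMARKS * 3,)
    h = np.maximum(0.0, x @ w1 + b1)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid keeps output in [0, 1]

# Usage with random placeholder data (real inputs would be tracked landmarks).
rng = np.random.default_rng(0)
neutral = rng.normal(size=(NUM_VERTICES, 3))
deltas = rng.normal(scale=0.01, size=(NUM_BLENDSHAPES, NUM_VERTICES, 3))
landmarks = rng.normal(size=(NUM_LANDMARKS, 3))
w1 = rng.normal(scale=0.01, size=(NUM_LANDMARKS * 3, 128))
b1 = np.zeros(128)
w2 = rng.normal(scale=0.01, size=(128, NUM_BLENDSHAPES))
b2 = np.zeros(NUM_BLENDSHAPES)

coeffs = predict_coefficients(landmarks, w1, b1, w2, b2)
animated = apply_blendshapes(neutral, deltas, coeffs)
print(coeffs.shape, animated.shape)  # (52,) (5000, 3)
```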


Related research

06/19/2020
Real-time Pupil Tracking from Monocular Video for Digital Puppetry
We present a simple, real-time approach for pupil tracking from live vid...

06/23/2022
BlazePose GHUM Holistic: Real-time 3D Human Landmarks and Pose Estimation
We present BlazePose GHUM Holistic, a lightweight neural network pipelin...

04/06/2023
4D Agnostic Real-Time Facial Animation Pipeline for Desktop Scenarios
We present a high-precision real-time facial animation pipeline suitable...

09/02/2020
Real-time 3D Facial Tracking via Cascaded Compositional Learning
We propose to learn a cascade of globally-optimized modular boosted fern...

08/04/2020
Real-Time Cleaning and Refinement of Facial Animation Signals
With the increasing demand for real-time animated 3D content in the ente...

06/19/2020
Attention Mesh: High-fidelity Face Mesh Prediction in Real-time
We present Attention Mesh, a lightweight architecture for 3D face mesh p...

07/15/2019
Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs
We present an end-to-end neural network-based model for inferring an app...
