LSTM-Based Facial Performance Capture Using Embedding Between Expressions

05/10/2018 · by Hsien-Yu Meng, et al.

We present a novel end-to-end framework for facial performance capture from a monocular video of an actor's face. Our framework comprises two parts. First, to extract information from the frames, we optimize a triplet loss to learn an embedding space in which semantically similar facial expressions lie close together, and in which the model transfers to expressions not present in the training dataset. Second, the embeddings are fed into an LSTM network to learn the deformation between frames. In our experiments, we demonstrate that, compared to other methods, our method distinguishes subtle motion around the lips and significantly reduces jitter between the tracked meshes.
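The triplet loss described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes standard squared-Euclidean distances and a hinge margin, with toy 2-D embeddings standing in for learned expression embeddings.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull embeddings of semantically similar
    expressions together, push dissimilar ones apart by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # distance to similar expression
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # distance to dissimilar expression
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Toy embeddings (hypothetical): the anchor is near the positive and far
# from the negative, so the margin is already satisfied and the loss is 0.
a = np.array([[1.0, 0.0]])
p = np.array([[0.9, 0.1]])
n = np.array([[-1.0, 0.0]])
loss = triplet_loss(a, p, n)  # → 0.0
```

Swapping the positive and negative examples violates the margin and yields a positive loss, which is the gradient signal that shapes the embedding space.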





