Study of detecting behavioral signatures within DeepFake videos

by Qiaomu Miao, et al.

There is strong interest in generating synthetic video imagery of people talking for various purposes, including entertainment, communication, training, and advertisement. With the development of deep fake generation models, synthetic video imagery will soon be visually indistinguishable to the naked eye from naturally captured video. Moreover, many generation methods continue to improve so as to evade more careful forensic visual analysis. Some deep fake videos are produced through facial puppetry, which directly controls the head and face of the synthetic image through the movements of an actor, allowing the actor to 'puppet' the image of another person. In this paper, we address the question of whether one person's movements can be distinguished from those of the original speaker, by controlling the visual appearance of the speaker but transferring the behavior signals from another source. We conduct a study by comparing synthetic imagery that: 1) originates from a different person speaking a different utterance, 2) originates from the same person speaking a different utterance, and 3) originates from a different person speaking the same utterance. Our study shows that synthetic videos in all three cases are perceived as less real and less engaging than the original source video. Our results indicate that there could be a behavioral signature, detectable from a person's movements and separate from their visual appearance, that could be used to distinguish a deep fake from a properly captured video.



