
Protecting President Zelenskyy against Deep Fakes

by Matyáš Boháček, et al.
UC Berkeley

The 2022 Russian invasion of Ukraine is being fought on two fronts: a brutal ground war and a duplicitous disinformation campaign designed to conceal and justify Russia's actions. This campaign includes at least one example of a deep-fake video purportedly showing Ukrainian President Zelenskyy admitting defeat and surrendering. In anticipation of future attacks of this form, we describe a facial and gestural behavioral model that captures distinctive characteristics of Zelenskyy's speaking style. Trained on over eight hours of authentic video from four different settings, this behavioral model can distinguish Zelenskyy from deep-fake imposters. Such a model can play an important role – particularly during the fog of war – in distinguishing the real from the fake.



