Protecting President Zelenskyy against Deep Fakes

06/24/2022
by Matyáš Boháček, et al.

The 2022 Russian invasion of Ukraine is being fought on two fronts: a brutal ground war and a duplicitous disinformation campaign designed to conceal and justify Russia's actions. This campaign includes at least one example of a deep-fake video purportedly showing Ukrainian President Zelenskyy admitting defeat and surrendering. In anticipation of future attacks of this form, we describe a facial and gestural behavioral model that captures distinctive characteristics of Zelenskyy's speaking style. Trained on over eight hours of authentic video from four different settings, we show that this behavioral model can distinguish Zelenskyy from deep-fake imposters. This model can play an important role – particularly during the fog of war – in distinguishing the real from the fake.
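The abstract describes an identity-specific behavioral model: authentic footage is reduced to facial and gestural features, and clips whose behavior deviates from the speaker's learned style are flagged. As a hedged illustration only (not the authors' actual pipeline, feature set, or classifier), the idea can be sketched as a one-class model fit on authentic clips: learn a centroid of hypothetical behavioral feature vectors and a distance threshold, then flag clips that fall outside it.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit(authentic_features, percentile=0.95):
    """Learn a behavioral 'signature' from authentic clips only:
    the centroid of their features plus a distance threshold set at
    the given percentile of authentic-clip distances. Feature vectors
    here are placeholders for whatever facial/gestural measurements
    (e.g. action-unit or head-pose statistics) a real system extracts."""
    c = centroid(authentic_features)
    dists = sorted(distance(v, c) for v in authentic_features)
    tau = dists[min(int(percentile * len(dists)), len(dists) - 1)]
    return c, tau

def is_authentic(feature_vector, model):
    """A clip is consistent with the speaker if its behavioral
    features lie within the learned distance threshold."""
    c, tau = model
    return distance(feature_vector, c) <= tau
```

A toy usage: fitting on a tight cluster of "authentic" feature vectors accepts a nearby test clip and rejects a distant one. A real system would of course use many more features, far more data, and a stronger classifier.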


Related research:

- 09/12/2021 – Challenges and Solutions in DeepFakes: Deep learning has been successfully applied to solve various complex...
- 08/06/2022 – Study of detecting behavioral signatures within DeepFake videos: There is strong interest in the generation of synthetic video imagery of...
- 06/12/2023 – Deepfake in the Metaverse: An Outlook Survey: We envision deepfake technologies, which synthesize realistic fake image...
- 06/23/2021 – Deep Fake Detection: Survey of Facial Manipulation Detection Solutions: Deep learning as a field has been successfully used to solve a plethora ...
- 04/29/2020 – Detecting Deep-Fake Videos from Appearance and Behavior: Synthetically-generated audios and videos – so-called deep fakes – conti...
- 02/17/2020 – Amplifying The Uncanny: Deep neural networks have become remarkably good at producing realistic ...
- 12/01/2022 – Regularization with Fake Features: Recent successes of massively overparameterized models have inspired a n...
