FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and Countering Deepfakes

04/05/2022
by Paarth Neekhara, et al.

Deepfakes and manipulated media are becoming a prominent threat due to recent advances in realistic image and video synthesis techniques. There have been several attempts at combating Deepfakes with machine learning classifiers; however, such classifiers do not generalize well to black-box image synthesis techniques and have been shown to be vulnerable to adversarial examples. To address these challenges, we introduce a deep learning-based semi-fragile watermarking technique that enables media authentication by verifying an invisible secret message embedded in the image pixels. Instead of identifying fake media through visual artifacts, we propose to proactively embed a semi-fragile watermark into a real image so that its authenticity can be proven when needed. Our watermarking framework is designed to be fragile to facial manipulations or tampering while remaining robust to benign image-processing operations such as compression, scaling, saturation, and contrast adjustments. Images shared over the internet therefore retain the verifiable watermark as long as face-swapping or any other Deepfake modification technique is not applied. We demonstrate that FaceSigns can embed a 128-bit secret as an imperceptible image watermark that can be recovered with high bit recovery accuracy at several compression levels, while becoming non-recoverable when unseen Deepfake manipulations are applied. For a set of unseen benign and Deepfake manipulations studied in our work, FaceSigns reliably detects manipulated content with an AUC score of 0.996, significantly higher than prior image watermarking and steganography techniques.
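To make the verification criterion concrete, the sketch below shows how a recovered bit string could be compared against the expected 128-bit secret. The function names, the decoder interface, and the 0.95 accuracy threshold are illustrative assumptions, not the paper's implementation.

import numpy as np

# Minimal sketch of semi-fragile watermark verification, assuming a hypothetical
# `extract_watermark(image)` that stands in for the paper's decoder network.
def verify_image(image, expected_secret, extract_watermark, threshold=0.95):
    # Recover the embedded bits and compute the bit recovery accuracy.
    recovered = extract_watermark(image)
    bit_accuracy = float(np.mean(recovered == expected_secret))
    # Declare the image authentic only if enough bits survive (threshold is assumed).
    return bit_accuracy >= threshold

# Toy usage: a benign transform should leave most bits intact, while a
# Deepfake manipulation is expected to destroy the semi-fragile watermark.
secret = np.random.randint(0, 2, size=128)
decoder_benign = lambda img: secret.copy()                        # simulated clean recovery
decoder_tampered = lambda img: np.random.randint(0, 2, size=128)  # simulated bit loss
dummy_image = np.zeros((256, 256, 3), dtype=np.uint8)
print(verify_image(dummy_image, secret, decoder_benign))     # True
print(verify_image(dummy_image, secret, decoder_tampered))   # almost surely False

In this framing, a benign operation such as JPEG compression keeps the recovered bits close to the embedded secret, whereas a face-swap destroys them, which is what drives the detection AUC reported above.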


