PROVES: Establishing Image Provenance using Semantic Signatures

10/21/2021
by Mingyang Xie, et al.

Modern AI tools, such as generative adversarial networks, have transformed our ability to create and modify visual data with photorealistic results. However, one deleterious side-effect of these advances is the emergence of nefarious uses that manipulate the information in visual data, such as deep fakes. We propose a novel architecture for preserving the provenance of semantic information in images, making them less susceptible to deep fake attacks. Our architecture includes semantic signing and verification steps. We apply this architecture to verifying two types of semantic information: individual identities (faces) and whether the photo was taken indoors or outdoors. Verification accounts for a collection of common image transformations, such as translation, scaling, cropping, and small rotations, and rejects adversarial transformations, such as adversarially perturbed or, in the case of face verification, swapped faces. Experiments demonstrate that in the case of provenance of faces in an image, our approach is robust to black-box adversarial transformations (which are rejected) as well as benign transformations (which are accepted), with few false negatives and false positives. Background verification, on the other hand, is susceptible to black-box adversarial examples, but becomes significantly more robust after adversarial training.
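The sign-then-verify pipeline the abstract describes can be sketched as follows. The semantic embedding, the quantization step, and the HMAC-based signing below are all illustrative stand-ins (the paper would use a learned, transformation-robust embedding such as a face-recognition network), not the authors' actual method:

```python
import hashlib
import hmac
import json

def semantic_embedding(image_pixels):
    """Hypothetical stand-in for a learned semantic embedding.

    Here: a normalized 8-bin intensity histogram, which is unchanged by
    translation and largely stable under scaling and small crops.
    """
    hist = [0] * 8
    for p in image_pixels:
        hist[min(p // 32, 7)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def quantize(embedding, step=0.05):
    # Quantize so that benign transformations map to the same signed code.
    return tuple(round(v / step) for v in embedding)

def sign_image(image_pixels, key):
    """Signing step: bind a semantic code to the image with an HMAC."""
    code = quantize(semantic_embedding(image_pixels))
    msg = json.dumps(code).encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return code, tag

def verify_image(image_pixels, code, tag, key):
    """Verification step: cryptographic check, then semantic check."""
    # 1) The tag must be a valid signature over the claimed code.
    msg = json.dumps(list(code)).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    # 2) The image's current semantics must still match the signed code.
    return quantize(semantic_embedding(image_pixels)) == tuple(code)

# Usage: an untampered image verifies; a heavily altered one is rejected.
key = b"provenance-key"          # assumed shared signing key
image = [10, 200, 30, 40] * 16   # toy grayscale pixel values
code, tag = sign_image(image, key)
assert verify_image(image, code, tag, key)
tampered = [255] * 64            # semantics destroyed
assert not verify_image(tampered, code, tag, key)
```

A real system would replace the histogram with an embedding trained to be invariant to the benign transformations listed above while remaining sensitive to face swaps and adversarial perturbations; the cryptographic layer is what makes the binding between image and semantics unforgeable.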

Related research

06/12/2020 · Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces
Deepfake represents a category of face-swapping attacks that leverage ma...

01/19/2021 · Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization
Fooling deep neural networks (DNNs) with the black-box optimization has ...

06/23/2018 · On Adversarial Examples for Character-Level Neural Machine Translation
Evaluating on adversarial examples has become a standard procedure to me...

06/21/2019 · Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations
Recent years have seen fast development in synthesizing realistic human ...

07/22/2018 · SiGAN: Siamese Generative Adversarial Network for Identity-Preserving Face Hallucination
Despite generative adversarial networks (GANs) can hallucinate photo-rea...

06/05/2020 · Robust Face Verification via Disentangled Representations
We introduce a robust algorithm for face verification, i.e., deciding wh...
