Multimedia Semantic Integrity Assessment Using Joint Embedding Of Images And Text

07/06/2017
by Ayush Jaiswal, et al.

Real-world multimedia data is often composed of multiple modalities, such as an image or a video with associated text (e.g. captions, user comments) and metadata. Such multimodal data packages are prone to manipulation, where a subset of these modalities can be altered to misrepresent or repurpose the package, possibly with malicious intent. It is therefore important to develop methods to assess or verify the integrity of these multimedia packages. Using computer vision and natural language processing methods to directly compare the image (or video) against the associated caption to verify a media package's integrity is only possible for a limited set of objects and scenes. In this paper, we present a novel deep learning-based approach for assessing the semantic integrity of multimedia packages containing images and captions, using a reference set of multimedia packages. We construct a joint embedding of images and captions with deep multimodal representation learning on the reference dataset, in a framework that also provides image-caption consistency scores (ICCSs). The integrity of a query media package is then assessed as the inlierness of its ICCS with respect to the reference dataset. We present the MultimodAl Information Manipulation dataset (MAIM), a new dataset of media packages from Flickr, which we make available to the research community. We use the newly created dataset as well as the Flickr30K and MS COCO datasets to quantitatively evaluate our proposed approach. Notably, the reference dataset does not contain unmanipulated versions of the tampered query packages. Our method achieves F1 scores of 0.75, 0.89 and 0.94 on MAIM, Flickr30K and MS COCO, respectively, for detecting semantically incoherent media packages.
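The core idea above can be sketched in a few lines: score an image-caption pair by the similarity of their joint-embedding vectors, then decide integrity by whether that score is an inlier relative to the reference set's score distribution. The sketch below is a simplification, not the paper's actual model: it uses cosine similarity as a stand-in ICCS over pre-computed embeddings, and a simple quantile threshold as a stand-in for the inlierness test; the function names and the threshold are assumptions for illustration.

```python
import numpy as np

def iccs(image_emb, caption_emb):
    """Image-caption consistency score, sketched here as cosine
    similarity between embedding vectors. (The paper learns a deep
    multimodal joint embedding; this is a stand-in.)"""
    num = float(np.dot(image_emb, caption_emb))
    denom = np.linalg.norm(image_emb) * np.linalg.norm(caption_emb)
    return num / denom

def is_inlier(query_score, reference_scores, quantile=0.05):
    """Hypothetical inlierness test: accept the query package if its
    ICCS is not unusually low relative to the reference distribution.
    The 5% quantile cutoff is an assumed choice, not from the paper."""
    threshold = np.quantile(reference_scores, quantile)
    return bool(query_score >= threshold)

# Toy example: random scores standing in for a reference set of
# coherent (unmanipulated) packages.
rng = np.random.default_rng(0)
reference_scores = rng.uniform(0.6, 1.0, size=1000)

print(is_inlier(0.90, reference_scores))  # high ICCS -> coherent
print(is_inlier(0.10, reference_scores))  # low ICCS -> likely manipulated
```

In practice the reference scores would come from running the learned joint-embedding model over the reference dataset once, so that query-time integrity assessment reduces to one embedding pass plus a cheap distributional check.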


