Analysis of adversarial attacks against CNN-based image forgery detectors

08/25/2018
by Diego Gragnaniello, et al.

With the ubiquitous diffusion of social networks, images have become a dominant and powerful communication channel. Not surprisingly, they are also increasingly subject to manipulations aimed at distorting information and spreading fake news. In recent years, the scientific community has devoted major efforts to countering this menace, and many image forgery detectors have been proposed. Currently, owing to the success of deep learning in many multimedia processing tasks, there is strong interest in CNN-based detectors, and early results are already very promising. Recent studies in computer vision, however, have shown CNNs to be highly vulnerable to adversarial attacks: small perturbations of the input data that drive the network towards an erroneous classification. In this paper we analyze the vulnerability of CNN-based image forensics methods to adversarial attacks, considering several detectors and several types of attack, and testing performance on a wide range of common manipulations, both easy and hard to detect.
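The perturbations described above can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard gradient-based attack of the kind evaluated in this line of work. The toy linear classifier, weights, and epsilon below are illustrative assumptions for a self-contained demo, not the paper's actual CNN detectors or attack settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM sketch: move x by eps in the direction of the sign of the
    loss gradient w.r.t. the input, to push the classifier toward error.
    For a linear model w.x with binary cross-entropy loss,
    dL/dx = (sigmoid(w.x) - y) * w."""
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical "pristine" sample, correctly classified as class 1 (forged).
w = np.array([0.8, -0.5, 0.3])   # toy classifier weights (assumption)
x = np.array([1.0, -1.0, 1.0])   # w @ x = 1.6 -> confident class 1
y = 1.0

# eps is exaggerated for this 3-dimensional toy; on images the
# perturbation is typically small enough to be visually invisible.
x_adv = fgsm_attack(x, y, w, eps=2.0)

print(sigmoid(w @ x) > 0.5)      # original prediction: class 1
print(sigmoid(w @ x_adv) > 0.5)  # adversarial prediction: flipped to class 0
```

The same principle scales to CNNs: the gradient of the loss with respect to the input image is computed by backpropagation, and a sign-scaled step of it is added to the pixels, which is why detectors that rely on fragile high-frequency traces are particularly exposed.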
