Printing and Scanning Attack for Image Counter Forensics

04/27/2020
by Hailey James, et al.

Examining the authenticity of images has become increasingly important as manipulation tools become more accessible and advanced. Recent work has shown that while CNN-based image manipulation detectors can successfully identify manipulations, they are also vulnerable to adversarial attacks, ranging from simple double JPEG compression to advanced pixel-based perturbations. In this paper we explore another highly plausible attack: printing and scanning. We demonstrate the vulnerability of two state-of-the-art models to this type of attack. We also propose a new machine learning model that performs comparably to these state-of-the-art models when trained and validated on printed and scanned images. Of the three models, our proposed model performs best when trained and validated on images from a single printer. To facilitate this exploration, we create a dataset of over 6,000 printed and scanned image blocks. Further analysis suggests that the variation between images produced by different printers is significant: large enough that high validation accuracy on images from one printer does not imply similar accuracy on images from a different printer.
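The attack itself is physical, but its effect on a detector can be probed with a rough digital stand-in. The sketch below is a minimal approximation, assuming Gaussian blur, a tone (gamma) shift, sensor noise, and JPEG recompression as proxies for a print-scan cycle; the function `simulate_print_scan` and all parameter values are hypothetical illustrations for experimentation, not the paper's actual physical pipeline.

```python
# Hypothetical digital approximation of a print-and-scan cycle.
# The paper's attack uses physical printers and scanners; the blur,
# gamma shift, noise, and JPEG recompression below are stand-ins.
import io

import numpy as np
from PIL import Image, ImageFilter


def simulate_print_scan(img: Image.Image, seed: int = 0) -> Image.Image:
    rng = np.random.default_rng(seed)

    # Optical blur, approximating the printer/scanner point-spread function.
    out = img.filter(ImageFilter.GaussianBlur(radius=1.0))

    arr = np.asarray(out).astype(np.float32) / 255.0
    # Mild random gamma shift, approximating ink/toner tone response.
    arr = np.clip(arr ** rng.uniform(0.9, 1.1), 0.0, 1.0)
    # Additive sensor noise from the scanner.
    arr = np.clip(arr + rng.normal(0.0, 0.01, arr.shape), 0.0, 1.0)
    out = Image.fromarray((arr * 255).astype(np.uint8))

    # Scanners commonly save JPEG; recompression disturbs the subtle
    # pixel statistics that manipulation detectors rely on.
    buf = io.BytesIO()
    out.save(buf, format="JPEG", quality=85)
    return Image.open(io.BytesIO(buf.getvalue()))


# Usage: run a manipulation detector on an image block before and after
# the simulated cycle and compare its predictions.
# attacked = simulate_print_scan(Image.open("block.png").convert("RGB"))
```

A change in the detector's prediction under this mild, content-preserving distortion is the kind of vulnerability the paper demonstrates with real printed and scanned images.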


Related research

08/25/2018 · Analysis of adversarial attacks against CNN-based image forgery detectors
With the ubiquitous diffusion of social networks, images are becoming a ...

04/25/2021 · Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors
Visually realistic GAN-generated images have recently emerged as an impo...

05/06/2018 · A Counter-Forensic Method for CNN-Based Camera Model Identification
An increasing number of digital images are being shared and accessed thr...

08/30/2022 · A Black-Box Attack on Optical Character Recognition Systems
Adversarial machine learning is an emerging area showing the vulnerabili...

05/31/2018 · Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization
Adversarial attacks involve adding small, often imperceptible, perturba...

03/30/2023 · Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
In this work, we present a data poisoning attack that confounds machine ...
