The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs

01/26/2021
by Xinwei Zhao, et al.

In recent years, convolutional neural networks (CNNs) have been widely used by researchers to perform forensic tasks such as image tampering detection. At the same time, adversarial attacks have been developed that are capable of fooling CNN-based classifiers. Understanding the transferability of adversarial attacks, i.e., an attack's ability to fool a different CNN than the one it was trained against, has important implications for designing CNNs that are resistant to attacks. While attacks on object recognition CNNs are believed to be transferable, recent work by Barni et al. has shown that attacks on forensic CNNs have difficulty transferring to other CNN architectures or to CNNs trained using different datasets. In this paper, we demonstrate that adversarial attacks on forensic CNNs are even less transferable than previously thought, even between virtually identical CNN architectures. We show that several common adversarial attacks against CNNs trained to identify image manipulation fail to transfer to CNNs whose only difference lies in the class definitions (i.e., the same CNN architectures trained using the same data). We note that all formulations of class definitions contain the unaltered class. This has important implications for the future design of forensic CNNs that are robust to adversarial and anti-forensic attacks.
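
To make the transferability setup concrete, the sketch below shows the standard way such an experiment is run: craft adversarial examples against one forensic CNN with a gradient-sign (FGSM-style) attack, then measure how often the same examples fool a second CNN. This is a minimal illustration, not the paper's implementation; the names model_a, model_b, and loader, the eps value, and the assumption that the "unaltered" class has index 0 are all hypothetical.

```python
# Minimal sketch (PyTorch) of a cross-model transferability test.
# model_a is the surrogate the attack is crafted against; model_b is a
# second CNN whose only difference is its class definitions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.02):
    """Craft FGSM adversarial examples against `model`."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

@torch.no_grad()
def fooling_rate(model, adv_images, unaltered_class=0):
    """Fraction of adversarial examples classified as 'unaltered'
    (index 0 here is an assumption, not a detail from the paper)."""
    preds = model(adv_images).argmax(dim=1)
    return (preds == unaltered_class).float().mean().item()

# Hypothetical usage: attack model_a, then test the same examples on
# model_b. A high rate on model_a but a low rate on model_b is the
# failure of transferability the paper reports.
# for images, labels in loader:
#     adv = fgsm_attack(model_a, images, labels)
#     print("source model fooled:  ", fooling_rate(model_a, adv))
#     print("transfer model fooled:", fooling_rate(model_b, adv))
```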

Related research

05/16/2022 – Transferability of Adversarial Attacks on Synthetic Speech Detection
Synthetic speech detection is one of the most important research problem...

04/27/2020 – Adversarial Fooling Beyond "Flipping the Label"
Recent advancements in CNNs have shown remarkable achievements in variou...

08/25/2018 – Analysis of adversarial attacks against CNN-based image forgery detectors
With the ubiquitous diffusion of social networks, images are becoming a ...

12/21/2020 – Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks
Convolutional neural networks (CNN) have been more and more applied in m...

09/06/2022 – Improving the Accuracy and Robustness of CNNs Using a Deep CCA Neural Data Regularizer
As convolutional neural networks (CNNs) become more accurate at object r...

04/27/2020 – Transferable Perturbations of Deep Feature Distributions
Almost all current adversarial attacks of CNN classifiers rely on inform...

09/17/2020 – Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks
We present Vax-a-Net; a technique for immunizing convolutional neural ne...
