Self-Adversarial Training incorporating Forgery Attention for Image Forgery Localization

07/06/2021
by Long Zhuo, et al.

Image editing techniques enable people to modify the content of an image without leaving visible traces and thus may pose serious security risks. Detecting and localizing such forgeries is therefore both necessary and challenging. Furthermore, unlike other tasks with extensive data, annotated forged images for training are usually scarce because annotation is difficult. In this paper, we propose a self-adversarial training strategy and a reliable coarse-to-fine network that uses a self-attention mechanism to localize forged regions in forged images. The self-attention module is built on a Channel-Wise High-Pass Filter block (CW-HPF), which leverages inter-channel relationships of features and extracts noise features with high-pass filters. On top of the CW-HPF, we propose a self-attention mechanism, called forgery attention, to capture rich contextual dependencies of the intrinsic inconsistencies extracted from tampered regions. Specifically, we append two types of attention modules to the CW-HPF to model internal interdependencies in the spatial dimension and external dependencies among channels, respectively. We exploit a coarse-to-fine network to enhance the noise inconsistency between original and tampered regions. More importantly, to address the issue of insufficient training data, we design a self-adversarial training strategy that expands the training data dynamically to achieve more robust performance: in each training iteration, we perform adversarial attacks against our network to generate adversarial examples and train the model on them. Extensive experimental results demonstrate that our algorithm steadily outperforms state-of-the-art methods by a clear margin on different benchmark datasets.
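To make the channel-wise high-pass filtering idea concrete, here is a minimal sketch of applying a fixed high-pass kernel independently to each channel of a feature map to obtain noise residuals. The specific 3x3 kernel below is a representative choice from the steganalysis (SRM-family) literature, not necessarily the filter bank used in the paper, and the sketch omits the inter-channel modeling and attention modules that make up the full CW-HPF and forgery-attention blocks.

```python
import numpy as np

# A common 3x3 high-pass kernel from the SRM family (assumed here as a
# stand-in; the paper's exact filters are not specified in the abstract).
HPF_KERNEL = np.array([[-1,  2, -1],
                       [ 2, -4,  2],
                       [-1,  2, -1]], dtype=np.float32) / 4.0

def channel_wise_hpf(feat, kernel=HPF_KERNEL):
    """Apply a high-pass filter independently to every channel of a
    (C, H, W) feature map and return noise residuals of the same shape.

    Edge padding keeps the spatial size; since the kernel's entries sum
    to zero, smooth (locally constant) content is suppressed and only
    high-frequency noise survives.
    """
    c, h, w = feat.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(feat)
    for ch in range(c):
        for i in range(h):
            for j in range(w):
                out[ch, i, j] = np.sum(padded[ch, i:i + k, j:j + k] * kernel)
    return out
```

Because the kernel sums to zero, a constant image yields an all-zero residual; tampered regions, whose noise statistics differ from the authentic background, leave a distinct residual pattern that the downstream attention modules can exploit.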
