Infrared and Visible Image Fusion via Interactive Compensatory Attention Adversarial Learning

by Zhishe Wang, et al.

Existing generative adversarial fusion methods generally concatenate the source images and extract local features through convolution operations, without considering their global characteristics, which tends to produce an unbalanced result biased towards either the infrared or the visible image. To address this, we propose a novel end-to-end model based on generative adversarial training that achieves better fusion balance, termed the interactive compensatory attention fusion network (ICAFusion). In particular, in the generator, we construct a multi-level encoder-decoder network with a triple path, where dedicated infrared and visible paths provide additional intensity and gradient information. Moreover, we develop interactive and compensatory attention modules that exchange path-wise information and model long-range dependencies to generate attention maps, which focus more on infrared target perception and visible detail characterization, further increasing the representation power for feature extraction and feature reconstruction. In addition, dual discriminators are designed to identify the similar distributions of the fused result and the source images, so that the generator is optimized to produce a more balanced result. Extensive experiments illustrate that our ICAFusion obtains superior fusion performance and better generalization ability, surpassing other advanced methods in both subjective visual description and objective metric evaluation. Our code will be made public at <>
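The key idea in the attention modules above is to model long-range dependencies across the two modality paths, so each spatial position of one path can attend to all positions of the other. The following is a minimal, hypothetical numpy sketch of such a cross-path attention map (scaled dot-product attention over flattened feature maps); all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b):
    """Hypothetical sketch: attend from path A to path B.

    feat_a, feat_b: (N, C) arrays, i.e. an H*W x C flattening of a
    feature map. Each output position is a weighted sum over ALL
    positions of feat_b, so the result captures long-range (global)
    dependencies rather than a local convolutional receptive field.
    """
    n, c = feat_a.shape
    scores = feat_a @ feat_b.T / np.sqrt(c)   # (N, N) pairwise affinities
    attn = softmax(scores, axis=-1)           # each row sums to 1
    return attn @ feat_b                      # (N, C) attended features

# toy example: 4x4 feature maps with 8 channels from two modality paths
rng = np.random.default_rng(0)
ir  = rng.standard_normal((16, 8))   # "infrared path" features (assumed)
vis = rng.standard_normal((16, 8))   # "visible path" features (assumed)
fused = cross_attention(ir, vis)
print(fused.shape)  # (16, 8)
```

In the paper's interactive/compensatory modules this kind of map would be computed in both directions and combined with the fusion path, but the global row-wise weighting shown here is the mechanism that distinguishes attention from purely local convolution.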


