CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion

11/20/2022
by   Jinyuan Liu, et al.

Infrared and visible image fusion aims to provide an informative image by combining complementary information from different sensors. Existing learning-based fusion approaches construct various loss functions to preserve complementary features from both modalities, but neglect the inter-relationship between the two modalities, leading to redundant or even invalid information in the fusion results. To alleviate these issues, we propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion in an end-to-end manner. Concretely, to simultaneously retain typical features from both modalities and remove unwanted information appearing in the fused result, we develop a coupled contrastive constraint in our loss function. In a fused image, the foreground-target/background-detail part is pulled close to the infrared/visible source and pushed far away from the visible/infrared source in the representation space. We further exploit image characteristics to provide data-sensitive weights, which allow our loss function to build a more reliable relationship with the source images. Furthermore, to learn rich hierarchical feature representations and comprehensively transfer features in the fusion process, a multi-level attention module is established. In addition, we apply the proposed CoCoNet to medical image fusion of different types, e.g., magnetic resonance and positron emission tomography images, and magnetic resonance and single photon emission computed tomography images. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation, especially in preserving prominent targets and recovering vital textural details.
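The abstract does not give the exact formulation of the coupled contrastive constraint, but its pull/push description suggests a distance-ratio loss in a shared representation space. The PyTorch sketch below illustrates one plausible reading: the fused foreground embedding is pulled toward the infrared embedding and pushed away from the visible one, and vice versa for the background, with per-term weights standing in for the paper's data-sensitive weighting. All names (coupled_contrastive_loss, f_fg, f_bg, f_ir, f_vis, w_fg, w_bg) are hypothetical, and the actual encoder, distance metric, and weighting scheme in CoCoNet may differ.

```python
import torch
import torch.nn.functional as F

def coupled_contrastive_loss(f_fg, f_bg, f_ir, f_vis,
                             w_fg=1.0, w_bg=1.0, eps=1e-8):
    """Sketch of a coupled contrastive constraint (not the paper's exact loss).

    f_fg, f_bg : embeddings of the fused image's foreground / background parts
    f_ir, f_vis: embeddings of the infrared / visible source images
    w_fg, w_bg : placeholder weights; the paper derives data-sensitive
                 weights from image characteristics instead of constants
    """
    # Foreground: infrared is the positive sample, visible the negative,
    # so minimizing this ratio pulls toward IR and pushes from VIS.
    loss_fg = F.l1_loss(f_fg, f_ir) / (F.l1_loss(f_fg, f_vis) + eps)
    # Background: visible is the positive sample, infrared the negative.
    loss_bg = F.l1_loss(f_bg, f_vis) / (F.l1_loss(f_bg, f_ir) + eps)
    return w_fg * loss_fg + w_bg * loss_bg

# Usage: embeddings would come from a fixed feature extractor
# (e.g., a pretrained VGG) applied to masked fused and source images.
f_fg, f_bg, f_ir, f_vis = [torch.randn(4, 512) for _ in range(4)]
loss = coupled_contrastive_loss(f_fg, f_bg, f_ir, f_vis)
```

The ratio form captures the coupled pull/push behavior with a single scalar per region: shrinking the numerator attracts the fused embedding to its positive source while growing the denominator repels it from the negative one.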


Related research:

01/26/2022: A Joint Convolution Auto-encoder Network for Infrared and Visible Image Fusion
05/19/2023: Equivariant Multi-Modality Image Fusion
03/07/2021: RFN-Nest: An end-to-end residual fusion network for infrared and visible images
03/29/2022: Infrared and Visible Image Fusion via Interactive Compensatory Attention Adversarial Learning
11/26/2022: CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion
11/09/2022: Interactive Feature Embedding for Infrared and Visible Image Fusion
03/30/2022: Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection
