Breaking Free from Fusion Rule: A Fully Semantic-driven Infrared and Visible Image Fusion

11/22/2022
by Yuhui Wu, et al.

Infrared and visible image fusion plays a vital role in computer vision. Previous approaches devote considerable effort to designing various fusion rules within their loss functions, but these empirically designed rules make the methods increasingly complex. Moreover, most of them focus only on improving visual quality and therefore perform poorly on follow-up high-level vision tasks. To address these challenges, in this letter we develop a semantic-level fusion network that fully exploits semantic guidance and dispenses with empirically designed fusion rules. In addition, to achieve a better semantic understanding of the feature fusion process, we present a transformer-based fusion block that operates in a multi-scale manner. Moreover, we devise a regularization loss function, together with a training strategy, to make full use of semantic guidance from high-level vision tasks. Compared with state-of-the-art methods, our method does not depend on a hand-crafted fusion loss function, yet it achieves superior visual quality as well as better performance on follow-up high-level vision tasks.
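The abstract names two technical pieces: a transformer-based fusion block applied in a multi-scale manner, and a regularization loss that injects semantic guidance from a high-level vision task into the fusion network. The PyTorch sketch below is only a minimal illustration of how such components might be wired together; it is not the authors' code, and the module names, channel sizes, the use of average pooling for the coarser scales, and the choice of a frozen semantic segmentation head as the high-level task are all assumptions made for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerFusionBlock(nn.Module):
    """Fuse infrared and visible feature maps at a single scale with self-attention."""

    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)  # mix the two modalities
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feat_ir, feat_vis):
        b, c, h, w = feat_ir.shape
        tokens = self.proj(torch.cat([feat_ir, feat_vis], dim=1))     # (B, C, H, W)
        tokens = tokens.flatten(2).transpose(1, 2)                    # (B, H*W, C) token sequence
        fused, _ = self.attn(tokens, tokens, tokens)                  # global spatial self-attention
        return fused.transpose(1, 2).reshape(b, c, h, w)


class MultiScaleFusion(nn.Module):
    """Run the fusion block at several spatial scales and merge the results."""

    def __init__(self, channels=32, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.blocks = nn.ModuleList(TransformerFusionBlock(channels) for _ in scales)
        self.merge = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, feat_ir, feat_vis):
        outs = []
        for s, block in zip(self.scales, self.blocks):
            ir = F.avg_pool2d(feat_ir, s) if s > 1 else feat_ir       # downsample for coarser scales
            vis = F.avg_pool2d(feat_vis, s) if s > 1 else feat_vis
            fused = block(ir, vis)
            if s > 1:                                                 # restore the input resolution
                fused = F.interpolate(fused, size=feat_ir.shape[-2:],
                                      mode="bilinear", align_corners=False)
            outs.append(fused)
        return self.merge(torch.cat(outs, dim=1))


def semantic_regularization_loss(fused_image, labels, seg_model):
    """Semantic guidance: a frozen high-level model (here, a segmentation network)
    scores the fused image, so the fusion network is trained by the downstream task
    rather than by a hand-crafted pixel-level fusion rule."""
    seg_model.eval()
    for p in seg_model.parameters():
        p.requires_grad_(False)                 # guidance only; the frozen head is not updated
    logits = seg_model(fused_image)             # (B, num_classes, H, W)
    return F.cross_entropy(logits, labels)      # labels: (B, H, W) integer class map

A quick sanity check under these assumptions: with feat_ir and feat_vis of shape (1, 32, 64, 64), MultiScaleFusion()(feat_ir, feat_vis) returns a (1, 32, 64, 64) fused feature map. In the loss, gradients flow through the frozen segmentation head back to the fused image, which is what lets the high-level task supervise the fusion network without an explicit fusion rule in the loss.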


Related research

05/17/2019  AM-LFS: AutoML for Loss Function Search
Designing an effective loss function plays an important role in visual a...

07/29/2021  PPT Fusion: Pyramid Patch Transformer for a Case Study in Image Fusion
The Transformer architecture has achieved rapid development in recent yea...

05/10/2023  FusionBooster: A Unified Image Fusion Boosting Paradigm
Numerous ideas have emerged for designing fusion rules in the image fusi...

07/27/2015  Unification of Fusion Theories, Rules, Filters, Image Fusion and Target Tracking Methods (UFT)
The author has pledged in various papers, conference or seminar presenta...

05/10/2019  Neural-Guided RANSAC: Learning Where to Sample Model Hypotheses
We present Neural-Guided RANSAC (NG-RANSAC), an extension to the classic...

11/09/2022  Interactive Feature Embedding for Infrared and Visible Image Fusion
General deep learning-based methods for infrared and visible image fusio...

07/08/2023  VS-TransGRU: A Novel Transformer-GRU-based Framework Enhanced by Visual-Semantic Fusion for Egocentric Action Anticipation
Egocentric action anticipation is a challenging task that aims to make a...
