Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer

10/05/2018
by Ashnil Kumar et al.

The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images requires combining the sensitivity of PET in detecting abnormal regions with the anatomical localization provided by CT. However, current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on prior knowledge of the image analysis task. These methods generally do not account for the spatially varying visual characteristics that encode different information in each modality and that give the modalities different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve the fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse this information for medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. The fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis tasks such as region detection. We evaluated our CNN on a region detection problem using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image analysis (pre-fused inputs, multi-branch techniques, and multi-channel techniques) and demonstrated that our approach achieved significantly higher accuracy (p < 0.05) than all baselines.
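To make the fusion mechanism concrete, below is a minimal PyTorch sketch of a co-learning fusion block along the lines the abstract describes: per-modality encoders, a learned spatially varying fusion map, and modality features reweighted by that map. The encoder depths, channel counts, softmax normalization of the fusion map, and concatenation of the reweighted features are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CoLearningFusion(nn.Module):
    """Sketch of a co-learning fusion block: encode each modality,
    predict a per-pixel importance weight for each modality, and
    multiply the weights with the modality-specific feature maps."""

    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()

        def encoder():  # hypothetical modality-specific encoder
            return nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            )

        self.enc_ct, self.enc_pet = encoder(), encoder()
        # 1x1 conv predicts one weight per modality at every pixel
        self.fusion_head = nn.Conv2d(2 * feat_ch, 2, kernel_size=1)

    def forward(self, ct, pet):
        f_ct = self.enc_ct(ct)     # (B, C, H, W) CT features
        f_pet = self.enc_pet(pet)  # (B, C, H, W) PET features
        # Spatially varying fusion map, normalized across modalities
        w = torch.softmax(
            self.fusion_head(torch.cat([f_ct, f_pet], dim=1)), dim=1)
        # Multiply each modality's features by its importance map,
        # then concatenate for a downstream head (e.g. region detection)
        fused = torch.cat([w[:, 0:1] * f_ct, w[:, 1:2] * f_pet], dim=1)
        return fused, w

# Example: fuse a batch of two 128x128 PET-CT slice pairs
ct, pet = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
fused, fusion_map = CoLearningFusion()(ct, pet)
print(fused.shape, fusion_map.shape)  # (2, 64, 128, 128), (2, 2, 128, 128)
```

In this sketch the softmax makes the two importance maps sum to one at each location, so a region of physiological uptake such as the myocardium can downweight PET features while upweighting CT features; whether the paper's network normalizes its fusion maps this way is an assumption here.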


