Multi-Modality Information Fusion for Radiomics-based Neural Architecture Search

07/12/2020
by Yige Peng, et al.

'Radiomics' is a method that extracts mineable quantitative features from radiographic images. These features can then be used to determine prognosis, for example, predicting the development of distant metastases (DM). Existing radiomics methods, however, require complex manual effort, including the design, extraction, and selection of hand-crafted radiomic features. Recent radiomics methods based on convolutional neural networks (CNNs) also require manual input for network architecture design and hyper-parameter tuning. Radiomic complexity is further compounded when there are multiple imaging modalities, for example, combined positron emission tomography - computed tomography (PET-CT), where PET provides functional information and CT provides complementary anatomical localization information. Existing multi-modality radiomics methods extract features from each modality separately and then fuse them manually. Such manual fusion is often sub-optimal because it depends on an expert's understanding of medical images. In this study, we propose a multi-modality neural architecture search method (MM-NAS) to automatically derive optimal multi-modality image features for radiomics and thus remove the dependence on a manual process. We evaluated MM-NAS on its ability to predict DM using a public PET-CT dataset of patients with soft-tissue sarcomas (STSs). Our results show that MM-NAS achieved higher prediction accuracy than state-of-the-art radiomics methods.
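The core idea of searching for a fusion strategy, rather than hand-picking one, can be illustrated with a minimal DARTS-style sketch: candidate fusion operations (concatenation, addition, multiplication) combine a PET feature vector with a CT feature vector, and continuous architecture weights select softly among them so the choice can be optimized by gradient descent. All names and operations below are illustrative assumptions, not the paper's actual MM-NAS search space.

```python
import numpy as np

# Illustrative candidate fusion operations over PET and CT feature vectors.
# These are assumptions for the sketch, not the operations used by MM-NAS.

def fuse_concat(pet, ct):
    # Concatenate channels, then project back with a fixed averaging matrix.
    proj = np.ones((pet.size + ct.size, pet.size)) / (pet.size + ct.size)
    return np.concatenate([pet, ct]) @ proj

def fuse_add(pet, ct):
    return pet + ct

def fuse_mul(pet, ct):
    return pet * ct

CANDIDATE_OPS = [fuse_concat, fuse_add, fuse_mul]

def soft_fusion(pet, ct, alpha):
    """Continuous relaxation: softmax(alpha) mixes all candidate ops,
    so alpha can be trained jointly with the network weights and the
    argmax op kept as the final (discrete) fusion architecture."""
    weights = np.exp(alpha - alpha.max())
    weights /= weights.sum()
    outputs = np.stack([op(pet, ct) for op in CANDIDATE_OPS])
    return (weights[:, None] * outputs).sum(axis=0)

pet_feat = np.array([0.2, 0.5, 0.1, 0.7])
ct_feat = np.array([0.4, 0.1, 0.3, 0.2])
alpha = np.zeros(len(CANDIDATE_OPS))  # uniform mixture at initialization
fused = soft_fusion(pet_feat, ct_feat, alpha)
print(fused.shape)  # fused vector keeps the per-modality dimensionality
```

As `alpha` is optimized, the softmax concentrates on whichever fusion operation best serves the downstream prediction task, which is the sense in which the fusion strategy is "searched" rather than designed by hand.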

