- Pyramid Network with Online Hard Example Mining for Accurate Left Atrium Segmentation
  Accurately segmenting left atrium in MR volume can benefit the ablation ...
- PET/CT Radiomic Sequencer for Prediction of EGFR and KRAS Mutation Status in NSCLC Patients
  The aim of this study was to develop radiomic models using PET/CT radiom...
- A Novel and Efficient Tumor Detection Framework for Pancreatic Cancer via CT Images
  As Deep Convolutional Neural Networks (DCNNs) have shown robust performa...
- Quantification of Local Metabolic Tumor Volume Changes by Registering Blended PET-CT Images for Prediction of Pathologic Tumor Response
  Quantification of local metabolic tumor volume (MTV) changes after Chem...
- FragNet: Writer Identification using Deep Fragment Networks
  Writer identification based on a small amount of text is a challenging p...
- Human Recognition Using Face in Computed Tomography
  With the mushrooming use of computed tomography (CT) images in clinical ...
- Spatial-And-Context aware (SpACe) "virtual biopsy" radiogenomic maps to target tumor mutational status on structural MRI
  With growing emphasis on personalized cancer-therapies, radiogenomics has...
Pyramid Focusing Network for mutation prediction and classification in CT images
Predicting the mutation status of genes in tumors is of great clinical significance. Recent studies have suggested that certain mutations may be noninvasively predicted by studying image features of the tumors from Computed Tomography (CT) data. Currently, this kind of image feature identification relies either on manual extraction of generalized image features or on automated processing that ignores the morphological differences of the tumor itself, which makes further breakthroughs difficult. In this paper, we propose a pyramid focusing network (PFNet) for mutation prediction and classification based on CT images. Firstly, motivated by the observation that the shape and size of tumors vary widely, we use Spatial Pyramid Pooling to collect semantic cues in feature maps from multiple scales. Secondly, we improve the loss function based on the consideration that the features required for correct mutation detection are often not obvious in cross-sections near tumor edges, so that the network pays more attention to these hard examples. Finally, we devise a training scheme based on data augmentation to enhance the generalization ability of the network. Extensively verified on clinical gastric CT datasets of 20 testing volumes with 63,648 CT images, our method achieves an accuracy of 94.90% in predicting the mutation status at the CT image level.
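The abstract does not give implementation details, but its two core ideas, multi-scale pooling over feature maps and a loss that concentrates on hard examples, can be sketched in PyTorch. The module and function names, the pooling grid sizes, and the keep_ratio below are illustrative assumptions, not the paper's actual configuration; this is a minimal sketch, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialPyramidPooling(nn.Module):
    """Pool a feature map at several grid sizes and concatenate the results,
    so tumors of different shapes and sizes contribute multi-scale cues."""

    def __init__(self, grid_sizes=(1, 2, 4)):  # assumed grid sizes
        super().__init__()
        self.grid_sizes = grid_sizes

    def forward(self, x):
        # x: (batch, channels, H, W) feature map from a CNN backbone
        batch = x.size(0)
        pooled = [F.adaptive_avg_pool2d(x, g).view(batch, -1) for g in self.grid_sizes]
        return torch.cat(pooled, dim=1)  # fixed-length multi-scale descriptor


def ohem_cross_entropy(logits, targets, keep_ratio=0.25):
    """Cross-entropy that backpropagates only the hardest samples in a batch:
    the fraction `keep_ratio` with the largest per-sample loss."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    num_keep = max(1, int(keep_ratio * per_sample.numel()))
    hard_losses, _ = torch.topk(per_sample, num_keep)
    return hard_losses.mean()


if __name__ == "__main__":
    feats = torch.randn(8, 256, 16, 16)                    # hypothetical backbone output
    descriptor = SpatialPyramidPooling()(feats)            # shape: (8, 256 * (1 + 4 + 16))
    logits = nn.Linear(descriptor.size(1), 2)(descriptor)  # mutated vs. wild-type
    loss = ohem_cross_entropy(logits, torch.randint(0, 2, (8,)))
    print(loss.item())
```

Concatenating the pooled vectors yields a fixed-length descriptor regardless of tumor extent, and keeping only the highest-loss samples per batch (online hard example mining) is one common way to focus training on ambiguous edge cross-sections, in the spirit of the loss modification the abstract describes.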