Visualizing intestines for diagnostic assistance of ileus based on intestinal region segmentation from 3D CT images

by   Hirohisa Oda, et al.

This paper presents a method for visualizing intestine (small and large intestine) regions and their stenosed parts caused by ileus from CT volumes. Since it is difficult for non-expert clinicians to find stenosed parts, the intestines and their stenosed parts should be visualized intuitively. Furthermore, the intestine regions of ileus cases are quite hard to segment. The proposed method segments intestine regions with a 3D FCN (3D U-Net). Intestine regions are difficult to segment in ileus cases because the inside of the intestine is filled with fluids whose intensities are similar to those of the intestinal wall on 3D CT volumes. We segment the intestine regions using a 3D U-Net trained with a weak annotation approach. Weak annotation makes it possible to train the 3D U-Net from a small number of manually traced label images of the intestine, so we do not need to prepare many annotation labels of the long and winding intestine. Each intestine segment is volume-rendered and colored based on the distance from its endpoint. Stenosed parts (disjoint points of an intestine segment) can be easily identified in such a visualization. In the experiments, we show that stenosed parts were intuitively visualized as endpoints of segmented regions, colored in red or blue.




1 Introduction

In this paper, we propose a method for visualizing the intestines (the small and large intestines) and their stenosed parts caused by ileus on CT volumes. When diagnosing ileus, clinicians manually find obstructed or stenosed parts of the intestines by tracking the path of the intestines on volumetric CT images (CT volumes). However, finding stenosed parts on CT volumes is difficult for non-expert clinicians because the small intestine is long and winding. Visualization of the intestines and their stenosed parts helps clinicians understand how the intestines run and identify the stenosed parts.

There are various visualization methods for the large intestine (colon) on CT volumes. Virtual colonoscopy [5, 3] generates colonoscopy-like images; by exploring the inside of the large intestine as if using a colonoscope, users can find lesions such as polyps. Virtual unfolding of the large intestine [4, 11] has also been widely studied; the entire large intestine can be observed at once in the unfolded view. Since the intensities of feces or fluid inside the large intestine are similar to those of the surrounding tissues, fecal tagging is desirable for clear visualization of the colon: patients orally take a contrast agent to increase the visibility of feces or fluid on CT volumes. Virtual cleansing of feces [7, 9] replaces fecal or fluid regions with the intensity of air, making the entire large intestine appear as if filled with air.

However, fecal tagging is not possible in the diagnosis of ileus, which is often required in emergency settings. Furthermore, the visualization methods mentioned above are only for the large intestine. Ileus often occurs in the small intestine, which is much longer and has a more complicated shape than the large intestine. An accurate and intuitive visualization method specific to ileus diagnosis is therefore desired.

Intestine segmentation is a fundamental process for visualizing the intestine region for diagnostic assistance in ileus cases. There are very few segmentation methods that can be applied to the small intestine. Zhang et al. [12] segmented small intestines on contrast-enhanced CT angiography (CTA) scans. Most intestinal segmentation methods [2, 10] are applied only to the large intestine under the assumption of fecal tagging by contrast agents. Virtual cleansing of feces [7, 9], mentioned in the previous paragraph, also assumes fecal tagging. In summary, compared to segmentation of the large intestine with fecal tagging, segmentation for ileus patients is difficult due to 1) low contrast between fluid and the intestinal wall and 2) the long and winding shape of the small intestine.

Utilizing a 3D FCN, such as 3D U-Net [1], is one possible solution for segmenting intestine regions of ileus cases from CT volumes. However, it is very difficult to prepare manually traced 3D labels of the intestine covering not only the large intestine but also the small intestine. Instead, we introduce a weak annotation approach using a small amount of sample data: training is possible with manually traced labels of the intestine region on only several axial slices. Note that this work is not semi-automated segmentation with sparse annotation [1, 8], which produces complete segmentation results from volumetric images that partly contain manual tracing. Our proposed method does not require any manual tracing on testing data.

A 3D U-Net generally produces some false positive (FP) regions. We implement a manual selection interface that selects intestine segments (connected components) from the 3D U-Net output to eliminate these FP regions: the user selects points inside intestinal regions. The endpoints of an intestinal segment are then identified by a constrained distance transformation. Each segment is colored based on the distance from an endpoint, which helps a clinician easily understand how an intestinal segment runs. This also makes it easy to find stenosed parts, which appear as disjoint points between intestinal segments.

2 Methods

2.1 Overview

The proposed method visualizes stenosed parts through 1) intestinal segmentation and 2) endpoint detection of intestinal segments. The 3D U-Net must be trained prior to inference. We assume that the CT volumes for both training and inference are of ileus patients. In the following procedure, we use isotropic volumetric images obtained by interpolating the original CT volumes.

Weak annotations are manually created on several (7 in our experiments) axial slices of each CT volume in the training dataset. These axial slices are randomly selected from the CT volumes by a computer scientist who is familiar with ileus diagnosis.

2.2 Training

Our network structure is based on the 3D U-Net [1], as illustrated in Fig. 1(a). Instead of implementing the skip connections as concatenations of features, we use summation [6]. This reduces the number of parameters to be trained (green arrows in Fig. 1(a)) compared to concatenation.

In training, we use a fixed patch size. Since the patch size along the z-axis is small (16 voxels), padding techniques that insert fixed values (e.g. zero-padding) affect the central part of the feature maps after several convolutions. Therefore, padding around the boundaries of the feature maps is performed by reflection before each convolutional layer to keep the feature map size constant. The size of the network output is the same as that of the input patch. The output represents the probability that each voxel is outside (near 0) or inside (near 1) the intestines.
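The effect of reflection padding can be illustrated with a generic NumPy sketch (this is not the network code, just the padding idea):

```python
import numpy as np

# Reflection padding mirrors interior values across the boundary,
# unlike zero-padding, which leaks constant values into feature maps
# near the border and can distort them after repeated convolutions.
x = np.array([[1, 2, 3],
              [4, 5, 6]])
padded = np.pad(x, pad_width=1, mode="reflect")
# padded is 4x5: every border entry mirrors an interior value,
# so no artificial zeros enter the feature statistics.
```

With only 16 voxels along z, a few 3x3x3 convolutions would otherwise let zero-padded borders reach the central slice, which is exactly the slice used for the loss.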

As illustrated in Fig. 1(b), patches are cropped so that they contain a manually traced ground-truth slice as the middle slice of each patch. We compute the loss function only at the middle slice of a patch volume. For more robust training from a small dataset, we apply non-rigid deformation and random rotation to each slice of the patch volumes. For the non-rigid deformation, we define a grid on a patch; each grid point moves randomly along the x- and y-axes. The random rotation is limited to a maximum angle.

2.3 Intestinal segmentation

Intestinal regions are segmented by the 3D U-Net trained with the weak annotation approach; only several slices of each CT volume in the training dataset need to have manually traced labels. The sizes of the interpolated volumes along the x- and y-axes are adjusted by padding to the next highest multiple of 32 to fit the FCN.
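The padded size can be computed as the next highest multiple of 32; a trivial helper (naming ours) makes the rounding explicit:

```python
def pad_to_multiple(size, multiple=32):
    """Smallest multiple of `multiple` that is >= size.

    A U-Net with five resolution levels halves each in-plane
    dimension up to five times, so input sizes must be divisible
    by 2^5 = 32 for the down/upsampling paths to align.
    """
    return ((size + multiple - 1) // multiple) * multiple
```

For example, a 500-voxel axis would be padded to 512, while a 512-voxel axis is left unchanged.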

A patch volume is cropped from the interpolated input volume and fed to the 3D U-Net to obtain segmentation results (Fig. 1(b)). When we input a patch volume to the FCN, we use only the inference result on its middle slice, because the FCN is trained with a loss function computed only at the middle slice of the patch volume. We obtain inference results for all slices of the interpolated volume by cropping patch volumes in a sliding-window manner with a one-voxel stride along the z-axis. By iterating this process, we obtain segmentation results for the whole volume.
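The sliding-window, middle-slice-only inference can be sketched as follows (an illustrative NumPy version, not the authors' code; `fcn` stands in for the trained network, and edge slices reuse the first/last full patch):

```python
import numpy as np

def middle_slice_inference(volume, fcn, patch_z=16):
    """Slide a patch one voxel at a time along z, keeping only the
    output slice aligned with the patch center, to match how the
    loss was computed during training.

    volume: (Z, Y, X) array; fcn maps a (patch_z, Y, X) patch to
    per-voxel probabilities of the same shape.
    """
    z = volume.shape[0]
    half = patch_z // 2
    out = np.zeros_like(volume, dtype=float)
    for center in range(z):
        # Clamp the window at the top/bottom of the volume.
        start = min(max(center - half, 0), z - patch_z)
        patch = volume[start:start + patch_z]
        out[center] = fcn(patch)[center - start]
    return out
```

Each output slice therefore comes from the position in the patch where the network was actually supervised.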

The trained FCN can take a patch volume of any size unless the GPU memory limit is exceeded. Since the FCN consists of convolutions and the training process learns only the convolution kernel parameters, the input patch volume size can be changed at inference time.

2.4 Visualizing stenosed parts as endpoints of intestinal segments

The 3D U-Net still produces some FP regions that need to be removed. Wrong connections between neighboring intestinal regions are also produced in the segmentation results. To remove such regions, a morphological opening operation is first applied to the 3D U-Net outputs to eliminate small connected components and to separate adjacent intestine regions.
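The idea of the opening step can be shown on a toy 2D coordinate set (the actual method operates in 3D; this pure-Python sketch with a plus-shaped structuring element is ours):

```python
def binary_opening(points,
                   neighborhood=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    """Erosion followed by dilation on a set of (y, x) foreground
    coordinates. Components smaller than the structuring element
    vanish, and narrow bridges between adjacent regions are thinned,
    which is why opening helps separate touching intestine segments."""
    pts = set(points)
    # Erosion: keep a point only if its whole neighborhood is foreground.
    eroded = {p for p in pts
              if all((p[0] + dy, p[1] + dx) in pts for dy, dx in neighborhood)}
    # Dilation: grow the surviving points back by the same element.
    return {(p[0] + dy, p[1] + dx) for p in eroded for dy, dx in neighborhood}
```

An isolated voxel-sized false positive is erased entirely, while a solid 3x3 block survives (reduced to the element's footprint).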

As illustrated in Fig. 2, we developed a manual selection interface for intestinal segments. Users just need to click somewhere inside the intestine (there is no need to click either end of the intestinal segment); the intestinal segment containing the clicked point is then obtained. Screenshots are shown in Fig. 3.

Figure 2: Concept of our manual selection interface. By clicking one of the segmentation results inside the intestines, the result is colored to visualize the intestinal segment. One endpoint becomes blue and the other becomes red. If a subsequent intestinal segment exists, users can click it as well. Clicking several times allows users to reach the stenosed part.
Figure 3: Screenshots of the manual selection interface. (a) Before clicking: network responses are shown as a semitransparent heatmap, and users click inside the intestines (high network responses). (b) After clicking: the clicked intestinal segment is shown clearly, with one endpoint in blue and the other in red.
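Selecting the segment under a click amounts to extracting one connected component by flood fill; a minimal 2D sketch (4-neighborhood, pure Python; the actual system works on 3D connected components):

```python
from collections import deque

def component_at_click(foreground, click):
    """Flood-fill from the clicked voxel and return its connected
    component. Every other component, e.g. a false positive the
    user did not click, is simply not returned.

    foreground: set of (y, x) coordinates; click: an (y, x) pair.
    """
    if click not in foreground:
        return set()
    seen, queue = {click}, deque([click])
    while queue:
        y, x = queue.popleft()
        for n in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if n in foreground and n not in seen:
                seen.add(n)
                queue.append(n)
    return seen
```

A click outside the foreground returns nothing, so stray clicks never add FP regions to the visualization.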

2.5 3D visualization

We visualize intestinal segments using 3D volume rendering. Each segment is colored based on the distance from an endpoint along the running path of the segment. A constrained distance transformation is used to find the endpoints of an intestinal segment: first, a point is selected in the segment and a constrained distance transformation is performed from it; the voxel with the maximum distance value is taken as one endpoint. A constrained distance transformation is then performed from this endpoint, and the voxel with the maximum distance value is taken as the other endpoint. In the volume rendering, each intestinal segment is colored by the distance from the first endpoint.
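The two-pass endpoint search can be sketched with a breadth-first search restricted to the segment mask, which is a unit-spaced constrained distance transform (an illustrative 2D, 4-neighborhood version, not the authors' implementation):

```python
from collections import deque

def geodesic_farthest(mask, start):
    """BFS distance inside the mask (a constrained distance transform
    on a unit grid). Returns the farthest voxel from `start` and the
    full distance map, which is also what drives the coloring."""
    dist = {start: 0}
    queue = deque([start])
    far = start
    while queue:
        y, x = queue.popleft()
        for n in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if n in mask and n not in dist:
                dist[n] = dist[(y, x)] + 1
                if dist[n] > dist[far]:
                    far = n
                queue.append(n)
    return far, dist

def find_endpoints(mask, seed):
    """Pass 1: the farthest voxel from an arbitrary seed is one
    endpoint. Pass 2: the farthest voxel from that endpoint is the
    other endpoint; its distance map colors the segment."""
    end_a, _ = geodesic_farthest(mask, seed)
    end_b, dist = geodesic_farthest(mask, end_a)
    return end_a, end_b, dist
```

Because distances are constrained to the segment, the two maxima land at the two ends of the tube even when the segment winds back on itself in space.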

Figure 4: Case C, whose stenosed part was successfully visualized.

3 Experiments

Seven CT volumes of ileus patients were used under IRB approval of Aichi Medical University (Aichi, Japan). They consist of voxels with resolution . Seven axial slices of each volume had manually traced labels of the intestine contents, which were checked by a pediatric surgeon. Stenosed parts were also identified by the surgeon on all CT volumes. The CT volumes were separated into three groups (of 2, 2, and 3 volumes, respectively) for cross-validation.

The mini-batch size was set to four due to the memory limitation (24 GB) of the NVIDIA Quadro P6000 GPU. Training was continued for 20,000 iterations using the Adam optimizer with an initial learning rate . The network was implemented in Keras. Segmentation accuracy is quantitatively evaluated on the axial slices having manually traced labels by 1) the Dice score after thresholding the generated probabilities and 2) the average number of connections between neighboring intestinal segments. The other parameters were set as follows: each grid point moves up to 10 pixels for the non-rigid deformation, and rotation is limited to a maximum angle.
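The first evaluation metric, the Dice score, is simply the set overlap between predicted and ground-truth voxels; a small self-contained sketch (ours, operating on voxel coordinate sets):

```python
def dice_score(pred, truth):
    """Dice coefficient between two voxel sets: 2|A∩B| / (|A| + |B|).

    1.0 means perfect overlap; 0.0 means no overlap. Two empty sets
    are treated as a perfect match by convention.
    """
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```

For instance, predictions {1, 2, 3} against ground truth {2, 3, 4} share two voxels out of six total, giving a Dice score of 2/3.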

4 Results

Quantitative evaluation results for all cases are shown in Table 1. The mean Dice score among the 7 cases was 0.75, indicating that accurate segmentation was achieved for almost all cases.

Figure 4 shows the results for Case C. A long intestinal segment was segmented and visualized very well, with a high Dice score (0.84). In contrast, for Case A, shown in Fig. 5, only a very short intestinal segment was visualized. The Dice score of Case A was 0.53, the worst among the 7 cases.

5 Discussions

Case C, shown in Fig. 4, has relatively clearer contrast than the other cases. Network responses were high in almost the entire intestine region. Although high responses were wrongly produced inside the lungs (Fig. 4(b)), these FPs were successfully removed from the final visualization results (Fig. 4(c)) thanks to our manual selection interface illustrated in Fig. 2. The stenosed part caused by ileus was successfully visualized as one of the endpoints, colored in blue. The coloring was correct because the intestinal segments were properly segmented without wrong connections.

Case A, shown in Fig. 5, failed to reach the stenosed part due to false negatives of the 3D U-Net. Its intestine walls have relatively lower contrast against the surrounding tissues than in the other cases. A network structure more robust to low-contrast images should be considered in the future.

Figure 5: Results for Case A, in which segmentation did not reach the stenosed part.
Case Cause of ileus Dice # connections
A Obturator hernia 0.53 0.14
B Colon cancer 0.67 1.00
C Postoperative adhesion (suspicion) 0.84 0.29
D Unknown 0.86 0.00
E Paralysis (suspicion) 0.88 0.29
F Sigmoidal volvulus 0.58 0.33
G Strangulation 0.88 0.00
Mean 0.75 0.29
Table 1: Quantitative evaluation: Dice score (higher is better) and mean number of connections (lower is better).

6 Conclusions

We proposed a visualization method to assist in diagnosing ileus. Segmentation of intestinal segments is based on training with a weak annotation approach, and a manual selection interface for intestinal segments removes false positives. The click-based interface was well received by clinicians, helping them reach stenosed parts intuitively, easily, and quickly. Intestinal segmentation was successfully performed by the 3D U-Net trained with the weak annotation approach, and both ends of an intestinal segment are visualized by our coloring scheme. Future work includes tuning the network structure for intestine regions and finding a way to quantitatively evaluate stenosed part detection.


  • [1] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger (2016) 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pp. 424–432. Cited by: §1, §2.2.
  • [2] M. Franaszek, R. M. Summers, P. J. Pickhardt, and J. R. Choi (2006) Hybrid segmentation of colon filled with air and opacified fluid for CT colonography. IEEE TMI 25 (3), pp. 358–368. Cited by: §1.
  • [3] D. Ganeshan, K. M. Elsayes, and D. Vining (2013) Virtual colonoscopy: utility, impact and overview. World journal of radiology 5 (3), pp. 61. Cited by: §1.
  • [4] S. Haker, S. Angenent, A. Tannenbaum, and R. Kikinis (2000) Nondistorting flattening maps and the 3-D visualization of colon CT images. IEEE Transactions on Medical Imaging 19 (7), pp. 665–670. Cited by: §1.
  • [5] P. Lefere and S. Gryspeerdt (2010) Virtual Colonoscopy: A Practical Guide. Cited by: §1.
  • [6] H. Roth, M. Oda, N. Shimizu, H. Oda, Y. Hayashi, T. Kitasaka, M. Fujiwara, K. Misawa, and K. Mori (2018) Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks. In Medical Imaging 2018: Image Processing, Vol. 10574, pp. 105740B. Cited by: §2.2.
  • [7] I. W. Serlie, F. M. Vos, R. Truyen, F. H. Post, J. Stoker, and L. J. Van Vliet (2010) Electronic cleansing for computed tomography (CT) colonography using a scale-invariant three-material model. IEEE Transactions on Biomedical Engineering 57 (6), pp. 1306–1317. Cited by: §1, §1.
  • [8] T. Sugino, H. R. Roth, M. Oda, S. Omata, S. Sakuma, F. Arai, and K. Mori (2018) Automatic segmentation of eyeball structures from micro-CT images based on sparse annotation. In SPIE Medical Imaging 2018: Biomedical Applications in Molecular, Structural, and Functional Imaging, Vol. 10578, pp. 105780V. Cited by: §1.
  • [9] R. Tachibana, J. J. Näppi, J. Ota, N. Kohlhase, T. Hironaka, S. H. Kim, D. Regge, and H. Yoshida (2018) Deep learning electronic cleansing for single-and dual-energy CT colonography. RadioGraphics 38 (7), pp. 2034–2050. Cited by: §1, §1.
  • [10] X. Yang, X. Ye, and G. Slabaugh (2014) Multilabel region classification and semantic linking for colon segmentation in CT colonography. IEEE Transactions on Biomedical Engineering 62 (3), pp. 948–959. Cited by: §1.
  • [11] J. Yao, A. S. Chowdhury, J. Aman, and R. M. Summers (2010) Reversible projection technique for colon unfolding. IEEE Transactions on Biomedical engineering 57 (12), pp. 2861–2869. Cited by: §1.
  • [12] W. Zhang, J. Liu, J. Yao, A. Louie, T. B. Nguyen, S. Wank, W. L. Nowinski, and R. M. Summers (2013) Mesenteric vasculature-guided small bowel segmentation on 3-D CT. IEEE transactions on medical imaging 32 (11), pp. 2006–2021. Cited by: §1.